---
author:
- |
    [^1]\
    Dipartimento di Fisica [*Ettore Pancini*]{}, Università di Napoli [*Federico II*]{}, and INFN, Sezione di Napoli, Complesso Universitario di Monte S. Angelo, I-80126 Napoli, Italy\
    E-mail:
title: Dark Matter scenarios at IceCube
---

Introduction
============

The recent discovery of a diffuse neutrino flux in the TeV–PeV range by the IceCube collaboration [@IceCube] has ushered us into a new era for astroparticle physics. Owing to their very feeble interactions with the other Standard Model particles, neutrinos represent an excellent messenger for observing and studying the cosmos. Indeed, the observation of extraterrestrial neutrinos provides an important tool that can be adopted to examine the acceleration mechanisms of hadronic cosmic rays and the properties of both galactic and extragalactic astrophysical environments. Moreover, the IceCube Neutrino Telescope has measured the most energetic neutrinos detected so far, offering us the possibility to explore neutrino physics at energies where phenomena beyond the Standard Model can be relevant.\
The origin of the diffuse neutrino flux is still unknown. In the last few years, the scientific community has proposed a variety of astrophysical sources (extragalactic Supernova and Hypernova remnants [@Chakraborty:2015sta], blazars [@Kalashev:2014vya], and gamma-ray bursts [@Waxman:1997ti]) as potential candidates for the IceCube observations. In general, under the reasonable assumption of a correlation with the spectrum of hadronic cosmic rays, one expects the differential neutrino flux to have a power-law behavior
$$\frac{{\rm d}\phi^{\rm Astro}}{{\rm d}E_\nu {\rm d}\Omega} = \phi^{\rm Astro}_0 \left( \frac{E_\nu}{100~{\rm TeV}} \right)^{-\gamma}\,, \label{eq:astro}$$
where $\gamma$ is called the [*spectral index*]{} and $\phi^{\rm Astro}_0$ is the normalization of the neutrino flux at 100 TeV. The astrophysical neutrino flux is assumed to be isotropic and is characterized by an equal flavour ratio $\left(1:1:1\right)$ at the Earth. While the standard Fermi acceleration mechanism at shock fronts implies $\gamma = 2.0$ at first order, the spectral index can take larger values depending on the astrophysical source considered. Indeed, in the case of neutrinos produced by hadronuclear $p$–$p$ interactions one has $\gamma \lesssim 2.2$ [@ppsources] (see also Ref. [@Murase:2013rfa]), while for photohadronic $p$–$\gamma$ interactions the spectral index has to be larger than $2.3$ [@Winter:2013cla].\
The IceCube observations of different data samples have revealed a preference for a soft power-law spectrum, and the IceCube combined analysis has provided the best fit $\gamma_{\rm best}=2.50\pm0.09$ [@IceCube]. However, very recently, the analysis of the 6-year sample of up-going muon neutrinos has shown that at high energy $\left(E_\nu \geq 100\,{\rm TeV}\right)$ the best-fit spectral index is $\gamma = 2.13$, a value that is disfavoured at the 3.3$\sigma$ level with respect to $\gamma_{\rm best}$. As also stated by the IceCube collaboration, this tension may indicate the presence of a second, galactic component in the diffuse neutrino flux.\
The tension with the assumption of a single power law is further strengthened by considering [*multi-messenger*]{} analyses. Indeed, the contributions of different astrophysical sources to the IceCube spectrum are strongly constrained if one assumes a correlation between the diffuse neutrino flux and the isotropic diffuse gamma-ray background measured by Fermi-LAT [@Ackermann:2014usa].
For instance, it has been pointed out that the contribution of star-forming galaxies ($p$–$p$ sources) has to be smaller than $\sim 30\%$ at $100$ TeV and $\sim 60\%$ at $1$ PeV [@Bechtol:2015uqb] (see also Ref. [@Murase:2015xka]). On the other hand, gamma-ray bursts [@Aartsen:2014aqy] and blazars [@blazarMulti] ($p$–$\gamma$ sources) can only provide contributions of $\sim1\%$ and $\sim20\%$ to the IceCube neutrino spectrum, respectively. Therefore, the multi-messenger analyses suggest that a hard neutrino spectrum is preferred over a soft one.

![\[fig:1\]Residuals in the number of neutrino events (2-year MESE) with respect to a single astrophysical power-law with spectral index 2.0, in the southern and northern hemispheres (see Ref. [@Chianese:2016kpu]).](res_s20p.pdf "fig:"){width="42.00000%"} 3.mm ![\[fig:1\]Residuals in the number of neutrino events (2-year MESE) with respect to a single astrophysical power-law with spectral index 2.0, in the southern and northern hemispheres (see Ref. [@Chianese:2016kpu]).](res_n20p.pdf "fig:"){width="42.00000%"}

\
As discussed in Refs. [@Chianese:2016kpu; @Chianese:2016opp], once a hard neutrino spectrum $E_\nu^{-2.0}$ is considered, in accordance with the multi-messenger analyses, the IceCube data show an excess in the 10–100 TeV range ([*low-energy excess*]{}) with a maximum local statistical significance of 2.3$\sigma$ (2.0$\sigma$) in the 2-year MESE (4-year HESE) data (see Fig. \[fig:1\] for the residuals in the 2-year MESE data). However, as the spectral index increases, the excess moves towards PeV energies ([*high-energy excess*]{}) [@Boucenna:2015tra].\
All the above considerations lead to the conclusion that the IceCube measurements are explained in terms of a [*two-component*]{} neutrino flux. In particular, we have studied in detail the intriguing two-component scenario where, in addition to the neutrino background, one component is related to astrophysical sources and one originates from Dark Matter [@Chianese:2016kpu; @Chianese:2016opp; @Boucenna:2015tra] (see also Ref. [@Chen:2014gxa]). In this case, the total differential neutrino flux takes the form
$$\frac{{\rm d}\phi}{{\rm d}E_\nu {\rm d}\Omega} = \frac{{\rm d}\phi^{\rm Astro}}{{\rm d}E_\nu {\rm d}\Omega} + \frac{{\rm d}\phi^{\rm DM}}{{\rm d}E_\nu {\rm d}\Omega}\,. \label{eq:tot_flux}$$
Here, the first term is given by the astrophysical power law of Eq. (\[eq:astro\]). The second, Dark Matter term (see Ref. [@Chianese:2016kpu] for its detailed expression) depends on the particular interaction with the Standard Model particles (Dark Matter particles decaying or annihilating into leptonic or hadronic final states) and on the halo density distribution of our galaxy.
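As a rough numerical illustration of the two-component flux in Eq. (\[eq:tot\_flux\]), the short Python sketch below evaluates the astrophysical power law of Eq. (\[eq:astro\]) together with a toy stand-in for the Dark Matter term. All numerical values (normalizations, spectral index, and the shape of the Dark Matter bump) are illustrative placeholders, not the best-fit values of the analyses quoted in the text.

```python
import numpy as np

# Illustrative parameters only -- these are NOT the best-fit values of the
# analyses discussed in the text.
PHI0_ASTRO = 1.0e-18   # assumed normalization at 100 TeV [GeV^-1 cm^-2 s^-1 sr^-1]
GAMMA      = 2.0       # hard spectral index suggested by the multi-messenger bounds

def astro_flux(E_nu_GeV, phi0=PHI0_ASTRO, gamma=GAMMA):
    """Astrophysical power law of Eq. (eq:astro), normalized at 100 TeV."""
    return phi0 * (E_nu_GeV / 1.0e5) ** (-gamma)

def dm_flux_toy(E_nu_GeV, phi_dm=5.0e-18, E_peak_GeV=4.0e4, width=0.5):
    """Toy stand-in for the Dark Matter term of Eq. (eq:tot_flux): a log-normal
    bump around tens of TeV mimicking a low-energy excess. The actual term
    depends on the decay channel and on the halo density profile."""
    return phi_dm * np.exp(-0.5 * (np.log10(E_nu_GeV / E_peak_GeV) / width) ** 2)

E = np.logspace(4, 7, 31)                # 10 TeV to 10 PeV, in GeV
total = astro_flux(E) + dm_flux_toy(E)
for Ei, fi in zip(E[::10], total[::10]):
    print(f"E = {Ei:10.3e} GeV   dphi/(dE dOmega) = {fi:10.3e}")
```

A proper treatment would replace the toy term with the full decay spectrum and halo integral detailed in Ref. [@Chianese:2016kpu].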
Two-component Neutrino Flux and Dark Matter
===========================================

![\[fig:2\]Best fit of the two-component hypothesis for the IceCube diffuse neutrino flux in the case of 2-year MESE (left panel) and 3-year HESE (right panel) data (see Refs. [@Chianese:2016kpu] and [@Boucenna:2015tra], respectively). The Dark Matter contribution is respectively obtained by considering Dark Matter decaying into tau leptons (low-energy excess) and leptophilic three-body decays at PeV energies (high-energy excess).](pMESE.pdf "fig:"){width="40.00000%"} 3.mm ![\[fig:2\]Best fit of the two-component hypothesis for the IceCube diffuse neutrino flux in the case of 2-year MESE (left panel) and 3-year HESE (right panel) data (see Refs. [@Chianese:2016kpu] and [@Boucenna:2015tra], respectively). The Dark Matter contribution is respectively obtained by considering Dark Matter decaying into tau leptons (low-energy excess) and leptophilic three-body decays at PeV energies (high-energy excess).](pHESE.pdf "fig:"){width="40.00000%"}

The proposed two-component interpretation of the IceCube data is depicted in the left and right panels of Fig. \[fig:2\] for the low-energy excess [@Chianese:2016kpu; @Chianese:2016opp] and the high-energy one [@Boucenna:2015tra], respectively.[^2] The low-energy excess is explained in terms of the decay channel ${\rm DM} \to \tau^+\tau^-$, while the excess at high energy is fitted by the leptophilic model predicting ${\rm DM} \to \ell^+\ell^-\nu$. In both cases, the Navarro-Frenk-White halo density distribution has been considered. The low-energy excess has been characterized by analyzing the 2-year MESE data, since they have a lower energy threshold with respect to the HESE ones (see the different energy scales in the two plots).\
Regarding the PeV neutrinos, the leptophilic model proposed in Ref. [@Boucenna:2015tra] has two peculiar characteristics: the neutrino spectrum is spread out, differently from the case of two-body decays, and it is peaked in a particular energy range due to the absence of quarks in the final states. Such a model is also able to account for the Dark Matter production in the early Universe through a freeze-in mechanism [@Chianese:2016smc]. It is worth observing that considering the unrealistic behavior $E_\nu^{-3.0}$ for the astrophysical component is equivalent to assuming a power law with $\gamma=2.0$ exponentially suppressed for $E_\nu \geq 100$ TeV (the neutrino spectrum expected for extragalactic Supernova remnants [@Chakraborty:2015sta]).

Conclusions
===========

The tension of the IceCube observations with the assumption of a single power-law flux indicates the presence of a second contribution to the diffuse neutrino flux. Therefore, we have examined the case where this second component is related to Dark Matter. A likelihood-ratio statistical test has shown that the two-component scenario of Eq. (\[eq:tot\_flux\]) is favoured at the 2$\sigma$–4$\sigma$ level with respect to a single power law [@Chianese:2016kpu]. The statistical significance depends on the Dark Matter model and on the slope of the astrophysical contribution. In general, the leptophilic decaying Dark Matter models are preferred by both IceCube and Fermi-LAT measurements, while the other cases are disfavoured by multi-messenger studies. Better statistics could confirm the presence of an excess in the neutrino spectrum, and this would potentially shed light on one of the deepest mysteries in contemporary physics: the nature of Dark Matter.

[99]{} M. G. Aartsen [*et al.*]{} \[IceCube Collaboration\], [*Evidence for High-Energy Extraterrestrial Neutrinos at the IceCube Detector*]{}, Science [**342**]{} (2013) 1242856 \[[arXiv:1311.5238](https://arxiv.org/abs/1311.5238)\].\ M. G. Aartsen [*et al.*]{} \[IceCube Collaboration\], [*Observation of High-Energy Astrophysical Neutrinos in Three Years of IceCube Data*]{}, Phys. Rev. Lett.  [**113**]{} (2014) 101101 \[[arXiv:1405.5303](https://arxiv.org/abs/1405.5303)\].\ M. G. Aartsen [*et al.*]{} \[IceCube Collaboration\], [*Atmospheric and astrophysical neutrinos above 1 TeV interacting in IceCube*]{}, Phys. Rev. D [**91**]{} (2015) no.2, 022001 \[[arXiv:1410.1749](https://arxiv.org/abs/1410.1749)\].\ M. G.
Aartsen [*et al.*]{} \[IceCube Collaboration\], [*A combined maximum-likelihood analysis of the high-energy astrophysical neutrino flux measured with IceCube*]{}, Astrophys. J.  [**809**]{} (2015) no.1, 98 \[[arXiv:1507.03991](https://arxiv.org/abs/1507.03991)\].\ M. G. Aartsen [*et al.*]{} \[IceCube Collaboration\], [*The IceCube Neutrino Observatory - Contributions to ICRC 2015 Part II: Atmospheric and Astrophysical Diffuse Neutrino Searches of All Flavors*]{}, \[[arXiv:1510.05223](https://arxiv.org/abs/1510.05223)\].\ M. G. Aartsen [*et al.*]{} \[IceCube Collaboration\], [*Observation and Characterization of a Cosmic Muon Neutrino Flux from the Northern Hemisphere using six years of IceCube data*]{}, \[[arXiv:1607.08006](https://arxiv.org/abs/1607.08006)\]. S. Chakraborty and I. Izaguirre, [*Diffuse neutrinos from extragalactic supernova remnants: Dominating the 100 TeV IceCube flux*]{}, Phys. Lett. B [**745**]{} (2015) 35 \[[arXiv:1501.02615](https://arxiv.org/abs/1501.02615)\]. O. Kalashev, D. Semikoz and I. Tkachev, [*Neutrinos in IceCube from active galactic nuclei*]{}, J. Exp. Theor. Phys.  [**120**]{} (2015) no.3, 541 \[[arXiv:1410.8124](https://arxiv.org/abs/1410.8124)\]. E. Waxman and J. N. Bahcall, [*High-energy neutrinos from cosmological gamma-ray burst fireballs*]{}, Phys. Rev. Lett.  [**78**]{} (1997) 2292 \[[astro-ph/9701231](https://arxiv.org/abs/astro-ph/9701231)\]. A. Loeb and E. Waxman, [*The Cumulative background of high energy neutrinos from starburst galaxies*]{}, JCAP [**0605**]{} (2006) 003 \[[astro-ph/0601695](https://arxiv.org/abs/astro-ph/0601695)\].\ S. R. Kelner, F. A. Aharonian and V. V. Bugayov, [*Energy spectra of gamma-rays, electrons and neutrinos produced at proton-proton interactions in the very high energy regime*]{}, Phys. Rev. D [**74**]{} (2006) 034018 Erratum: \[Phys. Rev. D [**79**]{} (2009) 039901\] \[[astro-ph/0606058](https://arxiv.org/abs/astro-ph/0606058)\]. K. Murase, M. Ahlers and B. C. Lacki, [*Testing the Hadronuclear Origin of PeV Neutrinos Observed with IceCube*]{}, Phys. Rev. D [**88**]{}, no. 12, 121301 (2013) \[[arXiv:1306.3417](http://arxiv.org/abs/1306.3417)\]. W. Winter, [*Photohadronic Origin of the TeV-PeV Neutrinos Observed in IceCube*]{}, Phys. Rev. D [**88**]{}, 083007 (2013) \[[arXiv:1307.2793](http://arxiv.org/abs/1307.2793)\]. M. Ackermann [*et al.*]{} \[Fermi-LAT Collaboration\], [*The spectrum of isotropic diffuse gamma-ray emission between 100 MeV and 820 GeV*]{}, Astrophys. J.  [**799**]{} (2015) 86 \[[arXiv:1410.3696](https://arxiv.org/abs/1410.3696)\]. K. Bechtol, M. Ahlers, M. Di Mauro, M. Ajello and J. Vandenbroucke, [*Evidence against star-forming galaxies as the dominant source of IceCube neutrinos*]{}, \[[arXiv:1511.00688](https://arxiv.org/abs/1511.00688)\]. K. Murase, D. Guetta and M. Ahlers, [*Hidden Cosmic-Ray Accelerators as an Origin of TeV-PeV Cosmic Neutrinos*]{}, Phys. Rev. Lett.  [**116**]{} (2016) no.7, 071101 \[[arXiv:1509.00805](https://arxiv.org/abs/1509.00805)\]. M. G. Aartsen [*et al.*]{} \[IceCube Collaboration\], [*Search for Prompt Neutrino Emission from Gamma-Ray Bursts with IceCube*]{}, Astrophys. J.  [**805**]{} (2015) no.1, L5 \[[arXiv:1412.6510](https://arxiv.org/abs/1412.6510)\]. T. Glüsenkamp \[IceCube Collaboration\], [*Analysis of the cumulative neutrino flux from Fermi-LAT blazar populations using 3 years of IceCube data*]{}, EPJ Web Conf.  [**121**]{} (2016) 05006 \[[arXiv:1502.03104](https://arxiv.org/abs/1502.03104)\].\ M. 
Schimp [*et al.*]{} \[IceCube Collaboration\], [*Astrophysical interpretation of small-scale neutrino angular correlation searches with IceCube*]{}, PoS ICRC [**2015**]{} (2016) 1085 \[[arXiv:1509.02980](https://arxiv.org/abs/1509.02980)\].\ M. G. Aartsen [*et al.*]{} \[IceCube Collaboration\], [*The contribution of Fermi-2LAC blazars to the diffuse TeV-PeV neutrino flux*]{}, \[[arXiv:1611.03874](https://arxiv.org/abs/1611.03874)\]. M. Chianese, G. Miele and S. Morisi, [*Dark Matter interpretation of low energy IceCube MESE excess*]{}, JCAP [**1701**]{} (2017) no.01, 007 \[[arXiv:1610.04612](https://arxiv.org/abs/1610.04612)\]. M. Chianese, G. Miele, S. Morisi and E. Vitagliano, [*Low energy IceCube data and a possible Dark Matter related excess*]{}, Phys. Lett. B [**757**]{} (2016) 251 \[[arXiv:1601.02934](https://arxiv.org/abs/1601.02934)\]. S. M. Boucenna, M. Chianese, G. Mangano, G. Miele, S. Morisi, O. Pisanti and E. Vitagliano, [*Decaying Leptophilic Dark Matter at IceCube*]{}, JCAP [**1512**]{} (2015) no.12, 055 \[[arXiv:1507.01000](https://arxiv.org/abs/1507.01000)\]. C. Y. Chen, P. S. Bhupal Dev and A. Soni, [*Two-component flux explanation for the high energy neutrino events at IceCube*]{}, Phys. Rev. D [**92**]{} (2015) no.7, 073001 \[[arXiv:1411.5658](https://arxiv.org/abs/1411.5658)\]. M. Chianese and A. Merle, [*A Consistent Theory of Decaying Dark Matter Connecting IceCube to the Sesame Street*]{}, \[[arXiv:1607.05283](https://arxiv.org/abs/1607.05283)\]. [^1]: I thank the organizers of the Neutrino Oscillation Workshop and the conveners of my session. I acknowledge the financial support by the Instituto Nazionale di Fisica Nucleare I.S. TASP and the PRIN 2012 “Theoretical Astroparticle Physics" of the Italian Ministero dell’Istruzione, Università e Ricerca. [^2]: See references cited in Ref.s [@Chianese:2016kpu; @Chianese:2016opp; @Boucenna:2015tra] for other Dark Matter interpretations of IceCube measurements.
--- abstract: 'We describe a new mechanism that leads to the destabilisation of non-axisymmetric waves in astrophysical discs with an imposed radial temperature gradient. This might apply, for example, to the outer parts of protoplanetary discs. We use linear density wave theory to show that non-axisymmetric perturbations generally do not conserve their angular momentum in the presence of a forced temperature gradient. This implies an exchange of angular momentum between linear perturbations and the background disc. In particular, when the disturbance is a low-frequency trailing wave and the disc temperature decreases outwards, this interaction is unstable and leads to the growth of the wave. We demonstrate this phenomenon through numerical hydrodynamic simulations of locally isothermal discs in 2D using the FARGO code and in 3D with the ZEUS-MP and PLUTO codes. We consider radially structured discs with a self-gravitating region which remains stable in the absence of a temperature gradient. However, when a temperature gradient is imposed we observe exponential growth of a one-armed spiral mode (azimuthal wavenumber $m=1$) with co-rotation radius outside the bulk of the spiral arm, resulting in a nearly-stationary one-armed spiral pattern. The development of this one-armed spiral does not require the movement of the central star, as found in previous studies. Because destabilisation by a forced temperature gradient does not explicitly require disc self-gravity, we suggest this mechanism may also affect low-frequency one-armed oscillations in non-self-gravitating discs.' author: - | Min-Kai Lin$^{1}$ [^1]\ $^1$Department of Astronomy and Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721, USA bibliography: - 'ref.bib' title: 'One-armed spirals in locally isothermal, radially structured self-gravitating discs' --- accretion, accretion discs, hydrodynamics, instabilities, methods: numerical, protoplanetary discs Introduction {#intro} ============ An exciting development in the study of circumstellar discs is the direct observation of large-scale, non-axisymmetric structures within them. These include lopsided dust distributions [@marel13; @fukagawa13; @casassus13; @isella13; @perez14; @follette14; @plas14] and spiral arms [@hashimoto11; @muto12; @boccaletti14; @grady13; @christiaens14; @avenhaus14]. The attractive explanation for asymmetries in circumstellar discs is disc-planet interaction. In particular, spiral structures naturally arise from the gravitational interaction between a planet and the gaseous protoplanetary disc it is embedded in [see, e.g. @baruteau13b for a recent review]. Thus, the presence of spiral arms in circumstellar discs could be signposts of planet formation [but see @juhasz14]. Spiral arms are also characteristic of global gravitational instability (GI) in differentially rotating discs [@goldreich65; @laughlin96b; @laughlin98; @nelson98; @lodato05; @forgan11]. Large-scale spiral arms can provide significant angular momentum transport necessary for mass accretion [@lynden-bell72; @papaloizou91; @balbus99; @lodato04], and spiral structures due to GI are potentially observable with the Atacama Large Millimeter/sub-millimeter Array [@cossins10; @dipierror14]. GI can be expected in the earliest stages of circumstellar disc formation [@kratter10b; @inutsuka10; @tsukamoto13], and may be possible in the outer parts of the disc [@rafikov05; @matzner05; @kimura12]. 
Single-arm spirals, or eccentric modes, corresponding to perturbations with azimuthal wavenumber $m=1$, have received interest in the context of protoplanetary discs because of their global nature [@adams89; @heemskerk92; @laughlin96; @tremaine01; @papaloizou02; @hopkins10]. In the ‘SLING’ mechanism proposed by [@shu90], an $m=1$ gravitational instability arises from the motion of the central star induced by the one-armed perturbation, and requires a massive disc [the former may have observable consequences, @michael10]. In this work we identify a new mechanism that leads to the growth of one-armed spirals in astrophysical discs. We show that when the disc temperature is prescribed (called locally isothermal discs), the usual statement of the conservation of angular momentum for linear perturbations acquires a source term proportional to the temperature gradient. This permits angular momentum exchange between linear perturbations and the background disc. This ‘background torque’ can destabilise low-frequency non-axisymmetric trailing waves when the disc temperature decreases outwards. We employ direct hydrodynamic simulations using three different grid-based codes to demonstrate how this ‘background torque’ can lead to the growth of one-armed spirals in radially structured, self-gravitating discs. This is despite the fact that our disc models do not meet the requirements for the ‘SLING’ mechanism. Although our numerical simulations consider self-gravitating discs, this ‘background torque’ is generic for locally isothermal discs and its existence does not require disc self-gravity. Thus, the destabilisation effect we describe should also be applicable to non-self-gravitating discs. This paper is organised as follows. In §\[model\] we describe the system of interest and list the governing equations. In §\[wkb\] we use linear theory to show how a fixed temperature profile can destabilise non-axisymmetric waves in discs. §\[methods\] describes the numerical setup and hydrodynamic codes we use to demonstrate the growth of one-armed spirals due to an imposed temperature gradient. Our simulation results are presented in §\[results2d\] for two-dimensional (2D) discs and in §\[results3d\] for three-dimensional (3D) discs, and we further discuss them in §\[discussions\]. We summarise in §\[summary\] with some speculations for future work. Acknowledgments {#acknowledgments .unnumbered} =============== I thank K. Kratter, Y. Wu and A. Youdin for valuable discussions, and the anonymous referee for comments that significantly improved this paper. All computations were performed on the El Gato cluster at the University of Arizona. This material is based upon work supported by the National Science Foundation under Grant No. 1228509. [^1]: minkailin@email.arizona.edu
--- abstract: 'The property of desynchronization in an all-to-all network of homogeneous impulse-coupled oscillators is studied. Each impulse-coupled oscillator is modeled as a hybrid system with a single timer state that self-resets to zero when it reaches a threshold, at which event all other impulse-coupled oscillators adjust their timers following a common reset law. In this setting, desynchronization is considered as each impulse-coupled oscillator’s timer having equal separation between successive resets. We show that, for the considered model, desynchronization is an asymptotically stable property. For this purpose, we recast desynchronization as a set stabilization problem and employ Lyapunov stability tools for hybrid systems. Furthermore, several perturbations are considered showing that desynchronization is a robust property. Perturbations on both the continuous and discrete dynamics are considered. Numerical results are presented to illustrate the main contributions.' author: - 'Sean Phillips and Ricardo G. Sanfelice[^1]' bibliography: - 'Biblio.bib' - 'RGS.bib' - 'SAP.bib' --- Introduction ============ Impulse-coupled oscillators are multi-agent systems with state variables consisting of timers that evolve continuously until a state-dependent event triggers an instantaneous update of their values. Networks of such oscillators have been employed to model the dynamics of a wide range of biological and engineering systems. In fact, impulse-coupled oscillators have been used to model groups of fireflies [@Mirollo.90.SIAMJAM.BiologicalOscillators], spiking neurons [@Pikovsky.ea.03.BiologicalOscillators; @gerstner2002spiking], muscle cells [@Peskin.75.BiologicalOscillators], wireless networks [@hong2010cooperative], and sensor networks [@Liu05adynamic]. With synchronization being a property of particular interest, such complex networks have been found to coordinate the values of their state variables by sharing information only at the times the events/impulses occur [@Mirollo.90.SIAMJAM.BiologicalOscillators; @Abbott1993]. The opposite of synchronization is [*desynchronization*]{}. In simple words, desynchronization in multi-agent systems is the notion that the agents’ periodic actions are separated “as far apart” as possible in time. Desynchronization is similar to clustering or splay-state configurations, and is sometimes referred in the literature as inhibited behavior [@mauroy:037122; @glass1988clocks]. For impulse-coupled oscillators, desynchronization is given as the behavior in which the separation between all of the timers impulses is equal [@Patel.Desync2007]. This behavior has been found to be present in communication schemes in fish [@Benda.Neuron] and in networks of spiking neurons [@Pfurtscheller19991842; @1997Natur.390.70S]. Desynchronization of oscillators has recently been shown to be of importance in the understanding of Parkinson’s disease [@Mabi.Moehlis2010; @Majtanik.Dolan2004], in the design of algorithms that limit the amount of overlapping data transfer and data loss in wireless digital networks [@hong2010cooperative], and in the design of round-robin scheduling schemes for sensor networks [@Liu05adynamic]. Motivated by the applications mentioned above and the lack of a full understanding of desynchronization in multi-agent systems, this paper pertains to the study of the dynamical properties of desynchronization in a network of impulse-coupled oscillators with an all-to-all communication graph. 
The uniqueness of the approach emerges from the use of hybrid systems tools, which not only conveniently capture the continuous and impulsive behavior in the networks of interest, but also are suitable for analytical study of asymptotic stability and robustness to perturbations. More precisely, the dynamics of the proposed hybrid system capture the (linear) continuous evolution of the states as well their impulsive/discontinuous behavior due to state triggered events. Analysis of the asymptotic behavior of the trajectories (or solutions) to these systems is performed using the framework of hybrid systems introduced in [@teel2012hybrid; @Goebel.ea.09.CSM]. To this end, we recast the study of desynchronization as a set stabilization problem. Unlike synchronization, for which the set of points to stabilize is obvious, the complexity of desynchronization requires first to determine such a collection of points, which we refer to as the [*desynchronization set*]{}. We propose an algorithm to compute such set of points. Then, using Lyapunov stability theory for hybrid systems, we prove that the desynchronization set is asymptotically stable by defining a Lyapunov-like function as the distance between the state and (an inflated version of) the desynchronization set. In our context, asymptotic stability of the desynchronization set implies that the distance between the state and the desynchronization set converges to zero as the amount of time and the number of jumps get large. Using the proposed Lyapunov-like function and invoking an invariance principle, the basin of attraction is characterized and shown to be the entire state space minus a set of measure zero, which turns out to actually be an exact estimate of the basin of attraction. Furthermore, also exploiting the availability of a Lyapunov-like function, we analytically characterize the time for the solutions to reach a neighborhood of the desynchronization set. In particular, this characterization provides key insight for the design of algorithms used in applications in which desynchronization is crucial, such as wireless digital networks and sensor networks. The asymptotic stability property of the desynchronization configuration is shown to be robust to several types of perturbations. The perturbations studied here include a generic perturbation in the form of an inflation of the dynamics of the proposed hybrid system model of the network of interest and several kinds of perturbations on the timer rates. Using the tools presented in [@teel2012hybrid; @Goebel.ea.09.CSM], we analytically characterize the effect of these perturbations on the already established asymptotic stability property of the desynchronization set. In particular, these perturbations capture situations where the agents in the network are heterogeneous due to having differing timer rates, threshold values, and update laws. To verify the analytical results, we simulate networks of impulse-coupled oscillators under several classes of perturbations. Specifically, we show numerical results when perturbations affect the update laws and the timer rates. The remainder of this paper is organized as follows. Section \[sec:hs\] is devoted to hybrid modeling of networks of impulse-coupled oscillators. Section \[sec:AN\] introduces an algorithm to determine the desynchronization set. Section \[sec:lyapunov\] presents the stability results while the time to convergence is characterized in Section \[sec:timetoconverge\]. The robustness results are in Section \[sec:robustness\]. 
Section \[sec:numerics\] presents numerical results illustrating our findings. Final remarks are given in Section \[sec:conclusion\].

[**Notation**]{}

Hybrid System Model of Impulse-Coupled Oscillators {#sec:hs}
==================================================

Mathematical Model {#sec:hybridmodel}
------------------

In this paper, we consider a model of $N$ impulse-coupled oscillators. Each impulse-coupled oscillator has a continuous state ($\tau_i$ for the $i$-th oscillator) defining its internal timer. Once the timer of any oscillator reaches a threshold (${\bar{\tau}}$), it triggers an impulse and is reset to zero. At such an event, all the other impulse-coupled oscillators rescale their timers by the factor $(1+ \varepsilon)$, where $\varepsilon \in (-1,0)$.[^2] Figure \[fig:example\] shows a trajectory of two impulse-coupled oscillators with states ${\tau_1}$ and ${\tau_2}$. In this figure, the dark red circles indicate when a timer state has reached the threshold and, thus, resets to zero. The light green circles indicate when an oscillator is externally reset and, hence, decreases its timer to $(1+\varepsilon)$ times its current value. According to this outline of the model, the dynamics of the impulse-coupled oscillators involve impulses and timer resets, which are treated as true discrete events and instantaneous updates, while the smooth evolution of the timers before/after these events defines the continuous dynamics. We follow the hybrid formalism of [@teel2012hybrid; @Goebel.ea.09.CSM], where a hybrid system is given by four objects $(C,f,D,G)$ defining its *data*:

- *Flow set:* a set $C \subset {\mathbb{R}}^{N}$ specifying the points where flows are possible (or continuous evolution).
- *Flow map:* a single-valued map $f: {\mathbb{R}}^{N} \to {\mathbb{R}}^{N}$ defining the flows.
- *Jump set:* a set $D \subset {\mathbb{R}}^{N}$ specifying the points where jumps are possible (or discrete evolution).
- *Jump map:* a set-valued map $G: {\mathbb{R}}^{N} \rightrightarrows{\mathbb{R}}^{N}$ defining the jumps.

A hybrid system capturing the dynamics of the impulse-coupled oscillators is denoted as ${{{\mathcal}H}}_N := (C,f,D,G)$ and can be written in the compact form $${{{\mathcal}H}}_N: \qquad \tau \in {\mathbb{R}}^{N} \qquad \left\{ \begin{array}{llll} \dot{\tau} &=& f(\tau) &\quad \tau \in C \\ \tau^{+} &\in& G(\tau) & \quad \tau \in D \end{array}\right. , \label{eqn:HS}$$ where $N \in {\mathbb{N}}\setminus \{0,1\}$ is the number of impulse-coupled oscillators. The state of ${{{\mathcal}H}}_{N}$ is given by $\tau = \left[ \tau_{1} \ \ \tau_{2} \ \ \ldots \ \ \tau_{N} \right]^\top \in {\mathbb{R}}^{N}$. The flow and jump sets are defined to constrain the evolution of the timers. The flow set is defined by $$C := P_N := \left\{ \tau \in {\mathbb{R}}^{N} : \ \tau_i \in [0,{\bar{\tau}}] \ \ \forall i \in I \right\}, \label{eqn:flawiest}$$ where $I := \{1,2, \ldots , N\}$ and ${\bar{\tau}}> 0$ is the threshold. During flows, each internal timer gradually increases at the homogeneous rate $\omega$. Then, the flow map is defined as $$f(\tau) := \omega {\mathbf 1}\qquad \forall \tau \in C,$$ with $\omega > 0$ defining the natural frequency of each impulse-coupled oscillator. The impulsive events are captured by a jump set $D$ and a jump map $G$. Jumps occur when the state is in the jump set $D$ defined as $$D := \left\{ \tau \in P_N : \ \exists i \in I \ \mbox{s.t.} \ \tau_i = {\bar{\tau}}\right\} . \label{eqn:D}$$ From such points, the $i$-th timer is reset to zero and forces a jump of all other timers.
Such discrete dynamics are captured by the following jump map: for each $\tau \in D$ define $ G(\tau) = \left[ g_1(\tau) \ \ g_2(\tau)\ \ \ldots \ \ g_N(\tau) \right]^\top, $ where, for each $i \in I$, $$g_i (\tau) = \left\{ \begin{array}{l} 0 \qquad \qquad \ \ \ \ \mbox{if } \tau_{i} = {\bar{\tau}}, \tau_r < {\bar{\tau}}\ \ \forall r \in I\setminus \{i\} \\ \{0 , \tau_{i}(1+\varepsilon) \} \ \mbox{if } \tau_{i} = {\bar{\tau}}\ \exists r \in I \setminus \{i\} \ \mbox{s.t.} \ \tau_r = {\bar{\tau}}\\ (1+\varepsilon)\tau_{i} \qquad \ \mbox{if } \tau_i < {\bar{\tau}}\ \exists r \in I \setminus \{i\} \ \mbox{s.t.} \ \tau_r = {\bar{\tau}}\end{array} \right. \label{eqn:gi}$$ with parameters $\varepsilon \in (-1,0)$ and ${\bar{\tau}}> 0$; for $\tau \in D$, $g_i$ is not empty. When a jump is triggered, the state $\tau_i$ jumps according to the $i$-th component of the jump map $g_i$. When a state reaches the threshold ${\bar{\tau}}$, it is reset to zero only when all other states are less than that threshold; otherwise, if multiple timers reach the threshold simultaneously, the jump map is set valued to indicate that either $g_i(\tau) = 0$ or $g_i(\tau) = (1+\varepsilon)\tau_{i}$ is possible. This is to ensure that the jump map satisfies the regularity conditions outlined in Section \[sec:ModelForAnalysis\].[^3] For example, consider the case $N=2$ the hybrid system ${{{\mathcal}H}}_N = (C,f,D,G)$ has state given by $$\tau=\left[ \begin{array}{c} \tau_{1} \\ \tau_{2}\end{array} \right] \in P_2 := [0,{\bar{\tau}}]\times[0,{\bar{\tau}}] .$$ The states $\tau_{1}$ and $\tau_{2}$ are the timers for both of the oscillators. The hybrid system ${{{\mathcal}H}}_2$ has the following data: $${{{\mathcal}H}}_2 = \left\{\begin{array}{ll} C = P_2, & \qquad f(\tau) = \left[ \begin{array}{c} 1 \\ 1 \end{array}\right] \forall \tau \in C, \\ \noalign{\medskip} D = \left\{ \tau \in P_2 \ : \ \exists i \in \{1,2\} \ s.t. \ \tau_{i} = {\bar{\tau}}\right\}, & \qquad G(\tau) = \left[ \begin{array}{c} g_1(\tau) \\ g_2(\tau) \end{array} \right] \forall \tau \in D , \end{array} \right.$$ where the functions $g_1$ and $g_2$ are defined as $$g_1 (\tau) = \left\{ \begin{array}{ll} 0 & \mbox{if } \tau_{1} = {\bar{\tau}}, \tau_2 < {\bar{\tau}}\\ \{0 , \tau_{1}(1+\varepsilon) \} & \mbox{if } \tau_{1} = {\bar{\tau}}, \tau_2 = {\bar{\tau}}\\ (1+\varepsilon)\tau_{1} & \mbox{if } \tau_1 < {\bar{\tau}}, \tau_2 = {\bar{\tau}}\end{array} \right. \qquad \qquad g_2 (\tau) = \left\{ \begin{array}{ll} 0 & \mbox{if } \tau_{2} = {\bar{\tau}}, \tau_1 < {\bar{\tau}}\\ \{0 , \tau_{2}(1+\varepsilon) \} & \mbox{if } \tau_{2} = {\bar{\tau}}, \tau_1 = {\bar{\tau}}\\ (1+\varepsilon)\tau_{2} & \mbox{if } \tau_2 < {\bar{\tau}}, \tau_1 = {\bar{\tau}}\end{array} \right. .$$ Basic Properties of ${{{\mathcal}H}}_{N}$ {#sec:BasicCond} ----------------------------------------- \[sec:ModelForAnalysis\] ### Hybrid Basic Conditions To apply analysis tools for hybrid systems in [@teel2012hybrid], which will be summarized in Section \[sec:analysis\], the data of the hybrid system ${{{\mathcal}H}}_{N}$ must meet certain mild conditions. These conditions, referred to as the [*hybrid basic conditions*]{}, are as follows: 1. $C$ and $D$ are closed sets in ${\mathbb{R}}^N$. 2. $f: {\mathbb{R}}^N\to{\mathbb{R}}^N$ is continuous on $C$. 3. $G :{\mathbb{R}}^N{\rightrightarrows}{\mathbb{R}}^N$ is an outer semicontinuous[^4] set-valued mapping, locally bounded on $D$, and such that $G(x)$ is nonempty for each $x \in D$. 
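Before verifying these conditions for ${{{\mathcal}H}}_N$, we note that the model data $(C,f,D,G)$ introduced above can be exercised directly in simulation. The following minimal Python sketch is only illustrative: it integrates the flows exactly between events, applies the jump map at each event, and, whenever the set-valued cases of $g_i$ in (\[eqn:gi\]) arise, always selects the outcome that resets every timer at the threshold to zero.

```python
import numpy as np

def simulate(tau0, tau_bar=1.0, omega=1.0, eps=-0.3, n_jumps=30):
    """Event-driven simulation of H_N: the flow dot(tau) = omega*1 is integrated
    exactly up to the next threshold crossing, then the jump map G is applied.
    When several timers reach tau_bar simultaneously, the selection g_i = 0 is
    used for all of them (one of the outcomes allowed by the set-valued map)."""
    tau = np.array(tau0, dtype=float)
    t = 0.0
    history = [(t, 0, tau.copy())]
    for j in range(1, n_jumps + 1):
        dt = (tau_bar - tau.max()) / omega       # time until the next event
        t += dt
        tau += omega * dt                        # flow: all timers advance at rate omega
        at_threshold = np.isclose(tau, tau_bar)
        tau = np.where(at_threshold, 0.0, (1.0 + eps) * tau)   # jump map G
        history.append((t, j, tau.copy()))
    return history

# Example: three oscillators from a generic (non-desynchronized) initial condition.
for t, j, tau in simulate([0.1, 0.45, 0.8], n_jumps=12)[-4:]:
    print(f"t = {t:6.3f}, j = {j:2d}, tau = {np.round(tau, 3)}")
```

Running this sketch from a generic initial condition produces post-jump timer values that settle into a fixed pattern, consistent with the desynchronization property studied in the remainder of the paper.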
\[lem:BasicConds\] ${{{\mathcal}H}}_{N}$ satisfies the hybrid basic conditions.

Note that satisfying the hybrid basic conditions implies that ${{{\mathcal}H}}_N$ is well-posed [@teel2012hybrid Theorem 6.30], which automatically gives robustness to vanishing state disturbances; see [@teel2012hybrid; @Goebel.ea.09.CSM]. Section \[sec:robustness\] considers different types of perturbations that ${{{\mathcal}H}}_N$ can withstand.

### Solutions to ${{{\mathcal}H}}_N$

\[lem:solutions2HS\] From every point in $C \cup D$ there exists a solution, and every maximal solution to ${{{\mathcal}H}}_{N}$ is complete and bounded.

Due to the jump map $G$, if the elements of the solution are initially equal (denote this set as ${\mathcal{S}}:= \{\tau \in P_N : \exists i,r \in I, i \neq r, \tau_{i} = \tau_r \}$) it is possible for them to remain equal for all time. Furthermore, it is also possible for solutions to be initialized on the jump set such that one element is at the threshold and another is equal to zero; then, after the jump, they will be equal, e.g., if $\tau_{1} = {\bar{\tau}}$ and $\tau_{2} = 0$, then $\tau_{1}^{+} = \tau_{2}^{+} = 0$. We denote this set as ${\mathcal{G}}:= \{\tau \in D\setminus {\mathcal{S}}: \exists i,r \in I, i \neq r, \tau_{i} = 0, \tau_r = {\bar{\tau}}\}$. The next result considers solutions initialized on the set ${\mathcal{X}}:= {\mathcal{S}}\cup {\mathcal{G}}$.

\[lem:defX\] For each $\tau(0,0) \in {\mathcal{X}}$, there exists a solution $\tau$ to ${{{\mathcal}H}}_{N}$ from $\tau(0,0)$ such that, for some $M \in \{0,1\}$, $\tau(t,j) \in {\mathcal{S}}$ for all $t+j \geq M$, $(t,j) \in {\mathop{\rm dom}\nolimits}\tau$.

Consider a solution $\tau$ to the hybrid system ${{{\mathcal}H}}_N$ with initial condition $\tau(0,0) \in {\mathcal{S}}$. Due to the flow map for each state being equal, $\tau$ remains in ${\mathcal{S}}$ during flows. Furthermore, at points $\tau \in {\mathcal{S}}\cap D$, the jump map $G$ is set valued by the definition of $g_i$ in (\[eqn:gi\]). From these points, $G(\tau) \cap {\mathcal{S}}\neq \emptyset$. In fact, for each $\tau(0,0) \in {\mathcal{S}}$, there exists at least one solution such that $\tau(t,j) \in {\mathcal{S}}$ for all $t + j \geq 0$, with $(t,j) \in {\mathop{\rm dom}\nolimits}\tau$. Consider now the case of solutions initialized at $\tau(0,0) \in {\mathcal{G}}$ (note that $\tau(0,0) \in D$). It follows that for some $r \in I$, $\tau_r(0,0) = {\bar{\tau}}$ and $g_r(\tau(0,0)) = 0$. Therefore, after the initial jump, we have that $G(\tau(0,0)) \cap {\mathcal{S}}\neq \emptyset$, which, by the previous arguments, implies that $\tau(t,j) \in {\mathcal{S}}$ for all $t + j \geq 1$.

Furthermore, there is a distinct ordering to the jumps. If $\tau$ is such that $\tau_i \neq \tau_r$ for all $i\neq r$, then the ordering of the components $\tau_i$ is preserved after $N$ jumps. More specifically, we have the following result.

\[lem:ordering\] For every solution $\tau$ to ${{{\mathcal}H}}_N$ with $\tau(0,0) \notin {\mathcal{X}}$, if at $(t_{j},j) \in {\mathop{\rm dom}\nolimits}\tau$ we have $$0 \leq \tau_{i_{1}}(t_{j},j) < \tau_{i_{2}}(t_{j},j) < \ldots < \tau_{i_{N}}(t_{j},j) \leq {\bar{\tau}}$$ for some sequence of nonrepeated elements $\{i_m\}^N_{m = 1}$ of $I$ (that is, a reordering of the elements of the set $I = \{1,2,\ldots,N\}$) then, after $N$ jumps, it follows that $$0 \leq \tau_{i_{1}}(t_{j+N},j+N) < \tau_{i_{2}}(t_{j+N},j+N) < \ldots < \tau_{i_{N}}(t_{j+N},j+N) \leq {\bar{\tau}}\,.$$

Let $\tau$ be a solution to ${{{\mathcal}H}}_{N}$ from $P_{N}\setminus {\mathcal{X}}$.
There exists a sequence $i_{k}$ of distinct elements with $i_{k} \in I$ for each $k \in I$, such that $0 \leq \tau_{i_{1}}(t,j) < \tau_{i_{2}}(t,j) < \ldots < \tau_{i_{N}}(t,j) \leq {\bar{\tau}}$ over $[t_{0},t_{1}]\times \{0\}$. After the jump at $(t,j) = (t_{1},0)$ we have $0 = \tau_{i_{N}}(t,j+1) < \tau_{i_{1}}(t,j+1) < \tau_{i_{2}}(t,j+1) < \ldots < \tau_{i_{N-1}}(t,j+1) < {\bar{\tau}}$. Continuing this way for each jump, it follows that after $N-1$ more jumps, the solution is such that $0 \leq \tau_{i_{1}}(t_{N},j+N) < \tau_{i_{2}}(t_{N},j+N) < \ldots < \tau_{i_{N}}(t_{N},j+N) \leq {\bar{\tau}}$ and the order at time $(t,j)$ is preserved. Using these properties of solutions to ${{{\mathcal}H}}_N$, the next section defines the set to which these solutions converge and establishes its stability properties. Dynamical Properties of ${{{\mathcal}H}}_{N}$ {#sec:analysis} ============================================= Our goal is to show that the desynchronization configuration [[ of ${{{\mathcal}H}}_N$, which is defined in Section \[sec:AN\]]{}]{}, is asymptotically stable. We recall from [@teel2012hybrid; @Goebel.ea.09.CSM] the following definition of asymptotic stability for general hybrid systems with state $x \in {\mathbb{R}}^n$. \[def:AS\] A closed set ${\mathcal{A}}\subset {\mathbb{R}}^n$ is said to be - [*stable*]{} if for each $\varepsilon>0$ there exists $\delta>0$ such that each solution $x$ with $|x(0,0)|_{{\mathcal{A}}}\leq \delta$ satisfies $|x(t,j)|_{{\mathcal{A}}} \leq \varepsilon$ for all $(t,j)\in{\mathop{\rm dom}\nolimits}x$; - [*attractive*]{} if there exists $\mu > 0$ such that every maximal solution $x$ with $|x(0,0)|_{{\mathcal{A}}}\leq \mu$ is complete and satisfies\ $\lim_{(t,j) \in {\mathop{\rm dom}\nolimits}x, t+j\to\infty} |x(t,j)|_{{\mathcal{A}}}=0$; - [*asymptotically stable*]{} if stable and attractive; - [*weakly globally asymptotically stable*]{} if ${\mathcal{A}}$ is stable and if, for every initial condition, there exists a maximal solution that is complete and satisfies $\lim_{(t,j) \in {\mathop{\rm dom}\nolimits}x, t+j\to\infty} |x(t,j)|_{{\mathcal{A}}}=0$. The set of points from where the attractivity property holds is the basin of attraction and excludes all points where the system trajectories may never converge to ${\mathcal{A}}$. In fact, it will be established in Section \[sec:lyapunov\] that the basin of attraction for asymptotic stability of desynchronization of ${{{\mathcal}H}}_N$ does not include any point $\tau$ such that any two or more timers are equal or become equal after a jump, which is the set ${\mathcal{X}}$ defined in Lemma \[lem:defX\]. Construction of the set ${\mathcal{A}}$ for ${{{\mathcal}H}}_N$ {#sec:AN} --------------------------------------------------------------- In this section, we identify the set of points corresponding to the impulse-coupled oscillators being desynchronized, namely, we define the [*desynchronization set*]{}. We define desynchronization as the behavior in which the separation between all of the timers’ impulses is equal (and nonzero), see Figure \[fig:example\]. 
More specifically desynchronization is defined as follows: \[def:desynch\] A solution $\tau$ to ${{{\mathcal}H}}_N$ is desynchronized if there exists $\Delta > 0$ and a sequence of non-repeated elements $\{i_m\}^N_{m = 1}$ of $I$ (that is, a reordering of the elements of the set $I = \{1,2, \ldots, N \}$) such that $\lim_{j \to \infty} (t_j^{i_m} - t_j^{i_{m+1}} )= \Delta$ for all $m \in \{1,2, \ldots, N-1\}$ and $\lim_{j \to \infty} (t_j^{N} - t_j^{i_{1}}) = \Delta,$ where $\{t_j^{i_m}\}_{j = 0}^\infty$ is the sequence of jump times of the state $\tau_{i_m}$. In fact, this separation between impulses leads to an ordered sequence of impulse times with equal separation. The desynchronization set ${\mathcal{A}}$ for the hybrid system ${{{\mathcal}H}}_N$ captures such a behavior and is parameterized by $\varepsilon$, the threshold ${\bar{\tau}}$, and the number of impulse-coupled oscillators $N$. To define this set, first we provide some basic intuition about the dynamics of ${{{\mathcal}H}}_N$ when desynchronized. The set ${\mathcal{A}}$ must be forward invariant and such that trajectories staying in it satisfy the property in Definition \[def:desynch\]. Due to the definition of the flow map $f$, there exist sets in the form of “lines" $\ell_k$, each of them in the direction ${\mathbf 1}$, which is the direction of the flow map, intersecting the jump set at a point which, for the $k$-th line, we denote as ${\widetilde}\tau^{k}$. We define the desynchronization set as the union of sets $\ell_{k}$ collecting points $\tau = {\widetilde}\tau^k + {\mathbf 1}s \in P_N$ parameterized by $s \in {\mathbb{R}}$. To identify ${\widetilde}\tau^k$, consider a point ${\widetilde}\tau^{k} \in D\setminus {\mathcal{X}}$ with components satisfying ${\widetilde}\tau_{1}^{k} = {\bar{\tau}}> {\widetilde}\tau_{2}^{k} > {\widetilde}\tau_{3}^{k} > ... > {\widetilde}\tau_{N}^{k}$. Due to Definition \[def:desynch\], it must be true that the difference between jump times are constant. This means that there must be some correlation between $\Delta$ and the difference between, in this case, $\tau_{1}^{k}$ and $\tau_{2}^{k}$. Moreover, there must be a correlation between $\tau_{1}^{k}$ and all other states at jumps. It follows that this point belongs to ${\mathcal{A}}$ only if the distance between the expiring timer (${\widetilde}\tau_{1}^{k}$) and each of its other components (${\widetilde}\tau_{i}^{k}$, $i \in I \setminus \{1\}$) is equal to the distance between the value after the jump of the timer expiring next (${\widetilde}\tau_{2}^{k}\null^{+}$) and the value after the jump of its other components (${\widetilde}\tau_{i}^{k}\null^{+}$, $i \in I \setminus \{2\}$), respectively. This property ensures that, when in the desynchronization set, the relative distance between the leading timer and each of the other timers is equal, before and after jumps. More precisely, $$\begin{aligned} {\widetilde}\tau_{1}^{k} - \widetilde{\tau}_{i}^{k} = {\widetilde}\tau_{2}^{k}\null^{+} - {\widetilde}\tau_{\mbox{\scriptsize next}(i)}^{k}\hspace{-0.21in}\null^{+} \qquad \qquad \forall \ i \in I \setminus \{1\} \label{eqn:ANcondition},\end{aligned}$$ where ${\widetilde}\tau^{k}\null^{+} = G({\widetilde}\tau^{k})$ and next$(i) = i + 1$ if $i + 1 \leq N$ and $1$ otherwise.[^5] Since ${\mathcal{X}}$ contains all points such that at least two or more timers are the same, we can consider the case when one component of ${\widetilde}\tau^k$ is equal to ${\bar{\tau}}$ at a time. 
For each such case, we have $(N - 1)!$ possible permutations of the other components and $N$ possible timer components equal to ${\bar{\tau}}$, leading to $N!$ total possible sets $\ell_{k}$. For the $N$ case, the algorithm above results in the system of equations $\Gamma \tau_s = b$, where and $ b = \bar{\tau} {\bf 1}, $ where $\tau_s$ is the state ${\widetilde}\tau^{k}$ sorted into decreasing order. It can be shown that for any $\varepsilon \in (-1,0)$, a solution $\tau_{s}$ exists (see Lemma \[eqn:tauSsolution\]). Then, $\tau_s$ needs to be unsorted and becomes ${\widetilde}\tau^k$ in the definition of the set $\ell_k$. The solution to $\Gamma\tau_s = b$ is the result of a single case of $\tau \in D \setminus {\mathcal{X}}$. As indicated above, to get a full definition of the set ${\mathcal{A}}$, the $N!$ sets $\ell_k$ should be computed. For arbitrary $N$, the set ${\mathcal{A}}$ is given as a collection of sets $\ell_{k}$ given by $${\mathcal{A}}= \bigcup_{k =1}^{N!}\ell_{k} \label{eqn:AN},$$ where, for each $k \in \{1,2,\dots,N!\}$, $\ell_k := \{\tau :\tau = {\widetilde}\tau^k + {\mathbf 1}s \in P_N, s \in {\mathbb{R}}\}.$ Lyapunov Stability {#sec:lyapunov} ------------------ Lyapunov theory for hybrid systems is employed to show that the set of points ${\mathcal{A}}$ is asymptotically stable. Our candidate Lyapunov-like function, which is defined below and uses the distance function, is built by observing that there exist points where the distance to ${\mathcal{A}}$ may increase during flows. This is due to the sets $\ell_{k}$ being a subset $P_{N}$. To avoid this issue, we define $${\widetilde}{\mathcal{A}}= \bigcup_{k=1}^{N!} {\widetilde}\ell_k \supset {\mathcal{A}}$$ where ${\widetilde}\ell_k$ is the extension of $\ell_k$ given by $$\begin{aligned} {\widetilde}\ell_k = \left\{\tau \in {\mathbb{R}}^{N} : \tau = {\widetilde}\tau^k + {\mathbf 1}s, s \in {\mathbb{R}}\right\}. \label{eqn:wtellk}\end{aligned}$$ Then, with this extended version of ${\mathcal{A}}$, the proposed candidate Lyapunov-like function for asymptotic stability of ${\mathcal{A}}$ for ${{{\mathcal}H}}_N$ is given by the locally Lipschitz function $$\label{eqn:lyapunov} V(\tau) = \min\{|\tau|_{{\widetilde}\ell_1},|\tau|_{{\widetilde}\ell_2}, \ldots ,|\tau|_{{\widetilde}\ell_k}, \ldots ,|\tau|_{{\widetilde}\ell_N!}\} \quad \forall \ \tau \in P_N \setminus {\mathcal{X}}$$ where, for some $k$, $|\tau|_{{\widetilde}\ell_k}$ is the distance between the point $\tau$ and the set ${\widetilde}\ell_k$.[^6] The following theorem establishes asymptotic stability of ${\mathcal{A}}$ for ${{{\mathcal}H}}_N$. We show that the change in $V$ during flows is zero and that at jumps we have a strict decrease of $V$; namely, $V(G(\tau)) - V(\tau) = -|{\varepsilon}| V(\tau)$. A key step in the proof is in using [@teel2012hybrid Theorem 8.2] on a restricted version of ${{{\mathcal}H}}_N$. \[thm:stability\] For every $N \in {\mathbb{N}}, N > 1$, ${\bar{\tau}}> 0,\omega > 0$, and $\varepsilon \in (-1,0)$, the hybrid system ${{{\mathcal}H}}_N$ is such that the compact set ${\mathcal{A}}$ is Let the set ${\mathcal{X}}_{v}$ define the $v$-inflation of ${\mathcal{X}}$ (defined in Lemma \[lem:defX\]), that is, the open set[^7] ${\mathcal{X}}_{v} := \{\tau \in {\mathbb{R}}^N : |\tau|_{{\mathcal{X}}} < v\}$, where $v \in (0,v^{*})$ and $v^{*} = \min_{x\in{\mathcal{X}}, y\in{\widetilde}{\mathcal{A}}} |x-y|$. 
Given any $v \in (0,v^{*})$, we now consider a restricted hybrid system ${\widetilde}{{{\mathcal}H}}_{N} = (f,{\widetilde}{C},G,{\widetilde}{D})$, where ${\widetilde}{C} := C \setminus {\mathcal{X}}_{v}$ and ${\widetilde}{D} := D \setminus {\mathcal{X}}_{v}$, which are closed. We establish that ${\widetilde}{\mathcal{A}}$ is an asymptotically stable set for ${\widetilde}{{{\mathcal}H}}_N$. Note that the continuous function $V$, given by , is defined as the minimum distance from $\tau$ to ${\widetilde}{\mathcal{A}}$, where ${\widetilde}{\mathcal{A}}$ is the union of $N!$ sets ${\widetilde}\ell_{k}$ in . To determine the change of $V$ during flows[^8], we consider the relationship between the flow map and the sets ${\widetilde}\ell_{k}$. The inner product between a vector pointing in the direction of the set ${\widetilde}\ell_{k}$ and the flow map on ${\widetilde}{C}$ satisfies $${\mathbf 1}^{\top}f(\tau) = {\mathbf 1}^{\top}(\omega{\mathbf 1}) = \omega N=|{\mathbf 1}||\omega {\mathbf 1}| = |{\mathbf 1}||f(\tau)|\cos \theta$$, which is only true if $\theta$ is zero. Therefore, the direction of the flow map and of the vector defining ${\widetilde}\ell_{k}$ are parallel, implying that the distance to the set ${\widetilde}{\mathcal{A}}$ is constant during flows. The change in $V$ during jumps is given by $V(G(\tau)) - V(\tau)$ for $\tau \in {\widetilde}D \setminus {\widetilde}{\mathcal{A}}$. Due to the fact that we can rearrange the components of $\tau \in P_N \setminus {\mathcal{X}}$, without loss of generality, we consider a single jump condition, namely, we consider $\tau$ such that ${\bar{\tau}}= \tau_{1} > \tau_{2} > \ldots > \tau_{N-1} > \tau_{N}$. Using the formulation in Section \[sec:AN\] and Lemma \[eqn:tauSsolution\], the elements of the vector ${\widetilde}\tau^{k}$ associated with ${\widetilde}\ell_{k}$ for this case of $\tau$ are given by ${\widetilde}\tau_{i}^{k} = \frac{\sum_{p=0}^{N-i}({\varepsilon}+1)^p}{\sum_{p=0}^{N-1}({\varepsilon}+1)^p}{\bar{\tau}}$, which by Lemma \[lem:consum1\] is equal to $\frac{({\varepsilon}+1)^{N - i +1} - 1}{({\varepsilon}+1)^{N} - 1}{\bar{\tau}}$. After the jump, $G(\tau)$ is single valued and is such that its elements are ordered as follows: $g_{2}(\tau) > g_{3}(\tau) > \ldots > g_{N}(\tau) > g_{1}(\tau) = 0.$ Specifically, the jump map is $G(\tau) = [0,(1+{\varepsilon})\tau_{2},\ldots, (1+{\varepsilon})\tau_{N}]^{\top}$. Then, the formulation in Section \[sec:AN\] and [Lemma \[eqn:tauSsolution\]]{} leads to a case of ${\widetilde}\tau^{k}$ denoted as ${\widetilde}\tau^{k'}$. By [Lemma \[lem:consum1\]]{}, the elements of the vector ${\widetilde}\tau^{k'}$ are given by ${\widetilde}\tau^{k'}_{1} = \frac{{\varepsilon}}{({\varepsilon}+1)^{N} - 1}{\bar{\tau}}$ and ${\widetilde}\tau^{k'}_{i} = \frac{({\varepsilon}+1)^{N - i +2} - 1}{({\varepsilon}+1)^{N} - 1}{\bar{\tau}}$ for $i > 1$. Due to the ordering of $\tau$ and $G(\tau)$, ${\widetilde}\tau^{k'}$ is a one-element shifted (to the right) version of ${\widetilde}\tau^{k}$. From the definition of ${\widetilde}\tau^{k}$ above, $V$ at $\tau$ reduces to $$V(\tau) = |\tau|_{{\widetilde}\ell_{k}} = \left|({\widetilde}\tau^{k} - \tau) - \frac{1}{N}(({\widetilde}\tau^{k} - \tau)^{\top}{\mathbf 1}){\mathbf 1}\right|$$ for some $k$. Note that $$({\widetilde}\tau^{k} - \tau)^{\top}{\mathbf 1}= \sum_{i = 1}^{N} {\widetilde}\tau^{k}_{i} - \sum_{i=1}^{N} \tau_{i}$$ reduces to $\sum_{i = 2}^{N} {\widetilde}\tau^{k}_{i} - \sum_{i=2}^{N} \tau_{i}$ since $\tau_{1} = {\widetilde}\tau_{1}^{k} = {\bar{\tau}}$. 
Using [Lemmas \[lem:consum1\] and \[lem:consum2\]]{}, it follows that $$\sum_{i = 2}^{N} {\widetilde}\tau^{k}_{i}= \frac{\sum_{i=2}^N \sum_{p=0}^{N-i}({\varepsilon}+1)^p}{\sum_{p=0}^{N-1}({\varepsilon}+1)^p}{\bar{\tau}}= \frac{(({\varepsilon}+1)^{N} - 1) - N{\varepsilon}}{{\varepsilon}(({\varepsilon}+1)^{N} - 1)}{\bar{\tau}}.$$ Then, the first element of the vector inside the norm in the expression of $V(\tau)$ is given as $$\begin{aligned} &({\widetilde}\tau^{k}_{1} - \tau_{1}) - \frac{1}{N}\left(\frac{(({\varepsilon}+1)^{N} - 1 )- N{\varepsilon}}{{\varepsilon}(({\varepsilon}+1)^{N} - 1)}{\bar{\tau}}- \sum_{i=2}^{N} \tau_{i}\right) \\ & \qquad\qquad\qquad\qquad= -\frac{(({\varepsilon}+1)^{N} - 1) - N{\varepsilon}}{{\varepsilon}N(({\varepsilon}+1)^{N} - 1)}{\bar{\tau}}+ \frac{1}{N}\sum_{i=2}^{N} \tau_{i},\end{aligned}$$ while the elements with $m \in \{2,3,\ldots,N\}$ are given by After the jump at $\tau$, since $G(\tau)$ is single valued, $V(G(\tau))$ is given by $$|G(\tau)|_{{\widetilde}\ell_{k'}} = \left|({\widetilde}\tau^{k'} - G(\tau)) - \frac{1}{N}(({\widetilde}\tau^{k'} - G(\tau))^{\top}{\mathbf 1}){\mathbf 1}\right|.$$ Note that $({\widetilde}\tau^{k'} - G(\tau))^{\top}{\mathbf 1}= \sum_{i = 1}^{N} {\widetilde}\tau^{k'}_{i} - \sum_{i = 1}^{N} g_{i}(\tau)$ reduces to $\sum_{i = 1}^{N} {\widetilde}\tau^{k'}_{i} - \sum_{i = 2}^{N} (1+ {\varepsilon})\tau_i$, since $g_{1}(\tau) = 0$ and $g_{i}(\tau) = (1+{\varepsilon})\tau_{i}$ for $i > 1$. Using [Lemmas \[lem:consum1\] and \[lem:consum2\]]{}, it follows that $$\begin{aligned} \sum_{i = 1}^{N} {\widetilde}\tau^{k'}_{i} &= \frac{\sum_{i=1}^N \sum_{p=0}^{N-i}({\varepsilon}+1)^p}{\sum_{p=0}^{N-1}({\varepsilon}+1)^p}{\bar{\tau}}\\ &= \frac{({\varepsilon}+1)(({\varepsilon}+1)^{N} - 1) - N{\varepsilon}}{{\varepsilon}(({\varepsilon}+1)^{N} - 1)}{\bar{\tau}}\end{aligned}$$ which leads to $$({\widetilde}\tau^{k'} - G(\tau))^{\top}{\mathbf 1}= \frac{({\varepsilon}+1)(({\varepsilon}+1)^{N} - 1) - N{\varepsilon}}{{\varepsilon}(({\varepsilon}+1)^{N} - 1)}{\bar{\tau}}- \sum_{i = 2}^{N} (1+{\varepsilon})\tau_{i}.$$ The first element inside the norm in $V(G(\tau))$ is given by $$\begin{aligned} &({\widetilde}\tau_{1}^{k'} - g_1(\tau)) - \frac{1}{N}\left(\frac{({\varepsilon}+1)(({\varepsilon}+1)^{N} - 1) - N{\varepsilon}}{{\varepsilon}(({\varepsilon}+1)^{N} - 1)}{\bar{\tau}}\right. \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \left. - \sum_{i = 2}^{N} (1+{\varepsilon})\tau_{i}\right) \\ & = \frac{{\varepsilon}}{({\varepsilon}+1)^{N} - 1}{\bar{\tau}}- \frac{({\varepsilon}+1)(({\varepsilon}+1)^{N} - 1) - N{\varepsilon}}{{\varepsilon}N(({\varepsilon}+1)^{N} - 1)}{\bar{\tau}}\\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad + \frac{1}{N}\sum_{i = 2}^{N} (1+{\varepsilon})\tau_{i} \\ &= (1+{\varepsilon})\left(-\frac{(({\varepsilon}+1)^{N} - 1) - N{\varepsilon}}{{\varepsilon}N(({\varepsilon}+1)^{N} - 1)}{\bar{\tau}}+ \frac{1}{N}\sum_{i = 2}^{N} \tau_{i}\right).\end{aligned}$$ For each element $m > 1$, it follows that Combining the expressions for each of the elements inside the norm of $V(G(\tau))$, it follows that $V(G(\tau)) = (1+{\varepsilon}) V(\tau)$. Then, the change during jumps is given by $V(G(\tau)) - V(\tau) = {\varepsilon}V(\tau)$ where ${\varepsilon}\in (-1,0)$. 
With the property of $V$ during flows established above, the change of $V$ along solutions is bounded during flows and jumps by the nonpositive functions $u_{{\widetilde}{C}}$ and $u_{{\widetilde}{D}}$, respectively, defined as follows: $u_{{\widetilde}{C}} (z)= 0$ for each $z \in {\widetilde}{C}$ and $u_{{\widetilde}{C}} (z)= -\infty$ otherwise; $u_{{\widetilde}{D}}(z) = {\varepsilon}V(z)$ for each $z \in {\widetilde}{D}$ and $u_{{\widetilde}{D}}(z) = -\infty$ otherwise. Using Lemma \[lem:BasicConds\], the fact that ${\widetilde}{C}$ and ${\widetilde}{D}$ are closed, and the fact that every maximal solution to ${\widetilde}{{{\mathcal}H}}$ is bounded and complete, by [@teel2012hybrid Theorem 8.2], every maximal solution to ${\widetilde}{{{\mathcal}H}}_N$ approaches the largest weakly invariant subset of $L_{V}(r') \cap {\widetilde}{C} \cap [L_{u_{{\widetilde}{C}}}(0) \cup (L_{u_{{\widetilde}{D}}}(0) \cap G(L_{u_{{\widetilde}{C}}}(0)))] = L_{V}(r') \cap {\widetilde}{C}$ for $r' \in V({\widetilde}{C})$. Since every maximal solution jumps an infinite number of times, the largest invariant set is obtained for $r' = 0$, due to the fact that $V(G(\tau)) - V(\tau) = {\varepsilon}V(\tau) < 0$ if $r' > 0$. Then, the largest invariant set is given by $L_{V}(0) \cap {\widetilde}{C} = {\widetilde}{\mathcal{A}}\cap {\widetilde}{C}$, which is identically equal to ${\mathcal{A}}$. Hence, the set ${\mathcal{A}}$ is attractive. Stability is guaranteed by the fact that $V$ is nonincreasing during flows and strictly decreasing during jumps. Then, the set ${\widetilde}{\mathcal{A}}$ is asymptotically stable for the hybrid system ${\widetilde}{{{\mathcal}H}}_{N}$. We have that ${\mathcal{A}}$ is (strongly) forward invariant and from Theorem 3.4 we know that ${\mathcal{A}}$ is uniformly attractive from a neighborhood of itself. Then, by Proposition 7.5 in [@teel2012hybrid], it follows that ${\mathcal{A}}$ is asymptotically stable. Note that the set of solutions to ${\widetilde}{{{\mathcal}H}}_{N}$ coincides with the set of solutions to ${{{\mathcal}H}}_{N}$ from $P_{N} \setminus {\mathcal{X}}_{v}$. Therefore, the set ${\mathcal{A}}$ is asymptotically stable for ${{{\mathcal}H}}_N$ with basin of attraction ${{\mathcal}B}_{{\mathcal{A}}} = P_{N} \setminus {\mathcal{X}}_{v}$. Since $v$ is arbitrary, it follows that the basin of attraction is equal to $P_{N} \setminus {\mathcal{X}}$.

Note that the jump map $G$, at points $\tau \in {\mathcal{X}}$, is set valued by definition of $g_i$ in (\[eqn:gi\]). From these points there exist solutions to ${{{\mathcal}H}}_N$ that jump out of ${\mathcal{X}}$. In fact, consider the case $\tau \in {\mathcal{X}}$. We have that $\tau_{i} = \tau_r$ for some $i,r \in I$. Then, after the jump it follows that $g_{i}(\tau) \in \{0, (1+{\varepsilon}){\bar{\tau}}\}$ and $g_{r}(\tau) \in \{0, (1+{\varepsilon}){\bar{\tau}}\}$, and there exist $g_{i}$ and $g_{r}$ such that $g_{i} = g_{r}$ or $g_{i} \neq g_{r}$. Since for every point in ${\mathcal{X}}$ there exists a solution that converges to ${\mathcal{A}}$ and also a solution that stays in ${\mathcal{X}}$, ${\mathcal{X}}$ is weakly forward invariant.[^9]

Characterization of Time of Convergence {#sec:timetoconverge}
---------------------------------------

In this section, we characterize the time to converge to a neighborhood of ${\mathcal{A}}$. The proposed upper bound on the time to converge depends on the initial distance to the set ${\widetilde}{\mathcal{A}}$ and on the parameters of the hybrid system $(\varepsilon,{\bar{\tau}})$.
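Before stating the bound, we note that the quantities entering it can be checked numerically. The proof of Theorem \[thm:stability\] gives the closed-form components ${\widetilde}\tau^{k}_{i} = \frac{({\varepsilon}+1)^{N-i+1} - 1}{({\varepsilon}+1)^{N} - 1}{\bar{\tau}}$ for the ordering $\tau_{1} = {\bar{\tau}}> \tau_{2} > \ldots > \tau_{N}$, and the remaining anchors are permutations of this vector. The following Python sketch (illustrative only; the chosen values of $N$, ${\varepsilon}$, ${\bar{\tau}}$, and the sample point are arbitrary) evaluates $V$ in (\[eqn:lyapunov\]) and verifies the per-jump contraction $V(G(\tau)) = (1+{\varepsilon})V(\tau)$ established above.

```python
import numpy as np
from itertools import permutations

def desync_point(N, eps, tau_bar):
    """Components of tilde-tau for the ordering tau_1 = tau_bar > tau_2 > ... > tau_N,
    using the closed form from the proof of Theorem [thm:stability]."""
    i = np.arange(1, N + 1)
    return tau_bar * ((eps + 1.0) ** (N - i + 1) - 1.0) / ((eps + 1.0) ** N - 1.0)

def dist_to_line(tau, anchor):
    """Distance from tau to the line {anchor + s*1 : s in R} (the set tilde-ell_k)."""
    d = anchor - tau
    ones = np.ones_like(tau)
    return np.linalg.norm(d - (d @ ones) / len(tau) * ones)

def V(tau, eps, tau_bar):
    """Lyapunov-like function: minimum distance over the N! permuted anchors
    (enumerating all permutations is fine for small N)."""
    base = desync_point(len(tau), eps, tau_bar)
    return min(dist_to_line(tau, np.array(p)) for p in permutations(base))

def jump(tau, eps, tau_bar):
    at = np.isclose(tau, tau_bar)
    return np.where(at, 0.0, (1.0 + eps) * tau)

eps, tau_bar = -0.3, 1.0
tau = np.array([1.0, 0.62, 0.21])          # a point in D (tau_1 at the threshold)
v_before = V(tau, eps, tau_bar)
v_after = V(jump(tau, eps, tau_bar), eps, tau_bar)
print(v_after, (1.0 + eps) * v_before)     # the two values should agree
```

This contraction, $V(G(\tau)) - V(\tau) = {\varepsilon}V(\tau)$, is precisely what drives the jump count appearing in the bound below.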
\[thm:timetoconverge\] For every $N \in {\mathbb{N}}$, $N > 1$, and every $c_1, c_2$ such that $\overline{c} > c_2 > c_1 > 0$ with $\overline{c} = \max_{x \in {\mathcal{X}}} |x|_{\widetilde{\mathcal{A}}}$, every maximal solution to ${{{\mathcal}H}}_N$ with initial condition $\tau(0,0) \in (P_{N} \setminus {\mathcal{X}}) \cap {\widetilde}L_{V}(c_2)$ is such that $V(\tau(t,j)) \leq c_{1}$ for each $(t,j) \in {\mathop{\rm dom}\nolimits}\tau$ with $t + j \geq M$, where $M = \left\lceil\frac{\log \frac{c_{2}}{c_{1}}}{ \log \frac{1}{1+\varepsilon}}\right\rceil\left(\frac{{\bar{\tau}}}{\omega}+1\right)$. Let $\tau_0 = \tau(0,0)$ and pick a maximal solution $\tau$ to ${{{\mathcal}H}}_{N}$ from $\tau_{0}$. At every jump time $(t_{j},j) \in {\mathop{\rm dom}\nolimits}\tau$, define $\bar{g}_1 = \tau(t_1,1)$, $\bar{g}_2 = \tau(t_2,2), \ldots, \bar{g}_J = \tau(t_J,J)$, for some $J \in {\mathbb{N}}$. From Theorem \[thm:stability\], we have that there is no change in the Lyapunov function during flows. Furthermore, we have that for each $\tau \in D \setminus {\mathcal{A}}$ the difference $V(G(\tau)) - V(\tau) = \varepsilon V(\tau)$ with $\varepsilon \in (-1,0)$. Since, for every $j$, $\tau(t_{j},j) \in D$, we have $V(\bar g_{1}) - V(\tau_{0}) = \varepsilon V(\tau_{0})$, which implies $V(\bar g_{1}) = (1+\varepsilon)V(\tau_{0})$. At the next jump, we have $V(\bar g_{2}) = (1+\varepsilon)V(\bar g_{1}) = (1+\varepsilon)^{2}V(\tau_{0})$. Proceeding in this way, after $J$ jumps we have $V(\bar g_{J}) = (1+\varepsilon)^{J}V(\tau_{0})$. From $V(\bar g_{J}) = (1+\varepsilon)^{J}V(\tau_{0})$, we want to find $J$ so that $V(\bar g_{J}) \leq c_{1}$ when $V(\tau_{0}) \leq c_{2}$. Considering the worst case for $V(\tau_{0})$, we want $(1+\varepsilon)^{J}c_{2} \leq c_{1}$, which implies $\frac{c_{2}}{c_{1}} \leq \left(\frac{1}{1+\varepsilon} \right)^{J}$, and therefore $J = \left\lceil\frac{\log \frac{c_{2}}{c_{1}}}{ \log \frac{1}{1+\varepsilon}}\right\rceil > 0.$ For each $j$, the time between jumps satisfies $t_1 - t_0 \leq \frac{{\bar{\tau}}}{\omega}, t_2 - t_1 \leq \frac{{\bar{\tau}}}{\omega}, \ldots, t_j - t_{j-1} \leq \frac{{\bar{\tau}}}{\omega}.$ Then, we have that after $J$ jumps, $ \sum_{j=1}^{J} t_{j} - t_{j-1} \leq J\frac{{\bar{\tau}}}{\omega}. $ With $t_{0} = 0$, the expression reduces to $t_{J} \leq J\frac{{\bar{\tau}}}{\omega} = \left\lceil\frac{\log \frac{c_{2}}{c_{1}}}{ \log \frac{1}{1+\varepsilon}}\right\rceil\frac{{\bar{\tau}}}{\omega}.$ Then, for $t+j \geq t_{J} + J$, the solution is at least $c_{1}$-close to the set ${\widetilde}{\mathcal{A}}$. Defining $M = t_{J} + J$, we then have $V(\tau(t,j)) \leq c_{1}$ for each $(t,j) \in {\mathop{\rm dom}\nolimits}\tau$ with $t + j \geq M$, which establishes the claim. Figure \[fig:timetoconverge\] shows the time to converge (divided by $\frac{{\bar{\tau}}}{\omega}+1$) versus $\varepsilon$ with constant $c_{2} = 0.99{\bar{\tau}}$ and varying values of $c_{1}$. As the figure indicates, the time to converge decreases as $|{\varepsilon}|$ increases, which confirms the intuition that the larger the jump the faster oscillators desynchronize. Robustness Analysis {#sec:robustness} ------------------- [[ Lemma \[lem:BasicConds\] establishes that the hybrid model of $N$ impulse-coupled oscillators satisfies the hybrid basic conditions. In light of this property, the asymptotic stability property of ${\mathcal{A}}$ for ${{{\mathcal}H}}_N$ is preserved under certain perturbations; i.e., asymptotic stability is robust [@teel2012hybrid]. In the next sections, we consider a perturbed version of ${{{\mathcal}H}}_N$ and present robust stability results. In particular, we consider generic perturbations to ${{{\mathcal}H}}_N$, and two different cases of perturbations only on the timer rates to allow for heterogeneous timers.]{}]{} ### Robustness to Generic Perturbations We start by revisiting the definition of perturbed hybrid systems in [@teel2012hybrid].
Given a hybrid system ${{{\mathcal}H}}$ and a function $\rho: {\mathbb{R}}^N \to {\mathbb{R}}_{\geq 0}$, the $\rho$-perturbation of ${{{\mathcal}H}}$, denoted ${{{\mathcal}H}}_\rho$, is the hybrid system $$\left\{\begin{array}{cc}x \in C_\rho & \quad \dot{x} \in F_\rho(x) \\ x \in D_\rho & \quad x^+ \in G_\rho(x) \\ \end{array} \right.$$ where $C_\rho$, $F_\rho$, $D_\rho$, and $G_\rho$ denote the perturbed data of ${{{\mathcal}H}}$. Using this definition, we can deduce a generic perturbed hybrid system modeling $N$ impulse-coupled oscillators. Then, for the hybrid system ${{{\mathcal}H}}_N$, we denote ${{{\mathcal}H}}_{N,\rho}$ as the $\rho$-perturbation of ${{{\mathcal}H}}_N$. Given the perturbation function $\rho : {\mathbb{R}}^N \to {\mathbb{R}}_{\geq 0}$, the perturbed flow map is given by where the perturbed flow set $C_\rho$ is given by For example, if $N = 2$ and $\rho(\tau) = \bar\rho > 0$ for all $\tau \in {\mathbb{R}}^N$, which would correspond to constant perturbations on the lower value and threshold, then $C_\rho = C + \bar\rho {\mathbb{B}}$. The perturbed jump map and jump set are defined as where $g_{i,\rho}$ is the $i$-th component of $G_\rho$. The following result establishes that the hybrid system ${{{\mathcal}H}}_N$ is robust to small perturbations. (robustness of asymptotic stability)\[thm:robustofAS\] If $\rho : {\mathbb{R}}^N \to {\mathbb{R}}_{\geq 0}$ is continuous and positive on ${\mathbb{R}}^{N} \setminus {\mathcal{A}}$, then ${\mathcal{A}}$ is semiglobally practically robustly ${\mathcal}{KL}$ asymptotically stable with basin of attraction $B_{{\mathcal{A}}} = P_N \setminus {\mathcal{X}}$, i.e., for every compact set $K \subset B_{{\mathcal{A}}}$ and every $\alpha > 0$, there exists $\delta \in (0,1)$ such that every maximal solution $\tau$ to ${{{\mathcal}H}}_{N,\delta\rho}$ from $K$ satisfies $|\tau(t,j)|_{{\mathcal{A}}} \leq \beta(|\tau(0,0)|_{{\mathcal{A}}},t+j) + \alpha$ for all $(t,j) \in {\mathop{\rm dom}\nolimits}\tau$. From Lemma \[lem:BasicConds\], the hybrid system ${{{\mathcal}H}}_N$ satisfies the hybrid basic conditions. Therefore, by [@teel2012hybrid Theorem 6.8] ${{{\mathcal}H}}_N$ is nominally well-posed and, moreover, by [@teel2012hybrid Proposition 6.28] is well-posed. From the proof of Theorem \[thm:stability\], we know that the set ${\mathcal{A}}$ is an asymptotically stable compact set for the hybrid system ${{{\mathcal}H}}_N$ with basin of attraction $B_{{\mathcal{A}}}$. Since by Lemma \[lem:solutions2HS\], every maximal solution is complete, then [@teel2012hybrid Theorem 7.20] implies that ${\mathcal{A}}$ is semiglobally practically robustly ${\mathcal}{KL}$ asymptotically stable. Section \[sec:jumpperturbs1\] showcases simulations of ${{{\mathcal}H}}_{N}$ with $\rho$-perturbations on the jump map. ### Robustness to Heterogeneous Timer Rates We consider the case when the continuous dynamic rates are perturbed in the form of for a given solution $\tau$. For example, consider the perturbation of the flow map given by $$\begin{aligned} f(\tau) = \omega {\mathbf 1}+ \Delta\omega \label{eqn:fdelta}\end{aligned}$$ where $\Delta\omega \in {\mathbb{R}}^n$ is a constant defining a perturbation from the natural frequencies of the impulse-coupled oscillators.
Then for some $k$, during flows, along a solution $\tau$ such that over $[t_{j},t_{j+1}]\times\{j\}$ satisfies $V(\tau(t,j)) = |\tau(t,j)|_{{\widetilde}\ell_{k}}$, it follows that $c$ reduces to $c(t,j) = \left(\frac{r_{\ell_{k}}^{\top}(\tau(t,j))(\frac{1}{N}{\bf \underline{1} - \bf I})}{|\tau(t,j)|_{\ell_{k}}}\right) \Delta\omega.$[^10] Furthermore, the norm of the hybrid arc $c$ can be bounded by a constant $\bar{c}$ given by $$\begin{aligned} \bar{c} = \left|\left(\frac{1}{N}\underline{\bf 1} - {\bf I}\right)\Delta\omega\right| \label{eqn:ctbound}.\end{aligned}$$ Building from this example, the following result provides properties of the distance to ${\widetilde}{\mathcal{A}}$ from solutions $\tau$ to ${{{\mathcal}H}}_N$ under generic perturbations on $f$ (not necessarily as in ). \[thm:vanishingC\] Suppose that the perturbation on the flow map of ${{{\mathcal}H}}_N$ is such that a perturbed solution $\tau$ satisfies, for each $j$ such that $\{t : (t,j) \in {\mathop{\rm dom}\nolimits}\tau\}$ has more than one point, $\frac{d}{dt} |\tau(t,j)|_{{\widetilde}{\mathcal{A}}} = c(t,j)$ for all $t \in \{t : (t,j) \in {\mathop{\rm dom}\nolimits}\tau\}$ [[ and $\tau(t,j) \in P_N \setminus {\mathcal{X}}$ for all $(t,j) \in {\mathop{\rm dom}\nolimits}\tau$]{}]{}, for some hybrid arc $c$ with ${\mathop{\rm dom}\nolimits}c = {\mathop{\rm dom}\nolimits}\tau$. Then, the following hold: - The asymptotic value of $|\tau(t,j)|_{{\widetilde}{\mathcal{A}}}$ satisfies $$\begin{aligned} \lim_{t+j \to \infty}|\tau(t,j)|_{{\widetilde}{\mathcal{A}}} \leq \lim_{t+j \to \infty} \sum_{i = 0}^{j} (1+{\varepsilon})^{j-i} \int_{t_i}^{t_{i+1}} c(t,j) dt\end{aligned}$$ - If there exists $\bar{c} > 0$ such that $|c(t,j)| \leq \bar{c}$ for each $(t,j) \in {\mathop{\rm dom}\nolimits}\tau$ then $$\begin{aligned} \lim_{t+j \to \infty}|\tau(t,j)|_{{\widetilde}{\mathcal{A}}} \leq \frac{\bar{c}{\bar{\tau}}}{|{\varepsilon}|\omega}. \label{thmeqn:distupperbound}\end{aligned}$$ - If ${\widetilde}{j} : {\mathbb{R}}_{\geq 0} \to {\mathbb{N}}$ is a function that [[ chooses the appropriate minimum $j$ such that $(t,j) \in {\mathop{\rm dom}\nolimits}\tau$]{}]{} for each time $t$ and $t \mapsto c(t,{\widetilde}{j}(t))$ is absolutely integrable, i.e., $\exists B$ such that then $$\begin{aligned} \lim_{t + j \to \infty} |\tau(t,j)|_{{\widetilde}{\mathcal{A}}} \leq \frac{B}{{\varepsilon}}. \label{eqn:limboundbeps}\end{aligned}$$ Consider a maximal solution $\tau$ to ${{{\mathcal}H}}_N$ with initial condition $\tau(0,0) \in P_N \setminus {\mathcal{X}}$. This proof uses the function $V$ from the proof of Theorem \[thm:stability\]. With $V$ equal to the distance from $\tau$ to the set ${\widetilde}{\mathcal{A}}$, then, for each $\tau \in D \setminus {\mathcal{X}}$, we have that $V(G(\tau)) - V(\tau) = {\varepsilon}V(\tau)$. 
Using the fact that $V(\tau) = |\tau|_{{\widetilde}{\mathcal{A}}}$ and the fact that, $G$ along the solution is single valued, it follows that $|\tau|_{{\widetilde}{\mathcal{A}}}$ after a jump can be equivalently written as By assumption, in between jumps, the distance to the set ${\widetilde}{\mathcal{A}}$ is such that $\frac{d}{dt}|\tau(t,j)|_{{\widetilde}{\mathcal{A}}} = c(t,j)$, which implies that at $t_{j+1}$ the distance to the desynchronization set is given by $$|\tau(t_{j+1},j)|_{{\widetilde}{\mathcal{A}}} = \int_{t_{j}}^{t_{j+1}} c(s,j) ds + |\tau(t_{j},j)|_{{\widetilde}{\mathcal{A}}}.$$ It follows that Then, proceeding in this way, we obtain $$\begin{aligned} |\tau(t_{j},j)|_{{\widetilde}{\mathcal{A}}} &= (1+{\varepsilon})^{j}|\tau(0,0)|_{{\widetilde}{\mathcal{A}}} \\ & \qquad \qquad \qquad +\sum_{i = 0}^{j-1}(1+{\varepsilon})^{j-i}\int_{t_{i}}^{t_{i+1}} c(s,i)ds. \end{aligned}$$ For the case of generic $t_{j+1} \geq t \geq t_j$, we have that $$\begin{aligned} |\tau(t,j)|_{{\widetilde}{\mathcal{A}}} = (1+{\varepsilon})^{j}|\tau(0,0)|_{{\widetilde}{\mathcal{A}}} + \sum_{i = 0}^{j}(1+{\varepsilon})^{j-i}\int_{t_{i}}^t c(s,i)ds.\end{aligned}$$ Since, we know that as either $t$ or $j$ goes to infinity, $j$ or $t$ go to infinity as well, respectively. The expression reduces to If $c(t,j) \leq \bar{c}$, it follows that Lastly, since this hybrid system has the property that for any maximal solution $\tau$ with $(t,j) \in {\mathop{\rm dom}\nolimits}\tau$, if $t$ approaches $\infty$ then the parameter $j$ also approaches $\infty$, the expression given by $\lim_{t + j \to \infty} |\tau(t,j)|_{{\widetilde}{\mathcal{A}}}$ can be simplified. To do this, we know that the series $\sum_{i=0}^j (1+{\varepsilon})^{j -i} = \frac{(1+{\varepsilon})^{j+1}-1}{{\varepsilon}}$ approaches $\frac{1}{|{\varepsilon}|}$ as $j \to \infty$. Since $1+{\varepsilon}>0$ for ${\varepsilon}\in (-1,0)$, the series is absolutely convergent and its partial sum $s_j = \sum_{i=0}^j (1+{\varepsilon})^{j -i}$ is such that $\{s_j\}^\infty_{j=m}$ is a nondecreasing sequence (for each $m$). This implies that $s_j \leq 1/|{\varepsilon}|$ for all $j$ and for each $m$. Then, it follows that $(1+{\varepsilon})^{j-i} \leq \frac{1}{|{\varepsilon}|}$ for every $j,i \in {\mathbb{N}}$. Since the expression is a function of $j$ only and, for complete solutions, $t$ is such that as $t \to \infty$, then $j \to \infty$, we obtain Numerical Analysis {#sec:numerics} ================== This section presents numerical results obtained from simulating ${{{\mathcal}H}}_N$. First, we present results for the nominal case of ${{{\mathcal}H}}_N$ given by . Then, we present results for ${{{\mathcal}H}}_N$ under different types of perturbations. The Hybrid Equations (HyEQ) Toolbox in [@Sanfelice.ea.13.HSCC] was used to compute the trajectories. Nominal Case ------------ The possible solutions to the hybrid system ${{{\mathcal}H}}_N$ fall into four categories: always desynchronized, asymptotically desynchronized, never desynchronized, and initially synchronized. The parameters used in these simulations are ${\bar{\tau}}= 1$ and $\varepsilon = -0.2$. A solution of ${{{\mathcal}H}}_N$ that starts in $P_N \setminus ({\mathcal{X}}\cup {\mathcal{A}})$ asymptotically converges to ${\mathcal{A}}$, as Theorem \[thm:timetoconverge\] indicates. show solutions to both ${{{\mathcal}H}}_{2}$ and ${{{\mathcal}H}}_{3}$ converging to their respective desynchronization sets.  
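Before examining these solutions in detail, note that the bound of Theorem \[thm:timetoconverge\] is straightforward to evaluate numerically. The following minimal Python sketch computes the number of jumps $J$ and the resulting bound $M = J\left(\frac{{\bar{\tau}}}{\omega}+1\right)$; it is not the simulation code used for the figures (those were generated with the HyEQ Toolbox), and the parameter values below, in particular $\omega$, are placeholders, so its output is illustrative only and need not coincide with the values of $M$ reported next.

```python
import math

def jumps_to_converge(c1, c2, eps):
    """Number of jumps J so that (1+eps)^J * c2 <= c1, for eps in (-1, 0)."""
    assert -1.0 < eps < 0.0 and 0.0 < c1 < c2
    return math.ceil(math.log(c2 / c1) / math.log(1.0 / (1.0 + eps)))

def convergence_bound(c1, c2, eps, tau_bar, omega):
    """Upper bound M = t_J + J on the hybrid time t + j needed to reach the
    sublevel set L_V(c1) from L_V(c2), using t_J <= J * tau_bar / omega."""
    J = jumps_to_converge(c1, c2, eps)
    t_J = J * tau_bar / omega
    return J, t_J + J

if __name__ == "__main__":
    # Placeholder parameters (assumptions for illustration only).
    eps, tau_bar, omega = -0.2, 1.0, 1.0
    for c1, c2 in [(0.1, 0.24), (0.1, 0.32)]:
        J, M = convergence_bound(c1, c2, eps, tau_bar, omega)
        print(f"c1={c1}, c2={c2}: J={J} jumps, M={M:.2f}")
```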
For ${{{\mathcal}H}}_2$, if $\tau(0,0) = [0,0.1]^\top$, then the initial sublevel set is ${\widetilde}L_{V}(c_{2})$ with $c_{2} = 0.24$. Using Theorem \[thm:timetoconverge\], the bound on the time to converge to the sublevel set ${\widetilde}L_{V}(c_{1})$ with $c_{1} = 0.1$ is $M = 7.84$. Figure \[fig:H2notinX2\] shows a solution to the system for 10 seconds of flow time. From the figure, it can be seen that $V(\tau(t,j)) \approx 0.1$ at $(t,j) = (3,4)$. Then, the property guaranteed by Theorem \[thm:timetoconverge\], namely, $V(\tau(t,j)) \leq c_{1}$ for each $(t,j)$ such that $t + j \geq M$, is satisfied. Figure \[fig:H3notinX3\] shows a solution and the distance of this solution to ${\mathcal{A}}$. Notice that the initial sublevel set is ${\widetilde}{L}_{V}(c_{2})$ with $c_{2} = 0.32$. From Theorem \[thm:timetoconverge\] it follows that the bound on the time to converge to ${\widetilde}{L}_{V}(c_{1})$ with $c_{1} = 0.1$ is given by $M = 10.14$, which is actually already satisfied at $(t,j) = (2.2,4)$. Additional simulations show solutions to ${{{\mathcal}H}}_N$ that asymptotically desynchronize for $N \in \{7,10\}$. Perturbed Case -------------- In this section, we present numerical results to validate the statements in Section \[sec:robustness\]. ### Simulations of ${{{\mathcal}H}}_N$ with perturbed jumps {#sec:jumpperturbs1} $\bullet$ [**Perturbation of the threshold in the jump set:**]{} We replace the jump set $D$ by $D_{\rho} := \{\tau : \exists i \in I \ s.t. \ \tau_i = {\bar{\tau}}+ \rho_i\}$ where $\rho_i \in [0, \bar{\rho}_i]$, $\bar{\rho}_i > 0$ for each $i \in I$. To avoid maximal solutions that are not complete, the flow set $C$ is replaced by $C_\rho := [0,{\bar{\tau}}+\rho_1]\times[0,{\bar{\tau}}+\rho_2] \times \ldots \times [0,{\bar{\tau}}+\rho_N]$. Furthermore, the components of the jump map are also replaced by $$g_{\rho_i} (\tau) = \left\{ \begin{array}{ll} 0 \qquad \qquad \ \ \ \ &\mbox{if } \tau_{i} = {\bar{\tau}}+\rho_i, \tau_r < {\bar{\tau}}+\rho_j \ \ \forall j \in I\setminus \{i\} \\ \{0 , \tau_{i}(1+\varepsilon) \} \ &\mbox{if } \tau_{i} = {\bar{\tau}}+\rho_i \ \exists j \in I \setminus \{i\} \ \mbox{s.t.} \ \tau_r = {\bar{\tau}}+\rho_j \\ (1+\varepsilon)\tau_{i} \qquad \ &\mbox{if } \tau_i < {\bar{\tau}}+\rho_i \ \exists j \in I \setminus \{i\} \ \mbox{s.t.} \ \tau_r = {\bar{\tau}}+\rho_j\end{array} \right. \label{eqn:gi_perturbed} .$$ $\bullet$ [**Perturbations on the reset component of the jump map:**]{} Under the effect of the perturbations considered in this case, instead of resetting $\tau_i$ to zero, the perturbed jump resets $\tau_i$ to a value $\rho_i \in {\mathbb{R}}_{\geq 0}$, for each $i \in I$. The perturbed hybrid system has the following data: and where, for each $i \in I$, the perturbed jump map is given by $$g_i (\tau) = \left\{ \begin{array}{l} \rho_i \qquad \qquad \ \ \ \ \mbox{if } \tau_{i} = {\bar{\tau}}, \tau_r < {\bar{\tau}}\ \ \forall j \in I\setminus \{i\} \\ \{\rho_i , \tau_{i}(1+\varepsilon) \} \ \mbox{if } \tau_{i} = {\bar{\tau}}\ \exists j \in I \setminus \{i\} \ \mbox{s.t.} \ \tau_r = {\bar{\tau}}\\ (1+\varepsilon)\tau_{i} \qquad \ \mbox{if } \tau_i < {\bar{\tau}}\ \exists j \in I \setminus \{i\} \ \mbox{s.t.} \ \tau_r = {\bar{\tau}}\end{array} \right. .\label{eqn:gi_resetperturbation}$$ This case of perturbations exemplifies Theorem \[thm:robustofAS\] with $\rho$ affecting only the jump map of ${{{\mathcal}H}}_N$.
Figures \[figs:ResetPerturb\_di\_equal\_di=0.2\] and \[figs:ResetPertub\_di\_notequal\_di\] show several simulations to this perturbation of ${{{\mathcal}H}}_N$. All of the simulations in this section use parameters $\omega = 1$, ${\bar{\tau}}= 3$, ${\varepsilon}= -0.3$, and $N = 2$. The first case of the perturbed jump map $G_\rho$ considered is for $\rho_1 = \rho_2 = 0.02$. Figure \[ResetPerturb\_di\_equal\_t1t2plot\_di=0.2\] shows a solution to the perturbed ${{{\mathcal}H}}_2$ from the initial condition $\tau(0,0) = [2.4,2.3]^\top$ on the $(\tau_1,\tau_2)$-plane. Notice that for $\tau \in D$ such that $\tau_i = {\bar{\tau}}$ the jump map resets $\tau_i$ to $\rho_i$ (red dashed line) and not to $0$ as in the unperturbed case. The solution for this case approaches a region around ${\widetilde}{\mathcal{A}}$, as Theorem \[thm:robustofAS\] guarantees. Figure \[ResetPerturb\_di\_equal\_timeplot\_di=0.2\] shows the distance to the set ${\widetilde}{\mathcal{A}}$ over time for 10 solutions of the perturbed system ${{{\mathcal}H}}_2$ with initial conditions $\tau(0,0) \in P_2\setminus {\mathcal{X}}_{2}$. This figure shows that solutions approach a distance of about $0.12$ after 25 seconds. Now, consider the case where $\rho_1 \neq \rho_2$. Figure \[figs:ResetPertub\_di\_notequal\_di\] shows the distance to ${\widetilde}{\mathcal{A}}$ for two sets of solutions with different values for $\rho_1$ and $\rho_2$. More specifically, Figure \[ResetPertub\_di\_NotEqual\_timeplot\_di=\[0.15,0.25\]\] shows the case of $\rho_1 = 0.15$ and $\rho_2 = 0.25$. For this case, it can be seen that the solutions converge after $\approx 28$ seconds of flow time and, after that time, satisfy $|\tau(t,j)|_{{\widetilde}{\mathcal{A}}} \leq 0.25$. Figure \[ResetPertub\_di\_NotEqual\_timeplot\_di=\[0.01,0.02\]\] shows the case of $\rho_1 = 0.02$ and $\rho_2 = 0.01$. For this case, this figure shows that, after $\approx 28$ seconds of flow time, the solutions satisfy $|\tau(t,j)|_{{\widetilde}{\mathcal{A}}} \leq 0.04$. These simulations validate Theorem \[thm:robustofAS\] with $\rho$ affecting only the jump map, verifying that the smaller the size of the perturbation the smaller the steady-state value of the distance to ${\widetilde}{\mathcal{A}}$. the component $(1+{\varepsilon})\tau_i$ of the jump map is perturbed, namely, we use $\tau_i^+ = (1+{\varepsilon}) \tau_i + \rho_i(\tau_i)$, where $\rho_i : {\mathbb{R}}_{\geq 0} \to P_N \setminus {\mathcal{X}}$ is a continuous function. The perturbed jump map $G_{\rho}$ has components $g_{\rho i}$ that are given as $g_{i}$ in but with $\tau_{i}(1+{\varepsilon}) + \rho_{i}(\tau_{i})$ replacing $\tau_{i}(1+{\varepsilon})$. Consider the case $\rho_i(\tau_{i}) = {\widetilde}\rho_i \tau_i$ with ${\widetilde}\rho_i \in (0,|{\varepsilon}|)$ and let ${\widetilde}{\varepsilon}_i = {\varepsilon}+ {\widetilde}\rho_i \in (-1,0)$. Then $\tau_i^+$ reduces to $\tau_i^+ = (1+{\widetilde}{\varepsilon}_i)\tau_i$ and the jump map $g_{\rho i}$ is given by with ${\widetilde}{\varepsilon}_{i}$ in place of ${\varepsilon}$. This type of perturbation is used to verify Theorem \[thm:robustofAS\] with $\rho$ affecting only the “bump” portion of the jump map. and \[figs:ResetBumpPerturb\_di\_notequal\_di\] show simulations to ${{{\mathcal}H}}_N$ with the parameters $\omega = 1$, ${\bar{\tau}}= 3$, ${\varepsilon}= -0.3$, and $N = 2$. 
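As a complement to these simulations, the following minimal event-driven Python sketch captures the perturbed-bump dynamics for $N = 2$ (the figures themselves were generated with the HyEQ Toolbox, not with this sketch). Since flows are linear, the state can be advanced analytically from one firing to the next. The initial condition and the particular values of ${\widetilde}\rho_i$ used below are assumptions for illustration, and the set-valued behavior at simultaneous firings (points in ${\mathcal{X}}$) is not modeled.

```python
import numpy as np

def simulate_bump_perturbed(tau0, eps_tilde, omega=1.0, tau_bar=3.0, num_jumps=20):
    """Event-driven sketch of N impulse-coupled timers: each timer flows at rate
    omega; when a timer reaches tau_bar it resets to 0 and every other timer i
    is "bumped" to (1 + eps_tilde[i]) * tau_i.  Simultaneous firings (the set X,
    where the jump map is set valued) are not treated here."""
    tau = np.asarray(tau0, dtype=float)
    eps_tilde = np.asarray(eps_tilde, dtype=float)
    t, history = 0.0, [(0.0, tau.copy())]
    for _ in range(num_jumps):
        dt = (tau_bar - tau.max()) / omega        # flow time until the next firing
        t += dt
        tau = tau + omega * dt
        firing = np.isclose(tau, tau_bar)         # timer(s) reaching the threshold
        tau[~firing] *= 1.0 + eps_tilde[~firing]  # bump the non-firing timers
        tau[firing] = 0.0                         # reset the firing timer
        history.append((t, tau.copy()))
    return history

# Hypothetical run for N = 2 with eps = -0.3 and bump perturbations
# rho_tilde = (0.15, 0.1), i.e., eps_tilde = (-0.15, -0.2).
for t, tau in simulate_bump_perturbed([2.4, 2.3], eps_tilde=[-0.15, -0.2])[:6]:
    print(f"t = {t:5.2f}, tau = {np.round(tau, 3)}")
```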
Consider the case of ${{{\mathcal}H}}_2$ with $G_\rho$ when ${\widetilde}\rho_1 = {\widetilde}\rho_2 = 0.1$, leading to ${\widetilde}{\varepsilon}_1 = {\widetilde}{\varepsilon}_2 = -0.2$. A solution on the $(\tau_1,\tau_2)$-plane for this case with initial condition $\tau(0,0) = [0.1,0.2]^\top$ is shown in the corresponding figure. Notice that the solution approaches a region around ${\mathcal{A}}$ (green line), as Theorem \[thm:robustofAS\] guarantees. Figure \[ResetBumpPertub\_di\_equal\_timeplot\_di=\[0.1,0.1\]\] shows the distance to the set ${\widetilde}{\mathcal{A}}$ over time for 10 solutions with initial conditions $\tau(0,0) \in C$. It shows that solutions approach a distance to ${\widetilde}{\mathcal{A}}$ of $\approx0.09$ after $\approx40$ seconds of flow time. Next, we consider the case of $G_{\rho}$ with ${\widetilde}{\varepsilon}_1 \neq {\widetilde}{\varepsilon}_2$. Figure \[ResetBumpPertub\_di\_NotEqual\_timeplot\_di=\[0.15,0.1\]\] shows the distance to ${\widetilde}{\mathcal{A}}$ for 10 solutions with perturbations given by ${\widetilde}\rho_1 = 0.15$ and ${\widetilde}\rho_2 = 0.1$. For this case, the distance to ${\widetilde}{\mathcal{A}}$ satisfies $|\tau(t,j)|_{{\widetilde}{\mathcal{A}}} \leq 0.3$ after $\approx40$ seconds of flow time. Figure \[fig:ResetBumpPertub\_di\_NotEqual\_timeplot\_di=\[0.02,0.01\]\] shows simulation results with ${\widetilde}\rho_1 = 0.02$ and ${\widetilde}\rho_2 = 0.01$. Notice that the smaller the value of the perturbation is, the closer the solutions get to the set ${\widetilde}{\mathcal{A}}$. For this case, after $\approx30$ seconds of flow time, the distance to ${\widetilde}{\mathcal{A}}$ satisfies $|\tau(t,j)|_{{\widetilde}{\mathcal{A}}} \leq 0.06$. These simulations validate Theorem \[thm:robustofAS\] with $\rho$ affecting only the jump map, verifying that the smaller the size of the perturbation, the smaller the steady-state value of the distance to ${\widetilde}{\mathcal{A}}$. ### Perturbations on the Flow Map {#sec:flowpertrubs} In this section, we consider a class of perturbations on the flow map. More precisely, consider the case when there exists a function $(t,j) \mapsto c(t,j)$ such that $c(t,j) \leq \bar{c}$ with $\bar{c}$ as in . Then, from Theorem \[thm:vanishingC\] with , we know that $$\begin{aligned} \lim_{t+j \to \infty} |\tau(t,j)|_{{\widetilde}{\mathcal{A}}} \leq \left|\frac{\bar{c}{\bar{\tau}}}{{\varepsilon}\omega}\right| \leq \left|\frac{\left|(\frac{1}{N}\underline{\bf 1} - {\bf I})\Delta\omega\right|{\bar{\tau}}}{{\varepsilon}\omega}\right|. \label{eqn:flowpert2}\end{aligned}$$ Figure  shows a simulation so as to verify this property. The parameters of this simulation are $N = 2$, $\omega = 1$, ${\varepsilon}= -0.3$, ${\bar{\tau}}= 4$, and $\Delta\omega = [0.120,0.134]^\top$. It follows from that $\overline{c} = 0.0105$. Then, from , it follows that $\lim_{t+j \to \infty}|\tau(t,j)|_{{\widetilde}{\mathcal{A}}} \leq 0.1047$. Specifically, Figure \[fig:c=constant\_tau1tau2plot\] shows a solution on the $(\tau_1,\tau_2)$-plane of the perturbed hybrid system ${{{\mathcal}H}}_2$ with initial condition $\tau(0,0) = [0,0.01]^\top$. This figure shows the solution (blue line) converging to a region around ${\widetilde}{\mathcal{A}}$ (between dash-dotted lines about ${\mathcal{A}}$ in green). Figure \[fig:c=constant\_disttimetraj\] shows the distance to the set ${\widetilde}{\mathcal{A}}$ of 10 solutions with initial conditions $\tau(0,0) \in C$ with a dashed line denoting the upper bound on the distance in .
Notice that all solutions are within this bound after approximately 15 seconds of flow time and stay within this region afterwards. Conclusion {#sec:conclusion} ========== We have shown that desynchronization in a class of impulse-coupled oscillators is an asymptotically stable and robust property. These properties are established within a solid framework for modeling and analysis of hybrid systems, which is amenable for the study of synchronization and desynchronization in other impulse-coupled oscillators in the literature. The main difficulty in applying these tools lies on the construction of a Lyapunov-like quantity certifying asymptotic stability. As we show here, invariance principles can be exploited to relax the conditions that those functions have to satisfy, so as to characterize convergence, stability, and robustness in the class of systems under study. Future directions of research include the study of nonlinear reset maps, such as those capturing the phase-response curve of spiking neurons, as well as impulse-coupled oscillators connected via general graphs. Appendix {#sec:appendix} ======== The following result derives the solution to $\Gamma\tau_{s} = b$ with $\Gamma$ given in and $b = {\bar{\tau}}{\mathbf 1}$ via Gaussian elimination. \[eqn:tauSsolution\] For each ${\varepsilon}\in (-1,0)$, the solution $\tau_s$ to $\Gamma\tau_{s} = b$ with $\Gamma$ given in and $b = {\bar{\tau}}{\mathbf 1}$ is such that its elements, denoted as $\tau_s^k$ for each $k \in \{1,2,\ldots,N\}$, are given by $\tau_{s}^{k} = \frac{\sum_{i=0}^{N-k}({\varepsilon}+1)^i}{\sum_{i=0}^{N-1}({\varepsilon}+1)^i}{\bar{\tau}}$. The $N \times N$ matrix in and the $N \times 1$ matrix $b = {\bar{\tau}}{\mathbf 1}$ leads to the augmented matrix $[\Gamma|b]$ given by To solve for $\tau_{s}^{k}$, we apply the Gauss-Jordan elimination technique to to remove the elements $-({\varepsilon}+1)$ above the diagonal. Starting from the $N$-th row to remove the $-({\varepsilon}+1)$ component in the $N-1$ row, and continuing up to the second row, gives Denoting the augmented matrix in as $[\Gamma'|b']$, with $\tau_{s}^{1} = {\bar{\tau}}$ and $\tau_{s}^{2} = \frac{\sum_{i=0}^{N-2}({\varepsilon}+1)^i}{\sum_{i=0}^{N-1}({\varepsilon}+1)^i}{\bar{\tau}}$, the solution for each element of $\tau_{s}^{k}$ with $k > 2$ can be derived from as $\Gamma'_{k,2}\tau^{2}_{s} + \tau^{k}_{s} = b'_{k}$ where $\Gamma'_{k,2}$ denotes the $(k,2)$ entry of $\Gamma'$. Noting that $\tau_{s}^{1}$ can be rewritten as $\tau_{s}^{1} = \frac{\sum_{i=0}^{N-1}({\varepsilon}+1)^i}{\sum_{i=0}^{N-1}({\varepsilon}+1)^i}{\bar{\tau}}$ leads to $\tau_{s}^{k} = \frac{\sum_{i=0}^{N-k}({\varepsilon}+1)^i}{\sum_{i=0}^{N-1}({\varepsilon}+1)^i}{\bar{\tau}}$. \[lem:consum1\] For each $x \neq 1$, and $m,n \in {\mathbb{N}}$ such that $n-1 \geq m$, the finite sum $\sum_{i=m}^{n-1} x^i$ satisfies $ \sum_{i=m}^{n-1} x^i = \frac{x^n - x^m}{x-1}. $ \[lem:consum2\] For each $x \neq 1$, [[ and each $m,N \in {\mathbb{N}}$ such that $N \geq m$]{}]{}, the finite sum $\sum_{n=m}^N\sum_{i=0}^{N-n}x^i$ satisfies [^1]: Department of Computer Engineering, University of California, Santa Cruz, CA 95064. Email: [seaphill,ricardo@ucsc.edu]{}. This research has been partially supported by the National Science Foundation under CAREER Grant no. ECS-1150306 and by the Air Force Office of Scientific Research under Grant no. FA9550-12-1-0366. [^2]: Cf. the model for synchronization in [@Mirollo.90.SIAMJAM.BiologicalOscillators] where $\varepsilon > 0$. 
[^3]: In [@mauroy:037122], a more general flow map and a jump map incrementing $\tau_i$ by ${\varepsilon}> 0$ are considered. [^4]: A set-valued mapping $G : {\mathbb{R}}^N{\rightrightarrows}{\mathbb{R}}^N$ is [*outer semicontinuous*]{} if its graph $\{(x,y): x \in {\mathbb{R}}^N, y \in G(x)\}$ is closed, see [@teel2012hybrid Lemma 5.10] and [@RockafellarWets98]. [^5]: Note that $G$ is single valued at each ${\widetilde}\tau^k \notin {\mathcal{X}}$. [^6]: The set ${\widetilde}\ell_k$ can be described as a straight line in ${\mathbb{R}}^n$ passing through a point ${\widetilde}\tau^k$ and with slope ${\mathbf 1}$. Then, $|\tau|_{{\widetilde}\ell_k}$ can be written as the general point-to-line distance $|({\widetilde}\tau^k - \tau) - 1/N (({\widetilde}\tau^k - \tau)^\top{\mathbf 1}){\mathbf 1}|$. [^7]: The set ${\mathcal{X}}_{v}$ is open since every point $\tau \in {\mathcal{X}}_{v}$ is an interior point of ${\mathcal{X}}_{v}$. [^8]: Its derivative can be computed using Clarke’s generalized gradient [@Clarke90]. [^9]: For example, consider the case $N = 2$. If $\tau(0,0) = [{\bar{\tau}},{\bar{\tau}}]^\top \in D$, then there are nonunique solutions due to the jump map begin set valued. It follows that after the jump, each $\tau_i$ can be mapped to any point in $\{0,\tau_i(1+{\varepsilon})\}$, which leads to any of the following four options of the states $(\tau_1,\tau_2)$ after such a jump: $(0,0),(0,{\bar{\tau}}(1+{\varepsilon})),({\bar{\tau}}(1+{\varepsilon}),0)$ or $({\bar{\tau}}(1+{\varepsilon}),{\bar{\tau}}(1+{\varepsilon}))$. If the state is mapped to either $(0,0)$ or $({\bar{\tau}}(1+{\varepsilon}),{\bar{\tau}}(1+{\varepsilon}))$, then it remains in ${\mathcal{X}}_2$. Conversely, if any of the other options are chosen, then $({\tau_1},{\tau_2})$ leaves ${\mathcal{X}}_2$ and converges to ${\mathcal{A}}$ asymptotically. [^10]: Let $r_{\ell_{k}}(\tau)$ be the vector defined by the minimum distance from $\tau$ to the line $\ell_{k}$. Then, it follows that $V(\tau) = (r_{\ell_{k}}^{\top}(\tau)r_{\ell_{k}}(\tau))^{\frac{1}{2}}$. To determine its change during flows, note that on $C\setminus ({\mathcal{X}}\cup {\mathcal{A}})$ the gradient is given by $ \nabla V(\tau) = \frac{\partial}{\partial \tau} \left( r_{\ell_{k}}^{\top}(\tau)r_{\ell_{k}}(\tau) \right)^{\frac{1}{2}} = \frac{\left(r_{\ell_{k}}^{\top}(\tau) \frac{\partial}{\partial \tau} r_{\ell_{k}}(\tau) \right)}{|\tau|_{\ell_{k}}} $ where each $j$-th entry of $\frac{\partial}{\partial \tau} r_{\ell_{k}}(\tau)$ is given by $ \frac{\partial}{\partial \tau} r^{j}_{\ell_{k}}(\tau) = \frac{\partial}{\partial \tau}\left(({\widetilde}\tau_j\null\hspace{-.08cm}^{k} - \tau_{j}) - \frac{1}{N}\sum_{i=1}^N({\widetilde}\tau_i\null\hspace{-.08cm}^k - \tau_i)^\top \right) = \left[\frac{1}{N}, \frac{1}{N},\ldots, \frac{1}{N}, -1 + \frac{1}{N}, \frac{1}{N}, \ldots, \frac{1}{N} \right] $ – the term $-1 + \frac{1}{N}$ corresponds to the $j$-th element of the vector. It follows that $\frac{\partial}{\partial \tau} r_{\ell_{k}}(\tau) = \frac{1}{N}{\bf \underline{1}} - {\bf I}$. Then, for each $\tau \in C \setminus {\mathcal{X}}$, $ \langle \nabla V(\tau), f(\tau) \rangle = \left(\frac{r_{\ell_{k}}^{\top}(\tau)(\frac{1}{N}{\bf \underline{1} - \bf I})}{|\tau|_{\ell_{k}}}\right)f(\tau) $.
{ "pile_set_name": "ArXiv" }
--- abstract: 'This paper considers the problem of simultaneously communicating two messages, a high-security message and a low-security message, to a legitimate receiver, referred to as the security embedding problem. An information-theoretic formulation of the problem is presented. A coding scheme that combines rate splitting, superposition coding, nested binning and channel prefixing is considered and is shown to achieve the secrecy capacity region of the channel in several scenarios. Specifying these results to both scalar and independent parallel Gaussian channels (under an average individual per-subchannel power constraint), it is shown that the high-security message can be embedded into the low-security message at full rate (as if the low-security message does not exist) without incurring any loss on the overall rate of communication (as if both messages are low-security messages). Extensions to the wiretap channel II setting of Ozarow and Wyner are also considered, where it is shown that “perfect" security embedding can be achieved by an encoder that uses a two-level coset code.' author: - 'Hung D. Ly, Tie Liu, and Yufei Blankenship[^1]' title: Security Embedding Codes --- Introduction ============ Physical layer security has been a very active area of research in information theory. See [@LPS-M09] and [@LT-M10] for overviews of recent progress in this field. A basic model of physical layer security is a wiretap/broadcast channel [@Wyn-BSTJ75; @CK-IT78] with two receivers, a legitimate receiver and an eavesdropper. Both the legitimate receiver and the eavesdropper channels are assumed to be *known* at the transmitter. By exploring the (statistical) difference between the legitimate receiver channel and the eavesdropper channel, one may design coding schemes that can deliver a message reliably to the legitimate receiver while keeping it asymptotically perfectly secret from the eavesdropper. While assuming the transmitter’s knowledge of the legitimate receiver channel might be reasonable (particularly when a feedback link is available), assuming that the transmitter knows the eavesdropper channel is *unrealistic* in most scenarios. This is mainly because the eavesdropper is an *adversary*, who usually has no incentive to help the transmitter to acquire its channel state information. Hence, it is critical that physical layer security techniques are designed to withstand the *uncertainty* of the eavesdropper channel. In this paper, we consider a communication scenario where there are *multiple* possible realizations for the eavesdropper channel. Which realization will actually occur is *unknown* to the transmitter. Our goal is to design coding schemes such that the number of *secure* bits delivered to the legitimate receiver depends on the *actual* realization of the eavesdropper channel. More specifically, when the eavesdropper channel realization is weak, *all* bits delivered to the legitimate receiver need to be secure. In addition, when the eavesdropper channel realization is strong, a prescribed *part* of the bits needs to *remain* secure. We call such codes *security embedding codes*, referring to the fact that high-security bits are now embedded into the low-security ones. We envision that such codes are naturally useful for the secrecy communication scenarios where the information bits are *not* created equal: some of them have more security priorities than the others and hence require stronger security protections during communication. 
For example, in real wireless communication systems, control plane signals have higher secrecy requirement than data plane transmissions, and signals that carry users’ identities and cryptographic keys require stronger security protections than the other signals. A key question that we consider is at what expense one may allow part of the bits to enjoy stronger security protections. Note that a “naive" security embedding scheme is to design two separate secrecy codes to provide two different levels of security protections, and apply them to two separate parts of the information bits. In this scheme, the high-security bits are protected using a stronger secrecy code and hence are communicated at a lower rate. The overall communication rate is a *convex* combination of the low-security bit rate and the high-security bit rate and hence is lower than the low-security bit rate. Moreover, this rate loss becomes larger as the portion of the high-security bits becomes larger and the additional security requirement (for the high-security bits) becomes higher. The main result of this paper is to show that it is possible to have a significant portion of the information bits enjoying additional security protections *without* sacrificing the overall communication rate. This further justifies the name “security embedding," as having part of the information bits enjoying additional security protections is now only an added bonus. More specifically, in this paper, we call a secrecy communication scenario *embeddable* if a *nonzero* fraction of the information bits can enjoy additional security protections without sacrificing the overall communication rate, and we call it *perfectly embeddable* if the high-security bits can be communicated at *full* rate (as if the low-security bits do not exist) without sacrificing the overall communication rate. Key to achieve optimal security embedding is to *jointly* encode the low-security and high-security bits (as opposed to separate encoding as in the naive scheme). In particular, the low-security bits can be used as (part of) the *transmitter randomness* to protect the high-security bits (when the eavesdropper channel realization is strong); this is a key feature of our proposed security embedding codes. The rest of the paper is organized as follows. In Sec. \[sec:wtc\], we briefly review some basic results on the secrecy capacity and optimal encoding scheme for several classical wiretap channel settings. These results provide performance and structural benchmarks for the proposed security embedding codes. In Sec. \[sec:mswtc\], an information-theoretic formulation of the security embedding problem is presented, which we term as *two-level security wiretap channel*. A coding scheme that combines rate splitting, superposition coding, nested binning and channel prefixing is proposed and is shown to achieve the secrecy capacity region of the channel in several scenarios. Based on the results of Sec. \[sec:mswtc\], in Sec. \[sec:gmswtc\] we study the engineering communication models with real channel input and additive white Gaussian noise, and show that both scalar and independent parallel Gaussian (under an individual per-subchannel average power constraint) two-level security wiretap channels are *perfectly embeddable*. In Sec. \[sec:mswtc2\], we extend the results of Sec. \[sec:mswtc\] to the *wiretap channel II* setting of Ozarow and Wyner [@OW-BSTJ84], and show that two-level security wiretap channels II are also *pefectly embeddable*. Finally, in Sec. 
\[sec:con\], we conclude the paper with some remarks. Wiretap Channel: A Review {#sec:wtc} ========================= Consider a discrete memoryless wiretap channel with transition probability $p(y,z|x)$, where $X$ is the channel input, and $Y$ and $Z$ are the channel outputs at the legitimate receiver and the eavesdropper, respectively (see Fig. \[fig:wtc\]). The transmitter has a message $M$, uniformly drawn from $\{1,\ldots,2^{nR}\}$ where $n$ is the block length and $R$ is the rate of communication. The message $M$ is intended for the legitimate receiver, but needs to be kept asymptotically perfectly secret from the eavesdropper. Mathematically, this secrecy constraint can be written as $$\frac{1}{n}I(M;Z^n) \rightarrow 0 \label{eq:cons1}$$ in the limit as $n \rightarrow \infty$, where $Z^n=(Z[1],\ldots,Z[n])$ is the collection of the channel outputs at the eavesdropper during communication. A communication rate $R$ is said to be *achievable* if there exists a sequence of codes of rate $R$ such that the message $M$ can be reliably delivered to the legitimate receiver while satisfying the asymptotic perfect secrecy constraint . The largest achievable rate is termed as the *secrecy capacity* of the channel. A discrete memoryless wiretap channel $p(y,z|x)$ is said to be *degraded* if $X \rightarrow Y \rightarrow Z$ forms a Markov chain in that order. The secrecy capacity $C_s$ of a degraded wiretap channel was characterized by Wyner [@Wyn-BSTJ75] and can be written as $$C_s = \max_{p(x)} \left[I(X;Y)-I(X;Z)\right] \label{eq:Cs-Wyn}$$ where the maximization is over all possible input distributions $p(x)$. The scheme proposed in [@Wyn-BSTJ75] to achieve the secrecy capacity is *random binning*, which can be described as follows. Consider a codebook of $2^{n(R+T)}$ codewords, each of length $n$. The codewords are partitioned into $2^{nR}$ bins, each containing $2^{nT}$ codewords. Given a message $m$ (which is uniformly drawn from $\{1,\ldots,2^{nR}\}$), the encoder *randomly* and uniformly chooses a codeword $x^n$ in the $m$th bin and sends it through the channel. The legitimate receiver needs to decode the entire codebook (and hence recover the transmitted message $m$), so the overall rate $R+T$ cannot be too high. On the other hand, the rate $T$ of the sub-codebooks in each bin represents the amount of external randomness injected by the transmitter (transmitter randomness) into the channel and hence needs to be sufficiently large to confuse the eavesdropper. With an appropriate choice of the codebooks and the partitions of bins, it was shown in [@Wyn-BSTJ75] that any communication rate $R$ less than the secrecy capacity is achievable by the aforementioned random binning scheme. For a *general* discrete memoryless wiretap channel $p(y,z|x)$ where the channel outputs $Y$ and $Z$ are *not* necessarily ordered, the random binning scheme of [@Wyn-BSTJ75] is *not* necessarily optimal. In this case, the secrecy capacity $C_s$ of the channel was characterized by Csiszár and Körner [@CK-IT78] and can be written as $$C_s = \max_{p(v,x)} \left[I(V;Y)-I(V;Z)\right] \label{eq:Cs-CK}$$ where $V$ is an auxiliary random variable satisfying the Markov chain $V \rightarrow X \rightarrow (Y,Z)$. The scheme proposed in [@CK-IT78] is to first *prefix* the channel input $X$ by $V$ and view $V$ as the input of the *induced* wiretap channel $p(y,z|v)=\sum_{x}p(y,z|x)p(x|v)$. 
Applying the random binning scheme of [@Wyn-BSTJ75] to the induced wiretap channel $p(y,z|v)$ proves the achievability of rate $I(V;Y)-I(V;Z)$ for any given joint auxiliary-input distribution $p(v,x)$. In communication engineering, communication channels are usually modeled as discrete-time channels with real input and additive white Gaussian noise. Consider a (scalar) Gaussian wiretap channel where the channel outputs at the legitimate receiver and the eavesdropper are given by $$\begin{array}{rcl} Y & = & \sqrt{a}X+N_1\\ Z & = & \sqrt{b}X+N_2. \end{array} \label{eq:Ch-GWTC}$$ Here, $X$ is the channel input which is subject to the average power constraint $$\frac{1}{n}\sum_{i=1}^n (X[i])^2 \leq P \label{eq:APC}$$ $a$ and $b$ are the channel gains for the legitimate receiver and the eavesdropper channel respectively, and $N_1$ and $N_2$ are additive white Gaussian noise with zero means and *unit* variances. The secrecy capacity of the channel was characterized in [@LH-IT78] and can be written as $$C_s(P,a,b) = \left[\frac{1}{2}\log(1+aP)-\frac{1}{2}\log(1+bP)\right]^+ \label{eq:Cs-LH}$$ where $[x]^+:=\max(0,x)$. Note from that $C_s(P,a,b)>0$ if and only if $a>b$. That is, for the Gaussian wiretap channel , asymptotic perfect secrecy communication is possible if and only if the legitimate receiver has a larger channel gain than the eavesdropper. In this case, we can equivalently write the channel output $Z$ at the eavesdropper as a degraded version of the channel output $Y$ at the legitimate receiver, and the random binning scheme of [@Wyn-BSTJ75] with *Gaussian* codebooks and *full* transmit power achieves the secrecy capacity of the channel. A closely related engineering scenario consists of a bank of $L$ independent parallel scalar Gaussian wiretap channels [@LYT-All06]. In this scenario, the channel outputs at the legitimate receiver and the eavesdropper are given by $Y=(Y_1,\ldots,Y_L)$ and $Z=(Z_1,\ldots,Z_L)$ where $$\begin{array}{rcl} Y_l & = & \sqrt{a_l}X_l+N_{1,l}\\ Z_l & = & \sqrt{b_l}X_l+N_{2,l} \end{array}, \quad l=1,\ldots,L. \label{eq:Ch-PGWTC}$$ Here, $X_l$ is the channel input for the $l$th subchannel, $a_l$ and $b_l$ are the channel gains for the legitimate receiver and the eavesdropper channel respectively in the $l$th subchannel, and $N_{1,l}$ and $N_{2,l}$ are additive white Gaussian noise with zero means and *unit* variances. Furthermore, $(N_{1,l},N_{2,l})$ are independent for $l=1,\ldots,L$ so all $L$ subchannels are independent of each other. Two different types of power constraints have been considered: the average individual per-subchannel power constraint $$\frac{1}{n}\sum_{i=1}^n (X_l[i])^2 \leq P_l, \quad l=1,\ldots,L \label{eq:pcons-s}$$ and the average total power constraint $$\sum_{l=1}^{L}\left[\frac{1}{n}\sum_{i=1}^n (X_l[i])^2\right] \leq P. \label{eq:pcons-t}$$ Under the average individual per-subchannel power constraint , the secrecy capacity of the independent parallel Gaussian wiretap channel is given by [@LYT-All06] $$C_s(\{P_l,a_l,b_l\}_{l=1}^{L})=\sum_{l=1}^{L}C_s(P_l,a_l,b_l) \label{eq:Cs-LYT}$$ where $C_s(P,a,b)$ is defined as in . Clearly, any communication rate less than the secrecy capacity can be achieved by using $L$ separate scalar Gaussian wiretap codes, each for one of the $L$ subchannels. 
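The closed-form expressions above are straightforward to evaluate numerically. The following minimal Python sketch computes $C_s(P,a,b)$ and the corresponding sum for the independent parallel case under the average individual per-subchannel power constraint; the channel gains and powers used are hypothetical values chosen for illustration, and logarithms are taken base $2$ so that rates are in bits per channel use.

```python
import math

def secrecy_capacity(P, a, b):
    """C_s(P, a, b) = [0.5*log(1 + a*P) - 0.5*log(1 + b*P)]^+, in bits per channel use."""
    return max(0.0, 0.5 * math.log2(1.0 + a * P) - 0.5 * math.log2(1.0 + b * P))

def parallel_secrecy_capacity(powers, a_gains, b_gains):
    """Sum of per-subchannel secrecy capacities (individual power constraints)."""
    return sum(secrecy_capacity(P, a, b) for P, a, b in zip(powers, a_gains, b_gains))

if __name__ == "__main__":
    # Hypothetical parameters: a positive secrecy rate requires a > b.
    print(secrecy_capacity(P=10.0, a=1.0, b=0.25))                        # ~0.83 bits/use
    print(parallel_secrecy_capacity([5.0, 5.0], [1.0, 0.8], [0.3, 2.0]))  # ~0.63; 2nd subchannel gives 0
```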
The secrecy capacity, $C_s(P,\{a_l,b_l\}_{l=1}^{L})$, under the average total power constraint is given by $$C_s(P,\{a_l,b_l\}_{l=1}^{L}) =\max_{(P_1,\ldots,P_L)}\sum_{l=1}^{L}C_s(P_l,a_l,b_l)$$ where the maximization is over all possible power allocations $(P_1,\ldots,P_L)$ such that $\sum_{l=1}^{L}P_l \leq P$. A waterfilling-like solution for the optimal power allocation was derived in [@LYT-All06 Th. 1], which provides an efficient way to numerically calculate the secrecy capacity $C_s(P,\{a_l,b_l\}_{l=1}^{L})$. Two-Level Security Wiretap Channel {#sec:mswtc} ================================== Channel Model {#sec:mswtc-mod} ------------- Consider a discrete memoryless broadcast channel with three receivers and transition probability $p(y,z_1,z_2|x)$. The receiver that receives the channel output $Y$ is a legitimate receiver. The receivers that receive the channel outputs $Z_1$ and $Z_2$ are two possible realizations of an eavesdropper. Assume that the channel output $Z_2$ is *degraded* with respect to the channel output $Z_1$, i.e., $$X \rightarrow Z_1 \rightarrow Z_2 \label{eq:mk}$$ forms a Markov chain in that order. Therefore, the receiver that receives the channel output $Z_1$ represents a stronger realization of the eavesdropper channel than the receiver that receives the channel output $Z_2$. The transmitter has two independent messages: a high-security message $M_1$ uniformly drawn from $\{1,\ldots,2^{nR_1}\}$ and a low-security message $M_2$ uniformly drawn from $\{1,\ldots,2^{nR_2}\}$. Here, $n$ is the block length, and $R_1$ and $R_2$ are the corresponding rates of communication. Both messages $M_1$ and $M_2$ are intended for the legitimate receiver, and need to be kept asymptotically perfectly secure when the eavesdropper realization is weak, i.e., $$\frac{1}{n}I(M_1,M_2;Z_2^n) \rightarrow 0 \label{eq:cons2-1}$$ in the limit as $n \rightarrow \infty$. In addition, when the eavesdropper realization is strong, the high-security message $M_1$ needs to remain asymptotically perfectly secure, i.e., $$\frac{1}{n}I(M_1;Z_1^n) \rightarrow 0 \label{eq:cons2-2}$$ in the limit as $n \rightarrow \infty$. A rate pair $(R_1,R_2)$ is said to be *achievable* if there is a sequence of codes of rate pair $(R_1,R_2)$ such that both messages $M_1$ and $M_2$ can be reliably delivered to the legitimate receiver while satisfying the asymptotic perfect secrecy constraints and . The collection of all possible achievable rate pairs is termed as the *secrecy capacity region* of the channel. Fig. \[fig:mswtc\] illustrates this communication scenario, which we term as *two-level security wiretap channel*. The above setting of two-level security wiretap channel is closely related to the traditional wiretap channel setting of [@Wyn-BSTJ75; @CK-IT78]. More specifically, without the additional secrecy constraint on the high-security message $M_1$, we can simply view the messages $M_1$ and $M_2$ as a single (low-security) message $M$ with rate $R_1+R_2$. And the problem reduces to communicating the message $M$ over the traditional wiretap channel with transition probability $p(y,z_2|x)=\sum_{z_1}p(y,z_1,z_2|x)$. By the secrecy capacity expression , the maximum achievable $R_1+R_2$ is given by $$\max_{p(v,x)}\left[I(V;Y)-I(V;Z_2)\right] \label{eq:Cs_CK2}$$ where $V$ is an auxiliary random variable satisfying the Markov chain $V \rightarrow X \rightarrow (Y,Z_2)$. 
Similarly, without needing to communicate the low-security message $M_2$ (i.e., $R_2=0$), the secrecy constraint reduces to $(1/n)I(M_1;Z_2^n) \rightarrow 0$ which is implied by the secrecy constraint since $I(M_1;Z_2^n) \leq I(M_1;Z_1^n)$ due to the Markov chain . In this case, the problem reduces to communicating the high-security message $M_1$ over the traditional wiretap channel with transition probability $p(y,z_1|x)=\sum_{z_2}p(y,z_1,z_2|x)$. Again, by the secrecy capacity expression , the maximum achievable $R_1$ is given by $$\max_{p(w,x)}\left[I(W;Y)-I(W;Z_1)\right] \label{eq:Cs_CK3}$$ where $W$ is an auxiliary random variable satisfying the Markov chain $W \rightarrow X \rightarrow (Y,Z_1)$. Based on the above connections, we may conclude that a two-level security wiretap channel $p(y,z_1,z_2|x)$ is *embeddable* if there exists a sequence of coding schemes with a rate pair $(R_1,R_2)$ such that $R_1+R_2$ is equal to and $R_1>0$, and it is *perfectly embeddable* if there exists a sequence of coding schemes with a rate pair $(R_1,R_2)$ such that $R_1+R_2$ is equal to and $R_1$ is equal to . An important special case of the two-level security wiretap channel problem considered here is when the channel output $Z_2$ is a constant signal. In this case, the secrecy constraint becomes *obsolete*, and the low-security message $M_2$ becomes a *regular* message without any secrecy constraint. The problem of simultaneously communicating a regular message and a confidential message over a discrete memoryless wiretap channel was first considered in [@LLPS-ITS11], where a single-letter characterization of the capacity region was established. For the *general* two-level security wiretap channel problem that we consider here, both high-security message $M_1$ and low-security message $M_2$ are subject to asymptotic perfect secrecy constraints, which makes the problem potentially much more involved. Main Results ------------ The following theorem provides two *sufficient* conditions for establishing the achievability of a rate pair for a given discrete memoryless two-level security wiretap channel. \[thm:DM1\] Consider a discrete memoryless two-level security wiretap channel with transition probability $p(y,z_1,z_2|x)$ that satisfies the Markov chain . A nonnegative pair $(R_1,R_2)$ is an achievable rate pair of the channel if it satisfies $$\begin{array}{rll} R_1 & \leq & I(X;Y)-I(X;Z_1)\\ R_1+R_2 & \leq & I(X;Y)-I(X;Z_2) \end{array} \label{eq:DM1-1}$$ for some input distribution $p(x)$. More generally, a nonnegative pair $(R_1,R_2)$ is an achievable rate pair of the channel if it satisfies $$\begin{array}{rll} R_1 & \leq & I(V;Y|U)-I(V;Z_1|U)\\ R_1+R_2 & \leq & I(V;Y)-I(V;Z_2) \end{array} \label{eq:DM1-2}$$ for some joint distribution $p(u,v,x)$, where $U$ and $V$ are auxiliary random variables satisfying the Markov chain $U \rightarrow V \rightarrow X \rightarrow (Y,Z_1,Z_2)$ and such that $I(U;Y) \geq I(U;Z_2)$. Clearly, the sufficient condition can be obtained from by choosing $V=X$ and $U$ to be a constant. Hence, is a more general sufficient condition than . The sufficient condition can be proved by considering a *nested* binning scheme that uses the low-security message $M_2$ as part of the transmitter randomness to protect the high-security message $M_1$ (when the eavesdropper channel realization is strong). 
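Before turning to the more general sufficient condition, the following toy Python sketch illustrates the nested binning structure behind this argument: bins are indexed by the high-security message, subbins by the low-security message, and the codeword index within a subbin supplies the transmitter randomness. The block length, the bit counts playing the roles of $nR_1$, $nR_2$ and $nT$, and the binary alphabet are placeholder choices for illustration; this is not a capacity-achieving construction.

```python
import random

random.seed(0)

# Toy nested binning codebook: 2^(n(R1+R2+T)) random binary codewords of length n,
# organized into 2^(nR1) bins (one per m1), each split into 2^(nR2) subbins (one per m2),
# each subbin holding 2^(nT) codewords (transmitter randomness).
n = 12                        # block length (placeholder)
nR1, nR2, nT = 2, 3, 4        # n*R1, n*R2, n*T in bits (placeholders)

codebook = {
    (j, k, l): tuple(random.randint(0, 1) for _ in range(n))
    for j in range(2 ** nR1)  # bin index     <-> high-security message m1
    for k in range(2 ** nR2)  # subbin index  <-> low-security message m2
    for l in range(2 ** nT)   # codeword index <-> transmitter randomness
}

def encode(m1, m2):
    """Pick a codeword uniformly at random from subbin (m1, m2)."""
    l = random.randrange(2 ** nT)
    return codebook[(m1, m2, l)]

# Against the stronger eavesdropper Z1, the whole bin of size 2^(n(R2+T)) acts as
# randomness protecting m1 (both m2 and l are unknown to Z1); against the weaker
# eavesdropper Z2, the subbin of size 2^(nT) protects the pair (m1, m2).
x = encode(m1=1, m2=5)
print(len(codebook), x[:8])
```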
The more general sufficient condition can be proved by considering a more complex coding scheme that combines rate splitting, superposition coding, nested binning and channel prefixing. A detailed proof of the theorem is provided in Sec. \[pf:thm1\]. The following corollary provides sufficient conditions for establishing that a two-level security wiretap channel is (perfectly) embeddable. The conditions are given in terms of the existence of a joint auxiliary-input random triple and are immediate consequences of Theorem \[thm:DM1\]. A two-level security wiretap channel $p(y,z_1,z_2|x)$ is *embeddable* if there exists a pair of auxiliary random variables $U$ and $V$ satisfying the Markov chain $U \rightarrow V \rightarrow X \rightarrow (Y,Z_1,Z_2)$ and such that $I(U;Y) \geq I(U;Z_2)$, $p(v,x)$ is an *optimal* solution to the maximization program , and $I(V;Y|U)-I(V;Z_1|U)>0$, and it is *perfectly embeddable* if there exists a pair of auxiliary random variables $U$ and $V$ satisfying the Markov chain $U \rightarrow V \rightarrow X \rightarrow (Y,Z_1,Z_2)$ and such that $I(U;Y) \geq I(U;Z_2)$, $p(v,x)$ is an *optimal* solution to the maximization program , and $I(V;Y|U)-I(V;Z_1|U)$ is equal to . If, in addition to the Markov chain , we also have the Markov chain $$X \rightarrow Y \rightarrow Z_2 \label{eq:mk2}$$ in that order, the sufficient condition is also necessary, leading to a precise characterization of the secrecy capacity region. The results are summarized in the following theorem; a proof of the theorem can be found in Appendix \[App:1\]. \[thm:DM2\] Consider a discrete memoryless two-level security wiretap channel with transition probability $p(y,z_1,z_2|x)$ that satisfies the Markov chains and . The secrecy capacity region of the channel is given by the set of all nonnegative pairs that satisfy for some joint distribution $p(u,v,x)$, where $U$ and $V$ are auxiliary random variables satisfying the Markov chain $U \rightarrow V \rightarrow X \rightarrow (Y,Z_1,Z_2)$. If, in addition to the Markov chains and , we also have the Markov chain $$X \rightarrow Y \rightarrow Z_1 \label{eq:mk3}$$ in that order, the (weaker) sufficient condition also becomes necessary, leading to a simpler characterization of the secrecy capacity region (which does *not* involve any auxiliary random variables). The results are summarized in the following theorem; a proof of the theorem can be found in Appendix \[App:2\]. \[thm:DM3\] Consider a discrete memoryless two-level security wiretap channel with transition probability $p(y,z_1,z_2|x)$ that satisfies the Markov chains , and . The secrecy capacity region of the channel is given by the set of all nonnegative pairs that satisfy for some input distribution $p(x)$. Proof of Theorem \[thm:DM1\] {#pf:thm1} ---------------------------- We first prove the weaker sufficient condition by considering a nested binning scheme that uses the low-security message $M_2$ as part of the transmitter randomness to protect the high-security message $M_1$ (when the eavesdropper channel realization is strong). We shall consider a random-coding argument, which can be described as follows. Fix an input distribution $p(x)$. *Codebook generation.* Randomly and independently generate $2^{n(R_1+R_2+T)}$ codewords of length $n$ according to an $n$-product of $p(x)$. Randomly partition the codewords into $2^{nR_1}$ bins so each bin contains $2^{n(R_2+T)}$ codewords. Further partition each bin into $2^{nR_2}$ subbins so each subbin contains $2^{nT}$ codewords. 
Label the codewords as $x^n_{j,k,l}$ where $j$ denotes the bin number, $k$ denotes the subbin number within each bin, and $l$ denotes the codeword number within each subbin. See Fig. \[fig:nb\] for an illustration of the codebook structure. *Encoding.* To send a message pair $(m_1,m_2)$, the transmitter *randomly* (according to a uniform distribution) chooses a codeword $x^n_{m_1,m_2,t}$ from the subbin identified by $(m_1,m_2)$ and sends it through the channel. *Decoding at the legitimate receiver.* Given the channel outputs $y^n$, the legitimate receiver looks into the codebook $\{x^n_{j,k,l}\}_{j,k,l}$ and searches for a codeword that is jointly typical [@CT-B91] with $y^n$. In the case when $$R_1+R_2+T < I(X;Y) \label{eq:FM1}$$ with high probability the transmitted codeword $x^n_{m_1,m_2,t}$ is the *only* one that is jointly typical with $y^n$ (and hence can be correctly decoded). *Security at the eavesdropper.* Note that each bin corresponds to a message $m_1$ and contains $2^{n(R_2+T)}$ codewords, each randomly and independently generated according to an $n$-product of $p(x)$. For a given message $m_1$, the transmitted codeword is randomly and uniformly chosen from the corresponding bin (where the randomness is from both the low-security message $M_2$ and the transmitter’s choice of $t$). Following [@Wyn-BSTJ75], in the case when $$R_2+T > I(X;Z_1) \label{eq:FM2}$$ we have $(1/n)I(M_1;Z_1^n)$ tends to zero in the limit as $n \rightarrow 0$. Furthermore, each subbin corresponds to a message pair $(m_1,m_2)$ and contains $2^{nT}$ codewords, each randomly and independently generated according to an $n$-product of $p(x)$. For a given message pair $(m_1,m_2)$, the transmitted codeword is randomly and uniformly chosen from the corresponding subbin (where the randomness is from the transmitter’s choice of $t$). Again, following [@Wyn-BSTJ75], in the case when $$T > I(X;Z_2) \label{eq:FM3}$$ we have $(1/n)I(M_1,M_2;Z_2^n)$ tends to zero in the limit as $n \rightarrow 0$. Eliminating $T$ from – using Fourier-Motzkin elimination, we can conclude that any rate pair $(R_1,R_2)$ that satisfies is achievable. Next we prove the more general sufficient condition by considering a coding scheme that combines rate splitting, superposition coding, nested binning and channel prefixing. We shall once again resort to a random-coding argument, which can be described as follows. Fix a joint auxiliary-input distribution $p(u)p(v|u)p(x|v)$ with $I(U;Y) \geq I(U;Z_2)$ and $\epsilon>0$. Split the low-security message $M_2$ into two independent submessages $M_2'$ and $M_2''$ with rates $R_2'$ and $R_2''$, respectively. *Codebook generation.* Randomly and independently generate $2^{n(R_2'+I(U;Z_2)+\epsilon)}$ codewords of length $n$ according to an $n$-product of $p(u)$. Randomly partition the codewords into $2^{nR_2'}$ bins so each bin contains $2^{n(I(U;Z_2)+\epsilon)}$ codewords. Label the codewords as $u^n_{j,k}$ where $j$ denotes the bin number, and $k$ denotes the codeword number within each bin. We shall refer to the codeword collection $\{u^n_{j,k}\}_{j,k}$ as the $U$-codebook. For each codeword $u^n_{j,k}$ in the $U$-codebook, randomly and independently generate $2^{n(R_1+R_2''+T)}$ codewords of length $n$ according to an $n$-product of $p(v|u)$. Randomly partition the codewords into $2^{nR_1}$ bins so each bin contains $2^{n(R_2''+T)}$ codewords. Further partition each bin into $2^{nR_2''}$ subbins so each subbin contains $2^{nT}$ codewords. 
Label the codewords as $v^n_{j,k,l,p,q}$ where $(j,k)$ indicates the base codeword $u_{j,k}^n$ from which $v^n_{j,k,l,p,q}$ was generated, $l$ denotes the bin number, $p$ denotes the subbin number within each bin, and $q$ denotes the codeword number within each subbin. We shall refer to the codeword collection $\{v^n_{j,k,l,p,q}\}_{l,p,q}$ as the $V$-subcodebook corresponding to base codeword $u^n_{j,k}$. See Fig. \[fig:nb2\] for an illustration of the codebook structure. *Encoding.* To send a message triple $(m_1,m_2',m_2'')$, the transmitter *randomly* (according to a uniform distribution) chooses a codeword $u^n_{m_2',t_2}$ from the $m_2'$th bin in the $U$-codebook. Once a $u^n_{m_2',t_2}$ is chosen, the transmitter looks into the $V$-subcodebook corresponding to $u^n_{m_2',t_2}$ and *randomly* chooses a codeword $v^n_{m_2',t_2,m_1,m_2'',t_1}$ from the subbin identified by $(m_1,m_2'')$. Once a $v^n_{m_2',t_2,m_1,m_2'',t_1}$ is chosen, an input sequence $x^n$ is generated according to an $n$-product of $p(x|v)$ and is then sent through the channel. *Decoding at the legitimate receiver.* Given the channel outputs $y^n$, the legitimate receiver looks into the $U$-codebook and its $V$-subcodebooks and searches for a pair of codewords $(u^n_{j,k},v^n_{j,k,l,p,q})$ that are jointly typical [@CT-B91] with $y^n$. In the case when $$\begin{aligned} R_2'+I(U;Z_2)+\epsilon & < & I(U;Y)\label{eq:FM4}\\ \mbox{and} \quad R_1+R_2''+T & < & I(V;Y|U)\label{eq:FM5}\end{aligned}$$ with high probability the codeword pair selection $(u^n_{m_2',t_2},v^n_{m_2',t_2,m_1,m_2'',t_1})$ is the only one that is jointly typical [@CT-B91] with $y^n$. *Security at the eavesdropper.* To analyze the security of the high-security message $M_1$ and the submessage $M_2''$ at the eavesdropper, we shall assume (for now) that both the submessage $m_2'$ and the codeword selection $u_{m_2',t_2}^n$ are known at the eavesdropper. Note that such an assumption can only *strengthen* our security analysis. Given the base codeword $u_{m_2',t_2}^n$, the encoding of $m_1$ and $m_2''$ using the corresponding $V$-subcodebook is identical to the nested binning scheme considered previously (with additional channel prefixing). Thus in the case when $$\begin{aligned} R_2''+T & > & I(V;Z_1|U)\label{eq:FM6}\\ \mbox{and} \quad T & > & I(V;Z_2|U)\label{eq:FM7}\end{aligned}$$ we have $$\begin{aligned} \frac{1}{n}I(M_1;Z_1^n|M_2') \; = \; \frac{1}{n}I(M_1;Z_1^n,M_2') & \rightarrow & 0 \label{eq:eqv1}\\ \mbox{and} \quad \frac{1}{n}I(M_1,M_2'';Z_2^n|M_2') \; = \; \frac{1}{n}I(M_1,M_2'';Z_2^n,M_2') & \rightarrow & 0 \label{eq:eqv2}\end{aligned}$$ in the limit as $n \rightarrow \infty$. The equalities in and are due to the fact that $(M_1,M_2'')$ and $M_2'$ are independent. From we may conclude that $(1/n)I(M_1;Z_1^n) \rightarrow 0$ in the limit as $n \rightarrow \infty$. To analyze the security of the submessage $M_2'$, note that each bin in the $U$-codebook corresponds to a message $m_2'$ and contains $2^{n(I(U;Z_2)+\epsilon)}$ codewords, each randomly and independently generated according to an $n$-product of $p(u)$. For a given submessage $m_2'$, the codeword $u_{m_2',t_2}^n$ is randomly and uniformly chosen from the corresponding bin (where the randomness is from the transmitter’s choice of $t_2$). Note from that the rate of each $V$-subcodebook is greater than $I(V;Z_2|U)$. Following [@CE-ITS09 Lemma 1], we have $$\frac{1}{n}I(M_2';Z_2^n) \rightarrow 0 \label{eq:eqv3}$$ in the limit as $n \rightarrow \infty$.
Putting together and , we have $$\begin{aligned} \frac{1}{n}I(M_1,M_2;Z_2^n) &=& \frac{1}{n}I(M_1,M_2',M_2'';Z_2^n)\\ &=& \frac{1}{n}I(M_2';Z_2^n)+\frac{1}{n}I(M_1,M_2'';Z_2^n|M_2')\end{aligned}$$ which tends to zero in the limit as $n \rightarrow \infty$. Finally, note that the overall communication rate $R_2$ of the low-security message $M_2$ is given by $$R_2=R_2'+R_2''. \label{eq:FM8}$$ Eliminating $T$, $R_2'$ and $R_2''$ from –, , and $R_2',R_2'' \geq 0$ using Fourier-Motzkin elimination, simplifying the results using the facts that 1) $I(U;Y) \geq I(U;Z_2)$, 2) $I(V;Z_2|U) \leq I(V;Z_1|U)$ which is due to the Markov chain , and 3) $I(V;Y|U)+I(U;Y)=I(V,U;Y)=I(V;Y)$ and $I(V;Z_2|U)+I(U;Z_2)=I(V,U;Z_2)=I(V;Z_2)$ which are due to the Markov chain $U \rightarrow V \rightarrow X \rightarrow (Y,Z_1,Z_2)$, and letting $\epsilon \rightarrow 0$, we may conclude that any rate pair $(R_1,R_2)$ satisfying is achievable. This completes the proof of Theorem \[thm:DM1\]. Gaussian Two-Level Security Wiretap Channels {#sec:gmswtc} ============================================ Scalar Channel -------------- Consider a discrete-time two-level security wiretap channel with real input $X$ and outputs $Y$, $Z_1$ and $Z_2$ given by $$\begin{array}{rcl} Y &=& \sqrt{a}X+N_1\\ Z_1 &=& \sqrt{b_1}X+N_{2}\\ Z_2 &=& \sqrt{b_2}X+N_{3} \end{array} \label{eq:Ch-MGWTC}$$ where $a$, $b_1$ and $b_2$ are the corresponding channel gains, and $N_1$, $N_2$ and $N_3$ are additive white Gaussian noise with zero means and unit variances. Assume that $b_1 \geq b_2$ so the receiver that receives the channel output $Z_1$ represents a stronger realization of the eavesdropper channel than the receiver that receives the channel output $Z_2$. The channel input $X$ is subject to the average power constraint . We term the above communication scenario as *(scalar) Gaussian two-level security wiretap channel*. The following theorem provides an explicit characterization of the secrecy capacity region. \[thm:MGWTC\] Consider the (scalar) Gaussian two-level security wiretap channel . The secrecy capacity region of the channel is given by the collection of all nonnegative pairs $(R_1,R_2)$ that satisfy $$\begin{array}{rcl} R_1 & \leq & C_s(P,a,b_1)\\ \mbox{and} \quad R_1+R_2 & \leq & C_s(P,a,b_2) \end{array} \label{eq:Cs-MGWTC}$$ where $C_s(P,a,b)$ is defined as in . *Proof:* We first prove the converse part of the theorem. Recall from Sec. \[sec:mswtc-mod\] that without transmitting the low-security message $M_2$ (which can only increase the achievable rate $R_1$), the problem reduces to communicating the high-security message $M_1$ over the traditional wiretap channel $p(y,z_1|x)$. For the Gaussian two-level security wiretap channel , the problem reduces to communicating the high-security message $M_1$ over the Gaussian wiretap channel with channel outputs $Y$ and $Z_1$ given by $$\begin{array}{rcl} Y &=& \sqrt{a}X+N_1\\ Z_1 &=& \sqrt{b_1}X+N_{2}. \end{array}$$ We thus conclude that $R_1 \leq C_s(P,a,b_1)$ for any achievable rate $R_1$. Similarly, ignoring the additional secrecy constraint for the high-security message $M_1$ (which can only enlarge the achievable rate region $\{(R_1,R_2)\}$), we can simply view the messages $M_1$ and $M_2$ as a single message $M$ with rate $R_1+R_2$. In this case, the problem reduces to communicating the message $M$ over the traditional wiretap channel $p(y,z_2|x)$.
For the Gaussian two-level security wiretap channel , the problem reduces to communicating the message $M$ over the Gaussian wiretap channel with channel outputs $Y$ and $Z_2$ given by $$\begin{array}{rcl} Y &=& \sqrt{a}X+N_1\\ Z_2 &=& \sqrt{b_2}X+N_{3}. \end{array}$$ We thus conclude that $R_1+R_2 \leq C_s(P,a,b_2)$ for any achievable rate pair $(R_1,R_2)$. To show that any nonnegative pair $(R_1,R_2)$ that satisfies is achievable, let us first consider two simple cases. First, when $b_1 \geq b_2 \geq a$, both $C_s(P,a,b_1)$ and $C_s(P,a,b_2)$ are equal to zero (c.f. definition ). So does not include any positive rate pairs and hence there is nothing to prove. Next, when $b_1 \geq a \geq b_2$, $C_s(P,a,b_1)=0$ and reduces to $$\begin{array}{rcl} R_1& = & 0\\ \mbox{and} \quad R_2 & \leq & C_s(P,a,b_2). \end{array}$$ Since the high-security message $M_1$ does not need to be transmitted, any rate pair in this region can be achieved by using a scalar Gaussian wiretap code to encode the low-security message $M_2$. This leaves us with the only remaining case $a \geq b_1 \geq b_2$. For the case where $a \geq b_1 \geq b_2$, the achievability of any rate pair in follows from that of by choosing $X$ to be Gaussian with zero mean and variance $P$. This completes the proof of the theorem. $\square$ The following corollary follows directly from the achievability of the corner point $$(R_1,R_2)=(C_s(P,a,b_1),C_s(P,a,b_2)-C_s(P,a,b_1)) \label{eq:CP}$$ of . Scalar Gaussian two-level security wiretap channels under an average power constraint are perfectly embeddable. Fig. \[fig:Cssg\] illustrates the secrecy capacity region for the case where $a > b_1 > b_2$. Also plotted in the figure is the rate region that can be achieved by the naive scheme that uses two Gaussian wiretap codes to encode the messages $M_1$ and $M_2$ separately. Note that the corner point is strictly outside the “naive” rate region, which illustrates the superiority of nested binning over the separate coding scheme. Independent Parallel Channel ---------------------------- Consider a discrete-time two-level security wiretap channel which consists of a bank of $L$ independent parallel scalar Gaussian two-level security wiretap channels. In this model, the channel outputs are given by $Y=(Y_1,\ldots,Y_L)$, $Z_1=(Z_{1,1},\ldots,Z_{1,L})$ and $Z_2=(Z_{2,1},\ldots,Z_{2,L})$ where $$\begin{array}{rcl} Y_l &=& \sqrt{a_{l}}X_l+N_{1,l}\\ Z_{1,l} &=& \sqrt{b_{1,l}}X_l+N_{2,l}\\ Z_{2,l} &=& \sqrt{b_{2,l}}X_l+N_{3,l} \end{array} \quad l=1,\ldots,L. \label{eq:Ch-MPGWTC}$$ Here, $X_l$ is the channel input for the $l$th subchannel, $a_l$, $b_{1,l}$ and $b_{2,l}$ are the corresponding channel gains in the $l$th subchannel, and $N_{1,l}$, $N_{2,l}$ and $N_{3,l}$ are additive white Gaussian noise with zero means and unit variances. We assume that $b_{1,l} \geq b_{2,l}$ for all $l=1,\ldots,L$, so the receiver that receives the channel output $Z_1$ represents a stronger realization of the eavesdropper channel in *each* of the $L$ subchannels than the receiver that receives the channel output $Z_2$. Furthermore, $(N_{1,l},N_{2,l},N_{3,l})$, $l=1,\ldots,L$, are independent so all $L$ subchannels are independent of each other. We term the above communication scenario as *independent parallel Gaussian two-level security wiretap channel*. The following theorem provides an explicit characterization of the secrecy capacity region under an average individual per-subchannel power constraint.
\[thm:MPGWTC\] Consider the independent parallel Gaussian two-level security wiretap channel where the channel input $X$ is subject to the average individual per-subchannel power constraint . The secrecy capacity region of the channel is given by the collection of all nonnegative pairs $(R_1,R_2)$ that satisfy $$\begin{array}{rcl} R_1 & \leq & \sum_{l=1}^{L}C_s(P_l,a_l,b_{1,l})\\ \mbox{and} \quad R_1+R_2 & \leq & \sum_{l=1}^{L}C_s(P_l,a_l,b_{2,l}) \end{array} \label{eq:Cs-MPGWTC}$$ where $C_s(P,a,b)$ is defined as in . *Proof:* We first prove the converse part of the theorem. Following the same argument as that for Theorem \[thm:MGWTC\], we can show that $$\begin{array}{rcl} R_1 & \leq & C_s(\{P_l,a_l,b_{1,l}\}_{l=1}^L)\\ \mbox{and} \quad R_1+R_2 & \leq & C_s(\{P_l,a_l,b_{2,l}\}_{l=1}^L) \end{array} \label{T111}$$ for any achievable secrecy rate pair $(R_1,R_2)$. By the secrecy capacity expression for the independent parallel Gaussian wiretap channel under an average individual per-subchannel power constraint, we have $$\begin{array}{rcl} C_s(\{P_l,a_l,b_{1,l}\}_{l=1}^L) &=& \sum_{l=1}^{L}C_s(P_l,a_l,b_{1,l})\\ \mbox{and} \quad C_s(\{P_l,a_l,b_{2,l}\}_{l=1}^L) &=& \sum_{l=1}^{L}C_s(P_l,a_l,b_{2,l}). \end{array} \label{T112}$$ Substituting into proves the converse part of the theorem. To show that any nonnegative pair $(R_1,R_2)$ that satisfies is achievable, let us consider *independent* coding over each of the $L$ subchannels. Note that each subchannel is a scalar Gaussian two-level security wiretap channel with average power constraint $P_l$ and channel gains $(a_l,b_{1,l},b_{2,l})$. Thus, by Theorem \[thm:MGWTC\], any nonnegative pair $(R_{1,l},R_{2,l})$ that satisfies $$\begin{array}{rcl} R_{1,l} & \leq & C_s(P_l,a_l,b_{1,l})\\ \mbox{and} \quad R_{1,l}+R_{2,l} & \leq & C_s(P_l,a_l,b_{2,l}) \end{array} \label{T211}$$ is achievable for the $l$th subchannel. The overall communication rates are given by $$\begin{array}{rcl} R_1 &=& \sum_{l=1}^{L}R_{1,l}\\ \mbox{and} \quad R_2 &=& \sum_{l=1}^{L}R_{2,l}. \end{array} \label{T212}$$ Substituting into proves that any nonnegative pair $(R_1,R_2)$ that satisfies is achievable. This completes the proof of the theorem. $\square$ Similar to the scalar case, the following corollary is an immediate consequence of Theorem \[thm:MPGWTC\]. Independent parallel Gaussian two-level security wiretap channels under an average individual per-subchannel power constraint are perfectly embeddable. The secrecy capacity region of the channel under an average total power constraint is summarized in the following corollary. The results follow from the well-known fact that the capacity region under an average total power constraint can be written as the *union* of the regions under average individual per-subchannel power constraints, where the union is over all possible power allocations among the subchannels. Consider the independent parallel Gaussian two-level security wiretap channel where the channel input $X$ is subject to the average total power constraint . The secrecy capacity region of the channel is given by the collection of all nonnegative pairs $(R_1,R_2)$ that satisfy $$\begin{array}{rcl} R_1 & \leq & \sum_{l=1}^{L}C_s(P_l,a_l,b_{1,l})\\ \mbox{and} \quad R_1+R_2 & \leq & \sum_{l=1}^{L}C_s(P_l,a_l,b_{2,l}) \end{array}$$ for some power allocation $(P_1,\ldots,P_L)$ such that $\sum_{l=1}^{L}P_l \leq P$. Fig.
\[fig:Cspg\] illustrates the secrecy capacity region with $L=2$ subchannels where $$\begin{aligned} a_{1} = 1.000, & b_{1,1}=0.800, & b_{2,1}=0.100\\ a_{2} = 1.000, & b_{1,2}=0.250, & b_{2,2}=0.100\\ \mbox{and} \quad P = 1.000.\end{aligned}$$ As we can see, under the average total power constraint , the independent parallel Gaussian two-level security wiretap channel is embeddable but *not* perfectly embeddable. The reason is that the optimal power allocation $(P_1,P_2)$ that maximizes $C_s(P_1,a_1,b_{2,1})+C_s(P_2,a_2,b_{2,2})$ is *suboptimal* in maximizing $C_s(P_1,a_1,b_{1,1})+C_s(P_2,a_2,b_{1,2})$. By comparison, under the average individual per-subchannel power constraint , the power allocated to each of the subchannels is fixed so the channel is always perfectly embeddable. Two-Level Security Wiretap Channel II {#sec:mswtc2} ===================================== In Sec. \[sec:wtc\] we briefly summarized the known results on a classical secrecy communication setting known as wiretap channel. A closely related classical secrecy communication scenario is *wiretap channel II*, which was first studied by Ozarow and Wyner [@OW-BSTJ84]. In the wiretap channel II setting, the transmitter sends a binary sequence $X^n=(X_1,\ldots,X_n)$ of length $n$ *noiselessly* to a legitimate receiver. The signal $Z^n=(Z_1,\ldots,Z_n)$ received at the eavesdropper is given by $$Z_i = \left\{ \begin{array}{ll} X_i, & i \in S\\ e, & \mbox{otherwise} \end{array} \right.$$ where $e$ represents an erasure output, and $S$ is a subset of $\{1,\ldots,n\}$ of size $n\alpha$ representing the locations of the transmitted bits that can be accessed by the eavesdropper. If the subset $S$ is *known* at the transmitter, a message $M$ of $n(1-\alpha)$ bits can be noiselessly communicated to the legitimate receiver through $X_{S^c}:=\{X_i: \; i \in S^c\}$. Since the eavesdropper has no information regarding $X_{S^c}$, *perfectly* secure communication is achieved *without* any coding. It is easy to see that in this scenario, $n(1-\alpha)$ is also the *maximum* number of bits that can be reliably and perfectly securely communicated through $n$ transmitted bits. An interesting result of [@OW-BSTJ84] is that for any $\epsilon>0$, a total of $n(1-\alpha-\epsilon)$ bits can be reliably and *asymptotically perfectly* securely communicated to the legitimate receiver even when the subset $S$ is *unknown* (but with a fixed size $n\alpha$) a priori at the transmitter. Here, by “asymptotically perfectly securely" we mean $(1/n)I(M;Z^n) \rightarrow 0$ in the limit as $n \rightarrow \infty$. Unlike the case where the subset $S$ is known , coding is *necessary* when $S$ is unknown a priori at the transmitter. In particular, [@OW-BSTJ84] considered a random binning scheme that partitions the collection of all length-$n$ binary sequences into an appropriately chosen *group code* and its cosets. For the wiretap channel setting, as shown in Sec. \[sec:mswtc\], a random binning scheme can be easily modified into a nested binning scheme to efficiently embed high-security bits into low-security ones. The main goal of this section is to extend this result from the classical setting of wiretap channel to wiretap channel II. More specifically, assume that a realization of the subset $S$ has two possible sizes, $n\alpha_1$ and $n\alpha_2$, where $1 \geq \alpha_1 \geq \alpha_2 \geq 0$.
The transmitter has two independent messages, the high-security message $M_1$ and the low-security message $M_2$, uniformly drawn from $\{1,\ldots,2^{nR_1}\}$ and $\{1,\ldots,2^{nR_2}\}$ respectively. When the size of the realization $S$ is $n\alpha_2$, both messages $M_1$ and $M_2$ need to be secure, i.e., $(1/n)I(M_1,M_2;Z^n) \rightarrow 0$ in the limit as $n \rightarrow \infty$. In addition, when the size of the realization of $S$ is $n\alpha_1$, the high-security message $M_1$ needs to remain secure, i.e., $(1/n)I(M_1;Z^n) \rightarrow 0$ in the limit as $n \rightarrow \infty$. We term this communication scenario as *two-level security wiretap channel II*, in line with our previous terminology in Sec. \[sec:mswtc\]. By the results of [@Wyn-BSTJ75], without needing to communicate the low-security message $M_2$, the maximum achievable $R_1$ is $1-\alpha_1$. Without the additional secrecy constraint $(1/n)I(M_1;Z^n) \rightarrow 0$ on the high-security message $M_1$, the messages $(M_1,M_2)$ can be viewed as a single message $M$ with rate $R_1+R_2$, and the maximum achievable $R_1+R_2$ is $1-\alpha_2$. The main result of this section is to show that the rate pair $(1-\alpha_1,\alpha_1-\alpha_2)$ is indeed achievable, from which we may conclude that two-level security wiretap channels II are *perfectly* embeddable. Moreover, perfect embedding can be achieved by a nested binning scheme that uses a *two-level* coset code. The results are summarized in the following theorem. Two-level security wiretap channels II are perfectly embeddable. Moreover, perfect embedding can be achieved by a nested binning scheme that uses a two-level coset code. *Proof:* Fix $\epsilon>0$. Consider a binary parity-check matrix $$H= \left[ \begin{array}{c} H_1 \\ H_2 \end{array} \right]$$ where the size of $H_1$ is $n(1-\alpha_1-\epsilon)\times n$ and the size of $H_2$ is $n(\alpha_1-\alpha_2)\times n$. Let $s_1(\cdot)$ be a one-to-one mapping between $\{1,\ldots,2^{n(1-\alpha_1-\epsilon)}\}$ and the binary vectors of length $n(1-\alpha_1-\epsilon)$, and let $s_2(\cdot)$ be a one-to-one mapping between $\{1,\ldots,2^{n(\alpha_1-\alpha_2)}\}$ and the binary vectors of length $n(\alpha_1-\alpha_2)$. For a given message pair $(m_1,m_2)$, the transmitter randomly (according to a uniform distribution) chooses a solution $x^n$ to the linear equations $$(x^n)^tH=(x^n)^t\left[ \begin{array}{c} H_1 \\ H_2 \end{array} \right]= \left[ \begin{array}{c} s_1(m_1) \\ s_2(m_2) \end{array} \right] \label{eq:gc}$$ and sends it to the legitimate receiver. When the parity-check matrix $H$ has *full* (row) rank, the above encoding procedure is equivalent to a nested binning scheme that partitions the collection of all length-$n$ binary sequences into bins and subbins using a two-level coset code with parity-check matrices $(H_1,H_2)$. Moreover, let $b_1,\ldots,b_n$ be the columns of $H$ and let $\Gamma \subseteq \{1,\ldots,n\}$. Define $D_2(\Gamma)$ as the dimension of the subspace spanned by $\{b_i: \; i \in \Gamma\}$ and $$D_2^* := \min_{|\Gamma|=n(1-\alpha_2)} D_2(\Gamma).$$ When the size of the realization of $S$ is $n\alpha_2$, by [@OW-BSTJ84 Lemma 4] we have $$H(M_1,M_2|Z^n)=D_2^*. \label{BB}$$ Note that the low-security message $M_2$ is uniformly drawn from $\{1,\ldots,2^{n(\alpha_1-\alpha_2)}\}$. So by , for a given high-security message $m_1$, the transmitted sequence $x^n$ is randomly chosen (according to a uniform distribution) as a solution to the linear equations $(x^n)^tH_1=s_1(m_1)$.
If we let $a_1,\ldots,a_n$ be the columns of $H_1$ and define $$D_1^* := \min_{|\Gamma|=n(1-\alpha_1)} D_1(\Gamma)$$ where $D_1(\Gamma)$ is the dimension of the subspace spanned by $\{a_i: \; i \in \Gamma\}$, we have again from [@OW-BSTJ84 Lemma 4] $$H(M_1|Z^n)=D_1^* \label{AA}$$ when the size of the realization of $S$ is $n\alpha_1$. Let $\Psi(H)=1$ when we have either $H$ does *not* have full rank, or $D_2^* < n(1-\alpha_2-\epsilon)-3/\epsilon$, or $D_1^* < n(1-\alpha_1-\epsilon)-3/\epsilon$, and let $\Psi(H)=0$ otherwise. By using a randomized argument that generates the entries of $H$ independently according to a uniform distribution in $\{0,1\}$, we can show that there exists an $H$ with $\Psi(H)=0$ for sufficiently large $n$ (see Appendix \[App:3\] for details). For such an $H$, we have from and that $(1/n)I(M_1,M_2;Z^n) \leq 3/(n\epsilon)$ when the size of the realization of $S$ is $n\alpha_2$, and $(1/n)I(M_1;Z^n) \leq 3/(n\epsilon)$ when the size of the realization of $S$ is $n\alpha_1$. Letting $n \rightarrow \infty$ and $\epsilon \rightarrow 0$ (in that order) proves the achievability of the rate pair $(1-\alpha_1,\alpha_1-\alpha_2)$ and hence completes the proof of the theorem. $\square$ Concluding Remarks {#sec:con} ================== In this paper we considered the problem of simultaneously communicating two messages, a high-security message and a low-security message, to a legitimate receiver, referred to as the security embedding problem. An information-theoretic formulation of the problem was presented. With appropriate coding architectures, it was shown that a significant portion of the information bits can receive additional security protections without sacrificing the overall rate of communication. Key to achieve efficient embedding was to use the low-security message as part of the transmitter randomness to protect the high-security message when the eavesdropper channel realization is strong. For the engineering communication scenarios with real channel input and additive white Gaussian noise, it was shown that the high-security message can be embedded into the low-security message at full rate without incurring any loss on the overall rate of communication for both scalar and independent parallel Gaussian channels (under an average individual per-subchannel power constraint). The scenarios with multiple transmit and receive antennas are considerably more complex and hence require further investigations. Finally, note that even though in this paper we have only considered providing two levels of security protections to the information bits, most of the results extend to multiple-level security in the most straightforward fashion. In the limit when the security levels change continuously, the number of secure bits delivered to the legitimate receiver would depend on the realization of the eavesdropper channel even though such realizations are unknown a priori at the transmitter. Proof of Theorem \[thm:DM2\] {#App:1} ============================ First note that when $X \rightarrow Y \rightarrow Z_2$ forms a Markov chain in that order, we have $I(U;Y) \geq I(U;Z_2)$ for any jointly distributed $(U,V,X)$ that satisfies the Markov chain $U \rightarrow V \rightarrow X \rightarrow (Y,Z_1,Z_2)$. To show that the sufficient condition is also necessary, let $(R_1,R_2)$ be an achievable rate pair. 
Following Fano’s inequality [@CT-B91] and the asymptotic perfect secrecy constraints and , there exists a sequence of codes (indexed by the block length $n$) of rate pair $(R_1,R_2)$ such that $$\begin{aligned} H(M_1,M_2|Y^n) & \leq & n\epsilon_n/2\label{eq:fano}\\ I(M_1;Z_1^n) & \leq & n\epsilon_n/2\label{eq:ct2}\\ \mbox{and} \quad I(M_1,M_2;Z_2^n) & \leq & n\epsilon_n/2\label{eq:ct3}\end{aligned}$$ where $\epsilon_n \rightarrow 0$ in the limit as $n \rightarrow \infty$. Following and , we have $$\begin{aligned} n(R_1-\epsilon_n) & = &H(M_1)-n\epsilon_n\\ & \leq & H(M_1) -\left[I(M_1;Z_1^n)+H(M_1,M_2|Y^n)\right]\\ & = & H(M_1|Z_1^n)-H(M_1,M_2|Y^n) \\ & \leq & H(M_1,M_2|Z_1^n)-H(M_1,M_2|Y^n) \\ & = & I(M_1,M_2;Y^n)-I(M_1,M_2;Z_2^n). \end{aligned}$$ Let $M := (M_1,M_2)$, $Y^{i-1} := (Y[1],\ldots,Y[i-1])$, $Z_{1,i+1}^n=(Z_1[i+1],...,Z_1[n])$ and $U[i] := (Y^{i-1},Z_{1,i+1}^n)$. We further have $$\begin{aligned} n(R_1-\epsilon_n) & \leq & I(M;Y^n) - I(M;Z_1^n)\\ & = & \sum_{i=1}^n \left[I(M;Y[i]|Y^{i-1}) - I(M;Z_1[i]|Z_{1,i+1}^n)\right]\\ & \stackrel{(a)}= & \sum_{i=1}^n \left[I(M;Y[i]|Y^{i-1},Z_{1,i+1}^n) - I(M;Z_1[i]|Y^{i-1},Z_{1,i+1}^n)\right]\\ & = &\sum_{i=1}^n \left[I(M;Y[i]|U[i]) - I(M;Z_1[i]|U[i])\right]\\ & = & n\left[I(M;Y[Q]|U[Q],Q) - I(M;Z_{1}[Q]|U[Q],Q)\right]\\ & = & n\left[I(M,U[Q],Q;Y[Q]|U[Q],Q) - I(M,U[Q],Q;Z_{1}[Q]|U[Q],Q)\right]\\ & = & n\left[I(V[Q];Y[Q]|U[Q],Q) - I(V[Q];Z_{1}[Q]|U[Q],Q)\right]\end{aligned}$$ where (a) is due to the Csiszár-Körner sum equality [@CK-IT78 Lemma 7], $Q$ is a standard time-sharing variable [@CT-B91], and $V[Q]:=(M,U[Q],Q)$. Following and , we have $$\begin{aligned} n(R_1+R_2-\epsilon_n) & = & H(M)-n\epsilon_n\\ & \leq &H(M)-\left[H(M|Y^n)+I(M;Z_2^n)\right]\\ & = & I(M;Y^n)-I(M;Z_2^n)\\ & = & \sum_{i=1}^n \left[I(M;Y[i]|Y^{i-1}) - I(M;Z_2[i]|Z_{2,i+1}^n)\right]\\ & \stackrel{(b)}= & \sum_{i=1}^n \left[I(M;Y[i]|Y^{i-1},Z_{2,i+1}^n) - I(M;Z_2[i]|Y^{i-1},Z_{2,i+1}^n)\right]\\ & = & \sum_{i=1}^n \left[I(M,Y^{i-1},Z_{1,i+1}^n,Z_{2,i+1}^n;Y[i]) - I(M,Y^{i-1},Z_{1,i+1}^n,Z_{2,i+1}^n;Z_2[i])\right]\\ && \quad\quad -\sum_{i=1}^n \left[I(Y^{i-1},Z_{2,i+1}^n;Y[i]) - I(Y^{i-1},Z_{2,i+1}^n;Z_2[i]) \right]\\ && \quad\quad -\sum_{i=1}^n \left[I(Z_{1,i+1}^n;Y[i]|M, Y^{i-1},Z_{2,i+1}^n) - I(Z_{1,i+1}^n;Z_2[i]|M,Y^{i-1},Z_{2,i+1}^n)\right]\\ & \stackrel{(c)}\leq & \sum_{i=1}^n \left[I(M,Y^{i-1},Z_{1,i+1}^n,Z_{2,i+1}^n;Y[i]) - I(M,Y^{i-1},Z_{1,i+1}^n,Z_{2,i+1}^n;Z_2[i])\right]\\ & \stackrel{(d)}= & \sum_{i=1}^n \left[I(M,Y^{i-1},Z_{1,i+1}^n;Y[i]) - I(M,Y^{i-1},Z_{1,i+1}^n;Z_2[i])\right]\\ & = & \sum_{i=1}^n \left[I(M,U[i];Y[i]) - I(M,U[i];Z_2[i])\right]\\ & = & n\left[I(M,U[Q];Y[Q]|Q) -I(M,U[Q];Z_{2}[Q]|Q)\right]\\ & = & n\left[I(M,U[Q],Q;Y[Q]) -I(M,U[Q],Q;Z_{2}[Q])-\left(I(Y[Q];Q)-I(Z_2[Q];Q)\right)\right]\\ & = & n\left[I(V[Q];Y[Q]) -I(V[Q];Z_{2}[Q])-\left(I(Y[Q];Q)-I(Z_2[Q];Q)\right)\right]\\ & \stackrel{(e)}\leq & n\left[I(V[Q];Y[Q]) -I(V[Q];Z_{2}[Q])\right]\end{aligned}$$ where (b) follows from the Csiszár-Körner sum equality [@CK-IT78 Lemma 7], (c) is due to the Markov chain , (d) is due to the Markov chain , and (e) follows again from the Markov chain and the fact that the channel is memoryless. Finally, we complete the proof of the theorem by letting $U:=(U[Q],Q)$, $V:=V[Q]$, $X:=X[Q]$, $Y:=Y[Q]$, $Z_1:=Z_1[Q]$, $Z_2:=Z_2[Q]$ and $n \rightarrow \infty$. 
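For completeness, the sum equality invoked in steps (a) and (b) above, cited as [@CK-IT78 Lemma 7], can be written in the following standard form: for arbitrary random vectors $Y^n$ and $Z^n$ and an arbitrary random variable $W$, $$\sum_{i=1}^n I(Z_{i+1}^n;Y[i]\,|\,W,Y^{i-1}) \;=\; \sum_{i=1}^n I(Y^{i-1};Z[i]\,|\,W,Z_{i+1}^n).$$ Applying this equality once with $W$ trivial and once with $W=M$, and expanding each mutual information term with the chain rule, yields the exchange of conditioning used in (a) (with $Z=Z_1$) and in (b) (with $Z=Z_2$).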
Proof of Theorem \[thm:DM3\] {#App:2} ============================ As shown in Theorem \[thm:DM2\], when we have the Markov chains and , there exists a random triple $(U,V,X)$ satisfying the Markov chain $U \rightarrow V \rightarrow X \rightarrow (Y,Z_1,Z_2)$ and such that $R_1 \leq I(V;Y|U)-I(V;Z_1|U)$ and $R_1+R_2 \leq I(V;Y)-I(V;Z_2)$. In fact, the sum rate $R_1+R_2$ can be further bounded from above as $$\begin{aligned} R_1+R_2 & \leq & I(V,X;Y)-I(V,X;Z_2)-\left[I(X;Y|V)-I(X;Z_2|V)\right]\\ & \stackrel{(a)}= & I(X;Y)-I(X;Z_2)-\left[I(X;Y|V)-I(X;Z_2|V)\right]\\ & \stackrel{(b)} \leq & I(X;Y)-I(X;Z_2)\end{aligned}$$ where (a) follows from the Markov chain $V\rightarrow X\rightarrow (Y,Z_2)$, and (b) follows from the Markov chain so $I(X;Y|V) \geq I(X;Z_2|V)$. When we further have the Markov chain , $R_1$ can be further bounded from above as $$\begin{aligned} R_1& \leq & I(U,V;Y)-I(U,V;Z_1|U)-\left[I(U;Y)-I(U;Z_1)\right]\\ & \stackrel{(c)}= & I(V;Y)-I(V;Z_1) - \left[I(U;Y)-I(U;Z_1)\right]\\ & \stackrel{(d)}\leq & I(V;Y)-I(V;Z_1)\\ & = & I(V,X;Y)-I(V,X;Z_1)-\left[I(X;Y|V)-I(X;Z_1|V)\right]\\ & \stackrel{(e)}\leq & I(V,X;Y)-I(V,X;Z_1)\\ & \stackrel{(f)}= & I(X;Y)-I(X;Z_1)\end{aligned}$$ where (c) follows from the Markov chain $U \rightarrow V \rightarrow (Y,Z_1)$, (d) and (e) follow from the Markov chain so $I(U;Y) \geq I(U;Z_1)$ and $I(X;Y|V)\geq I(X;Z_1|V)$, and (f) follows from the Markov chain $V \rightarrow X \rightarrow (Y,Z_1)$. This completes the proof of the theorem. Existence of an $H$ with $\Psi(H)=0$ {#App:3} ==================================== To show that there exists a parity-check matrix $H$ such that $\Psi(H)=0$, it is sufficient to show that $\mathbb{E}\Psi(H)< 1$ where $\mathbb{E}X$ denotes the expectation of a random variable $X$. Let $$\Psi_0(H) := \left\{ \begin{array}{rl} 1, & \mbox{rank}(H)< n(1-\alpha_2-\epsilon)\\ 0, & \text{otherwise} \end{array}\right.$$ and $$\Psi_i(H,\Gamma) := \left\{ \begin{array}{rl} 1, & D_i(\Gamma)<n(1-\alpha_i-\epsilon)-3/\epsilon\\ 0, & \text{otherwise} \end{array}\right.$$ for $i=1,2$. By the union bound, we have $$\mathbb{E}\Psi(H)\leq \mathbb{E}\Psi_0(H)+\sum_{i=1}^2\sum_{\substack{\Gamma\subseteq \{1,\ldots,n\} \\|\Gamma|=n\alpha_i}}\mathbb{E}\Psi_i(H,\Gamma). \label{eq:App3-1}$$ By [@OW-BSTJ84 Lemma 6], $$\mathbb{E}\Psi_0(H) \leq \frac{n(1-\alpha_2-\epsilon)2^{-n(\alpha_2+\epsilon) } } { 1-2^{-n(\alpha_2+\epsilon)}} < \frac{1}{2} \label{eq:App3-2}$$ for sufficiently large $n$. By [@OW-BSTJ84 Lemma 5], for any $\Gamma \subseteq \{1,\ldots,n\}$ such that $|\Gamma|=n\alpha_i$ $$\mathbb{E}\Psi_i(H,\Gamma) \leq 2^{-3n+ n(1-\alpha_i-\epsilon)} \leq 2^{-2n}.$$ Since the total number of different subsets of $\{1,\ldots,n\}$ is $2^n$, we have $$\sum_{i=1}^2\sum_{\substack{\Gamma\subseteq \{1,\ldots,n\} \\|\Gamma|=n\alpha_i}}\mathbb{E}\Psi_i(H,\Gamma) \leq 2\cdot 2^n\cdot 2^{-2n} = 2^{-n+1} < \frac{1}{2} \label{eq:App3-3}$$ for $n>2$. Substituting and into proves that $\mathbb{E}\Psi(H)< 1$ for sufficiently large $n$ and hence completes the proof. [99]{} Y. Liang, H. V. Poor, and S. Shamai (Shitz), *Information Theoretic Security*. Dordrecht, The Netherlands: Now Publisher, 2009. R. Liu and W. Trappe, Eds, *Securing Wireless Communications at the Physical Layer*. New York: Springer Verlag, 2010. A. D. Wyner, “The wire-tap channel," *Bell Sys. Tech. J.*, vol. 54, no. 8, pp. 1355–1387, Oct. 1975. I. Csiszár and J. Körner, “Broadcast channels with confidential messages," *IEEE Trans. Inf. Theory*, vol. IT-24, no. 3, pp. 339–348, May 1978. L. H. 
Ozarow and A. D. Wyner, “Wire-tap channel II," *Bell Syst. Tech. J.*, vol. 63, no. 10, pp. 2135–2157, Dec. 1984. S. K. Leung-Yan-Cheong and M. Hellman, “The Gaussian wire-tap channel," *IEEE Trans. Inf. Theory*, vol. IT-24, no. 4, pp. 451–456, July 1978. Z. Li, R. Yates, and W. Trappe, “Secrecy capacity of independent parallel channels," in *Proc. 44th Annu. Allerton Conf. Communication, Control and Computing*, Monticello, IL, USA, Sep. 2006. R. Liu, T. Liu, H. V. Poor, and S. Shamai (Shitz), “New results on multiple-input multiple-output Gaussian broadcast channels with confidential messages,"*IEEE Trans. Inf. Theory*, submitted for publication. Available online at <http://arxiv.org/abs/1101.2007> T. M. Cover and J. A. Thomas, *Elements of Information Theory*. New York: Wiley, 1991. Y. K. Chia and A. El Gamal, “3-receiver broadcast channels with common and confidential messages," *IEEE Trans. Inf. Theory*, submitted for publication. Available online at <http://arxiv.org/abs/0910.1407> [^1]: This research was supported in part by the National Science Foundation under Grant CCF-09-16867 and by a gift grant from the Huawei Technologies USA. The material of this paper was presented in part at the 2010 IEEE International Symposium on Information Theory, Austin, TX, June 2010. Hung D. Ly and Tie Liu are with the Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843, USA (e-mail: {hungly,tieliu}@tamu.edu). Yufei Blankenship is with the Huawei Technologies, Rolling Meadows, IL 60008, USA (e-mail: yblankenship@huawei.com).
{ "pile_set_name": "ArXiv" }
--- abstract: 'We present a combined measurement of the production cross section of $VZ$ ($V=W$ or $Z$) events in final states containing charged leptons (electrons or muons) or neutrinos, and heavy flavor jets, using data collected by the CDF and DØ detectors at the Fermilab Tevatron Collider. The analyzed samples of $p\bar{p}$ collisions at $\sqrt{s}=1.96$ TeV correspond to integrated luminosities of – . Assuming the ratio of the production cross sections $\sigma(WZ)$ and $\sigma(ZZ)$ as predicted by the standard model, we measure the sum of the $WZ$ and $ZZ$ cross sections to be . This is consistent with the standard model prediction and corresponds to a significance of  standard deviations above the background-only hypothesis.' author: - 'The TEVNPH Working Group[^1]' title: 'Combined CDF and DØ measurement of $\bm{WZ}$ and $\bm{ZZ}$ production in final states with $\bm{b}$-tagged jets' --- *Preliminary Results for the Moriond 2012 Conferences* Introduction {#intro} ============ Studies on the production of $VV$ ($V=W,Z$) boson pairs provide an important test of the electroweak sector of the standard model (SM). In $p\bar{p}$ collisions at $\sqrt{s}=1.96$ TeV, the next-to-leading order (NLO) SM cross sections for these processes are $\sigma(WW)=\wwnlo\pm\wwnloe$ pb, $\sigma(WZ)=\wznlo\pm\wznloe$ pb and $\sigma(ZZ)=\zznlo\pm\zznloe$ pb [@dibo]. These cross sections assume both $\gamma^{*}$ and $Z^{\circ}$ components in the neutral current exchange and corresponding production of dilepton final states in the region 75 $\leq m_{\ell^+\ell^-} \leq$ 105 GeV/$c^2$. Measuring a significant departure in cross section or deviations in the predicted kinematic distributions would indicate the presence of anomalous gauge boson couplings [@bib:anocoups] or new particles in extensions of the SM [@bib:newphen]. The $VV$ production in $\pp$ collisions at the Fermilab Tevatron Collider has been observed in fully leptonic decay modes [@bib:leptonic] and in semi-leptonic decay modes [@bib:hadronic], where the combined $WW+WZ$ cross section was measured. Recently, the DØ experiment presented evidence for $WZ$ and $ZZ$ production in semileptonic decays with a $b$-tagged final state [@dzDibosonCombo]. The $WZ$ and $ZZ$ production cross sections, as well as their sum, were measured in final states where one of the $Z$ bosons decays into $\bb$ (although there is some signal contribution from $\wcs$, $\zcc$) and the other weak boson decays to charged leptons or neutrinos ($\wlv$, $\zvv$, or $\zll$, with $\ell=e,\mu$). In this note we report an improved measurement of the $WZ+ZZ$ production cross section in such final states based on the combination of the results from [@dzDibosonCombo], with a corresponding new set of CDF analyses [@cdfDibosonCombo]. This analysis is relevant as a proving ground for the combined Tevatron search for a low-mass Higgs boson produced in association with a weak boson and decaying into a $\bb$ pair [@bib:higgs] since it shares the same selection criteria as well as analysis and combination techniques. Summary of Contributing Analyses {#analyses} ================================ This result is the combination of three CDF analyses and three DØ analyses [@dzWHl; @dzZHv; @dzZHl] outlined in Table \[tab:chans\].
These analyses utilize data corresponding to integrated luminosities ranging from  to  , collected with the CDF [@cdf] and  [@dzero] detectors at the Fermilab Tevatron Collider, and they are organized into multiple sub-channels for each different configuration of final state particles. To facilitate proper combination of signals, the analyses from a given experiment are constructed to use mutually exclusive event selections. In the  analyses , events containing an isolated electron or muon, and two or three jets are selected (exactly two jets in the case of the CDF analysis). The presence of a neutrino from the $W$ decay is inferred from a large imbalance of transverse momentum ($\met$). The  analyses  select events containing large $\met$ and two or three jets (exactly two jets in the case of the analysis). Finally, in the  analyses   events are required to contain two electrons or two muons and at least two jets. In the case of the CDF  analysis, events with two or three jets are analyzed separately. In the  and   analyses as well as the CDF  analysis, each lepton flavor of the $W/Z$ boson decay ($\ell=e,\mu$) is treated as an independent channel. In the case of the CDF  analysis lepton types are separated into four different channels based on their purity and location within the detector. To ensure that event samples used for the different analyses do not overlap, the  analyses reject events in which a second isolated electron or muon is identified, and the  analyses reject events in which any isolated electrons or muons are identified. To isolate the $\zbb$ decays, algorithms for identifying jets consistent with the decay of a heavy-flavor quark are applied to the jets in each event candidate ($b$-tagging). All of the analyses, as well as the CDF  and  analyses, use multivariate discriminants based on sets of kinematic variables sensitive to displaced decay vertices and tracks within jets with large transverse impact parameters relative to the hard-scatter vertices. The algorithm is a boosted decision tree discriminant which builds upon the previously utilized neural network $b$-tagging tool [@bib:btagnn], while the CDF algorithm [@bib:HOBIT] is based on a neural network discriminant. In both cases, a spectrum of increasingly stringent $b$-tagging operating points is constructed through the use of progressively higher requirements on the minimum output of the $b$-tagging discriminant. The analyses are separated into two groups: a double-tag (DT) group in which two of the jets are $b$-tagged with a loose tag requirement ( and ) or one loose and one tight tag requirement (); and an orthogonal single-tag (ST) group in which only one jet has a loose ( and ) or tight () $b$-tag. A typical per-jet $b$ efficiency and fake rate for the loose (tight) $b$-tag selection is about 80% (50%) and 10% (0.5%), respectively. The corresponding efficiency for jets from $c$-quarks is 45% (12%). The  and  analyses also use the output of the $b$-tagging algorithm as an additional input to the discriminants used in the final signal extraction. Candidate events in the CDF and  analyses are also separated into channels based on tight and loose tagging definitions. Events with two tight tags (TT), one tight and one loose tag (TL), two loose tags (LL), and a single tight tag (Tx) are used by both analyses. The CDF  analysis also considers events with a single loose tag (Lx). 
A typical per-jet efficiency and fake rate for the CDF loose (tight) neural network $b$-tag selection is about 70% (45%) and 7% (0.6%), respectively. The CDF   analysis utilizes a tight $b$-tagging algorithm [@bib:SecVtx] based on reconstruction of a displaced secondary vertex and a loose $b$-tagging algorithm [@bib:JetProb] that assigns a likelihood for the tracks within a jet to have originated from a displaced vertex. Based on the output of these algorithms events with two tight tags (SS) and those with one tight tag and one loose tag (SJ) are separated into independent analysis channels. The signal in all of the double-tag samples is expected to be primarily composed of events with $\zbb$ decays, with smaller contributions from $\zcc$ and $\wcs$ decays. In the single-tag samples, which are defined by less stringent requirements on the $b$-jet content of the event, the contributions from the three decay modes are comparable.

  Experiment   Channel                     Luminosity ()   Reference
  ------------ --------------------------- --------------- -----------
  CDF          ,  TT/TL/Tx/LL/Lx, 2 jets   9.5             [@cdfWHl]
  CDF          ,  SS/SJ, 2/3 jets          9.5             [@cdfZHv]
  CDF          ,  TT/TL/Tx/LL, 2/3 jets    9.5             [@cdfZHl]
  DØ           ,  ST/DT, 2/3 jets          7.5             [@dzWHl]
  DØ           ,  ST/DT, 2 jets            8.4             [@dzZHv]
  DØ           ,  ST/DT, $\geq$ 2 jets     7.5             [@dzZHl]

  : \[tab:chans\]List of analysis channels and their corresponding integrated luminosities. See Sect. \[analyses\] for details ($\ell=e, \mu$).

The primary background is from $W/Z$+jets, which is modeled with  [@alpgen] by both CDF and DØ. The backgrounds from multijet production are measured from control samples in the data. At DØ the other backgrounds are generated with  and  [@singletop], with  [@pythia] providing parton-showering and hadronization. At CDF most backgrounds from other SM processes are modeled using  Monte Carlo samples. Background rates are normalized either to next-to-leading order (NLO) or higher-order theory calculations or to data control samples. The DØ  and both experiments’  analyses normalize $W/Z$+jets backgrounds to data, whereas the CDF  and both experiments’   analyses normalize them to the predictions from . The fraction of the $W/Z$+jets in which the jets arise from heavy quarks ($b$ or $c$) is obtained from NLO calculations using  [@mcfm] at DØ while at CDF the prediction from  is corrected based on a data control region. The background from  events is normalized to the approximate NNLO cross section [@ttbar_xsec]. The $s$-channel and $t$-channel cross sections for the production of single-top quarks are from approximate NNLO+NNLL calculations [@schan_top_xsec] and approximate NNNLO+NLL calculations [@tchan_top_xsec], respectively. The background from $WW$ events is normalized to NLO calculations from  [@dibo]. All Monte Carlo samples are passed through detailed [geant]{}-based simulations [@geant] of the CDF and D0 detectors. The analyses use multivariate discriminants (MVA) based on decision trees as the final variables for extracting the $VZ$ signal from the backgrounds. These decision trees are trained to discriminate the $VZ$ signal from the backgrounds using the same set of discriminant variables as in the corresponding Higgs analyses. The CDF analyses follow the same strategy, using neural network-based discriminants instead for signal-to-background discrimination.
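As a rough illustration of the decision-tree-based discriminants described above, the short sketch below trains a gradient-boosted classifier on toy events; the feature names, distributions, and sample sizes are invented placeholders and do not correspond to the actual CDF or DØ inputs or tools.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def toy_events(n, is_signal):
    # Invented stand-ins for a dijet mass, a dijet pT, and a per-event b-tag score.
    mjj = rng.normal(91.0 if is_signal else 70.0, 12.0 if is_signal else 30.0, n)
    ptjj = rng.exponential(40.0, n) + (20.0 if is_signal else 0.0)
    btag = rng.beta(5, 2, n) if is_signal else rng.beta(2, 5, n)
    return np.column_stack([mjj, ptjj, btag])

X = np.vstack([toy_events(5000, True), toy_events(5000, False)])
y = np.concatenate([np.ones(5000), np.zeros(5000)])

bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
bdt.fit(X, y)
score = bdt.predict_proba(X)[:, 1]  # plays the role of the final discriminant distribution
```

In the real analyses the analogous score distributions, built per channel, are the inputs to the combined fit discussed in the next sections.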
Systematic Uncertainties ======================== Systematic uncertainties differ between experiments and analyses, and they affect the normalizations and the differential distributions (shapes) of the predicted signal and background templates in correlated ways. The combined result incorporates the sensitivity of predictions to values of nuisance parameters and takes into account correlations in these parameters both within each individual experiment and between experiments. The largest uncertainty contributions and their correlations between and within the two experiments are discussed here. Further details on the individual analyses are available in Refs. . ### Correlated Systematics between CDF and DØ The uncertainties on measurements of the integrated luminosities are 5.9% (CDF) and 6.1% (DØ). Of these values, 4% arises from the uncertainty on the inelastic $\pp$ scattering cross section, which is correlated between CDF and DØ. CDF and DØ also share the assumed values and uncertainties on the cross sections for $WW$ production and top-quark production processes ( and single top). In most analyses determination of the multijet (“QCD”) background involves data control samples, and the methods used differ between CDF and DØ, and even between analyses within the collaborations. Therefore, there is no assumed correlation in the predicted rates of this background between analysis channels. Likewise, calibrations of quantities such as the fake lepton rate, $b$-tag efficiencies, and mistag rates are performed by each collaboration using independent data samples and different methods, and are treated as uncorrelated. Similarly, different techniques are used to estimate background rates for $W/Z$+heavy flavor backgrounds and the associated uncertainties are taken as uncorrelated. ### Correlated Systematic Uncertainties for CDF The dominant systematic uncertainties for the CDF analyses are shown in Appendix Tables \[tab:cdfsystwh1\] and \[tab:cdfsystwh2\] for the  channels, in Table \[tab:cdfsystzhvv\] for the channels, and in Tables \[tab:cdfllbb1\] and \[tab:cdfllbb2\] for the  channels. Each source induces a correlated uncertainty across all of CDF’s channels’ signal and background contributions which are sensitive to that source. The largest uncertainties on signal arise from measured $b$-tagging efficiencies, jet energy scale, and other Monte Carlo modeling. Shape dependencies of templates on jet energy scale, $b$-tagging, and gluon radiation (“ISR” and “FSR”) are taken into account for some analyses (see tables). Uncertainties on background event rates vary significantly for the different processes. The backgrounds with the largest systematic rate uncertainties are in general quite small. Such uncertainties are constrained through fits to the nuisance parameters and do not affect the result significantly. Since normalizations for the $W/Z$+heavy flavor backgrounds are obtained from data in the and  analyses, the corresponding rate uncertainties associated with each analysis are treated as uncorrelated even within CDF. ### Correlated Systematic Uncertainties for DØ The  and  analyses carry an uncertainty on the integrated luminosity of 6.1% [@lumi], while the overall normalization of the  analysis is determined from the NNLO $Z/\gamma^*$ cross section [@dyxsec] in data events near the peak of $\zll$ decays. The uncertainty from the identification and energy measurement of jets is $\sim$7%. The uncertainty arising from the $b$-tagging rate ranges from 1 to 10%.
All analyses include uncertainties associated with lepton measurement and acceptances, which range from 1 to 9% depending on the final state. The largest contribution for all analyses is the theoretical uncertainty on the background cross sections at 7-20% depending on the analysis channel and specific background. The uncertainty on the expected multijet background is dominated by the statistics of the data sample from which it is estimated. Further details on the systematic uncertainties are given in Tables \[tab:d0systwh\]-\[tab:d0llbb1\]. All systematic uncertainties originating from a common source are taken to be 100% correlated, as detailed in Table \[tab:corr\]. Measurement of the $\bm{WZ+ZZ}$ Cross Section ============================================= The total $VZ$ cross section is determined from a maximum likelihood fit of the MVA distributions for the background and signal samples from the contributing analyses to the data. The cross section for the signal ($WZ+ZZ$) is a free parameter in the fit, but the ratio of the $WZ$ and $ZZ$ cross sections is fixed to the SM prediction. Events from $WW$ production are considered as a background. The fit is performed simultaneously on the distributions in all sub-channels. As a consistency check, we also determine the Bayesian posterior probability by integrating over the nuisance parameters. Here we report only the results from the maximum likelihood fit, but the results from the Bayesian method are consistent. ![image](dbFigs/tevDibosonCondensedFit_combo_Subtract.eps){height="0.2\textheight"} The combined fit for the total $VZ$ cross section yields . This measurement is consistent with the NLO SM prediction of $\sigma(WZ+ZZ)=\vznlo\pm\vznloe$ pb [@dibo], as well as with the individual measurements from [@dzDibosonCombo], $\sigma(WZ+ZZ)=5.0 \pm 1.6$ pb, and from CDF [@cdfDibosonCombo], $\sigma(WZ+ZZ)=4.1 ^{+1.4}_{-1.3}$ pb. Based on the measured central value for the $VZ$ cross section and its uncertainties, the observed significance is estimated to be  standard deviations (s.d.), while the expected significance is $\sim 4.8$ s.d. To visualize the sensitivity of the combined analysis, we calculate the expected signal over background ($s/b$) in each bin of the MVA distributions from the contributing analyses. Bins with similar $s/b$ are then combined to produce a single distribution, shown in Fig. \[fig:rfsub\]. The binning was chosen to keep the background fluctuations roughly of the same size as in the dijet mass distributions. Figure \[fig:mjj\] shows the distributions of the invariant mass of the dijet system, summed over all channels from CDF and DØ, after adjusting the signal and background predictions according to the results of the fit. Figure \[fig:mjj\_sub\] shows the background-subtracted dijet mass distributions after the fit, demonstrating the presence of a hadronic resonance in the data consistent with the SM expectation, both in shape and normalization.
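A minimal sketch of the kind of binned maximum-likelihood fit described above is given below; the bin contents are toy numbers rather than the actual MVA distributions, the WZ and ZZ templates are scaled by a single signal-strength parameter so that their ratio stays at the SM value, and all nuisance parameters are omitted.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy yields per discriminant bin (placeholders, not the real CDF/DO inputs).
bkg  = np.array([900.0, 400.0, 150.0, 60.0, 20.0])   # total expected background
wz   = np.array([  8.0,  10.0,  12.0,  9.0,  5.0])   # expected WZ at the SM cross section
zz   = np.array([  3.0,   4.0,   5.0,  4.0,  2.0])   # expected ZZ at the SM cross section
data = np.array([905.0, 410.0, 170.0, 75.0, 28.0])   # pseudo-data

def nll(mu):
    """Poisson negative log-likelihood for a common WZ+ZZ signal strength mu."""
    exp = bkg + mu * (wz + zz)
    return np.sum(exp - data * np.log(exp))

fit = minimize_scalar(nll, bounds=(0.0, 5.0), method="bounded")
mu_hat = fit.x

# Rough 68% interval from the points where the NLL rises by 0.5 above its minimum;
# the measured cross section would be mu_hat times the SM prediction.
scan = np.linspace(0.0, 5.0, 2001)
inside = scan[np.array([nll(m) for m in scan]) < nll(mu_hat) + 0.5]
print(f"mu_hat = {mu_hat:.2f}, approx 68% interval = [{inside.min():.2f}, {inside.max():.2f}]")
```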
![image](dbFigs/tevDBmjjST_combo.eps){width="2.4in"} [**(a)**]{}
![image](dbFigs/tevDBmjjDT_combo.eps){width="2.4in"} [**(b)**]{}
![image](dbFigs/tevDBmjjAT_combo.eps){width="2.4in"} [**(c)**]{}

![image](dbFigs/tevDBmjjST_combo_Subtract.eps){width="2.4in"} [**(a)**]{}
![image](dbFigs/tevDBmjjDT_combo_Subtract.eps){width="2.4in"} [**(b)**]{}
![image](dbFigs/tevDBmjjAT_combo_Subtract.eps){width="2.4in"} [**(c)**]{}

Summary ======= In summary, we combine analyses in the , , and  ($\ell=e,~\mu$) final states from the CDF and DØ experiments to observe, with a significance of  s.d., the production of $VZ$ ($V=W$ or $Z$) events. The analyzed samples correspond to  to   of $\pp$ collisions at $\sqrt{s}=1.96$ TeV. We measure the total cross section for $VZ$ production to be . This result demonstrates the ability of the Tevatron experiments to measure a SM production process with a cross section of the same order of magnitude as that expected for Higgs production from the same set of background-dominated final states containing two heavy-flavor jets used in our low mass Higgs searches. [99]{} J. M. Campbell and R. K. Ellis, Phys. Rev.  D [**60**]{}, 113006 (1999). We used [MCFM]{} v6.0. Cross sections are computed using a choice of scale $\mu_0^2=M_V^2+p_T^2(V)$, where $V$ is the vector boson, and the MSTW2008 PDF set. K. Hagiwara, S. Ishihara, R. Szalapski, and D. Zeppenfeld, Phys. Rev. D [**48**]{} (1993). J. C. Pati and A. Salam, Phys. Rev. D [**10**]{}, 275 (1974); [**11**]{} 703(E) (1975);\ G. Altarelli, B. Mele, and M. Ruiz-Altaba, Z. Phys. C [**45**]{}, 109 (1989); [**47**]{}, 676(E) (1990);\ L. Randall and R. Sundrum, Phys. Rev. Lett. [**83**]{}, 3370 (1999);\ H. Davoudiasl, J. L. Hewett, and T. G. Rizzo, Phys. Rev. D [**63**]{}, 075004 (2001);\ H. He [*et al.*]{}, Phys. Rev. D [**78**]{}, 031701 (2008). T. Aaltonen [*et al.*]{} (CDF Collaboration), Phys. Rev. Lett. [**104**]{}, 201801 (2010);\ V. M. Abazov [*et al.*]{} (D0 Collaboration), Phys. Rev. Lett. [**101**]{}, 171803 (2008);\ V. M. Abazov [*et al.*]{} (D0 Collaboration), Phys. Lett. B [**695**]{}, 67 (2011);\ V. M. Abazov [*et al.*]{} (D0 Collaboration), Phys. Rev. D [**84**]{}, 011103 (2011);\ V. M. Abazov [*et al.*]{} (D0 Collaboration), arXiv:1201.5652 \[hep-ex\]. T. Aaltonen [*et al.*]{} (CDF Collaboration), Phys. Rev. Lett. [**103**]{}, 091803 (2009);\ T. Aaltonen [*et al.*]{} (CDF Collaboration), Phys. Rev. Lett. [**104**]{}, 101801 (2010);\ V. M. Abazov [*et al.*]{} (D0 Collaboration), arXiv:1112.0536 \[hep-ex\]. V. M. Abazov [*et al.*]{} (D0 Collaboration), Note 6260-CONF (2011). T. Aaltonen [*et al.*]{} (CDF Collaboration), CDF Conference Note 10805 (2012). V. M. Abazov [*et al.*]{} (D0 Collaboration), Phys. Rev. Lett. [**104**]{}, 071801 (2010);\ V. M. Abazov [*et al.*]{} (D0 Collaboration), Phys. Rev. Lett.
[**105**]{}, 251801 (2010);\ V. M. Abazov [*et al.*]{} (D0 Collaboration), Phys. Lett. B [**698**]{}, 6 (2011);\ T. Aaltonen [*et al.*]{} (CDF Collaboration), Phys. Rev. Lett. [**105**]{}, 251802 (2010);\ T. Aaltonen [*et al.*]{} (CDF Collaboration), Phys. Rev. Lett. [**103**]{}, 101802 (2009);\ T. Aaltonen [*et al.*]{} (CDF Collaboration), Phys. Rev. Lett. [**104**]{}, 141801 (2010). T. Aaltonen [*et al.*]{} (CDF Collaboration), CDF Conference Note 10796 (2012). T. Aaltonen [*et al.*]{} (CDF Collaboration), CDF Conference Note 10798 (2012). T. Aaltonen [*et al.*]{} (CDF Collaboration), CDF Conference Note 10799 (2012). V. M. Abazov [*et al.*]{} (D0 Collaboration), Note 6220-CONF (2011). V. M. Abazov [*et al.*]{} (D0 Collaboration), Note 6223-CONF (2011). V. M. Abazov [*et al.*]{} (D0 Collaboration), Note 6256-CONF (2011). D. Acosta [*et al.*]{} (CDF Collaboration), Phys. Rev. D [**71**]{}, 032001 (2005). V. M. Abazov [*et al.*]{} (D0 Collaboration), Nucl. Instrum. Methods Phys. Res. A [**565**]{}, 463 (2006);\ M. Abolins [*et al.*]{}, Nucl. Instrum. Methods Phys. Res. A [**584**]{}, 75 (2008);\ R. Angstadt [*et al.*]{}, Nucl. Instrum. Methods Phys. Res. A [**622**]{}, 298 (2010). V. M. Abazov [*et al.*]{} (D0 Collaboration), Nucl. Instrum. Methods Phys. Res. A [**620**]{}, 490 (2010). T. Aaltonen [*et al.*]{} (CDF Collaboration), CDF Conference Note 10803 (2012). D. Acosta [*et al.*]{} (CDF Collaboration), Phys. Rev. D [**71**]{}, 052003 (2005). A. Abulencia [*et al.*]{} (CDF and CDF - Run II Collaborations), Phys. Rev. D [**74**]{}, 072006 (2006). M. L. Mangano, M. Moretti, F. Piccinini, R. Pittau and A. D. Polosa, J. High Energy Phys. [**07**]{}, 001 (2003). CompHEP, E. Boos [*et al.*]{}, Nucl. Instrum. Methods Phys. Res. A [**534**]{}, 250 (2004);\ E. Boos, V. Bunichev, L. Dudko, V. Savrin, and A. Sherstnev, Phys. Atom. Nucl. [**69**]{}, 1317 (2006). T. Sjöstrand, L. Lonnblad and S. Mrenna, arXiv:hep-ph/0108264. J. M. Campbell and R. K. Ellis, http://mcfm.fnal.gov/.\ J. M. Campbell, R. K. Ellis, Nucl. Phys. Proc. Suppl.  [**205-206**]{}, 10 (2010). U. Langenfeld, S. Moch and P. Uwer, Phys. Rev.  D [**80**]{}, 054009 (2009). N. Kidonakis, arXiv:1005.3330 \[hep-ph\] (2010);\ N. Kidonakis, Phys. Rev.  D [**81**]{}, 054028 (2010). N. Kidonakis, Phys. Rev.  D [**74**]{}, 114012 (2006). R. Brun, R. Hagelberg, M. Hansroul, and J. C. Lasalle, [*GEANT: Simulation Program for Particle Physics Experiments. User Guide and Reference Manual*]{}, CERN-DD-78-2-REV;\ S. Agostinelli [*et al.*]{}, Nucl. Instrum. Methods A [**506**]{}, 250 (2003). T. Andeen [*et al.*]{}, FERMILAB-TM-2365 (2007). R. Hamberg, W.L. van Neerven and W.B. Kilgore, Nucl Phys. B [**359**]{}, 343 (1991) \[Erratum-ibid. B [**644**]{}, 403 (2002)\]. T. Junk, Nucl. Instrum. Methods Phys. Res. A [**434**]{}, 435 (1999);\ A. L. Read, J. Phys. G [**28**]{}, 2693 (2002). W. Fisher, FERMILAB-TM-2386-E (2006). Additional Material =================== Source       --------------------- ---------- ---------- ---------- -- Luminosity $\times$ $\times$ Normalization Jet Energy Scale $\times$ $\times$ $\times$ Jet ID $\times$ $\times$ $\times$ Electron ID/Trigger $\times$ $\times$ $\times$ Muon ID/Trigger $\times$ $\times$ $\times$ $b$-Jet Tagging $\times$ $\times$ $\times$ Background $\sigma$ $\times$ $\times$ $\times$ Background Modeling Multijet Background Signal $\sigma$ $\times$ $\times$ $\times$ : \[tab:corr\]The correlation matrix for the D0 analysis channels. 
Uncertainties marked with an $\times$ are considered 100% correlated across the affected channels. Otherwise the uncertainties are not considered correlated, or do not apply to the specific channel. The systematic uncertainties on the background cross section ($\sigma$) and the normalization are each subdivided according to the different background processes in each analysis. [^1]: The Tevatron New-Phenomena and Higgs Working Group can be contacted at TEVNPHWG@fnal.gov. More information can be found at http://tevnphwg.fnal.gov/.
{ "pile_set_name": "ArXiv" }
--- abstract: 'We develop an improved sky background estimator which employs optimal filters for both spatial and pixel intensity distributions. It incorporates growth of masks around detected objects and a statistical estimate of the flux from undetected faint galaxies in the remaining sky pixels. We test this algorithm for underlying sky estimation and compare its performance with commonly used sky estimation codes on realistic simulations which include detected galaxies, faint undetected galaxies, and sky noise. We then test galaxy surface brightness recovery using GALFIT 3, a galaxy surface brightness profile fitting optimizer, yielding fits to Sérsic profiles. This enables robust sky background estimates accurate at the 4 parts-per-million level. This background sky estimator is more accurate and is less affected by surface brightness profiles of galaxies and the local image environment compared with other methods.' author: - Inchan Ji - Imran Hasan - 'Samuel J. Schmidt' - 'J. Anthony Tyson' title: Estimating Sky Level --- Introduction {#intro} ============ Detection and surface photometry of faint objects rely heavily on accurately estimating the underlying sky background flux. From scattered light originating from astronomical objects such as the Sun, the Moon, the Milky way, stars, and galaxies, to light pollution from the ground, there are many sources that contribute to the night sky surface brightness [@Roach:1973aa]. Therefore, all ground-based telescopes encounter the challenge of estimating and subtracting the night sky surface brightness ($\mu_{\rm sky} \simeq 21~{\rm mag~arcsec^{-2}}$ at typical “dark” locations) in order to access the far smaller flux levels characteristic of faint galaxy halos. Not only ground-based telescopes, but also space-based telescopes must tackle the issue of sky subtraction. The sky surface brightness measured by the $Hubble~ Space~Telescope~(HST)$ is 1-2 mag arcsec$^{-2}$ fainter than the sky surface brightness measured by ground-based telescopes [@Trujillo:2016aa], but contaminating flux sources remain: above the atmosphere zodiacal light, airglow from the Solar wind, and excitation of residual propellant gas from spacecraft all contribute to the sky background. The proper sky level can be different for detection of objects than it is for the optimal measurement of photometry. This is due to the fact that faint, unresolved and undetected objects underlie the object for which photometry is desired. This is true for stars as well as galaxies. Unbiased sky estimation has been attempted widely in the literature: prominent examples include FOCAS [@Tyson:1979aa], DAOPHOT [@Stetson:1987aa], SExtractor [@Bertin:1996aa], SDSS Photo [@Lupton:2002aa], GALFIT [@Peng:2010aa], PyMorph [@Vikram:2010aa; @Bernardi:2017aa; @Fischer:2017aa], and the LSST Data Management Stack [LSST Stack hereafter, @Bosch:2018aa], @Huang:2018aa, and @Jenness:2015aa. The problem of using biased sky background around detected objects is typically encountered on scales that are large compared with the point spread function (PSF), where the pixel counts from the object become indistinguishable from the sky pixel counts. Traditionally, detection of low surface brightness galaxies has relied on background sky estimation precision of one part in 10,000. Extreme dwarf galaxies in the Local Group have mean surface brightnesses as faint as $\sim32$ mag arcsec$^{-2}$ [e.g., @McConnachie:2009aa; @Homma:2016aa]. 
Thus, accurate surface brightness measurements would require a sky unbiased at a level of $\sim34$ mag arcsec$^{-2}$, or about 6 parts-per-million (ppm) of the typical $R$-band sky level. Current and upcoming surveys such as the Dark Energy Survey [DES; @Flaugher:2005aa], the Large Synoptic Survey Telescope [LSST; @LSSTSciBook:2009], and the Hyper Suprime-Cam (HSC) survey [@Aihara:2018aa] are likely to reveal new aspects of galaxies as low surface brightness (LSB) objects [@Ivezic:2008aa; @Robertson:2017aa]. The discovery space is large: LSB features can exist on scales of arcseconds to many arcminutes, spanning the majority of faint galaxies at high redshift to more nearby LSB galaxies. Tidal tails have already been detected at surface brightness levels of $\sim30$ mag arcsec$^{-2}$ [e.g., @Martinez-Delgado:2010aa; @van-Dokkum:2014aa] and surely exist at lower levels. A relatively unexplored area is the ultra-low surface brightness morphology and tails over a wide range of angular scales at levels of 31-32 mag arcsec$^{-2}$. Discoveries are likely at even fainter levels of surface brightness still, which may become accessible in upcoming deep field observations such as the LSST Deep Drilling Fields (hereafter LSST DDFs), which are expected to achieve a coadded 5$\sigma$ depth of $\sim$29 magnitude in the $r$-band filter [@LSSTSciBook:2009]. Proper sky background estimation is an important tool for studying galaxy formation and evolution, and even more important when estimating galaxy types based on surface brightness profiles. However, it has been challenging to calculate the correct value of sky background. Many automated photometry programs estimate a biased sky background [@Bosch:2018aa; @Huang:2018aa; @Jenness:2015aa]. Previous techniques typically mask detected objects and use the remaining pixel values to estimate the background level; however, sky estimates are generally biased high because pixels in the outskirts of detected galaxies survive the masking processes, and undetected low signal-to-noise sources contaminate the background. Accurate sky estimation is thus a prerequisite for photometric studies of faint objects (e.g., LSB galaxies or low-level features around galaxies). As mentioned above, at these low levels of surface brightness a sufficiently accurate model of camera scattered light must be used for each exposure. Indeed on a wide range of angular scales the sky surface brightness will be dominated by scattered light from bright stars. Such modeling is beyond the scope of this paper, instead we focus on the challenge of sky bias introduced by the detected object’s faint outer halo and by the high density of undetected galaxies. Thus, suppressing sky bias from these two known effects is a necessary but not sufficient condition for ultra-low surface brightness photometry [@LSSTSciBook:2009]. Sky background estimation directly impacts astronomical object detection. Detection of both stars and galaxies requires accurate characterization of the sky background. As an example, photometry of faint stars whose surface brightnesses are very near that of the sky is described in @Stetson:1987aa. Typically, detection algorithms begin by marking a collection of CCD pixels as belonging to an astrophysical source if they are above some threshold, usually after convolution with a spatial filter optimized for some angular scale. 
Calculating the flux due to this source (and crucially, this source alone) requires that we quantify the flux those CCD pixels would have in the absence of contaminants, e.g. the sky background. In virtually all sky surveys the flux from unresolved, undetected faint galaxies forms a component of this background sky. However, their number and luminosity distributions are known statistically from existing deep surveys. Compilations of deep imaging data provide the number of galaxies as a function of magnitude up to $m \simeq 30$ for various astronomical filters. For example, @Metcalfe:2001aa showed that the galaxy count slope in $R$-band is $d({\rm log}N)/dm_{\rm R}$ $\sim$ 0.37 for 20 $\lesssim m_{\rm R}\lesssim$ 26 and becomes shallower for 26 $\lesssim m_{\rm R}\lesssim$ 30. This complete galaxy number count was achieved by compiling a number of observations from both ground-based and space-based telescopes. Data obtained by a single survey rarely provide both a large field of view and very deep imaging. In deep imaging covering a sufficiently large area (to avoid sample variance) one can statistically expect the same galaxy number counts from any observation at that wavelength. We may thus adopt the well measured mean number of faint undetected galaxies which are responsible for biased sky estimates if the imaging is sufficiently deep. The idea of correct sky estimation over all angular scales is actually an ill-posed problem. The proper background sky for barely resolved galaxies at high redshift is, in principle, quite different from the correct sky level for large angular scale LSB features. Indeed, the flux from barely resolved galaxies sits on top of the fainter, larger, angular scale flux associated with arcminute scale LSB extragalactic features, which in turn sits on top of the starlight reflected by Galactic cirrus, the zodiacal light, the night sky surface brightness caused by atmospheric emission, and scattered light from bright objects in the camera and the atmosphere. Thus there could be a separate sky estimate appropriate for each of the different morphological classes of LSB objects. To make the problem tractable, a multi-component sky model must be built. The sky model, in principle, can be built using knowledge of the camera and telescope system, locations of bright objects, observational data, and statistical summaries of faint galaxy counts from ultra deep images like the HST. The first step is detecting all objects above a position-variable local sky estimate and masking them. The remaining pixels still contain flux from both undetected galaxies and the faint outer isophotes of the masked detected galaxies which, if left uncorrected, gives an over-estimate of the sky level around compact objects. Because of this, fitting the remaining “sky” pixels with a Gaussian profile, as if it were pure Poisson noise, is incorrect; the distribution of remaining pixels would follow a Gaussian only if the pixels contained nothing but the true sky. However, the real distribution has a tail of positive pixels due to the two contributors mentioned above. While a 3$\sigma$ clip and/or one-sided Gaussian fitting improves the estimate, these approaches are arbitrary and lead to a small positive sky background bias [@Robertson:2017aa]. Using the known statistical faint galaxy counts beyond the detection limit, together with growing masks around detected objects by a defined amount scaled by total flux, helps significantly in making these corrections.
Indeed, both biases must be removed if the sky is to be correctly estimated at the sub-percent level. This entire process is recursive on every angular scale where there are important sky components. In this paper we focus on the more tractable task of estimating the sky level in the generic case of the extragalactic sky superposed on a slowly varying foreground, thus focusing on the $\sim$few arcsecond scales associated with typical faint galaxies. To explore the effect of modified masking and accounting for undetected galaxies in a controlled way, we develop a set of simulated images with known inputs and properties. We begin by describing our image simulations in Section \[GALSIM\]. In Section \[Object Detection\] we outline the methodology for creating detection and measurement catalogs with SExtractor, where we detail the software specific settings used in this analysis. Section \[skyestimators\] continues with a discussion of two widely used sky estimators and the techniques they employ. In Sections \[newestimationtechnique\] and \[stack+inchan\], we present our new sky estimation technique which deals with biases current sky estimators suffer. In Section \[sersicest\] we examine the importance of accurate sky background estimation in the fitting of galaxy surface brightness profiles. We close this work in Section \[discussion\] with a discussion of the difficulties of correctly estimating sky backgrounds, the effectiveness of our algorithm to overcome them, and prospects for future directions.

| Component | Parameter | Simulation | Range / Value |
|---|---|---|---|
| PSF (Moffat) | FWHM \[arcsec\] | All | 0.8 $\le \alpha \le$ 1 |
| | slope, $\beta$ | All | 2.9 $\le \beta \le$ 3.2 |
| | ellipticity, $e$ | All | 0 $\le e \le$ 0.15 |
| | position angle, $\theta$ \[degree\] | All | $0^\circ\le \theta \le$ 180$^\circ$ |
| Sky | sky level, $\mu_{\rm sky}$ \[ADU pixel$^{-1}$\] | All | $\mu_{\rm sky}=$ 3240 |
| | sky noise, $\sigma_{\rm sky}$ \[ADU pixel$^{-1}$\] | All | $\sigma_{\rm sky}=$ 12.73 |
| | sky level, $\mu_{\rm sky}$ \[ADU pixel$^{-1}$\] | $^{\rm a}$ | $\mu_{\rm sky}=$ 1000 |
| | sky noise, $\sigma_{\rm sky}$ \[ADU pixel$^{-1}$\] | $^{\rm a}$ | $\sigma_{\rm sky}=$ 0.1 |
| Galaxies | magnitude, $m_{\rm R}$ | Uniform dist. | 19 $\le {m_{\rm R}}\le$ 25 |
| | | Random dist. | 19 $\le {m_{\rm R}}\le$ 29 |
| | half-light radius, $R_{e}$ | Uniform dist. | 0.3 $\le R_{e} \le$ 2.5 |
| | | Random dist. | 0.1 $\le R_{e} \le$ 2.5 |
| | Sérsic index, $n$ | All | 0.5 $\le n \le$ 5 |
| | ellipticity, $e$ | All | 0 $\le e \le$ 0.6 (for $n$ $\le$ 2.5) |
| | | | 0 $\le e \le$ 0.3 (for $n$ $\ge$ 2.5) |
| | position angle, $\theta$ \[degree\] | All | $0^\circ\le \theta \le$ 180$^\circ$ |

$^{\rm a}$ LSST DDF-depth simulation (see Section \[GALSIM\]). \[tbl:galsim\]

GALSIM: Galaxy Image Simulator {#GALSIM}
==============================

We use GALSIM [@Rowe:2015aa] to generate galaxy images and sky background. Galaxy and PSF parameters are chosen to be similar to those observed in the $R$-band imaging data of the Deep Lens Survey [DLS; @Wittman:2002aa]. In all, three images are simulated in which we vary the sky level, sky noise, galaxy placement, and magnitude distribution. Each simulation is 8000 $\times$ 8000 pixels in area. In Table \[tbl:galsim\] we list the parameters used in our simulations.

![image](fig1.eps){width="90.00000%"}

In order to isolate the effect of detection masks from that of undetected faint galaxies, in our first simulation (hereafter referred to as the “uniform distribution simulation”) 10,000 galaxies are evenly spaced in the image in a grid pattern, and are not surrounded by [*any*]{} other galaxies inside the sky analysis areas. This ensures the galaxies are well separated and do not contaminate their neighbors with stray flux.
As discussed earlier, extended features in detected galaxies often ‘bleed’ beyond the detection mask, contaminating the sky background estimate. As we will show in further detail in Section \[SExSky\], galaxies in the uniform distribution simulation are all detected, meaning there will be no sky background contribution from unresolved or undetected sources. Because all simulated galaxies are detected and well localized, we can directly test the impact of growing detection masks, independent of any effects from undetected sources present in more realistic images. The number count as a function of magnitude in $R$ band follows a power law with a shallow slope of $d({\rm log}N)/dm_{\rm R}$ = 0.1 over the magnitude range of 19 $<m_{\rm R}<$ 25 for galaxies in this “uniform distribution” simulation. We choose a shallower slope for the number counts to ensure that our simulated population includes a balanced mix of bright and faint galaxies in our sample of 10,000. A slope of $d({\rm log}N)/dm_{\rm R} =$ 0.1 yields a sample that contains adequate bright galaxies to test our algorithms while minimizing problems due to flux overlaps that may occur from two neighboring bright galaxies on our simulation grid. The second simulation (hereafter referred to as the “random distribution simulation”) contains 793,116 galaxies which are randomly placed in position over the image. As a consequence of their more realistic placement, galaxies may contribute flux to neighboring profiles. Importantly, the random distribution simulation includes a much larger number of galaxies, with an $n(m)$ distribution that extends to a much fainter magnitude limit of $m_{\rm R}$ = 29. Magnitudes of galaxies in the random distribution simulation follow a power law with a slope of $d({\rm log}N)/dm_{\rm R} =$ 0.4. The fainter galaxies with $m_{\rm R}>$ 25.5 are mostly not detected by the detection algorithms tested in this paper. We call the flux from these undetected galaxies the extragalactic background light (EBL). The EBL will contaminate both the estimate of the sky background, and the flux of nearby detected galaxies. These undetected galaxies will be important in Section \[newestimationtechnique\]. Taken together, the two simulations enable us to test the effect of mask growth in isolation in the uniform distribution simulation, and the joint effects of mask growth and unresolved galaxies in the random distribution simulation. In Figure \[GALSIM:IMG\] we show a small portion of each of the first two simulations generated by GALSIM. The simulated images are intended to reflect the observing conditions in the DLS, with 20 co-added 900 s $R$-band exposures on a 4 m telescope. We constructed 20 simulated images with different random seeds and co-added the images for each simulation. In doing so, we increase the signal-to-noise ratio and limiting magnitude. The simulated sky in both the uniform and random distribution simulations properly emulates conditions of the DLS. The sky surface brightness is $\mu_{\rm sky}~=~21~{\rm mag~arcsec^{-2}}$ (3240 ADU pixel$^{-1}$), which corresponds to a single exposure time of 900 s in DLS $R$ band with Poisson noise. We also assume that the sky surface brightness is spatially flat in the uniform and random distribution simulations. The root-mean-square value of sky background in the co-added image is reduced by a factor of the square root of the number of co-added images and becomes $\sigma_{\rm sky}$ = 12.73 ADU pixel$^{-1}$.
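The number-count slopes and the co-added sky noise quoted above can be reproduced with a few lines of Python. The snippet below is an illustrative sketch rather than the authors' GALSIM driver: it draws magnitudes from a $d({\rm log}N)/dm_{\rm R} = \gamma$ power law by inverse-CDF sampling and checks that 20 co-added 900 s exposures reduce the Poisson noise of a 3240 ADU pixel$^{-1}$ sky to 12.73 ADU pixel$^{-1}$ (the random seed is arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)  # seed chosen arbitrarily for illustration

def sample_magnitudes(n, slope, m_min, m_max):
    """Draw n magnitudes with dN/dm proportional to 10**(slope*m), via inverse CDF."""
    u = rng.uniform(size=n)
    a, b = 10.0 ** (slope * m_min), 10.0 ** (slope * m_max)
    return np.log10(a + u * (b - a)) / slope

m_uniform = sample_magnitudes(10_000, 0.1, 19.0, 25.0)   # "uniform dist." simulation
m_random = sample_magnitudes(793_116, 0.4, 19.0, 29.0)   # "random dist." simulation

# Sky noise of the 20-exposure co-add: Poisson noise of the 3240 ADU sky,
# reduced by the square root of the number of co-added exposures.
mu_sky, n_exp = 3240.0, 20
sigma_coadd = np.sqrt(mu_sky) / np.sqrt(n_exp)
print(f"sigma_sky (co-add) = {sigma_coadd:.2f} ADU/pixel")   # ~12.73
```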
The range of signal-to-noise ratio is 5 $\lesssim$ S/N $\lesssim$ 300 for 19 $\leq m_{\rm R} \leq$ 26 at DLS depth. Each model galaxy follows a single Sérsic profile with index ranging from 0.5 to 5. The PSF profiles in the DLS are broader than a Gaussian, and are well described by a Moffat profile [@Moffat:1969aa] which is given by: $${\rm PSF}(R) = \frac{\beta-1}{\pi \alpha^2} \bigg[ 1 + \bigg(\frac{R}{\alpha} \bigg)^2 \bigg]^{-\beta},$$ where $\alpha$ is the scale length, and $\beta$ is the slope of the profile. To match the DLS observations, we use Moffat profiles of $0.8 <\alpha [{\rm arcsec}]< 1$ and $2.9 < \beta < 3.2$. Under these parameter ranges, Moffat PSFs are randomly distributed in the entire image for both simulations. To investigate effects of noise, it is informative to simulate deeper data with higher signal to noise for the target galaxies. To this end, for our third image simulation, we regenerate the uniform distribution simulation, this time to LSST DDF depth, in addition to the simulations to DLS depth. Anticipated typical observing conditions for the LSST DDF are as follows: 10,000 co-added 15 s R band exposures on a 6.7 m (effective aperture) telescope. The sky surface brightness of $\mu_{\rm sky}~=~21~{\rm mag~arcsec^{-2}}$ is estimated as 1000 ADU pixel$^{-1}$. The root-mean-square value of sky background in the co-added image is $\sigma_{\rm sky}$ = 0.1 ADU pixel$^{-1}$. Simulating 10,000 simulations with varying random seeds requires a substantial amount of computing time. To impart realistic sky noise fluctuations in our LSST DDF depth image, we take the following steps instead: we re-normalize pixel values of the uniform distribution simulation without sky noise in DLS depth to meet the observing condition for the LSST DDF. We subsequently add the Poisson noise of $\sigma_{\rm sky}$ = 0.1 ADU pixel$^{-1}$ to each pixel. Object Detection {#Object Detection} ================ The correct sky level to be used in detection and photometry can differ. The proper sky level for object detection is the sky underlying all objects, bright and faint. This is true even though the faintest objects are generally not detected and form an unresolved extragalactic background. Any detection algorithm should use the true underlying sky after EBL subtraction. However, current algorithms are not sensitive at the levels discussed above. Developing a new detection algorithm which takes full advantage of the high precision sky estimates is beyond the scope of this paper. For the purposes of the inter-comparisons in this work we use SExtractor. SExtractor [@Bertin:1996aa] is an automated catalog builder used to identify and measure various properties of astronomical objects on a CCD image. We run SExtractor using a detection threshold of `DETECT_THRESH` = 0.5$\sigma$ where $\sigma$ is the root-mean-square sky noise in the entire image, a minimum detection area of 6 pixels, and the number of deblending sub-thresholds of `DEBLEND_NTHRESH` = 10 with a deblending contrast of `DEBLEND_MINCONT` = 0.0001. The images are filtered through a 5 pixel $\times$ 5 pixel Gaussian convolution kernel with FWHM = 3 pixels. A mesh of 80 pixels $\times$ 80 pixels is used to estimate sky background for all identified galaxies by SExtractor. A `PHOT_FLUXFRAC` = 0.5 is used to estimate the half-light radius for each galaxy. The numbers of cataloged galaxies are 10,000 and 111,409 for uniform and random distributions, respectively. 
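For reference, the detection settings listed above map onto a SExtractor call roughly as sketched below. This is a minimal illustration, assuming the classic `sex` executable with command-line parameter overrides and the Gaussian convolution kernel shipped with SExtractor; the file names (`simulation.fits`, `default.sex`, `catalog.cat`, `seg.fits`) are placeholders, not from the paper.

```python
import subprocess

config_overrides = {
    "DETECT_THRESH":   "0.5",                 # in units of the background RMS
    "DETECT_MINAREA":  "6",                   # minimum detection area [pixels]
    "DEBLEND_NTHRESH": "10",
    "DEBLEND_MINCONT": "0.0001",
    "FILTER":          "Y",
    "FILTER_NAME":     "gauss_3.0_5x5.conv",  # 5x5 Gaussian kernel, FWHM = 3 pixels
    "BACK_SIZE":       "80",                  # 80x80 pixel background mesh
    "PHOT_FLUXFRAC":   "0.5",                 # FLUX_RADIUS = half-light radius
    "CHECKIMAGE_TYPE": "SEGMENTATION",        # segmentation map reused for masking
    "CHECKIMAGE_NAME": "seg.fits",
    "CATALOG_NAME":    "catalog.cat",
}

cmd = ["sex", "simulation.fits", "-c", "default.sex"]
for key, val in config_overrides.items():
    cmd += [f"-{key}", val]
subprocess.run(cmd, check=True)
```

The segmentation check-image is requested here because the SExtractor segmentation maps are reused later when masking pixels for the global background model.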
As in all current object detection algorithms, SExtractor fails to detect faint galaxies of low signal-to-noise. Background Estimation with Three Sky Estimators {#skyestimators} =============================================== In this section, we compare sky background values estimated by various methods on our simulated images. Because we have *a priori* knowledge of the true underlying sky brightness that was input into our simulations, we can directly assess the accuracy of these estimators by comparing their results with the truth. To estimate sky background around each of our galaxies, we first run SExtractor to construct a detection catalog of galaxies. By running SExtractor, we obtain SExtractor’s sky background estimate at the position of each galaxy. The SExtractor catalog is used as input to GALFIT, which provides a second catalog of background estimations. An overview of the background estimation procedures and our parameters used in SExtractor and GALFIT follows in the next two subsections. Motivated by the strengths and weaknesses observed in these methods, we develop a new scheme for estimating the local sky background around galaxies. Finally, we couple our local sky estimation technique with a global polynomial background model in each of the simulations to obtain accurate sky background estimates with high precision. SExtractor sky estimation {#SExSky} ------------------------- SExtractor provides a $local$ sky background estimate. In SExtractor this quantity is estimated by performing an iterative 3$\sigma$ clip of the pixel values within a user-specified mesh grid that covers the image. We use a mesh of 80 pixels $\times$ 80 pixels to estimate sky background. Sextractor considers the cell to be “non-crowded" if $\sigma$ drops by less than 0.2$\sigma$ per clipping iteration, and crowded otherwise. Based on these two cases the sky background is given by: $${\rm sky} = \begin{cases} {\rm Mean} & \sigma_i - \sigma_f \le 0.2 \sigma_i \\ 2.5 \times {\rm Median} - 1.5 \times {\rm Mean} &{\rm otherwise} \end{cases}$$ where $\sigma_i$ and $\sigma_f$ are the standard deviations of the pixel values in a mesh before and after the 3$\sigma$ clip, respectively. GALFIT sky estimation {#GALFITSky} --------------------- GALFIT [version 3; @Peng:2010aa] is a two-dimensional model fitter designed to model multiple categories of astronomical objects. In the course of measuring a model galaxy GALFIT estimates sky background, which is subtracted in order to find the best-fit parameters for a functional model on a CCD image. GALFIT estimates the sky background at the object’s centroid as follows: $$\begin{aligned} {\rm sky} (x_0,y_0) = &~{\rm sky} (x_c,y_c) + (x_0-x_c) \frac{d{\rm sky}}{dx}\\ &+ (y_0-y_c)\frac{d{\rm sky}}{dy}, \end{aligned}$$ where $(x_0, y_0)$ is the centroid of the object in pixel coordinates, $(x_c, y_c)$ is the center of an image cutout and $d{\rm sky}/{dx}$ and $d{\rm sky}/{dy}$ are gradients of sky background in x and y directions, respectively. Care must be taken in choosing the background estimation parameters for GALFIT, particularly when choosing an image cutout size. If the size is too small, the image cutout does not include enough sky pixels to make an accurate estimate. However, too large of an image cutout not only requires expensive computational resources, but also results in inaccurate estimation of the sky background due to the increasing number of undetected galaxies [@Barden:2012aa; @Vikram:2010aa]. 
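For concreteness, the clipped mean/mode rule quoted in the SExtractor subsection above can be written compactly. The following is an illustrative NumPy re-implementation for a single background-mesh cell, not SExtractor's actual source code.

```python
import numpy as np

def sextractor_cell_background(pixels, clip=3.0, max_iter=10):
    """Background of one mesh cell following the SExtractor-style rule above.

    `pixels` is a 1-D array of cell values (e.g. an 80x80 mesh, flattened).
    """
    vals = np.asarray(pixels, dtype=float)
    sigma_initial = vals.std()
    for _ in range(max_iter):
        mean, sigma = vals.mean(), vals.std()
        if sigma == 0.0:
            break
        keep = np.abs(vals - mean) < clip * sigma
        if keep.all():
            break
        vals = vals[keep]
    sigma_final = vals.std()
    mean, median = vals.mean(), np.median(vals)
    if sigma_initial - sigma_final <= 0.2 * sigma_initial:    # "non-crowded" cell
        return mean
    return 2.5 * median - 1.5 * mean                          # "crowded" cell
```

GALFIT, by contrast, fits the plane model given above over an image cutout, whose size must be chosen as described next.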
In our study, we adaptively choose the width $w$, and height $h$, of the image cutout centered on each galaxy’s position. It is crucial that the cutout does not truncate the faint tails of galaxies. To ensure this is the case, we adopt a scheme to conservatively estimate the radius at which the galaxy profile reaches a surface brightness of $\mu$ = 30 mag arcsec$^{-2}$, $R_{\rm 30}$. We assume all galaxies are described by an $n$ = 4 Sérsic profile and use the half-light radius as measured by SExtractor as the profile’s half-light radius. After the mock profile is constructed, $R_{\rm 30}$ can be readily calculated for each galaxy. The width $w$, and height $h$, of the image cutout are then defined as: $$\begin{aligned} w &= f_{\rm img} R_{\rm 30} (|\cos\theta| + (1-e)|\sin\theta|),\\ h &= f_{\rm img} R_{\rm 30} (|\sin\theta| + (1-e)|\cos\theta|),\end{aligned}$$ where $\theta$ is `THETA_IMAGE`, $e$ is `ELLIPTICITY`, and $f_{\rm img}$ is a free parameter to set the optimal size of an image cutout. We empirically determine that a value of $f_{\rm img} = 2$ results in a sufficient number of background sky pixels to determine an accurate estimate of the sky background. We use this same image cutout for galaxy surface brightness profile fitting in Section \[sersicest\]. New Sky Estimation Technique {#newestimationtechnique} ---------------------------- Sky estimation methods which use a sample of local image CCD pixels to estimate the background level at the position of a galaxy can suffer a high bias from two factors: flux from the outer tails of galaxy profiles which extend beyond their respective masks, and flux from undetected (and hence completely unmasked) faint galaxies that reside in the image pixels used to estimate the sky. To deal with these two effects, we develop a new sky estimation technique. This technique employs a two-filter estimator, one spatial and one statistical, to minimize the contribution from pixels coming from the unmasked outskirts of galaxy profiles and from unidentified objects. Our method consists of three high level steps and ultimately yields an estimation of the sky level at the positions of detected galaxies. A flow chart describing the overall process of our sky estimation is shown in Figure \[Flowchart\]. In the spatial filtering step, we create an updated object mask for each cataloged source. These new masks more effectively exclude flux from the extended tails of galaxy profiles, which previously contaminated the pixels used to estimate the sky background. The procedure for creating new masks was calibrated on the uniform distribution simulation, where galaxies are laid down on a regular grid, with no neighboring galaxies inside the cutout area. The masks are generated by first creating a mock one-dimensional Sérsic profile for each source. Pixels are then masked out if their positions satisfy: $$\begin{aligned} &{\rm C_{xx}}(x-x_c)^2 + {\rm C_{xy}}(x-x_c)(y-y_c) \\ &+ {\rm C_{yy}}(y-y_c)^2< (R_{30}/a_{\rm rms})^2 \end{aligned}$$ where $a_{\rm rms}$ is the 2nd moment along the semimajor axis (`A_IMAGE`), (${\rm C_{xx}}$, ${\rm C_{xy}}$, ${\rm C_{yy}}$) is the object ellipse parameter (`CXX_IMAGE`, `CXY_IMAGE`, and `CYY_IMAGE`, respectively) measured by SExtractor, ($x_c$, $y_c$) is the centroid of a galaxy, and $R_{\rm 30}$ is the cutout radius (see Section \[GALFITSky\]). We find that using a fainter surface brightness for our masks does not significantly change the remaining pixel statistics. 
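The geometric part of this step (computing $R_{\rm 30}$, the cutout size, and the elliptical mask) might look like the sketch below. It assumes a circular $n=4$ Sérsic normalization, a DLS-like pixel scale of 0.257 arcsec pixel$^{-1}$, and SExtractor's `MAG_AUTO`, `FLUX_RADIUS`, `A_IMAGE`, `CXX_IMAGE`, `CXY_IMAGE`, `CYY_IMAGE`, `ELLIPTICITY`, and `THETA_IMAGE` outputs as inputs; the exact normalization adopted by the authors is not specified in the text, so treat this as an approximation.

```python
import numpy as np
from scipy.special import gamma, gammaincinv

PIXEL_SCALE = 0.257   # arcsec per pixel; assumed DLS-like value for illustration
F_IMG = 2.0           # cutout scale factor adopted in the text

def sersic_bn(n):
    # b_n defined so that half of the total light falls inside R_e
    return gammaincinv(2.0 * n, 0.5)

def r30_pixels(mag_tot, re_pix, n=4.0, mu_limit=30.0):
    """Radius [pixels] at which a circular Sersic profile with total magnitude
    `mag_tot` (MAG_AUTO) and half-light radius `re_pix` (FLUX_RADIUS, pixels)
    reaches `mu_limit` mag arcsec^-2.  The zeropoint cancels in this relation."""
    bn = sersic_bn(n)
    re_arcsec = re_pix * PIXEL_SCALE
    mu_e = mag_tot + 2.5 * np.log10(
        2.0 * np.pi * n * re_arcsec**2 * np.exp(bn) * gamma(2.0 * n) / bn**(2.0 * n))
    r30_arcsec = re_arcsec * (1.0 + (mu_limit - mu_e) * np.log(10.0) / (2.5 * bn))**n
    return r30_arcsec / PIXEL_SCALE

def cutout_size(r30, theta_deg, ellipticity):
    """Width and height of the image cutout, following the equations above."""
    t = np.deg2rad(theta_deg)
    w = F_IMG * r30 * (abs(np.cos(t)) + (1.0 - ellipticity) * abs(np.sin(t)))
    h = F_IMG * r30 * (abs(np.sin(t)) + (1.0 - ellipticity) * abs(np.cos(t)))
    return w, h

def elliptical_mask(shape, xc, yc, cxx, cxy, cyy, r30, a_rms):
    """Boolean mask (True = masked) from the SExtractor ellipse parameters."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    quad = cxx * (x - xc)**2 + cxy * (x - xc) * (y - yc) + cyy * (y - yc)**2
    return quad < (r30 / a_rms)**2
```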
Additionally, the measured SExtractor ellipse parameters are used to assign orientation angles and ellipticities to the masks. We stress that these masks are not meant to perfectly model the profiles of galaxies, but rather effectively mask out their flux. Using an $n$ = 4 Sérsic parameter is sufficient to mask out galaxies which are best described by $0.5\leq\,n\,\leq\,4$ Sérsic profiles. ![ Galaxy number counts as a function of magnitude in the random distribution simulation: input number counts (black), power law with a slope of $d({\rm log}N)/dm_{\rm R}$ = 0.4 (blue), measurement by SExtractor (red), and residual counts (sea-green). The residual counts with $m_{\rm R}~>$ 25.5 are used to model extragalactic background light of unmasked pixels. We assume that the flux from these unresolved galaxies is uniformly distributed over the sky, which enables us to determine a background flux to subtract in our improved sky estimate. []{data-label="GalaxyCount"}](fig3.eps){width="47.00000%"} We then proceed to the statistical filtering step to estimate and subtract the flux contribution from undetected faint galaxies which make up the EBL. To model the EBL the number counts of undetected galaxies in the entire field and their flux must be considered. In the uniform distribution simulation, all galaxies are detected by SExtractor, so accounting for undetected EBL galaxies is unnecessary. In the random distribution simulation, however, SExtractor fails to detect and catalog some galaxies at a true magnitude fainter than $m_{\rm R}\,\sim$ 25.5, as the signal-to-noise for these galaxies approaches the user-set detection threshold. In Figure \[GalaxyCount\] we show the galaxy number counts for the random distribution simulation. The blue curve shows the histogram of true magnitudes for the objects in the random distribution simulation, which is an excellent match to the the input number-counts slope that was used to generate the mock galaxies, shown in black. The red histogram shows the actual number of detected objects, and the filled sea-green histogram indicates the number of objects not detected by SExtractor, the difference of the red and blue histograms. We exclude a small number of galaxies with observed magnitude $m_{\rm R} <$ 25.5 when modeling the EBL; residuals in the observed number counts compared to the input number counts for $m_{\rm R} <$ 25.5 galaxies are due to small measurement errors and the effects of blending. The total number of undetected galaxies with $m_{\rm R} >$ 25.5, which make up the EBL, is 679,753. Since the number of EBL galaxies is large, for simplicity we assume that the galaxies are uniformly distributed in the field. We use the residual in the observed number counts of galaxies as a function of magnitude compared to their expected value to calculate the total flux of all undetected galaxies. Because we assume EBL galaxies are uniformly distributed across the field, we also assume the total EBL flux is uniformly distributed as well. As a result, we obtain an estimate for the EBL flux per pixel for our simulated data, $\mu_{\rm EBL}$= 0.898 ADU pixel$^{-1}$, by simply dividing the total EBL flux by the number of unmasked pixels in the image. This ‘pedestal’ level of flux is then subtracted from each unmasked image pixel to mitigate the effects of the EBL in background estimation. 
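The pedestal estimate itself reduces to a weighted sum over the residual number counts. The sketch below assumes the residual counts are available as a magnitude histogram and that a photometric zeropoint converting magnitudes to ADU is known; the zeropoint value is survey-specific and is not taken from the paper.

```python
import numpy as np

def ebl_pedestal(mag_bins, n_input, n_detected, zeropoint, n_unmasked_pixels):
    """Per-pixel EBL level from the residual (undetected) galaxy counts.

    mag_bins    : bin-centre magnitudes of the number-count histogram
    n_input     : expected counts per bin (input / true n(m))
    n_detected  : counts per bin actually recovered by the detector
    zeropoint   : magnitude zeropoint such that flux[ADU] = 10**(-0.4*(m - ZP));
                  its value is an assumption here
    Returns the mean undetected flux per unmasked pixel [ADU pixel^-1].
    """
    residual = np.clip(np.asarray(n_input) - np.asarray(n_detected), 0, None)
    flux_per_galaxy = 10.0 ** (-0.4 * (np.asarray(mag_bins) - zeropoint))
    total_undetected_flux = np.sum(residual * flux_per_galaxy)
    return total_undetected_flux / n_unmasked_pixels
```

With the binning and zeropoint of the random distribution simulation, this procedure yields the $\mu_{\rm EBL}$ = 0.898 ADU pixel$^{-1}$ quoted above.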
To examine whether the method is sensitive to the exact cutoff in the simulated faint galaxies, we test our method for EBL estimation on an additional image simulation where galaxies are generated up to $m_{\rm R}$ = 31 with a simple power-law of $d({\rm log}N)/dm_{\rm R} =$ 0.4. We find a background consistent with the added flux from the $29\,\leq\,m_{\rm R}\,\leq\,31$ galaxies: the EBL in this case is $\mu_{\rm EBL} = $ 1.670 ADU pixel$^{-1}$ while the median pixel value is $\mu_{\rm median} = $ 1.593 ADU pixel$^{-1}$. Thus, the method is not sensitive to the EBL faint end cutoff beyond 30 mag arcsec$^{-2}$. Statistical galaxy counts as a function of magnitude are now complete to $m_{\rm R} \simeq$ 30 [@Metcalfe:2001aa]. The slope of the galaxy count at $m_{\rm R} \simeq 29$ becomes so shallow that the EBL from the galaxies with $m_{\rm R} > 29$ decreases rapidly [@Tyson:1995aa]. Because of this, there is little difference in the EBL estimates even though we simulate galaxies following the real galaxy counts. It is therefore safe to simulate galaxies with $m_{\rm R} < 29$. Lastly, we measure local sky background estimates for each galaxy. We select an image cutout centered on the centroid of each target galaxy. The initial width and height of the images are 15 $R_{e} $, where $R_{e}$ is `FLUX_RADIUS` with `PHOT_FLUXFRAC` = 0.5 in our SExtractor catalog. If the number of unmasked pixels is less than 4000 or the width (or height) is less than 80 pixels, we iterate by increasing the width and height with an increment of 10 pixels. Once the number of unmasked pixels residing in the image cutout is greater than 4000, the mean of the unmasked pixels is calculated and used as an estimate of the local sky value for the center of the image cutout. ![image](fig4.eps){width="90.00000%"} In Figure \[skyestimation\] we compare the performance of background estimation by SExtractor and our method for a particular galaxy. It is known that SExtractor tends to overestimate sky background [@Haussler:2007aa], and the estimate becomes worse when the number of unidentified objects increases. Although the 3$\sigma$ clip by SExtractor described in Section \[SExSky\] removes excessively bright pixels from consideration when estimating the sky background, the remaining pixels are still contaminated by the flux from undetected galaxies and insufficiently masked galaxies. This can be seen directly in the top right panel in Figure \[skyestimation\], which shows the distribution of unmasked pixel values when using SExtractor’s detection masks. While the distribution is well described by a Gaussian, the mean value of the distribution is 1.23 ADU pixel$^{-1}$ above the true sky value. This bias results from the excess number of pixels in the bright wing of the distribution. In the bottom right panel of Figure \[skyestimation\], we show the distribution of unmasked pixel values after using our new masking scheme. The bias from extended galaxy profiles, which otherwise survives SExtractor’s masking procedure, is mitigated with our new masks, as the mean value is $\mu$ = 0.910 ADU pixel$^{-1}$ above the true sky value. However, undetected galaxies continue to pollute the pixels used for sky estimation with excess flux, even when new masks are used. To deal with flux contamination from undetected galaxies, we must also subtract the calculated EBL flux per pixel to obtain an accurate estimation of the sky background ($\mu$ = 0.01 ADU pixel$^{-1}$). 
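A compact version of this local estimator is sketched below, assuming a boolean object mask built as in the previous subsection and the EBL pedestal computed above; the edge handling and the exact stopping rule are simplified relative to the description in the text.

```python
import numpy as np

def local_sky(image, mask, ebl_per_pixel, xc, yc, re_pix,
              min_unmasked=4000, min_size=80, step=10):
    """Local sky at (xc, yc): mean of unmasked pixels in a cutout grown until it
    is at least `min_size` pixels wide and holds at least `min_unmasked` sky
    pixels, minus the EBL pedestal.  `mask` is True for (grown) object pixels.
    """
    ny, nx = image.shape
    size = int(round(15.0 * re_pix))          # initial width/height = 15 R_e
    while True:
        half = size // 2
        x0, x1 = max(0, int(xc) - half), min(nx, int(xc) + half)
        y0, y1 = max(0, int(yc) - half), min(ny, int(yc) + half)
        sky_pix = image[y0:y1, x0:x1][~mask[y0:y1, x0:x1]]
        if sky_pix.size >= min_unmasked and (x1 - x0) >= min_size:
            break
        if (x1 - x0) >= nx and (y1 - y0) >= ny:   # safety stop at the full frame
            break
        size += step                               # grow by 10-pixel increments
    return sky_pix.mean() - ebl_per_pixel
```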
The precision of the estimate can be increased by utilizing many such samples of sky over a much larger area and requiring smoothness. For this we assume the sky underlying all galaxies varies on scales much larger than galaxy scales.\ Global Background Model Coupled With New Local Sky Estimator {#stack+inchan} ------------------------------------------------------------ In this sub-section, we describe the procedure to create a global background model for entire simulated images. The background model is created by computing the average background value in semi-local uniformly spaced subsections of the CCD image, and subsequently fitting a smooth two-dimensional polynomial to the average background values. Once created, the global background model can be evaluated at any point in the image to produce a sky estimation [@Bosch:2018aa]. Note, this is in contrast to the estimation techniques detailed above. Aforementioned techniques are *local* estimators of the sky background: using pixels from the surrounding $\sim$ 1 arcminute diameter of a galaxy to construct an estimate of the sky brightness in the immediate vicinity of each galaxy. The details specific to creating the global background model (subdividing the CCD, computing the average background values, and polynomial fitting) are discussed below. We begin by sub-dividing the image into 2500 evenly spaced, equally sized image subsections, where each subsection is 160 $\times$ 160 pixels in size ($\sim$ 40 square arcseconds), and image subsections do not overlap. The centers of these subsections define a 50 $\times$ 50 point spatial grid. In each image subsection, an iterative 3$\sigma$ clip mean and variance are computed on all pixels which do not correspond to detected objects (i.e., the pixels not included in the corresponding SExtractor segmentation maps). The sigma-clipped mean of each image subsection is assigned to its corresponding grid point. The average position of the non-masked pixels in each image subsection is used to place the points for the spatial grid in their respective image subsections. Subsequently, a 6th-order two-dimensional Chebyshev polynomial is fit to the spatial grid. Chebyshev polynomials are more robust to over fitting than spline interpolation, as they are not strictly required to pass through the grid points obtained from the 3$ \sigma$ clip. Each grid point is inverse-variance-weighted in the fit, so image subsections where many pixels are masked have a reduced impact on the fidelity of the fit [@Bosch:2018aa]. The fitted polynomial model may be used as a background model which may be evaluated at any location in the image to predict the local sky value. ![image](fig2.eps){width="70.00000%"} Several factors must be considered when choosing the image subsection size, as this will determine spatial scales the background model is sensitive to. If the image subsections are too small, they will include too few surviving unmasked pixels from which to calculate a mean. Additionally, if the subsections are smaller than the spatial scales of galaxy profiles, extended features in galaxy profiles risk being subtracted out. However, all other variations in the sky must happen at spatial scales lower than that of the sky model if they are to be fitted and removed. 
Because we have restricted our focus to the general case of an extragalactic sky superposed on a slowly varying foreground sky level for this work, we have chosen image subsection sizes that yield a background model sensitive to spatial variations on the order of 40 square arcseconds. Investigations in varying the subsection size showed the model is insensitive to varying the subsection size between 25 square arcseconds and 1 square arcminute. Using subsections smaller than 25 square arcseconds created subsections which were completely masked and had no usable pixels from which to estimate statistics, and subsections larger than 1 square arcminute are sufficiently large to avoid over subtracting extended tails of the galaxies simulated here. A summary flow chart of our methods is shown in Figure \[Flowchart\]. The appropriate polynomial order for the background model must also be chosen with care, and will depend on a variety of factors. The background in data taken with real cameras on telescopes (in contrast to our idealized simulated data here) will contain contributions due to the optics, like scattered light and diffuse ghosts. Artifacts in detectors, such as tree rings, can give rise to variations across the CCD itself. The spatial scales over which these contributions occur will vary from instrument to instrument and telescope to telescope. This is also true for the astrophysical background itself, as the scales over which it varies can depend on the field of view. The model must vary on spatial frequencies at least as small as those discussed immediately above. If the model varies on scales finer than this, then the model will be susceptible to over-fitting noise, or fitting and subtracting extended features in galaxy profiles. As a result, the optimal polynomial order in our scheme will depend largely on the data being considered. Fitting a polynomial background is a somewhat heuristic procedure when the true underlying sky model is not explicitly known, as is the case for most real observational data. As a consequence, the order of the polynomial used in the background model can be somewhat ad hoc. In @Bosch:2018aa, the authors use the same polynomial background model discussed above, and find a 6th order polynomial is well suited for modeling the background on the 4K $\times$ 2K CCDs used on the Subaru Hyper Suprime-Cam over the appropriate scales. Because our simulated data is meant to emulate DLS data, we directly turn to DLS data to investigate the appropriate polynomial order. We find that while such a model can suffer over subtraction in the immediate vicinity of bright field stars, it otherwise leaves galaxy profiles intact. Our idealized simulations include only galaxies, freeing us from this potential issue. We do not include a spatially varying component of the sky background in our idealized simulation, but for consistency with the methods discussed above, we employ a 6th order polynomial fit in our simulations. ![Three polynomial background models evaluated at the centroids of detected galaxies: with aggressive masking and EBL correction (red), only aggressive masking with no EBL correction (green), and SExtractor detection map masking with no EBL correction (blue). We use a combination of image binning, statistical estimation of sky and its variance in each image bin, and polynomial fitting to predict sky background levels at the centers of detected galaxies. 
For each background model, we subdivide the random distribution simulation into 160 $\times$ 160 pixel image subsections, and use a statistical estimator on all unmasked pixels in each subsection. A spatial 2D polynomial model is then fit to the local mean backgrounds to create a global background model, which can be used to evaluate the background at the centroids of detected galaxies. 3$\sigma$ clip with SExtractor detection masks (blue) suffers overestimation from extended unmasked galaxy features, and unmasked faint galaxies. Using a simple mean and our new masks (green) improves the estimation, but still suffers from bias. Only after using both new, larger masks and the EBL correction (red) can accurate, precise sky estimations be made. []{data-label="improved_stack"}](fig5.eps){width="47.00000%"} In Figure \[improved\_stack\], we show the result of this background estimator when used on our DLS depth random distribution simulation. In the blue histogram, we show the distribution of background estimations at the centroids of detected galaxies. The mean of this distribution is 1.115 $\pm$ 0.019 ADU pixel$^{-1}$ above the true sky value. We argue this overestimation is due to flux in the extended, unmasked outskirts of galaxy profiles, and undetected galaxies. We attempt to remove this bias using techniques described in the previous subsection. The overestimation in sky background can be partially ameliorated by recreating the background polynomial model, where we use our new masks in lieu of the SExtractor masks. Additionally, a simple mean of the unmasked pixels is used to estimate the sky in each image sub-section, instead of the 3$\sigma$ clip. In the green curve in Figure \[improved\_stack\], we show the distribution of background estimations at the centroids of detected galaxies using this technique. The mean sky value, 0.892 $\pm$ 0.013 ADU pixel$^{-1}$, while an improvement to the 3$\sigma$ clip method, still suffers from an overestimation. As before, we can remove the persistent bias by subtracting the EBL flux in addition to dealing with previously unmasked extended galaxy profiles. In the red curve in Figure \[improved\_stack\], we subtract the EBL flux level, estimated from the residual between the true and measured galaxy number counts, from each unmasked pixel in the random simulation. We then repeat the previous procedure to create the green distribution. The resulting distribution has a mean of -0.006 $\pm$ 0.013 ADU pixel$^{-1}$. As discussed in Section \[newestimationtechnique\], the biases from extended galaxy profiles and undetected galaxies must be dealt with in order to create accurate estimations of the sky background. Note, in its implementation in Section \[newestimationtechnique\], our sky estimator initially takes an image cutout centered on a galaxy, and grows the image cutout by 10 pixel increments in width and height as needed to ensure at least 4000 unmasked pixels reside in the image cutout. For the global background models discussed in this sub-section, however, we do not grow the 160 $\times$ 160 pixel sub-image when determining the mean and variance pixel value. This is done to ensure that an equally spaced grid is used in the polynomial fit, and that the statistics computed for each sub-image are representative of the pixels in that sub-image alone. 
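The global model can be assembled with standard NumPy tools. The sketch below subdivides the frame into 160 $\times$ 160 pixel cells, computes an iteratively clipped mean and an inverse-variance weight per cell, and fits a 6th-order two-dimensional Chebyshev polynomial by weighted least squares. Details such as placing grid points at the mean position of the unmasked pixels in each cell are omitted, so this is an approximation of the procedure described above rather than the LSST Stack implementation.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def fit_global_background(image, mask, cell=160, deg=6, clip=3.0):
    """Inverse-variance-weighted 2-D Chebyshev fit to per-cell sky means.

    `mask` is True for object pixels.  Returns a callable sky model
    evaluate(x, y) in image pixel coordinates.
    """
    ny, nx = image.shape
    xs, ys, means, weights = [], [], [], []
    for j in range(0, ny, cell):
        for i in range(0, nx, cell):
            sky = image[j:j + cell, i:i + cell][~mask[j:j + cell, i:i + cell]]
            if sky.size == 0:
                continue                         # skip fully masked cells
            for _ in range(5):                   # iterative 3-sigma clip
                m, s = sky.mean(), sky.std()
                if s == 0.0:
                    break
                keep = np.abs(sky - m) < clip * s
                if keep.all():
                    break
                sky = sky[keep]
            if sky.size < 10:
                continue
            xs.append(i + cell / 2); ys.append(j + cell / 2)
            means.append(sky.mean())
            weights.append(sky.size / max(sky.var(), 1e-9))   # ~1/Var(mean)
    # scale coordinates to [-1, 1] for a well-conditioned Chebyshev basis
    tx = 2.0 * np.asarray(xs) / nx - 1.0
    ty = 2.0 * np.asarray(ys) / ny - 1.0
    V = C.chebvander2d(tx, ty, [deg, deg])
    w = np.sqrt(np.asarray(weights))
    coef, *_ = np.linalg.lstsq(V * w[:, None], np.asarray(means) * w, rcond=None)
    coef = coef.reshape(deg + 1, deg + 1)

    def evaluate(x, y):                          # sky model at image coords (x, y)
        return C.chebval2d(2.0 * np.asarray(x) / nx - 1.0,
                           2.0 * np.asarray(y) / ny - 1.0, coef)
    return evaluate
```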
By combining the new sky estimation technique advocated here with a global polynomial background model, we can benefit from “the best of both worlds.” The polynomial background model captures and smooths over spatial fluctuations on the $\sim$ 40 arcsecond scale. This ultimately yields a low variance in the distribution of predicted background values at the centers of galaxies. The combined model also benefits from the accuracy of our new sky estimation technique by accounting for flux contributions from undetected sources, and by excluding flux from tails of galaxy profiles from detected sources by growing our detection masks. ![image](fig6.eps){width="70.00000%"} Comparing different sky estimators ---------------------------------- We compare sky background estimates for detected galaxy images in our DLS-depth simulated images. In the uniform distribution simulation, all galaxies are detected and cataloged; consequently, we use our estimators on all galaxies in this simulation. In the random distribution simulation, however, we only consider galaxies which SExtractor detected and for which GALFIT is able to converge on a sky estimation. The numbers of galaxies that meet the criteria above are 10,000 and 98,149 for the uniform and random simulations, respectively. In Figure \[sky:set\], we compare sky estimations by SExtractor, GALFIT, Gaussian fit + new masks (with and without EBL correction), and polynomial background model with EBL correction and new masks. Note that the true sky background is 3240 ADU pixel$^{-1}$ and this pedestal has been subtracted in the uniform and random distribution simulations, although the shot noise from this sky level is included. The ensemble of SExtractor local background estimations has a mean of 0.567 $\pm$ 0.310 ADU pixel$^{-1}$ and 1.767 $\pm$ 0.198 ADU pixel$^{-1}$ for the uniform and random distribution simulations, respectively. As we discussed in Section \[GALSIM\], each image cutout in the uniform distribution simulation is completely isolated from the flux of neighboring galaxies, allowing us to test background estimation independent of blending and crowding effects. Even so, SExtractor overestimates the sky background for simulated galaxies in the uniform distribution. This is due to the flux residing in the extended galaxy profiles SExtractor fails to mask. If the measurement is done in a crowded region with a number of undetected galaxies (the random distribution simulation), the overestimation is compounded by the flux of these undetected galaxies. GALFIT background estimates are better than SExtractor background estimates; GALFIT estimates average local backgrounds of 0.142 $\pm$ 0.331 ADU pixel$^{-1}$ and 0.990 $\pm$ 0.393 ADU pixel$^{-1}$ for the uniform and random distribution simulations, respectively. The distributions of GALFIT background estimates have noticeable extended tails, and comparatively larger scatters than other methods. This is likely due to GALFIT’s sensitivity to noise. Nevertheless, the peak values in the pixel histograms as shown in Figure \[sky:set\] are close to the true value in the uniform distribution simulation and the EBL in the random distribution simulation. Estimation of sky background in our hybrid method has the highest precision of all in terms of mean and standard deviation of histogram, and is immune to the environments we have tested. 
Before using the polynomial fit, average local backgrounds are 0.057 $\pm$ 0.152 ADU pixel$^{-1}$ and -0.001 $\pm$ 0.199 ADU pixel$^{-1}$ for the uniform and random distribution simulations, respectively. As we discussed in Section \[newestimationtechnique\], we can reduce the noise in sky estimation by applying the polynomial model to the sky estimates obtained from fitting a Gaussian profile to the pixel distribution in each image cutout. The resulting sky backgrounds (after EBL correction for the random distribution simulation) are 0.071 $\pm$ 0.014 ADU pixel$^{-1}$ and -0.006 $\pm$ 0.012 ADU pixel$^{-1}$ for the uniform and random distribution simulations, respectively. By doing so, we reduce the uncertainty of sky estimation by a factor of 10 or more; most sky estimates lie within $\pm$ 0.0004% (0.015 ADU pixel$^{-1}$). Below we investigate the precision of recovery of surface brightness profiles of simulated galaxies. Sé[r]{}sic Index Recovery with Various Sky Estimators {#sersicest} ===================================================== In this section we focus on the effects of sky estimation accuracy on the apparent galaxy morphology. Since Edwin Hubble’s first study of galaxy classification using their appearance [@Hubble:1926aa], connections between galaxy morphology, shape, and color have provided insight into galaxy formation and evolution. Further methods for galaxy classification have been developed using the one-dimensional radially averaged profile of galaxy surface brightness [@de-Vaucouleurs:1948aa; @Sersic:1963aa]. Among several functional forms for the profiles, the Sérsic profile is one of the most popular. The Sérsic profile is a fitting function that describes the surface brightness profile (the intensity of light as a function of distance from the center) given by: $$\Sigma(R) = \Sigma_e \exp\bigg[-b_n\bigg\{ \bigg( \frac{R} {R_e}\bigg)^{1/n} - 1 \bigg\}\bigg]$$ where $\Sigma_e$ is the effective intensity (the surface brightness at the effective radius), $R_e$ is the effective radius which encloses half of the total light (half-light radius, hereafter), $b_n$ is the concentration which is defined so that half of the total light is inside the half-light radius for a given Sérsic index [@Graham:2005aa], and $n$ is the Sérsic index which describes the shape of profile and is correlated with galaxy surface brightness morphology. As an illustrative example of the impact different background estimation techniques can have on source measurement, we measure the surface brightness profiles of model galaxies after subtracting the sky background using the three techniques discussed in Section \[skyestimators\]: SExtractor, GALFIT, and our new method. We measure the Sérsic index in all simulations using GALFIT. As galaxy profile fits are sensitive to flux from nearby objects that are blended with the galaxy of interest, we restrict this study to the uniform distribution simulations where blending is not an issue to isolate the sky subtraction effect from the blending effect. We do so for both DLS and LSST DDF depths. To no surprise, we find that the shape of the faint outer surface brightness tail is affected by sky level mis-estimation. The GALFIT output parameters are very sensitive to the parameters in the GALFIT start file. For this test, we use parameters derived from SExtractor. The initial parameters are determined as follows: the size of image cutout is the same as used for sky estimation (see Section \[skyestimators\]). 
The total magnitude is given by `MAG_AUTO`; the half-light radius is given using `FLUX_RADIUS` with `PHOT_FLUXFRAC` = 0.5. The axis ratio $b/a$ and the position angle are derived by taking 1 - `ELLIPTICITY` and `THETA_IMAGE`, respectively. A Sérsic index of $n$ = 2.5 is assumed as an initial guess in the fitting. For the PSF convolution, we generate the PSF image using GALSIM with the true parameters after matching the SExtractor’s position and true position. We investigate the effect of sky background on determining galaxy surface brightness profile types. In Figure \[sersic:mag\_n\] we compare the measured Sérsic index versus the true value of Sérsic index using different sky estimators for both the DLS-depth and DDF-depth uniform simulations. We compare all galaxies with which GALFIT fits do not fail nor produce problematic parameters. Galaxy surface brightness fits do not converge for fainter galaxies because their sizes become small, and the image cutout becomes noise-dominated. The numbers of galaxies with good fits are 8,428 and 9,784 for DLS and LSST DDF depths, respectively. Overall, our estimation of sky background performs more robustly than the other methods. We note that the scatter of our technique increases for large Sérsic index. Noise fluctuation in the faint, extended tails of large-Sérsic-index galaxies negatively impacts the fidelity of profile fitting. By contrast, the surface brightness profile of low-Sérsic-index galaxies drops rapidly, so galaxy surface brightness profile estimations are less affected by noise in these cases. ![image](fig7.eps){width="70.00000%"} We also investigate the dependence of the Sérsic-index estimation on the Sérsic-index and magnitude of galaxies. For a given sky estimate, sky background and Sérsic index are anti-correlated: for sky estimators that tend to overestimate the background, there is a corresponding underestimate of Sérsic-index. This is more clearly seen as the brightness decreases or Sérsic-index increases. This trend is also found in previous survey data like the SDSS where sky background is overestimated [@Blanton:2005aa; @Blanton:2011aa]. There is an obvious explanation as to why there is an overestimate of Sérsic-index for high-Sérsic-index galaxies: in these cases, the true galaxy profile tends to be truncated at large radii due to the over-subtraction of sky background. In our DLS-depth uniform simulations, our new sky background estimator results in an unbiased and less-scattered estimation of Sérsic-index. For bright galaxies (i.e., $m_{\rm R}\le$ 24 for our study), there is no discernible trend in over- or under-estimation of galaxy surface brightness morphology, i.e. Sérsic-index. For fainter galaxies, however, uncertainties in the fits increase, and a strong dependence on galaxy surface brightness morphology is seen. Nonetheless, our new estimator still outperforms the other sky estimators in this regime. The performance of GALFIT runs using different sky estimators are less distinguishable at LSST DDF depth. Unlike DLS depth, the mean values of estimated Sérsic indices match the true values for all GALFIT runs. No bias is found as the magnitude becomes fainter or as the true Sérsic index increases. This is because the sky noise in LSST DDF depth is much smaller than DLS depth. Also, there is no discernible difference between our work (Gaussian fit and Gaussian fit $\rightarrow$ Poly. fit) and SExtractor. However, the GALFIT runs are still noisier than the other methods. 
This is due to the fact that the sky value estimated by GALFIT ($\mu_{\rm sky}=-0.049\pm0.225$) has a larger error than the other methods ($\mu_{\rm sky}=0.031\pm0.022$, 0.011$\pm$0.011, and 0.014$\pm$0.002 for SExtractor, Gaussian fit, and Gaussian fit $\rightarrow$ Poly. fit, respectively). We conclude that for the isolated-galaxy case the signal-to-noise ratio is a major factor in the surface brightness profile estimation. For example, see @Taghizadeh-Popp:2015aa, who show an underestimate of galaxy size near the detection limit at multiple depths. This truncation in size is closely related to the bias in the profile estimate. Good signal-to-noise is a necessary but not sufficient condition for accurate profile fitting when faint undetected galaxies are included. As we found before in the DLS depth case, accounting for the undetected galaxies corrects for this sky level bias, leading to more accurate outer profile fits. However, the presence of unresolved background galaxies disturbs the Sérsic-index fit for individual galaxies even at fairly high signal-to-noise, leading to increased bias and scatter for all estimators tested compared to the uniform simulation case, where no confounding galaxies are present nearby. We will explore these effects in more detail in a future paper. Discussion and Future Work {#discussion} ========================== In this paper we present sky background estimation using various publicly available packages, and compare with results using our new method that grows the spatial masks around detected objects, and statistically accounts for flux from undetected faint galaxies. Our algorithm is able to recover the sky level to 4 ppm in our simulated data, an improvement over existing sky background estimation techniques. Our analysis is confined to simulations of the extragalactic sky components with added spatially uniform sky foreground; optics ghosts and scattered light around bright stars are beyond the scope of this paper. We demonstrate that insufficiently masking the extended features of galaxies can bias sky estimation high. This occurs because flux from these galaxy regions contaminates the pixels used to estimate the sky background. While widely used estimators suffer from this bias, our technique is able to overcome it; by conservatively masking galaxies, our estimator considers pixels which are more representative of the true sky background. We also show that successful estimation of the underlying sky background must consider flux contributions from undetected faint background galaxies. To correct this bias, we use knowledge of galaxy counts as a function of magnitude to accurately estimate the flux contribution from these galaxies. To demonstrate the power of our new technique, we obtain galaxy surface brightness profile fits, via the Sérsic index, using different sky estimators. Previous methods overestimate sky background, resulting in incorrect Sérsic estimates, and all show large scatter. In contrast, our two-filter estimator has the highest precision and is least affected by simulation details such as the brightness, the surface brightness profile, and the galaxy number density. While it is beyond the scope of this paper, there are additional steps that may be taken to improve our technique, so that real imaging data may fully benefit from it. In our demonstration of this hybrid sky estimation algorithm we have used a simplified galaxy number count distribution, $n(m)$.
For a more realistic approach in applying this sky estimator to real data one should use a $n(m)$ slope that becomes shallower beyond $m_{\rm R} \simeq$ 26 to more realistically represent the observed faint end of magnitude distribution. Additionally, we do not simulate internal reflections and scattered light in the camera, or other sources of sky variation, all of which will have to be adequately modeled for each exposure in real data. Ultimately, the level of accuracy offered by our technique is a necessary but not sufficient condition for exploration of ultra-low surface brightness. Our simulation placed galaxies randomly in the image plane; however, real galaxies are embedded in large scale structures, and cluster with each other. This may lead to a slight overdensity of undetected EBL galaxies in the vicinity of detected objects that is not reflected in our random simulation. Such an excess could still pollute the outer isophotes of the galaxy profile. The large number of undetected galaxies over a wide range of redshifts in projection strongly mitigates this effect. We emphasize that developing detection algorithms operating at ultra low surface brightness which take full advantage of such high precision sky estimates is beyond the scope of this paper, though such sky precision would be a prerequisite. An added challenge is fitting sky across CCDs in a mosaic. For this, we note that the LSST Project has recently implemented a superior sky estimator which fits the background over an entire exposure.[^1] This allows using a larger scale for the background model, including the removal of static structures (such as the average response of the camera to the sky) in the background. The accuracy of fitting galaxy surface brightness profiles will be improved with multi-wavelength photometry since structural parameters of galaxies vary in different photometric bands, though they are correlated [@Hausler:2013aa]. In such a joint fit our sky estimator can contribute to enhanced profile accuracy, particularly in low signal-to-noise bands. Optimal detection depends on the angular scale of the object: therefore the underlying sky background must be defined on that scale and larger scales. Thus, scale dependent sky models must be developed which include all components of the apparent sky on relevant scales, from telescope optics ghosts to large scale dust. There are many science drivers which rely on detection of low surface brightness features. Increased precision in the measurement of sky background can be applied to a better understanding of the evolution of galaxy surface brightness morphology [@Conselice:2003aa; @Conselice:2014aa]. One can also probe the evolution of mass structure, surface brightness profile, and star formation of galaxies as a function of redshift with less bias at low surface brightness. Studies of the dark halo stellar halo connection would be less biased: there is a correlation between dark matter structure and light distribution of galaxies at late cosmic time [@Wetzel:2015aa; @Huang:2017aa; @Somerville:2017aa] because galaxies have different star formation histories depending on stellar mass [@Qu:2017aa] and galaxy surface brightness morphology [@Wuyts:2011aa]. Finally, it is possible, and even likely, that unexpected discoveries lie at low surface brightness levels. Depending on the angular scale of the object being studied, such applications of sky estimation will require a full multi-component model of the apparent sky. 
The most demanding application is the unbiased detection and photometry of ultra LSB galaxies of large half-light radius. Using a noise-based non-parametric technique may be a better approach to detect such ultra faint sources. The faint-source detection capability resulting from this method has shown improved results for faint sources relative to the signal-based source detection algorithm employed by SExtractor [@Akhlaghi:2015aa]. For most of these cases, a robust sky estimation with accuracy of a few parts in $10^5$ or better, and at high precision is required. Acknowledgements {#acknowledgements .unnumbered} ================ We thank Perry Gee, Lee Kelvin, Robert Lupton, Chien Peng, Brant Robertson, and Paul Price for helpful discussions. The sky sample fitting algorithm which we use is part of the sky estimator in the 2017 LSST Stack, for which we acknowledge the efforts of Steve Bickerton and Russell Owen. We thank Andrew Bradshaw and Craig Lage for comments on the manuscript. We thank the referee for suggestions that improved the manuscript. Support from DOE grant DE-SC0009999 and NSF/AURA grant N56981C is gratefully acknowledged. , H., [Arimoto]{}, N., [Armstrong]{}, R., [et al.]{} 2018, , 70, S4 , M., & [Ichikawa]{}, T. 2015, , 220, 1 , M., [H[ä]{}u[ß]{}ler]{}, B., [Peng]{}, C. Y., [McIntosh]{}, D. H., & [Guo]{}, Y. 2012, , 422, 449 , M., [Fischer]{}, J.-L., [Sheth]{}, R. K., [et al.]{} 2017, , 468, 2569 , E., & [Arnouts]{}, S. 1996, , 117, 393 , M. R., [Eisenstein]{}, D., [Hogg]{}, D. W., [Schlegel]{}, D. J., & [Brinkmann]{}, J. 2005, , 629, 143 , M. R., [Kazin]{}, E., [Muna]{}, D., [Weaver]{}, B. A., & [Price-Whelan]{}, A. 2011, , 142, 31 , J., [Armstrong]{}, R., [Bickerton]{}, S., [et al.]{} 2018, , 70, S5 , C. J. 2003, , 147, 1 , C. J. 2014, , 52, 291 , G. 1948, AnAp, 11, 247 , J.-L., [Bernardi]{}, M., & [Meert]{}, A. 2017, , 467, 490 , B. 2005, IJMPA, 20, 3121 , A. W., & [Driver]{}, S. P. 2005, , 22, 118 , B., [McIntosh]{}, D. H., [Barden]{}, M., [et al.]{} 2007, , 172, 615 , B., [Bamford]{}, S. P., [Vika]{}, M., [et al.]{} 2013, , 430, 330 , D., [Chiba]{}, M., [Okamoto]{}, S., [et al.]{} 2016, , 832, 21 , K.-H., [Fall]{}, S. M., [Ferguson]{}, H. C., [et al.]{} 2017, , 838, 6 , S., [Leauthaud]{}, A., [Greene]{}, J. E., [et al.]{} 2018, , 475, 3348 , E. P. 1926, , 64, 321 , [Ž]{}., [Kahn]{}, S. M., [Tyson]{}, J. A., [et al.]{} 2008, ArXiv e-prints, arXiv:0805.2366 , T. 2015, ArXiv e-prints, arXiv:1511.06790 , [Abell]{}, P. A., [Allison]{}, J., [et al.]{} 2009, ArXiv e-prints, arXiv:0912.0201 , R. H., [Ivezic]{}, Z., [Gunn]{}, J. E., [et al.]{} 2002, , 4836, 350 , D., [Gabany]{}, R. J., [Crawford]{}, K., [et al.]{} 2010, , 140, 962 , A. W., [Irwin]{}, M. J., [Ibata]{}, R. A., [et al.]{} 2009, , 461, 66 , N., [Shanks]{}, T., [Campos]{}, A., [McCracken]{}, H. J., & [Fong]{}, R. 2001, , 323, 795 , A. F. J. 1969, , 3, 455 , C. Y., [Ho]{}, L. C., [Impey]{}, C. D., & [Rix]{}, H.-W. 2010, , 139, 2097 , Y., [Helly]{}, J. C., [Bower]{}, R. G., [et al.]{} 2017, , 464, 1659 , F. E., & [Gordon]{}, J. L. 1973, Geophys. Astrophys. Monogr., 4. 23 , B. E., [Banerji]{}, M., [Cooper]{}, M. C., [et al.]{} 2017, ArXiv e-prints, arXiv:1708.01617 , B. T. P., [Jarvis]{}, M., [Mandelbaum]{}, R., [et al.]{} 2015, A&C, 10, 121 , J. L. 1963, BAA, 6, 41 , R. S., [Behroozi]{}, P., [Pandya]{}, V., [et al.]{} 2017, ArXiv e-prints, arXiv:1701.03526 , P. B. 1987, , 99, 191 , M., [Fall]{}, S. M., [White]{}, R. L., & [Szalay]{}, A. S. 2015, , 801, 14 , I., & [Fliri]{}, J. 2016, , 823, 123 , J. A. 
1995, in Extragalactic Background Radiation Meeting, ed. D. [Calzetti]{}, M. [Livio]{}, & P. [Madau]{}, 103 , J. A., & [Jarvis]{}, J. F. 1979, , 172, 422 , P. G., [Abraham]{}, R., & [Merritt]{}, A. 2014, , 782, L24 , V., [Wadadekar]{}, Y., [Kembhavi]{}, A. K., & [Vijayagovindan]{}, G. V. 2010, , 409, 1379 , A. R., & [Nagai]{}, D. 2015, , 808, 40 , D. M., [Tyson]{}, J. A., [Dell’Antonio]{}, I. P., [et al.]{} 2002, , 4836, 73 , S., [F[ö]{}rster Schreiber]{}, N. M., [van der Wel]{}, A., [et al.]{} 2011, , 742, 96 [^1]: https://community.lsst.org/t/sky-subtraction/2415
--- abstract: 'It has been shown by @ShchekinovVasiliev2006 (SV06) that HD molecules can be an important cooling agent in high redshift $z\ge10$ haloes if they undergo mergers under specific conditions so suitable shocks are created. Here we build upon @jpp3 who studied in detail the merger-generated shocks, and show that the conditions for HD cooling can be studied by combining these results with a suite of dark-matter only simulations. We have performed a number of dark matter only simulations from cosmological initial conditions inside boxes with sizes from $1$ to $4$ Mpc. We look for haloes with at least two progenitors of which at least one has mass $M\ge M_{cr}(z)$, where $M_{cr}(z)$ is the SV06 critical mass for HD over-cooling. We find that the fraction of over-cooled haloes with mass between $M_{cr}(z)$ and $10^{0.2}M_{cr}(z)$, roughly below the atomic cooling limit, can be as high as $\sim0.6$ at $z\approx10$ depending on the merger mass ratio. This fraction decreases at higher redshift reaching a value $\sim0.2$ at $z\approx15$. For higher masses, i.e. above $10^{0.2}M_{cr}(z)$ up to $10^{0.6}M_{cr}(z)$, above the atomic cooling limit, this fraction rises to values $\ga0.8$ until $z\approx12.5$. As a consequence, a non negligible fraction of high redshift $z\ga10$ mini-haloes can drop their gas temperature to the Cosmic Microwave Background temperature limit allowing the formation of low mass stars in primordial environments.' author: - | Joaquin Prieto$^1$[^1], Raul Jimenez$^{2,1,3}$, Licia Verde$^{2,1,3}$\ $^{1}$ICC, University of Barcelona (IEEC-UB), Marti i Franques 1, E08028, Barcelona, Spain\ $^2$ ICREA\ $^3$ Theory Group, Physics Department, CERN, CH-1211, Geneva 23, Switzerland title: 'Over-cooled haloes at $ z \ge 10$: a route to form low-mass first stars' --- galaxies: formation — large-scale structure of the universe — stars: formation — turbulence. Introduction ============ ----------- ----------------- ------------------- -------------- ------------------ -------------------- ------------------------ Sim. Name Number of sims. Box size Part. number Part. mass $M_{cr}(z=10)/m_p$ $M_{cr,1}(z=17.5)/m_p$ $N$ $L_{\rm box}$/Mpc $N_p$ $m_p/M_\odot$ $N_{p,h}$ $N_{p,h}$ S1Mpc256 20 1 256$^3$ 5.88$\times10^3$ 3.72$\times10^3$ 1.31$\times10^3$ S2Mpc256 20 2 256$^3$ 4.70$\times10^4$ 4.66$\times10^2$ 1.64$\times10^2$ S4Mpc256 20 4 256$^3$ 3.77$\times10^5$ 5.80$\times10^1$ 2.10$\times10^1$ S1Mpc512 5 1 512$^3$ 7.35$\times10^2$ 2.98$\times10^4$ 1.05$\times10^4$ S2Mpc512 5 2 512$^3$ 5.88$\times10^3$ 3.72$\times10^3$ 1.31$\times10^3$ S4Mpc512 5 4 512$^3$ 4.70$\times10^4$ 4.66$\times10^2$ 1.64$\times10^2$ ----------- ----------------- ------------------- -------------- ------------------ -------------------- ------------------------ \[table1\] In the current $\Lambda$ Cold Dark Matter ($\Lambda$CDM) cosmological paradigm, dark matter (DM) over-densities are the building blocks of cosmic structures. These DM over-densities grow due to gravity forming DM haloes in a hierarchical way, i.e. from the smaller to the bigger ones, and mergers play an important role in this process. For the formation of the first luminous objects to become possible, the baryonic content of the haloes must be able to cool. Cooling of primordial gas is driven by molecular Hydrogen (${\rm H}_2$) which can form inside DM mini-haloes of mass $\gtrsim10^6$M$_\odot$. 
Once ${\rm H}_2$ formation is triggered, rovibrational transitions of the ${\rm H}_2$ molecule are able to cool the primordial gas down to temperatures of $\sim$ 200 K [@Haiman1996; @Tegmarketal1997; @Abeletal2002], see also the @BarkanaLoeb2001 review. At lower temperatures, the ${\rm H}_2$ lines become insufficient to cool the gas further. The ${\rm H}_2$ cooling temperature floor (T$\approx 200$K) and its saturation number density ($n\approx 10^4$cm$^{-3}$ i.e. the density for Local Thermal Equilibrium at which ${\rm H}_2$ cooling is inefficient) yield a Jeans mass: $$M_J\approx 500 {\rm M}_\odot \left(\frac{T}{200 {\rm K}}\right)^{3/2} \left(\frac{10^4 {\rm cm}^{-3}}{n}\right)^{1/2}.$$ This sets a mass scale for gravitationally bounded objects in the primordial gas, suggesting that the first stars were massive [^2]. However, if the HD molecule is formed in a significant amount in primordial environments, although it has no relevant role in the first stage of ${\rm H}_2$ driven cooling it could eventually ($T\lesssim 150$K) become important [@BougleouxGalli1997; @Machidaetal2005], allowing the gas to reach temperatures as low as the Cosmic Microwave Background (CMB) temperature limit at the corresponding redshift. If HD cooling could be triggered, a lower temperature floor for the gas would decrease the Jeans mass and thus this process could favor the formation of low mass stars at high redshift. Because the HD number density depends on the H$_2$ abundance through H$_2$ + D$^+\rightarrow$ HD + H$^+$ [@Pallaetal1995; @GalliPalla2002] and the H$_2$ abundance depends on the free electron number density through e$^-$ + H $\rightarrow$ H$^-$ + $\gamma$ followed by H$^-$ + H $\rightarrow$ H$_2$ + e$^-$ [@Peebles1968], if the gas presents a high ionization fraction it is possible to increase the HD abundance. It has been shown that such high ionization fraction conditions are common in post-shocked gas inside DM haloes [@Greifetal2008; @jpp3]. In fact, the DM halo growing process involves violent merger events. These mergers are able to produce strong shock waves which both compress the halo baryonic content and increase the ionization fraction. This drives an enhancement in the formation rate of HD molecules with the consequent over-cooling of the primordial gas, as shown in @Greifetal2008 and @jpp3. @ShchekinovVasiliev2006 (hereafter SV06) studied the necessary (thermo-chemical) conditions for HD cooling to switch on. They argue that such conditions are fulfilled in merging DM haloes with a total system mass above a critical value, so suitable shocks form. The post-shocked gas with an enhanced HD molecular fraction is able to drop its temperature to the CMB floor of $T_{\rm CMB}\approx 2.73(1+z)$. SV06 however only considered a straw-man head-on collision of two primordial clouds of equal mass, but, clearly the physical state of the post-shock gas depends on many factors that are not captured by this simplified scenario. @jpp3 produced numerical DM + baryons simulations that capture enough physics to study the physical state of the post-shock gas. They find that as a result of the hierarchical merging process, turbulence is generated and the production of coolants is enhanced, so much that even the HD molecule becomes an important coolant in some regions. Yet, their simulations are not sufficient to assess how generic this is, that is, how these regions are associated to the distribution of minihaloes. This is what we set up to do here. 
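To give a rough sense of the scales at stake (a purely illustrative estimate, keeping the saturation density fixed at $n\approx 10^4\,{\rm cm^{-3}}$ for definiteness): if HD cooling brings the gas down to the CMB temperature at $z=10$, $T_{\rm CMB}\approx 2.73\times 11\approx 30$ K, the Jeans mass quoted above drops to
$$M_J\approx 500\,{\rm M}_\odot \left(\frac{30\,{\rm K}}{200\,{\rm K}}\right)^{3/2} \approx 30\,{\rm M}_\odot,$$
more than an order of magnitude below the $\sim 500\,{\rm M}_\odot$ scale set by the ${\rm H}_2$ temperature floor.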
In this paper, we use a set of DM cosmological simulations to compute the fraction of haloes able to produce over-cooling of the primordial gas due to mergers at high redshift as predicted by SV06 using the recipe developed in @jpp3. The paper is organized as follows. In §2 we describe our methodology. In §3 we show our numerical results and discuss about them. In §4 we present our summary and conclusions. Methodology {#Methodology} =========== In principle, to compute the fraction of haloes able to over-cool their baryonic content due to mergers at high redshifts, we would want to have multiple hydrodynamic simulations, which model both dark matter and baryonic physics and chemo-thermal evolution of primordial gas, for cosmological initial conditions, reaching a resolution of $\sim 1$pc at $z=10$. This ambitious goal was achieved in @jpp3 but only for a single 1 Mpc size box and in there the formation and baryonic matter accretion process of a single halo was simulated at full resolution: a region of 2 kpc (at $z=10$) with $\sim 2$ pc resolution (at $z=10$). The average CPU-time for one of such systems is $\sim 180000$ CPU-hrs. This makes it computationally very expensive to replicate the @jpp3 runs for multiple haloes in a cosmological context. However this complex problem can be broken in three ingredients which can be studied independently. The first ingredient is the thermo-chemical conditions for the HD cooling to switch on which were studied in SV06. The second ingredient is the physical conditions of the primordial gas (turbulence and shocks) which was studied in @jpp3. They find that post shock regions are able to produce both ${\rm H}_2$ and HD molecules very efficiently even in small mini-haloes ($M\sim 10^6 M_{\odot}$) if they accrete on, or merge with, a more massive but still relatively low mass halo ($M\sim 10^7 M_{\odot}\simeq M_{cr}$). The remaining ingredient is how frequently this happens in a cosmological context. This last step, however, can be addressed with DM-only simulations under minimal assumptions, and this is what we set up to do here. ------- ------- ------------------- ------------------- ------------------- $z_1$ $z_2$ Mass bin 1 Mass bin 2 Mass bin 3 in $10^7 M_\odot$ in $10^7 M_\odot$ in $10^7 M_\odot$ 10.0 10.2 2.10 - 3.33 3.33 - 5.28 5.28 - 8.36 11.0 11.4 1.73 - 2.74 2.74 - 4.34 4.34 - 6.88 12.3 12.7 1.41 - 1.81 1.81 - 2.87 2.87 - 4.55 14.9 15.4 0.99 - 1.57 1.57 - 2.49 2.49 - 3.94 17.2 17.9 0.74 - 1.17 1.17 - 1.85 1.85 - 2.94 ------- ------- ------------------- ------------------- ------------------- : Relevant redshift and mass bins. The mass bins labeled by $i=1,2,3$ have been chosen so that bin mass lower and upper boundaries are $10^{0.2(i-1)}M_{cr}(z_2)$ and $10^{0.2(i)}M_{cr}(z_2)$ respectively. The mass range spanned by the three bins covers the transition from H$_2$ cooling haloes to atomic cooling ones. \[table2\] ----------- --------------------- --------------------- ---------------------- --------- Mass Mass Mass Total Sim. Name bin 1 bin 2 bin 3 Volume $N_{OC}(N_{\rm h})$ $N_{OC}(N_{\rm h})$ $N_{OC}(N_{\rm h}) $ Mpc$^3$ S1Mpc512 5 (7) 4 (4) 3 (3) 5 S1Mpc256 25 (46) 14 (15) 11 (12) 20 S2Mpc512 88 (132) 56 (57) 37 (38) 40 ----------- --------------------- --------------------- ---------------------- --------- : Total number of OC haloes (i.e. the sum over the $N$ realizations) and total number of haloes ($N_h$) in a given mass bin with $f_{mg}=0.6$ and $m_p\le5.88\times10^3M_{\odot}$ at $z_1=10$. 
\[table3\] The critical mass to trigger HD molecular over-cooling at redshift $z$ is defined by SV06 as: $$M_{cr}^{SV06}(z)=8\times10^6\left(\frac{20}{1+z}\right)^2 M_{\odot}.$$ Following SV06 this is the total mass of the system, i.e. DM plus baryonic mass. Because we have to work with DM only simulations, we have to make an assumption regarding the baryonic matter. To take into account the gas inside the DM haloes we assume that these primordial haloes host the universal baryon fraction $$\frac{M_b}{M_{DM}+M_{b}}=\frac{\Omega_b}{\Omega_m}\equiv f_b,$$ where $M_b$, $M_{DM}$, $\Omega_b$, $\Omega_m$ and $f_{b}$ are the baryonic mass content of the halo, the dark mass content of the halo, the current average baryonic matter density in the Universe in units of the critical density, the current average DM density in the Universe in units of the critical density and the universal baryonic mass fraction of the Universe, respectively. Using this approximation, the necessary (but not yet sufficient) condition for a DM halo at redshift $z$ to become an over-cooled halo is that it must have a DM mass above a critical mass $$M_{cr}^{DM}(z)=(1-f_B)\times M_{cr}^{SV06}(z),$$ hereafter we will refer to $M_{cr}^{DM}$ as $M_{cr}$. We use the cosmological hydrodynamical code `RAMSES` [@Teyssier2002] to perform $75$ DM-only simulations. The cosmological initial conditions are produced with the `mpgrafic` code [@Prunetetal2008] and the initial redshift for each run is set to $z_i\approx65$. The cosmological parameters are those of the concordance $\Lambda$CDM model from @Komatsu2009 [@Komatsu2010]: $\Omega_m=0.258$, $\Omega_\Lambda=0.742$ $h=0.719$, $\sigma_8=0.796$, $n_s=0.963$ and the transfer function of @EisensteinHu1998 with $\Omega_b=0.0441$. Using the `AHF` halo finder [@AHF2009], we identified DM haloes (i.e., objects with a density contrast $\delta\ge 200$) with mass above or equal to the critical mass to enhance the HD molecular cooling $M_{cr}$, at several redshifts $z\ge10$. For reference, in the cosmology adopted here, $f_b=0.1709$, and thus $$M_{cr}(z)=6.63\times10^6\left(\frac{20}{1+z}\right)^2 M_{\odot}\,.$$ In the set-up chosen for the `AHF` halo finder the minimum particle number per halo was set to $N_{min}=20$. We adopted this low number because we are not interested in characterizing the haloes based on their internal-radial features. This corresponds closely to the minimum number of particles per critical mass halo $N_{p,h}$ at the highest redshift of interest in the lower resolution run. Table \[table1\] shows the details of each simulation. From the first column to the last one: the simulation name, referring to both the box size and the particle number, the number of simulations $N$, the box size[^3] $L_{\rm box}$ in Mpc , the number of particles per simulation $N_p$, the particle mass $m_p$ and the number of particle per critical mass halo $N_{p,h}$ at two reference redshift $z=10$ and $z=17.5$. ![The mass scales involved. The lower solid line shows the critical mass for HD cooling $M_{cr}(z)$, the upper solid line corresponds to $T_{\rm vir}=10^4$K necessary for atomic lines cooling. 
The vertical bars (dotted, dashed and dot-dashed) correspond to the three mass bins considered.[]{data-label="fig:mass"}](mass.pdf){height="6cm"}

![image](fractionL_bin1.pdf){height="12cm" width="16cm"}

![image](fractionL_bin2.pdf){height="12cm" width="16cm"}

![image](fractionL_bin3.pdf){height="12cm" width="16cm"}

![image](convergenceL.pdf){height="8cm" width="12cm"}

![image](map_DM.png){width="2\columnwidth" height="16cm"}

As we will show in the next section, the most reliable results come from runs where $M_{cr}$ is defined by $N_{p,h}\ge1310$ particles at $z\le17.5$, i.e. runs with a particle mass $m_p\le5.88\times10^3M_{\odot}$. In these runs the DM haloes are found consistently in successive snapshots. Furthermore, it is worthwhile to note that in these reliable runs the primordial perturbation distance scale $\lambda_{M_{cr}}$ associated with the critical mass $M_{cr}$ is well defined by a number of particles ($>10$) when the simulation starts.

The @jpp3 findings indicate that a necessary and sufficient condition for triggering HD cooling is that a halo with mass greater than $M_{cr}$ (recall that at the redshifts of interest $M_{cr} \sim 10^7 M_{\odot}$) undergoes a merger or accretes baryonic material funnelled into the halo along filaments. Even sub-critical haloes ($M\sim10^6 M_{\odot}$) can trigger HD cooling if they accrete onto a critical one, as the relevant physical condition driving the turbulence is the relative velocity, which is set by the potential well created by the super-critical halo. In the super-critical halo, if it is not disrupted by a major merger, the turbulence triggered by accretion is enough to enhance the creation of H$_2$ and HD and therefore to kickstart over-cooling.

Informed by the above findings, here we impose the conditions for over-cooling to happen as follows. We construct the merger trees for each simulation using the `AHF` merger tree tool, and identify DM haloes at redshift $z_2$ with mass $\geq M_{cr}(z_2)$ that subsequently undergo merging to form a bigger halo at $z_1$ (with $z_2\ge z_1$). We define the over-cooled (OC) halo merger as the process in which an existing halo at $z_1$ has at least two progenitors, of which at least one has mass $M_{DM}\ge M_{cr}(z_2)$, and after the merger the halo keeps at least a mass fraction $f_{mg}$ of its most massive progenitor. We vary the factor $f_{mg}$ from 0.6 to 0.9 in order to study how the OC halo fraction depends on it. Our halo finder parameters imply that the minimum halo mass (which therefore sets the definition of a merger) is somewhat resolution-dependent, ranging from $1.47\times 10^4 M_{\odot}$ in simulation S1Mpc512 to $7.54\times10^6 M_{\odot}$ in simulation S4Mpc256. Here we want to stress that we do not impose a minimum merger mass ratio in our strategy to look for OC haloes.

The method described above is suitable to address the question: [*what is the fraction of DM haloes able to over-cool their baryonic content (and thus a potential site for low mass star formation) due to mergers and accretion at high redshift?*]{} Indeed, the condition $M\ge M_{cr}$ ensures that the interaction between haloes will be strong enough to trigger the enhancement of the HD formation. On the other hand, a study based on the merger mass ratio could give us information about the amount of OC gas and could thus help answer a different question: [*what is the amount of OC gas in haloes at high redshift?*]{}
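Schematically, and purely as an illustration of the selection just described (the data structures below are hypothetical stand-ins for the `AHF` merger-tree output; only the numerical constants are taken from the text), the criterion reads:

```python
import numpy as np

F_B = 0.1709  # universal baryon fraction adopted in the text

def m_cr_dm(z):
    """DM-only critical mass for HD over-cooling, in solar masses:
    M_cr(z) = (1 - f_b) * 8e6 * (20 / (1 + z))**2  ~=  6.63e6 * (20 / (1 + z))**2."""
    return (1.0 - F_B) * 8.0e6 * (20.0 / (1.0 + z)) ** 2

def is_over_cooled(halo_mass_z1, progenitor_masses_z2, z2, f_mg=0.6):
    """Schematic OC criterion: at least two progenitors at z2, at least one of
    them above M_cr(z2), and the halo at z1 retains a fraction f_mg of its most
    massive progenitor.  The (mass, progenitor list) inputs are hypothetical
    simplifications of the merger-tree data."""
    progs = np.asarray(progenitor_masses_z2, dtype=float)
    if progs.size < 2:
        return False
    if progs.max() < m_cr_dm(z2):
        return False
    return halo_mass_z1 >= f_mg * progs.max()

# Particle number per critical-mass halo for m_p = 5.88e3 Msun:
print(m_cr_dm(10.0) / 5.88e3, m_cr_dm(17.5) / 5.88e3)   # ~3.7e3 and ~1.3e3
```

The two printed numbers reproduce the $N_{p,h}$ column of Table \[table1\] for the runs with $m_p=5.88\times10^3 M_{\odot}$.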
Our simulation set-up and our methodology cannot quantify the amount of over-cooled gas, but they are suitable to estimate the fraction of OC haloes at high redshift. This is the goal of the present work.

Results and Discussion
======================

Table \[table2\] shows some example combinations of redshifts $z_1$ and $z_2$ used in building our merger tree, and the ranges of the three halo mass bins (at $z_2$) we consider. The mass bins labeled by $i=1,2,3$ have been chosen so that the bin mass lower and upper boundaries are $10^{0.2(i-1)}M_{cr}(z_2)$ and $10^{0.2(i)}M_{cr}(z_2)$ respectively. Thus the three mass bins are centered around 1.3, 2.0, and 3.2 $M_{cr}(z_2)$, respectively. With this choice, the mass range spanned by the three bins covers the transition from H$_2$ cooling mini-haloes (with virial temperature $T_{vir}\la$few$\times10^3$K and $M_{vir}\la 1.5\times10^7 M_\odot$) to atomic cooling haloes (with $T_{vir}\ga10^4$K and $M_{vir}\ga 1.5\times 10^7 M_\odot$). This is summarized in Fig. \[fig:mass\]. Table \[table3\] reports the total number of OC haloes $N_{OC}$ and the total number of haloes $N_{\rm h}$ at $z_1=10$ for the three different mass bins in the least restrictive case $f_{mg}=0.6$ and for the highest resolution simulations with $m_p\le5.88\times10^3M_{\odot}$. The reported number is the sum of all OC haloes (all haloes) in the $N$ simulations considered, i.e., 5 simulations of S1Mpc512, 20 for S1Mpc256 and 5 for S2Mpc512. The effective volume for finding these OC haloes is therefore 5 Mpc$^3$, 20 Mpc$^3$ and 40 Mpc$^3$, respectively.

Figures \[fig1\], \[fig2\] and \[fig3\] show the fraction of OC haloes $f_{\rm OC}^{\rm HD}$ for four different values of $f_{mg}$ and for the three halo mass bins as a function of redshift. These results are shown for our two different $N_p$ (in different columns) and different box sizes $L_{\rm box}$ (in different rows). The error bars correspond to the standard deviation between the $N$ simulations at a given redshift. The error on the mean would be smaller by a factor $\sqrt{N}$. As we expected, the higher the $f_{mg}$ the lower the OC fraction $f_{\rm OC}^{\rm HD}$. This trend shows that after a merger process it is very difficult for the resulting halo to keep 100$\%$ of its progenitor’s mass: some of the progenitor’s mass is always removed from the parent halo after the merger. The resulting $f_{\rm OC}^{\rm HD}$ shows a very weak dependence (or no dependence at all) on $f_{mg}$ for $0.6\leq f_{mg}\leq 0.8$.

Our results show a clear resolution dependence for $m_p\ge4.70\times10^4M_{\odot}$, i.e. runs S4Mpc512, S4Mpc256 and S2Mpc256. In these runs it is possible to see a monotonic growth of the OC fraction with the simulation resolution, which is particularly marked in the $f_{mg}=0.6$ case: we thus discuss numerical convergence before further interpreting Figs. \[fig1\], \[fig2\] and \[fig3\]. Numerical convergence is investigated further in Fig. \[fig4\], where it is possible to identify a mass-resolution dependent trend. Simulations S1Mpc512, S2Mpc512 (and S1Mpc256) have particle masses $m_p\le 5.88\times10^3M_{\odot}$ and thus a mass threshold for an OC halo merger of $M=1.2 \times 10^5 M_{\odot}$ ($M=1.47\times10^4 M_{\odot}$ for S1Mpc512), or merger mass ratios below $1:65$. These simulations correspond to the (red) plus symbols, (blue) asterisk symbols and (green) “x” symbols. At this resolution results for $f_{\rm OC}^{\rm HD}$ appear to converge.
On the other hand, simulations S2Mpc256, S4Mpc512 and S4Mpc256, with $m_p\ge4.70\times10^4M_{\odot}$ and shown with (magenta) open square symbols, (cyan) filled square symbols and (yellow) open circle symbols, do not show numerical convergence. This can be understood since the mass threshold for a merger in these simulations is high ($> 9.4\times 10^5 M_{\odot}$) and the merger mass ratios are larger than $1:2$. In what follows we will focus on the three higher mass resolution simulations (S1Mpc512, S2Mpc512 and S1Mpc256), because they give the most reliable results based on both convergence and the number of particles per DM halo.

In figure \[fig1\], corresponding to the first mass bin (see table \[table2\]), and for mass resolution $m_p\le5.88\times10^3M_{\odot}$ (i.e., top two panels, and middle right panel), our results show that at $z=10$ the fraction of OC haloes is $f_{\rm OC}^{\rm HD}\ga0.5$ in the case with $f_{mg} \lesssim 0.7$. This fraction tends to decrease at higher redshift ($z\la12.5$) but is always above 20$\%$ (for $f_{mg}\lesssim 0.7$), showing that a non negligible fraction of DM haloes in this mass bin are able to over-cool their gas content due to mergers at high redshift. At higher redshifts, i.e., $z\ga15$, the fraction decreases to $f_{\rm OC}^{\rm HD}\la0.2$. This last result comes from S2Mpc512, the only simulation with data at $z\ga15$ in this mass bin. In figure \[fig2\] we show the second mass bin, centered around $M=2 M_{cr}(z_2)$. For runs with mass resolution $m_p\le5.88\times10^3M_{\odot}$ (top two panels, and middle right panel) the OC fraction at $z=10$ is $f_{\rm OC}^{\rm HD}\ga0.9$ for $f_{mg}\lesssim 0.7$ and it can reach $f_{\rm OC}^{\rm HD}\sim 1.0$. At higher redshift ($z\la12.5$) the fraction remains significant, $f_{\rm OC}^{\rm HD}\ga0.8$. As expected, the OC fraction increases with the mass of the halo. Figure \[fig3\] shows our results for the third mass bin, centered around $M=3.2 M_{cr}(z_2)$. The OC fraction keeps increasing with halo mass. In summary, these figures show that a non negligible fraction of DM haloes above the critical mass $M_{cr}$ are able to over-cool their gas content due to mergers at high redshift.

To illustrate how the OC merger proceeds, figure \[fig5\] shows the evolution of two randomly chosen OC haloes from our catalog at 4 different redshifts $z_1$. In the first column we show an OC halo of $M=6.41\times10^7M_\odot$ (computed at $z=10$) from the third mass bin, and in the second column a $M= 2.53\times10^7M_\odot$ (computed at $z=10$) OC halo from the first mass bin. The difference in size of the objects reflects the different mass bins.

As an additional study, we have computed the probability distribution function of the halo spin parameter $\lambda$ defined by @Bullock2001, and we have found that it follows a log-normal distribution characterized by a standard deviation $\sigma\approx0.5$ and an average spin parameter $\bar\lambda\approx0.04$, in good agreement with previous works, e.g. @DavisNatarajan2009. Despite the low number of haloes in the most reliable runs (see table \[table3\]), we recover a log-normal distribution (for S1Mpc256 and S2Mpc512) characterized by the parameters given above. This supports our claim about the reliability of our results.

While the N-body simulations for this work were running, new results on cosmological parameters, derived from the [*Planck*]{} satellite observations, were released [@PlanckCollaboration2013]. [*Planck*]{}’s best-fit $\Lambda$CDM cosmological parameters are somewhat different from WMAP’s.
Because our results can be cosmology-dependent, let us elaborate on the possible effect of the [*Planck*]{} results. [*Planck*]{}’s $\Omega_m$ value is slightly higher than WMAP’s and $\Omega_b$ slightly lower. This affects directly the computation of the DM critical mass $M_{cr}(z)$ decreasing it by a $2\%$, approximately. Furthermore, because the [*Planck’s*]{} value of the Hubble constant is lower, each redshift in our calculations has to be increased by about $4\%$ and so the box size $L_{\rm box}$. Thus the changes associated to the new best fit cosmological parameters have a negligible effect on our results. Summary and Conclusions ======================= We have performed $75$ DM-only cosmological simulations with two different particle numbers ($256^3$ and $512^3$) and inside three different box sizes ($L_{\rm box}=$ 1Mpc, 2Mpc and 4Mpc) in order to quantify the fraction of haloes able to over-cool their baryonic content due to mergers at high redshift as predicted by @ShchekinovVasiliev2006. As shown in @jpp3 accretion and (minor) mergers onto a halo of mass above the critical value defined by SV06, $M_{cr}(z)$ produce supersonic turbulence and a shocked environment where H$_2$ and HD molecules are formed efficiently. There, regions are able to (over)cool below the H$_2$ cooling temperature floor. To identify the fraction of haloes where the above conditions are verified, we computed the progenitor’s mass for each halo at a given redshift inside a bin mass, specified in table \[table2\]. This mass range spans the transition between H$_2$ molecular cooling to atomic cooling haloes. Every halo with more than one progenitor of which at least one has a mass above $M_{cr}(z)$, was counted as an over-cooled (OC) halo. Our results show that a non negligible fraction of the mini-haloes formed at $z\ge10$ over-cooled their primordial gas due to the process outlined above. The fraction of OC haloes at $z=10$ is $f_{\rm OC}^{\rm HD}\ga0.5$ for masses roughly below the atomic cooling limit: $1\times10^7\la M/M_\odot\la3\times10^7$. At higher redshift, $z\la12.5$, the fraction $f_{\rm OC}^{\rm HD}\ga0.2$ and it is below 0.2 for $z\ga15$. The fraction of OC haloes rises with halo mass. For haloes above the atomic cooling limit, $2\times10^7\la M/M_\odot\la8\times10^7$, the fraction of OC haloes at $z\la12.5$ is $f_{\rm OC}^{\rm HD}\ga0.8$. The existence of a non negligible fraction of OC haloes at high redshift has interesting consequences for the star formation process in primordial environments. As predicted by SV06 the HD molecular cooling drops the gas temperature to the CMB limit $T_{\rm CMB}(z)\approx2.73(1+z)$ allowing the formation of low mass primordial stars [@jpp2; @jpp3]. Their low mass makes these primordial (population III) stars very long-lived opening a window for the potential detection of primordial stars in the local Universe. Acknowledgements {#acknowledgements .unnumbered} ================ JP thanks Roberto Gonzalez and Christian Wagner for their useful and constructive comments on this work. JP, RJ and LV acknowledge support by Mineco grant FPA2011-29678- C02-02. LV is supported by European Research Council under the European CommunityÕs Seventh Framework Programme grant FP7- IDEAS-Phys.LSS. [99]{} Abel T., Bryan G. L. & Norman M. L., 2002, Science, 295, 93 Barkana R. & Loeb A., 2001, Phys. Rep., 349, 125 Bougleoux E. & Galli D., 1997, MNRAS, 288, 638 Bromm V., Coppi P. & Larson R. B., 2002, ApJ, 564, 23 Bullock J. S., Kolatt T. S., Kravtsov A. V., Klypin A. 
A., Porciani C. & Primack J. R., 2001a, ApJ, 555, 240 Davis A. J. & Natarajan P., 2009, MNRAS, 393, 1498 Eisenstein D. J. & Hu W., 1998, ApJ, 496, 605 Galli D. & Palla F., P&SS, 50, 1197 Greif T. H., Johnson, Jarrett L., Klessen R. S. & Bromm, V., 2008, MNRAS, 387, 1021 Greif T. H., Springel V., White S. D. M., Glover S. C. O., Clark P. C., Smith R. J., Klessen R. S. & Bromm V., 2011, Haiman Z., Thoul A. A. & Loeb A., 1996, ApJ, 464, 523 Knollmann S. R. & Knebe A., 2009, ApJ, 182, 608 Komatsu E., et al., 2009, ApJS, 180, 330 Komatsu E., et al., 2010, arXiv:1001.4538 Lepp S. & Shull J. M., 1983, ApJ, 270, 578 Machida M., Tomisaka K., Nakamura F. & Fujimoto M., 2005, ApJ, 622, 39 Palla F., Galli D. & Silk J., 1995, ApJ, 451, 401 Peebles P. J. E., Dicke R. H., 1968, ApJ 154, 891 arxiv.org/abs/1303.5076 Prieto J., Padoan P., Jimenez R., Infante L., 2011, ApJ, 731, L38 Prieto J. P., Infante L., Jimenez R., 2008, arXiv, arXiv:0809.2786 Prieto J., Jimenez R. & Martí J., 2012, MNRAS, 419, 3092P Prunet S. Pichon C., Aubert D., Pogosyan D., Teyssier R. & Gottloeber S., 2008, ApJs, 178, 179 Tegmark M., Silk J., Rees M. J., Blanchard A., Abel T. & Palla F., 1997, ApJ, 474, 1 Teyssier R., 2002, A&A, 385, 337 Shchekinov Yu. A. & Vasiliev E. O., 2006, MNRAS, 368, 454 Stacy A. & Bromm V., 2013, MNRAS, arXiv:1211.1889 [^1]: email:joaquin.prieto.brito@gmail.com [^2]: But see @Greifetal2011 and @StacyBromm2013 for lower masses primordial stellar binary-multiple systems. [^3]: Note that we adopt the value $h=0.719$ therefore here length are in Mpc and masses in $M_{\odot}$.
--- abstract: 'We show that there is “no stable free field of index $\alpha\in (1,2)$”, in the following sense. It was proved in [@BPR18] that subject to a *fourth moment assumption*, any random generalised function on a domain $D$ of the plane, satisfying conformal invariance and a natural domain Markov property, must be a constant multiple of the Gaussian free field. In this article we show that the existence of $(1+\ve)$-moments is sufficient for the same conclusion. A key idea is a new way of exploring the field, where (instead of looking at the more standard circle averages) we start from the boundary and discover averages of the field with respect to a certain “hitting density” of Itô excursions.' author: - 'Nathanaël Berestycki[^1]' - Ellen Powell - 'Gourab Ray[^2]' bibliography: - 'EP\_bibliography.bib' title: '$(1+\eps)$ moments suffice to characterise the GFF ' --- Introduction ============ The **Gaussian free field** (GFF) is a universal object believed (and in many cases proved) to govern the fluctuation statistics of many natural random surface models [@GOS; @NS; @MillerGL; @Kenyon_GFF; @dubedat_torsion; @BLRdimers; @BLRtorus; @DubedatGheissari; @Li] (see, e.g., [@LQGnotes; @LNWP] for an introduction and survey of some recent developments). Although the GFF can be defined in any dimension, this article is concerned with the planar continuum version, which satisfies two special properties; namely, **conformal invariance** and a **domain Markov property**. The former roughly entails that applying a conformal map to a GFF in any domain produces a GFF in the image domain. The latter says, informally, that for any $D' \subset D \subset \C$, the conditional law of the GFF on $D$ restricted to $D'$, given its behaviour outside of $D'$, is that of the harmonic extension of the GFF from $\partial D'$ to $D'$ plus an independent GFF in $D'$. However, one major technical issue with defining the GFF is that it cannot be made sense of as a random function. It is instead defined as a random generalised function, which in this article we view as a stochastic process indexed by smooth, compactly supported test functions. As a result, some preparation is required in order to rigorously formulate the above properties. We will now formally state our assumptions, which are essentially the same as in [@BPR18] except for the moment condition and the Dirichlet boundary condition (we will comment after the theorem on the necessity of this adaptation). Assume that for every simply connected domain $D\subset \C$, a stochastic process $h^D = (h^D_\phi)_{\phi \in C_c^\infty(D) }$ indexed by test functions is given. Assume further that each $h^D$ is linear in $\phi$: that is, for any $\lambda, \mu \in \R$ and $\phi, \phi'\in C_c^\infty(D)$, $$h^D_{\lambda \phi + \mu \phi'} = \lambda h^D_\phi + \mu h^D_{\phi'} \text{ almost surely. }$$ We then write, with an abuse of notation, $$( h^D, \phi) := h^D_\phi \text{ for } \phi \in C_c^\infty(D).$$ We denote by $\Gamma^D $ the law of the stochastic process $h^D$. Thus $\Gamma^D$ is a probability distribution on $\R^{C_c^\infty(D)}$ equipped with the product topology. By Kolmogorov’s extension theorem $\Gamma^D$ is characterised by its consistent finite-dimensional distributions: i.e., by the joint law of $(h^D, \phi_1), \ldots, (h^D, \phi_k)$ for any $k \ge 1$ and any $\phi_1, \ldots, \phi_k \in C_c^\infty(D)$. 
We finally recall that the $H^{-1}(D)$ norm of a function $f\in C_c^\infty(D)$ is given by $$\label{eqn:hminus1} (f,f)_{-1}:=((-\Delta)^{-1/2}f,(-\Delta)^{-1/2}f)=(f,(-\Delta^{-1})f)= \iint_{D\times D} G_D(x,y) f(x)f(y) \, dx dy$$ where $G_D$ is the Green function with Dirichlet boundary conditions in $D$. Let $D \subset \C$ be a proper simply connected open domain, and let $h^D$ be a sample from $\Gamma^D$. We make the following assumptions. \[ass:ci\_dmp\] (i) **(Moments)** For every $\phi\in C_c^\infty(D)$ and some $\xi>1$: $$\E[(h^D,\phi)]=0 \;\; \text{and} \;\; \E[|(h^D,\phi)|^\xi]<\infty.$$ (ii) **(Continuity and Dirichlet boundary conditions)** If $\phi_n\to \phi$ in $C_c^\infty(D)$, then $(h^D,\phi_n)\to (h^D,\phi)$ in probability as $n\to \infty$. Moreover, suppose that $(\phi_n)_{n \ge 1}$ is a sequence of non-negative test functions in $C_c^\infty(D)$, such that $d_n:=\sup\{d(z,\partial D)\, : \, z\in\text{Support}(\phi_n)\}\to 0$ as $n\to \infty$, and $\phi_n \to 0$ in $H^{-1}(D)$. Then we have that $(h^D,\phi_n) \to 0$ in probability and in $L^1$ as $n\to \infty$. (iii) **(Conformal invariance.)** Let $f: D \to D'$ be a bijective conformal map. Then $ \Gamma^{D} = \Gamma^{D'}\circ f, $ where $\Gamma^{D'} \circ f$ is the law of the stochastic process $(h^{D'}, |(f^{-1})'|^2 (\phi \circ f^{-1}))_{\phi \in C_c^\infty(D)}$. (iv) **(Domain Markov property)**. Suppose $D' \subset D$ is a simply connected Jordan domain. Then we can decompose $ h^D= h^{D'}_D+\ph_D^{D'}, $ where: - $ h^{D'}_D$ is independent of $\ph_D^{D'}$; - $(\ph_D^{D'},\phi)_{\phi\in C_c^\infty(D)}$ is a stochastic process indexed by $C_c^\infty(D)$ that is a.s. linear in $\phi$ and such that when we restrict to $C_c^\infty(D')$, $$(\ph_D^{D'},\phi)_{\phi\in C_c^\infty(D')}$$ a.s. corresponds to integrating against a harmonic function in $D'$. - $((h^{D'}_D,\phi))_{\phi\in C_c^\infty(D)}$ is a stochastic process indexed by $C_c^\infty(D)$, such that $(h^{D'}_D,\phi)_{\phi\in C_c^\infty(D')}$ has law $\Gamma^{D'}$ and $(h^{D'}_D,\phi)=0$ a.s. for any $\phi$ with $\operatorname{Support}(\phi)\subset D\setminus D'$. Observe that in light of *(iii)*, the Dirichlet boundary condition *(ii)* holds in one simply connected domain $D$ if and only if it holds in all simply connected domains. Indeed, suppose that it holds in $D$ and let $f:D\to D'$ be a conformal map. Then if $(\phi_n)_{n}\to 0\in H^{-1}(D')$, we have by conformal invariance of the Green function that $\tilde{\phi}_n:=|f|^2 (\phi_n\circ f)$ converges to $0$ in $H^{-1}(D)$, and since $(h^{D'},\phi_n)$ is equal in law to $(h^D,\tilde{\phi}_n)$, that $(h^{D'},\phi_n)\to 0$ in probability and in $L^1$ as $n\to \infty$. We now comment on the main changes with respect to the assumptions in [@BPR18]. As already mentioned, the main change is the fact that we have replaced a moment of order four in (i) with a moment of order $\xi$ where $\xi>1$. Beyond this, we have slightly adapted the Dirichlet boundary condition (assumption (ii)). Indeed, it may not even be apparent to the reader at first sight why we call (ii) a Dirichlet boundary condition. Suppose $\phi_n$ is a sequence of functions in $C_c^\infty(D)$, whose support converges to a subset of the boundary $\partial D$, in the sense that $d_n \to 0 $ (where $d_n$ is defined in (ii)). If $h$ is a Gaussian free field in $D$ (with Dirichlet boundary conditions), we may be tempted to believe that $(h, \phi_n) \to 0$. 
Unfortunately, without any additional assumption this is not necessarily the case, even if $\|\phi_n\|_1 $ is bounded (to see why, consider the uniform distribution in a ball of radius $\eps$ at distance $\eps$ from the boundary). Instead, in order for $(h, \phi_n)$ to converge to zero we need an extra condition which guarantees that the mass of $f_n$ is sufficiently “spread out”. In [@BPR18] we assumed that for $D= \D$, $(h, \phi_n)\to 0 $ for sequences $\phi_n $ which are bounded in $L^1$ and *rotationally symmetric*. However, in the present article, we will need $\phi_n$ to be asymptotically supported on a *proper* subset of the boundary (see the definition of $p_u$ in ) and so rotational invariance of the support of $\phi_n$ is not sufficient. Instead we need to quantify what “sufficiently spread out means”; this is exactly what convergence to 0 in $H^{-1}(D)$ ensures. Before stating our results, we recall the definition of a Gaussian free field (with Dirichlet boundary conditions) on a domain $D \subset \C$. \[def::gff\] A mean zero Gaussian free field $h_{\operatorname{GFF}} =h^D_{\operatorname{GFF}} $ with zero boundary conditions is a stochastic process indexed by test functions $(h_{\operatorname{GFF}}, \phi)_{\phi \in C_c^\infty(D)}$ such that: - $h_{\operatorname{GFF}}$ is a centered Gaussian field; for any $n\ge 1$ and any set of test functions $\phi_1,\cdots, \phi_n \in C_c^\infty(D)$, $((h_{\operatorname{GFF}},\phi_1),\cdots, (h_{\operatorname{GFF}},\phi_n))$ is a Gaussian random vector with mean ${\mathbf{0}}$; - for any two test functions $\phi_1,\phi_2 \in C_c^\infty(D)$, $$\E[(h_{\operatorname{GFF}},\phi_1) , (h_{\operatorname{GFF}},\phi_2)] = \int_{D} G^D(z,w) \phi_1(z)\phi_2(w)dzdw$$ where $G^D$ is the Green’s function with Dirichlet boundary conditions on $D$. The main technical content of this paper is summarised by the following proposition, whose most important aspect states that moments of order $\xi$ as in Assumptions \[ass:ci\_dmp\], together with domain Markov property and conformal invariance, imply a moment of order 4. \[prop:fourth\_moment\] Assume that $(\Gamma^D)_D$ satisfies Assumptions \[ass:ci\_dmp\]. Then in fact: (1) $\E[(h^D,\phi)^4]<\infty $ for every $\phi\in C_c^\infty(D)$; (2) the bilinear form $K_2^D$ on $C_c^\infty(D)\times C_c^\infty(D)$ defined by $$\E[(h^D,\phi)(h^D,\phi')]=K_2^D(\phi,\phi'), \quad \quad \phi, \phi' \in C_c^\infty(D)$$ is continuous; and (3) the convergence in (ii) of Assumptions \[ass:ci\_dmp\] also holds in $L^2$. As a direct consequence we obtain the following theorem, which is the main result of this paper. \[thm::characterisation\_gff\] Suppose the collection of laws $\{\Gamma^D\}_{D\subset \mathbb{C}}$ satisfy Assumptions \[ass:ci\_dmp\] and let $h^D$ be a sample from $\Gamma^D$. Then there exists $\sigma\ge 0$ such that $h^D = \sigma h_{\operatorname{GFF}}^D$ in law, as stochastic processes. This is a direct consequence of Proposition \[prop:fourth\_moment\] and [@BPR18 Theorem 1.6]. #### Proof idea: In order to explain the new ideas required for Theorem \[thm::characterisation\_gff\], it is helpful to first recall the main steps in the proof of [@BPR18 Theorem 1.6]. *Sketch of proof of [@BPR18 Theorem 1.6].* The proof of Theorem 1.6 in [@BPR18] can be broken into two distinct parts: (1) showing that the field is Gaussian (i.e., that $h^D$ is a Gaussian process for each $D$) and (2) showing that it has the correct covariance structure. In fact, once Gaussianity is known, proving (2) is rather straightforward. 
It boils down to the fact that the Greens’ function is characterised by harmonicity away from the diagonal and logarithmic blow-up along the diagonal – see [@BPR18]. Proving (1) is rather more challenging. The key step in [@BPR18] is to show that “circle averages" around points are jointly Gaussian. That is, for any finite set of points, the joint law of the circle averages is Gaussian. The circle average process of a Gaussian free field $h^D$ around a point $z\in D$ is, roughly speaking, the process $(h, \phi_t)_{t\ge 0}$, where $\phi_t$ is uniform measure on the circle of radius $\e^{-t}$ around $z$. More precision is required for a rigorous definition, since the $\phi_t$ are not smooth test functions, but this can be dealt with by approximating the $\phi_t$ appropriately. Once it is known that circle averages are jointly Gaussian, it is easy to deduce (1), because the field can be approximated by circle averages with small radii, and limits of Gaussians are Gaussian. To address the question of showing Gaussianity of circle averages, let us consider the case where $D=\D$ is the unit disc, and we take averages around a single point: the origin. It is well known and easy to see that for a GFF in $\D$, the circle average process around $z = 0$ is a constant multiple of Brownian motion. For our given process $h^\D$, the domain Markov property together with scale invariance shows that the circle average process has independent and stationary increments. However, one cannot immediately deduce that it is Brownian motion, which would of course yield Gaussianity. More work is required to eliminate processes with jumps (e.g. compound Poisson processes, symmetric stable processes etc.) In [@BPR18], a fourth moment assumption on the field was used to apply Kolmogorov’s criterion, and thereby prove that the circle average process possesses an almost surely continuous modification. This modification must then be Brownian motion and, in particular, Gaussian. In fact, we can generalise this argument to show that arbitrary linear combinations of circle averages around multiple points must also be Gaussian, which completes the key step of the proof. *Sketch of proof of Proposition \[prop:fourth\_moment\].* The major challenge in this article is to reach the same conclusion *without* the fourth moment assumption. In contrast to the above approach, we will simply aim to prove Gaussianity of single circle averages, rather than linear combinations of averages around multiple points. Note that this does not immediately imply *joint* Gaussianity of circle averages (for which significantly more work would be needed). However, it is enough (with a little extra work) to prove existence of fourth moments (\[prop:fourth\_moment\]) and given the result of [@BPR18], this concludes the proof of \[thm::characterisation\_gff\]. To summarise: the main step of the proof *in this article* is to show existence of an a.s. continuous modification of the circle average process around $z=0$ for $h^\D$ (the given field in the disk $\D$) assuming only $\xi$th moments of the field for some $\xi>1$. See \[cor:circ\_avg\_cont\] and \[prop:circ\_av\_bm\]. Achieving this is not merely a technical upgrade of the idea used in [@BPR18]; a new input is required. Namely, in we introduce a certain **sine-average process** for the field $h^\H$, on semi-circles in the upper half plane. Its value at a given semi-circle can be viewed as the average of $h^\H$ with respect to a hitting measure for half-plane **Itô excursions** from $0$. 
As a result, one can easily construct a parametrisation (with respect to the semi-circle radius), under which the resulting process satisfies: - (one-dimensional) Brownian scaling; and crucially - a certain **“harness”** property, as introduced by Hammersley in [@harness] (see also [@Williams_harness]). The increments of this process are easily checked to be independent; however, there is no reason *a priori* why they should be stationary. Nonetheless, we are able to formulate a (new) characterisation of Brownian motion in terms of this harness property and use this to show that the sine-average process must be a Brownian motion. This characterisation is given in Proposition \[prop:char\_BM\], and is an extension of a result proved in [@Wes93]. Crucially, our extension does not require as many moments as [@Wes93]; in fact moments of *any* order $\xi>0$ suffice. From this point, we use rotational invariance and the domain Markov property to “average out” the semi-circle sine-averages of $h^\H$ and relate them to circle averages of $h^\D$. The consequence is existence of a continuous modification of the circle-average process around $0$ for $h^\D$. For this last step, one needs to precisely control the behaviour of the harmonic part in a domain Markov decomposition of $h^\D$, which forms the main technical part of the argument. This is where the assumption $\xi >1$ is used. Having done this, the proof of Proposition \[prop:fourth\_moment\] is concluded. Consider a family of fields $(h^D)_D$ in simply connected domains $D$, that assign values $(h^D, \phi)$ to smooth test functions $\phi$. Theorem \[thm::characterisation\_gff\] shows that conformal invariance and the domain Markov property (in the sense of Assumptions \[ass:ci\_dmp\]) are incompatible with these $(h^D,\phi)$s having $\alpha$-stable (rather than Gaussian) distributions, for any value of the index $\alpha \in (1,2)$. Comparing to the better understood one-dimensional situation, a (1d) $\alpha$-stable process has different scaling properties to those of (1d) Brownian motion. Since scaling is a special type of conformal mapping, this suggests that “natural $\alpha$-stable analogues” of the GFF cannot enjoy conformal invariance. Our Theorem can be viewed as a rigourous justification of this informal heuristic when $\alpha \in (1,2)$. We mention here that some variants of higher dimensional stable fields have been defined and studied before, see [@kumar1972stable] and also [@cipriani2016divisible] for a limiting construction. It will be interesting to find a suitable characterisation theorem for such fields. In view of the above remark, it is natural to wonder whether *any* moments assumptions are needed to characterize the GFF. \[Q:xi\] What are the minimal moment assumption necessary for \[thm::characterisation\_gff\] to hold? Do moments of order $\xi$ for any $\xi>0$ suffice? #### Acknowledgements We thank Scott Sheffield and Juhan Aru for some inspiring discussions. Part of this work was carried while all three authors visited Banff on the occasion of the programme “Dimers, Ising Model, and their Interactions”. We would like to thank the organisers as well as the team in BIRS for this opportunity and their hospitality. Preliminaries ============= Independent random variables ---------------------------- \[lem:indep\_moments\] Suppose that $(X,Y)$ are real-valued random variables defined on the same probability space, and that $X$ and $Y$ are independent. 
Then for any $\xi>0$, $$\mathbb{E}[|X+Y|^\xi]<\infty \Rightarrow \mathbb{E}[|X|^\xi]<\infty \text{ and } \mathbb{E}[|Y|^\xi]<\infty.$$ Fix some $M$ such that $\mathbb{P}(|Y|\le M)\ge 1/2$ and note that $|X/(X+Y)|\mathbf{1}_{\{|Y|\le M, |X|\ge 2M\}}\le 2$ (it is less than 1 if $X$ and $Y$ have the same sign, and less than $2$ otherwise). Then $\mathbb{E}[|X|^\xi \mathbf{1}_{\{|X|\le 2M\}}]\le (2M)^\xi$ and $$\mathbb{E}\left[|X|^\xi \mathbf{1}_{\{|X|\ge 2M\}}\right]\le 2 \mathbb{E}\left[\left|\frac{X}{X+Y}\right|^\xi \, \left|X+Y\right|^\xi \mathbf{1}_{\{|Y|\le M, |X|\ge 2M\}}\right] \le 4 \mathbb{E}\left[\left|X+Y\right|^\xi \right]<\infty.$$ Symmetrically, $\mathbb{E}[|Y|^\xi]<\infty$. \[lem:vbe\] Let $r \in [1,2]$. (i) Suppose that $X,Y$ are random variables with $\mathbb{E}[|X|^r]<\infty, \mathbb{E}[|Y|^r]<\infty, \mathbb{E}[Y|X]=0$ a.s. Then $\mathbb{E}[|X+Y|^r]\ge \mathbb{E}[|X|^r]$. (ii) Suppose that $r\le 2$ and that $(X_1,\cdots, X_n)$ are independent, centred random variables with $\mathbb{E}[|X_j|^r]<\infty$ for $1\le j \le n$. Then $\mathbb{E}[|\sum_{j=1}^n X_j |^r]\le 2 \sum_{j=1}^n \mathbb{E}[|X_j|^r]$. Immediate consequences of the domain Markov property ---------------------------------------------------- \[lem:unicity\_decomposition\] The assumption of zero boundary conditions implies that the domain Markov decomposition from (iv) is unique. This is very similar to the proof of [@BPR18 Lemma 1.4], but we include it since some arguments are slightly different. Suppose that we have two such decompositions: $$\label{eqn::DMP_uniqueness} h^D = h^{D'}_D+\ph_D^{D'} = \tilde{h}^{D'}_D+\tilde{\ph}_D^{D'}.$$ Pick any $z\in D'$ and let $f:D'\to \D$ be a conformal map that sends $z$ to $0$. Further, let $(\phi_n)_{n\ge 1}$ be a sequence of nonnegative radially symmetric, mass one functions in $C_c^\infty(\D)$, that are eventually supported outside any $K\Subset \D$. It is easy to check that $\phi_n\to 0$ in $H^{-1}(\D)$ as $n\to \infty$, and if we set $\tilde{\phi}_n := |f'|^2 (\phi_n \circ f)$ for each $n$, then (as discussed below \[ass:ci\_dmp\]) $\tilde{\phi}_n$ converges to $0$ in $H^{-1}(D')$ as well. Hence, the assumption of Dirichlet boundary condition implies that $(h^{D'}_D-\tilde{h}^{D'}_D,\tilde{\phi}_n )\to 0$ in probability as $n\to \infty$. In turn, by (\[eqn::DMP\_uniqueness\]), this means that $(\ph_D^{D'}-\tilde{\ph}_D^{D'},\tilde{\phi}_n) \to 0$ in probability. However, since $(\ph_D^{D'} - \tilde{\ph}_D^{D'})$ restricted to $D'$ is a.s. equal to a harmonic function, and since the $\phi_n$’s are radially symmetric with mass one, we have that $$(\ph_D^{D'}-\tilde{\ph}_D^{D'},\tilde{\phi}_n) =((\ph_D^{D'}-\tilde{\ph}_D^{D'})\circ f^{-1}, \phi_n)=(\ph_D^{D'}-\tilde{\ph}_D^{D'})\circ f^{-1}(0)=\ph_D^{D'}(z)-\tilde{\ph}_D^{D'}(z)$$ for every $n$. This implies that for each fixed $z\in D'$, $\ph_D^{D'}(z)=\tilde{\ph}_D^{D'}(z)$ a.s. Applying this to a countable dense subset of $z\in D'$, together with the fact that $(h^D,\phi)=(\ph_D^{D'},\phi)=(\tilde{\ph}_D^{D'},\phi)$ a.s. for any $\phi$ supported outside of $D'$, then implies that $\ph_D^{D'}$ and $\tilde \ph_D^{D'}$ are a.s. equal as stochastic processes indexed by $C_c^\infty(D)$. Now, suppose that $D''\subset D'\subset D$ and $h^D$ is a sample from $\Gamma^D$. 
Applying the domain Markov property to $h^D$ in $D'$ and $D''$ respectively, we can write $h^D=h_D^{D'}+\varphi_D^{D'} \text{ and } h^D=h_D^{D''}+\varphi_{D}^{D''}.$ We can further decompose $h_D^{D'}=h_{D'}^{D''}+\varphi_{D'}^{D''}$ by applying the domain Markov property to $h_D^{D'}$ in $D''$. \[lem:nested\_dmp\] As stochastic processes indexed by $C_c^\infty(D)$, we have that $h_D^{D''}=h_{D'}^{D''}$ and $\varphi_D^{D''}=\varphi_D^{D'}+\varphi_{D'}^{D''}$ a.s. This follows by writing $h^D=h_D^{D''}+\varphi_D^{D''}$ and $h^D=h_D^{D'}+\varphi_{D}^{D'}=h_{D'}^{D''}+\varphi_{D'}^{D''}+\varphi_{D}^{D'}$ and applying Lemma \[lem:unicity\_decomposition\]. \[lem:harm\_ci\] Suppose $D$ is simply connected and that $D'\subset D$ is a simply connected Jordan domain. Then if $h^D=h^{D'}_D+\varphi_D^{D'}$ is the domain Markov decomposition of $h^D$ in $D'$ and $f:D\to f(D)$ is conformal, with $f(D')\subset f(D)$ a Jordan domain and $h^{f(D)}=h_{f(D)}^{f(D')}+\varphi_{f(D)}^{f(D')}$, we have that $$\varphi_D^{D'}=\varphi^{f(D')}_{f(D)}\circ f \text{ in law}$$ as harmonic functions in $D'$. For $\phi\in C_c^\infty(D')$ let us denote $\phi^f(z) = |(f^{-1})'|^2 \phi \circ f^{-1}(z) $, so that $\phi^f\in C_c^\infty(f(D'))$. Then by conformal invariance (\[ass:ci\_dmp\](iii)) it follows that $$(h^{D} , \phi ) \overset{(d)}{=} (h^{f(D)}, \phi^f) \text{ and } (h^{D'} , \phi ) \overset{(d)}{=} (h^{f(D')}, \phi^f).$$ By uniqueness of the domain Markov decomposition (\[lem:unicity\_decomposition\]), it then follows that $$(\varphi_{D}^{D'} , \phi) \overset{(d)}{=} (\varphi_{f(D)}^{f(D')}, \phi^f)$$ and since $\varphi$ is harmonic, this is exactly the statement that $$\int_{D'} \varphi_{D}^{D'} (z)\phi(z)dz \overset{(d)}{=} \int_{f(D')} \varphi_{f(D)}^{f(D')}(z)\phi^f(z)dz = \int_{D'} \varphi_{f(D)}^{f(D')} (f(w))\phi(w)dw,$$ where the last equality is just the change of variables formula. Since this holds for all $\phi \in C_c^\infty(D')$, this completes the proof. A priori moment bounds ---------------------- In the following, when $z$ lies in an open set $U\subset \C$, we write $d(z,\partial U):=\inf_{y\in \partial U} |y-z|$. We are going to give some bounds on the moments of harmonic functions arising from the domain Markov property. Note that if $z\in D'\subset D$ and $\varphi_D^{D'}$ is such a function, then by harmonicity we can write $\varphi_D^{D'}(z)=(\varphi_D^{D'},\phi)=(h^D,\phi)-(h_D^{D'},\phi)$ for some properly chosen $\phi\in C_c^\infty(D')\subset C_c^\infty(D)$ (e.g., take $\phi$ to be a spherically symmetric bump function which integrates to 1). Therefore $$\mathbb{E}[|\varphi_D^{D'}(z)|^p]<\infty$$ for all $0\le p\le \xi$. Moreover, if $D''\subset D'$, then by \[lem:nested\_dmp\] and \[lem:vbe\](i), we have $$\label{eq:mono_moments} \mathbb{E}[|\varphi_D^{D'}(z)|^p] \le \mathbb{E}[|\varphi_D^{D''}(z)|^p]$$ for all $p\in [1,\xi\vee 2]$. \[lem:moment\_bound\] Suppose that $D'\subset D$ and that $z\in D'$. Then there exists a universal constant $C$ such that for all $p\in [0,\xi\vee 2]$ $$\mathbb{E}[|\varphi_D^{D'}(z)|^p]\le C \left(\log\left(\frac{d(z,\partial D)}{d(z,\partial D')}\right)\vee 1\right)$$ Let $r:=d(z,\partial D')/2$ and $R:=d(z,\partial D)/2$. By Jensen’s inequality we need only consider the case $p=\xi$. In this case, since $\xi>1$ and $B_z(r)\subset D'$, we may further assume by that $D'=B_z(r)$. Now we iteratively apply \[lem:nested\_dmp\]. Let $B_k=B_z(2^k r)$ for $k\in \mathbb{N}_0$, and let $N:=\sup_{k\in \mathbb{N}_0} B_k\subset D$ so that $ N\le \log(R/r)/\log(2)$. 
Then we may write $$\varphi_{D}^{D'}(z)=\varphi_D^{B_N}(z)+\sum_{k=0}^{N-1} \varphi_k(z)$$ where the $\varphi_k(z)$ are independent and each distributed as $\varphi_{\D}^{\D/2}(0)$. Therefore by \[lem:vbe\](ii), it follows that $$\mathbb{E}[|\varphi_D^{D'}(z)|^\xi]\le \mathbb{E}[|\varphi_D^{B_N}(z)|^\xi] + N \mathbb{E}[|\varphi_{\D}^{\D/2}(0)|^\xi].$$ Now $\mathbb{E}[|\varphi_{\D}^{\D/2}(0)|^\xi]$ is bounded by some absolute constant, and so is $\mathbb{E}[|\varphi_D^{B_N}(z)|^\xi]$ (since by conformal invariance, the Koebe quarter theorem and \[lem:vbe\](i), it is less than or equal to $\mathbb{E}[|\varphi_{\D}^{(1/16)\D}(0)|^\xi]$). This completes the proof.

Sine-averages and harmonic functions {#sec:sine_avgs}
====================================

In the following we will denote the unit disc $\{z: |z|<1\}$ of $\C$ by $\D$, and the upper unit semi-disc $\D\cap \H$ by $\D^+$. For $r>0$, we denote by $r\D$ the scaled disc $\{z\in \C: |z|<r\}$, and by $r\D^+$ the corresponding semi-disc $r\D\cap\H$. For $u>0$, we define $p_u$ to be the measure that integrates against $\phi\in C_c(\C)$ as $$\label{eq:sine} (\phi,p_u)= p_u(\phi):=\sqrt{u} \int_{0}^{\pi} \sin(\theta) \phi\left(\frac{\e^{i\theta}}{\sqrt{u}}\right) \, d\theta.$$ Note that $p_u$ is supported on the circle of radius $r_u = 1/ \sqrt{u}$ and that its total mass is $2/r_u = 2 \sqrt{u}$. The motivation for defining these measures comes from the fact that $f(re^{i\theta}) = \frac1r\sin(\theta)$ is harmonic in the upper half plane with zero boundary conditions (except at the origin). In fact, $f$ can be interpreted, up to a constant factor, as the hitting density on a circle of radius $r$, for an Itô excursion in the upper-half plane starting from zero. While our proofs can be written without referring to this interpretation, it may be useful for the intuition nonetheless, so we will now explain how to state this more precisely. We start by recalling some background about such excursions (see Chapter 5.2 in [@Lawler-book] for further details). Let $\P_{i \eps}$ denote the law of Brownian motion starting from $i\eps$, killed when it leaves the upper-half plane $\H$. By definition, the **Itô excursion measure** from zero is the (infinite) measure $\N$ obtained as the vague limit $$\N : = \lim_{\eps \to 0} \frac1{\eps} \P_{i \eps}$$ which is supported on continuous trajectories $\omega $ starting from zero, such that $\omega (t) \in \H$ for $t \in (0, \zeta)$ where $\zeta = \zeta(\omega)$ is the lifetime of the excursion, and such that $\omega(t) = \omega(\zeta) \in \R$ for any $t \ge \zeta$. A “sample” from $\N$ will later be called a half-plane excursion. More generally, the corresponding excursion measure can be defined on any simply connected domain $D$ from a nice boundary point $z \in \partial D$, and we then denote it by $\N_{z, D}$. Note that even though $\N$ has infinite mass we can easily make sense of conditional laws $\N(\cdot | E)$ when $\N(E)\, \in(0,\infty)$, thus resulting in probability measures. We record the following lemma. \[L:exc\] The total mass of half-plane excursions reaching $\partial (r \D) \cap \H$ is $4/(\pi r)$. In fact, the mass of excursions leaving $r\D \cap \H$ through the arc $(re^{ia}, re^{ib})$ is precisely $$\frac2{\pi r} \int_a^b \sin (\theta) d\theta$$ for any $0 \le a \le b \le \pi$. Note that $\N_{z;D}$ is conformally covariant: applying a conformal map $f: D \to D'$ such that $f$ is sufficiently nice near $z$, the image of $\N_{z, D}$ under $f$ is given by $|f'(z)| \N_{f(z); D' }$.
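As a quick sanity check on two facts recorded above (the harmonicity of $\sin(\theta)/r$ away from the origin, which is the imaginary part of $-1/z$, and the total mass $2\sqrt{u}$ of $p_u$), the following short sketch verifies both symbolically. It is written in Python with sympy and is purely illustrative; nothing in the arguments below depends on it.

``` python
# Sanity check: sin(theta)/r = y/(x^2 + y^2) = Im(-1/z) is harmonic on H \ {0},
# and the measure p_u has total mass 2*sqrt(u).
import sympy as sp

x = sp.Symbol('x', real=True)
y, u = sp.symbols('y u', positive=True)
theta = sp.Symbol('theta', real=True)

g = y / (x**2 + y**2)                    # sin(theta)/r in Cartesian coordinates
print(sp.simplify(sp.diff(g, x, 2) + sp.diff(g, y, 2)))             # 0, i.e. harmonic

print(sp.integrate(sp.sqrt(u) * sp.sin(theta), (theta, 0, sp.pi)))  # 2*sqrt(u)
```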
Note also that when $D = \H$ and $z = \infty$, the measure $\N_{\infty, \H} ( X(\zeta_\H) \in [a, b]) = b-a$ on $\R$, is nothing but Lebesgue measure (here $\zeta_D$ denotes the first time that the excursion $X$ leaves the domain $D$, i.e., its lifetime). This is easy to check, as starting from a point $ir$ (with $r>0$) the hitting distribution of $\R$ by a Brownian motion has the Cauchy distribution scaled by $r$, which tends to $\pi^{-1}$ times Lebesgue measure on $\R$ as $r \to \infty$. For $r >0$, consider the conformal maps $$f(z) = z + \frac{r^2}{z} = r( \frac{r}{z} + \frac{z}{ r}),$$ that map $\H \setminus (r\D)$ to $\H$ and satisfy $f(\infty) = \infty$ with $|f'(\infty) | =1$. Note that $f(r e^{i \theta}) = 2 r \cos (\theta)$. In particular $f$ sends the semicircle of radius $r$ to the interval $[-2r, 2r]$, of length $4r$. Hence if $\tau_r$ is the first hitting time of this circle, we have $$\N_{\infty, \H} ( \tau_{r} < \zeta) = 4r/\pi.$$ The first claim of the lemma follows from this after applying the inversion map $z \mapsto -1/z$ (which sends $\infty$ to 0, leaves $\H$ invariant, and transforms $r\D$ into $(1/r) \D$). The second claim follows easily after noting that the derivative in $\theta$ of $f(re^{i \theta})$ is $-2 r\sin (\theta)$. \[R:exc\] For later reference, it may be useful to note that half-plane excursions enjoy the following Markov property: conditionally upon hitting the circle of radius $r$, the law of an excursion after this time is simply that of Brownian motion killed upon leaving $\H$. Combined with the domain Markov property and scale invariance of our fields, the result is that when we “integrate $h^\H$ against $f$ on the semi-circle of radius $1/\sqrt{u}$ around $0$” - equivalently “test $h^\H$ against $p_u$” - and view this as a process in $u$, it will satisfy both Brownian scaling and a certain Markovian property (note that $u = 0$ corresponds to testing $h$ near the point at $\infty$). As a consequence, we may deduce that the process is Brownian motion – see \[sec:char\_BM\]. However, the reader may recall from the introduction that we really want *circle averages*, say for $h^\D$, to be Brownian motions. Since these processes are easily shown to have independent and stationary increments, this would be immediate if we knew that *they* satisfied Brownian scaling. Unfortunately, this seems very hard to deduce directly from \[ass:ci\_dmp\]. So, we introduce the measures $p_u$ (and associated sine-averages for $h^\H$, see below) instead, and will later relate them to circle averages in \[sec:circ\_avg\_gaussian\]. We remark that alternative measures to $p_u$, for example correctly defined variants in cones, could play the same role. The current set-up has been chosen as it seems to be the neatest. Now, in order to make sense of “testing $h^\H$ against $p_u$” we need to first approximate $p_u$ by smooth test functions. For $\delta\in (0,\pi/2)$ we let $p_u^\delta$ be defined in the same way as $p_u$, but replacing $\sin(\theta)$ in the integral above with $\sin(\theta) \chi^\delta(\theta)$, where $\chi^\delta:[0,\pi]\to [0,1]$ is smooth, equal to $1$ in $[\delta, \pi-\delta]$, and equal to $0$ in $[0,\delta/2]\cup [\pi-\delta/2,\pi]$. 
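The elementary computations in the proof above can be confirmed in the same way; the sketch below (sympy again, purely illustrative) checks that $f(re^{i\theta})=2r\cos(\theta)$ with vanishing imaginary part, that its $\theta$-derivative is $-2r\sin(\theta)$, and that the excursion density $\frac{2}{\pi r}\sin(\theta)$ of Lemma \[L:exc\] integrates to the total mass $4/(\pi r)$.

``` python
# Symbolic verification of the computations in the proof of Lemma [L:exc].
import sympy as sp

r = sp.Symbol('r', positive=True)
theta = sp.Symbol('theta', real=True)

z = r * sp.exp(sp.I * theta)
f = z + r**2 / z
print(sp.simplify(sp.re(f)))      # 2*r*cos(theta): the semicircle maps onto [-2r, 2r]
print(sp.simplify(sp.im(f)))      # 0
print(sp.simplify(sp.diff(2 * r * sp.cos(theta), theta)))    # -2*r*sin(theta)

# Mass of excursions leaving r*D through the upper semicircle:
print(sp.integrate(2 / (sp.pi * r) * sp.sin(theta), (theta, 0, sp.pi)))  # 4/(pi*r)
```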
Finally, for $\eta:[0,1]\to [0,1]$ a smooth bump function with $\int_{0}^{1}\eta(y) \, dy=1$, we define $\eta^{\delta}(\cdot):=\frac{1}{\delta}\eta(\frac{\cdot}{\delta})$ and denote by $p_u^{\delta,in},p_u^{\delta,out}$ the measures that integrate against $\phi\in C_c(\C)$ as $$(\phi,p_u^{\delta,in\,}):= \int_0^\delta (\phi,p^\delta_{u(1+x)})\, \eta^{\delta}(x) \, dx \;\; ; \;\; (p_u^{\delta,out},\phi):=\int_0^\delta (\phi,p^\delta_{u(1-x)})\, \eta^{\delta}(x) \, dx.$$ Thus $p_u^{\delta,in},p_u^{\delta,out}$ are smooth “fattenings” of the measure $p_u$ to the inside and outside of the arc $\partial (\frac1{\sqrt{u} } \D^+)$ respectively, that are also “cut off” away from the real line (so as to have compact support in $\H$). The reason for these definitions is the following: \[rmk:fattening\_smooth\] We have that $$(p_u^{\delta,in/out},\phi)=\int_{\C} p_u^{\delta,in/out}(z)\phi(z) \, dz$$ for some $p_u^{\delta,in/out}\in C_c^\infty(\C)$ (note the abuse of notation $p_u^{\delta, in/out}$ for both measure and density here). We remark that it is possible to write down an explicit expression for $p_u^{\delta,in/out}(z)$, but we do not need it. The upshot is that we can define $$(h^D,p_u^{\delta,in/out})$$ for any $D$ such that $\mathrm{Support}(p_u^{\delta,in/out})\Subset D$ (e.g., $D=\D^+$ or $D=\H$). From here on in, we use the notation $$\mathsf D_u:=\frac{1}{\sqrt{u}}\D^+; \;\;\; u>0.$$ \[lem:harmonic\_sines\] (a) Suppose that $u>0$ and $\varphi$ is a harmonic function in $\sou$, that can be extended continuously to a function on $\H\cup(-\frac{1}{\sqrt{u}}, \frac{1}{\sqrt{u}})$ that is equal to zero on $(-\frac{1}{\sqrt{u}}, \frac{1}{\sqrt{u}})$. Then $(\varphi,p_r)_{r\in (u,\infty)}$ is constant. (b) Suppose that $u>0$ and $\varphi$ is a harmonic function in $\H\setminus \overline{\sou}$ that can be extended continuously to $0$ on $(-\infty,- \frac{1}{\sqrt{u}})\cup (\frac{1}{\sqrt{u}},\infty)$. Then $(\varphi,p_s)_{s\in (0,u)}$ is a linear function of $s$. (c) Suppose that $0<s<r<\infty$ and $\varphi$ is a harmonic function in $\sos\setminus \overline{\sor}$ that can be extended continuously to 0 on $(-\frac{1}{\sqrt{s}},-\frac{1}{\sqrt{r}})\cup (\frac{1}{\sqrt{r}},\frac{1}{\sqrt{s}})$. Then $(\varphi,p_u)_{u\in (s,r)}$ is a linear function of $u$. We observe that (a) is easily seen from the perspective of Itô excursions. By \[L:exc\], we can represent $(\varphi,p_r)$ for any $r>u$ by $\frac{\pi}{2}\N_{0,\H}(\varphi(X_{\tau_{(1/\sqrt{r})}\wedge \zeta}))$ where $\tau_{(1/\sqrt{r})}$ is the first hitting time of the semicircle of radius $(1/\sqrt{r})$ centred at $0$. For $s\ge r$, since $\varphi$ is assumed to be 0 on $(-1/\sqrt{u},1/\sqrt{u})$, we can apply the Markov property, \[R:exc\], of the excursion $X$ at $\tau_{(1/\sqrt{s})}\wedge \zeta$. This gives $(\varphi,p_r)=\sqrt{s} \int_0^\pi \sin(\theta) \E_{e^{i\theta}/\sqrt{s}}[\varphi(B_{\tau_{\sor}})] \, d\theta$ for $B$ a complex Brownian motion. By harmonicity of $\varphi$, this quantity is equal to $(\varphi,p_s)$ as required. Actually, it can be seen from the argument above that the constant value of $(\varphi,p_r)$ for $r>u$ is equal to $\pi/2$ times the normal derivative, directed *into* $\H$, of $\varphi$ at the origin.
Indeed, we saw that for any such $r$, $$(\varphi,p_r)=\frac{\pi}{2}\N_{0,\H}(\varphi(X_{\tau_{(1/\sqrt{r})}\wedge \zeta}))=\frac{\pi}{2}\lim_{\eps\to 0}\eps^{-1}\mathbb{E}_{i\eps}(\varphi(B_{\tau_{(1/\sqrt{r})}\wedge \zeta}))=\frac{\pi}{2}\lim_{\eps\to 0} \eps^{-1}\varphi(i\eps),$$ where the second equality is by definition of $\N_{0,\H}$ (with $B$ a Brownian motion) and the third is by harmonicity of $\varphi$. [Since it is simpler for (b) and (c), the full proof of \[lem:harmonic\_sines\] below is of a more deterministic nature.]{} Write $\varphi(r\e^{i\theta})=\varphi(r,\theta)$ and $f(u)=(\varphi,p_u)=\sqrt{u} \int_0^\pi \sin(\theta)\varphi(1/\sqrt{u},\theta) \, d\theta$. We will show that $f''\equiv 0$ on $(s,r)$, which implies (c). Take any $u\in (s,r)$. Let us first remark, in order to justify differentiation under the integral and integration by parts in what follows, that $\varphi$ is in fact very regular in open neighbourhoods of $\pm (1/\sqrt{u})$ inside $\sos\setminus \overline{(\sor)}$. Indeed since $\varphi$ extends continuously to $0$ on neighbourhoods of $\pm (1/\sqrt{u})$ in $\R$, it can be extended by Schwarz reflection to a harmonic function in open balls $B_\eps(\pm 1/\sqrt{u})\subset \C$ for some $\eps$. See, for example, [@krantz §7.5.2]. In particular $\frac{\partial \varphi }{\partial \theta} $ remains bounded in neighbourhoods of $\pm 1/\sqrt{u}$. Now we compute $$\begin{aligned} \frac{d^2}{du^2}(\sqrt{u} \varphi(1/\sqrt{u},\theta)) & = & \frac{1}{4u^{5/2}}\left( \frac{\partial^2}{\partial r^2}\varphi(1/\sqrt{u},\theta)+ \sqrt{u}\frac{\partial}{\partial r}\varphi(1/\sqrt{u},\theta)-u\varphi(1/\sqrt{u},\theta) \right) \\ & = & -\frac{1}{4u^{3/2}} \left(\frac{\partial^2}{\partial \theta^2}\varphi(1/\sqrt{u},\theta)+\varphi(1/\sqrt{u},\theta)\right) ,\end{aligned}$$ using harmonicity of $\varphi$ for the final identity. Differentiating under the integral in the expression for $f(u)$, and applying integration by parts twice with respect to $\theta$, we see that $f''(u)=0$. \[prop:def\_sine\_avg\] Let $h^\H$ be a sample from $\Gamma^\H$. Then for any $u\in (0,\infty)$ the limits $$\label{eq:hpu_def} \lim_{\delta\downarrow 0}(h^\H,p_u^{\delta,in}) \text{ and } \lim_{\delta\downarrow 0} (h^\H,p_u^{\delta,out})$$ exist in probability and in $L^1$, and are equal a.s. We define this limiting quantity to be the $(1/\sqrt{u})$-**sine average** of $h^\H$, and denote it (with a slight abuse of notation) by $(h^\H, p_u)$. Recall the notation $h^\H=h_\H^D+\varphi_\H^D$ for the domain Markov decomposition of $h^\H$ in $D\subset \H$. We also have that with probability one: $$\label{eq:hpu_alt_def} (h^\H,p_u)=(\varphi_{\H}^{\sou},p_r) \text{ for all } r> u \, \text{ \bfseries{and} } \, (h^\H,p_u)=\frac{u}{s}(\varphi_{\H}^{\H\setminus \overline{\sou}},p_s) \text{ for all } s< u.$$ \[rmk:hpu\_process\] This directly implies that for any finite collection $u_1,\cdots, u_n\in (0,\infty)$, the limits in (\[eq:hpu\_def\]) hold jointly in probability, and (\[eq:hpu\_alt\_def\]) holds jointly almost surely. In particular, this defines a consistent family of finite dimensional marginals, from which we may define the stochastic process $$(h^\H,p_u)_{u\in (0,\infty)}.$$ Before we begin the proof of \[prop:def\_sine\_avg\], we need the following lemma. It says (albeit in a more specific setting) that if we apply the domain Markov property to our field in a subdomain that shares a section of boundary with the original domain, then the harmonic function can be extended continuously to $0$ on the common section of boundary.
This should seem very intuitive, but the proof is a little trickier than one might guess (see for example Fatou’s theorem for the kind of conditions that guarantee existence of non-tangential limits for harmonic functions at the boundary). \[lem:harm\_0\_boundary\] Suppose that $h^\H=h_\H^{\D^+}+\varphi_{\H}^{\D^+}$ is the domain Markov decomposition of $h^\H$ in $\D^+$. Then $\varphi_{\H}^{\D^+}$ can almost surely be extended continuously to $0$ on $(-1,1)$. We first show that for any $y\in (-1,1)$: $$\label{eq:bc_harm} \varphi_{\H}^{\D^+}(y+i\delta)\to 0 \text { in distribution (so also in probability) as } \delta\to 0.$$ Without loss of generality, the other cases being very similar, let us assume that $y=0$. Observe that by \[lem:harm\_ci\] and harmonicity we have that $$\varphi_{\H}^{\D^+}(i\delta)\overset{(d)}{=}\varphi_{\H}^{(1/\delta)\D^+}(i)=(\varphi_{\H}^{(1/\delta)\D^+},\psi),$$ where $\psi\in C_c^\infty(\C)$ is non-negative with $\int_{\C} \psi=1$, supported in $B(i,1/2)$ and rotationally symmetric about $i$. Moreover, by definition of the domain Markov decomposition, we have that $$(h^\H, \psi)\overset{(d)}{=} (h^{(1/\delta)\D^+},\psi)+(\varphi_{\H}^{(1/\delta)\D^+},\psi) \text{ with } h^{(1/\delta)\D^+},\, \varphi_{\H}^{(1/\delta)\D^+} \text{ independent.}$$ On the other hand, it is easy to see by conformal invariance of $h$ that $(h^{(1/\delta)\D^+},\psi)$ converges in distribution to $(h^\H, \psi)$ as $\delta\to 0$. This implies (for example, by considering characteristic functions) that $$(\varphi_{\H}^{(1/\delta)\D^+},\psi)\to 0$$ in distribution and probability as $\delta\to 0$. This completes the proof of (\[eq:bc\_harm\]). We immediately observe that the sequence in (\[eq:bc\_harm\]) is uniformly integrable by \[lem:moment\_bound\], and so (\[eq:bc\_harm\]) can be strengthened to say that $$\label{eq:bc_harm_e} \mathbb{E}[|\ph(y+i\delta)|]\to 0 \text { as } \delta\to 0.$$ With (\[eq:bc\_harm\_e\]) in hand, let us now take $I=[a,b]\subset (-1,1)$ arbitrary: we will show that $\varphi_{\H}^{\D^+}$ can almost surely be continuously extended to $0$ on $I$. We denote $\ph=\varphi_{\H}^{\D^+}$ from now on, and fix $J$ such that $I\subset J\subsetneq [-1,1]$. First, observe that by dominated convergence and \[lem:moment\_bound\], (\[eq:bc\_harm\_e\]) implies that $\mathbb{E}[\int_J |\ph(y+i\delta)| \, dy]\to 0$ as $\delta\to 0$ and hence that for some sequence $\delta_k\to 0$, $a_k:=\int_J |\ph(y+i\delta_k)| \, dy$ converges to $0$ almost surely. We also have by \[lem:moment\_bound\] that if $S_J$ is the semicircle centered on $J$, then $M:=\int_{S_J} |\varphi(z)| \, dz$ is almost surely finite. Finally, by harmonicity we know that there exists some constant $C$ (deterministic, depending on $I,J$) such that for any $z\in \D^+$ that is sufficiently close to $I$, $|\varphi(z)| \le M P(z) +C \Im(z)^{-1} a_k$ for all $k$ large enough, where $P(z)$ is the probability that a Brownian motion started from $z$ hits $S_J$ before $J$. Taking $k\to \infty$ gives that $|\varphi(z)|\le MP(z)$ a.s. for all such $z$, and so $\ph$ can almost surely be continuously extended to $0$ on $I$. Now we can use \[lem:harmonic\_sines\] to prove \[prop:def\_sine\_avg\]. Observe that for any $u >0$, $\varphi_\H^{\sou}$ can a.s. be extended continuously to $0$ on $(-1/\sqrt{u},1/\sqrt{u})$ by scaling and \[lem:harm\_0\_boundary\]. Hence by Lemma \[lem:harmonic\_sines\], on an event of probability one, $$\label{eq:vp_const} (\varphi_\H^{\sou},p_r)=:c$$ is constant for all $r>u$.
This implies (since $\eta^\delta$ has mass one and by definition of $p_u^{\delta, in}$) that with probability one, $$(\varphi_\H^{\sou},p_u^{\delta,in})-c= \int_0^\delta \left(\varphi_\H^{\sou},p^\delta_{u(1+x)})-(\varphi_\H^{\sou},p_{u(1+x)})\right)\, \eta^{\delta}(x) \, dx$$ for all $\delta$ small enough. Noting by \[lem:moment\_bound\] that the right-hand side goes to $0$ in $L^1$ as $\delta\to 0$, we can deduce that $$(\varphi_\H^{\sou},p_u^{\delta,in})\to c \text{ in probability and in } L^1$$ as $\delta\to 0$. Therefore, to show that the first limit in exists in probability and in $L^1$, and is equal to $c$ almost surely, we need only show that $$\lim_{\delta\downarrow 0}(h^\H-\varphi_\H^{\sou},p_u^{\delta,in})= \lim_{\delta\downarrow 0}(h_\H^{\sou},p_u^{\delta,in})= 0$$ in probability and in $L^1$. However, this follows by applying the zero boundary condition assumption to the field $h_\H^{\sou}$. An almost identical line of reasoning using part (b) of Lemma 3.2 implies that the second limit in exists a.s. and is equal to the constant value of the second expression in . Observe that $$(\varphi_{\H}^{\H\setminus \overline{\sou}},p_s)\to 0$$ in probability and in $L^1$ as $s\to 0$ (for example, by bounding its first moment using \[lem:moment\_bound\]). Thus all that remains is to show that the two limits in (or equivalently in ) coincide a.s. For this, we will prove that $$\label{eqn:letters} c\overset{(a)}{=}\lim_{\delta \downarrow 0} (\varphi_{\H}^{\mathsf D_{u-\delta}},p_u)\overset{(b)}{=}\lim_{\delta\downarrow 0} (\varphi_{\H}^{\mathsf D_{u-\delta}},p_u^{\sqrt{\frac{u}{u-\delta}}-1, out})\overset{(c)}{=}\lim_{\delta\downarrow 0} (h^\H,p_u^{\sqrt{\frac{u}{u-\delta}}-1, out}),$$ where all limits are in probability. From this we may conclude, since we already showed that the first limit in was a.s. equal to $c$, and the right hand side above is equal to the second limit in (which we also know exists in probability.) We will now prove the equalities (a), (b) and (c) from \[eqn:letters\] in turn. For (a), note that by \[lem:harmonic\_sines\] and scale invariance, $$\label{eqn:a}(\varphi_{\H}^{\mathsf D_{u-\delta}},p_u^{\delta,in})-(\varphi_{\H}^{\mathsf D_{u-\delta}},p_u) \overset{(d)}{=} (\varphi_{\H}^{\D^+}, f_\delta ),$$ where $f_\delta$ are a sequence of uniformly bounded smooth functions supported in vanishing neighbourhoods of $\{ \pm 1\}$. The difference therefore converges to $0$ in probability as $\delta\to 0$. Moreover, by Lemma \[lem:nested\_dmp\], we have $$(\varphi_{\H}^{\mathsf D_{u-\delta}},p_u^{\delta,in})-(\varphi_{\H}^{\mathsf D_{u}},p_u^{\delta,in}) \overset{a.s.}{=} (\varphi_{\mathsf D_{u-\delta}}^{\mathsf D_{u}},p_u^{\delta,in}) \overset{(d)}{=} (h^{\mathsf D_{u-\delta}},p_u^{\delta, in} )- (h_{\mathsf D_{u-\delta}}^{\mathsf D_{u}}, p_u^{\delta, in}).$$ Both terms on the right-hand side also converge to $0$ in probability as $\delta\to 0$ by scaling again, and the Dirichlet boundary condition assumption. Putting these facts together gives (a). Equality (b) follows by a very similar distributional equality to , again using Lemma \[lem:harmonic\_sines\]. Finally (c) holds, since $$(\varphi_{\H}^{\mathsf D_{u-\delta}},p_u^{\sqrt{\frac{u}{u-\delta}}-1, out})- (h^\H,p_u^{\sqrt{\frac{u}{u-\delta}}-1, out})=-(h_{\H}^{\mathsf D_{u-\delta}}, p_u^{\sqrt{\frac{u}{u-\delta}}-1, out})$$ almost surely and the right hand side (again by scaling) can be seen to converge to 0 in probability as $\delta \downarrow 0$. 
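Before turning to the abstract characterisation of Brownian motion, it may help to see the behaviour we are aiming for in a concrete example. The sketch below (Python with numpy; a rough numerical illustration only, with no attempt to track normalising constants) samples a discrete Gaussian free field, a lattice analogue of the fields satisfying our assumptions, on a square grid with zero boundary values, and estimates the variance of its circle averages around the centre. The estimated variance grows roughly linearly in $\log(1/r)$, consistent with the Brownian behaviour of circle averages established in Section \[sec:circ\_avg\_gaussian\] below.

``` python
# Rough illustration: circle averages of the discrete Gaussian free field (DGFF)
# on an N x N grid with zero boundary values, sampled in the sine eigenbasis of
# the discrete Dirichlet Laplacian.  Normalising constants are not tracked.
import numpy as np

rng = np.random.default_rng(0)
N = 101
a = np.arange(1, N + 1)
S = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * np.outer(a, a) / (N + 1))  # orthonormal sine basis
mu = 2.0 - 2.0 * np.cos(np.pi * a / (N + 1))
LAM = mu[:, None] + mu[None, :]          # eigenvalues of the discrete Dirichlet Laplacian

def sample_dgff():
    xi = rng.standard_normal((N, N))
    return S @ (xi / np.sqrt(LAM)) @ S.T  # covariance = discrete Green's function

centre = (N - 1) / 2.0
angles = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)

def circle_average(h, r):
    xs = np.rint(centre + r * np.cos(angles)).astype(int)
    ys = np.rint(centre + r * np.sin(angles)).astype(int)
    return h[xs, ys].mean()

radii = [40, 20, 10, 5]
samples = []
for _ in range(300):
    h = sample_dgff()
    samples.append([circle_average(h, r) for r in radii])
for r, v in zip(radii, np.array(samples).var(axis=0)):
    print(f"r = {r:2d}:  Var[h_r(centre)] ~ {v:.3f}")  # grows roughly linearly in log(1/r)
```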
A characterisation of Brownian motion {#sec:char_BM} ===================================== \[prop:char\_BM\] Suppose that $(Y(u))_{u\in (0,\infty)}$ is a centred stochastic process. Write $\mathcal{F}_u^+:=\sigma(Y_s: s\ge u)$, $\mathcal{F}_u^-:=\sigma(Y_s: s\le u)$, and for $s<r$ let $\mathcal{F}_{s,r}$ be the $\sigma$-algebra generated by $\mathcal{F}_s^-$ and $\mathcal{F}_r^+$. Suppose that: (i) $(Y(u))_{u\in (0,\infty)}$ is stochastically continuous, i.e., for any $u_0\in (0,\infty)$, $Y_u\to Y_{u_0}$ in probability as $u \to u_0$; (ii) for some $\xi>0$, $\mathbb{E}[|Y(u)|^\xi]<\infty$ for all $u\in (0,\infty)$; (iii) $Y$ satisfies Brownian scaling, that is, $(Y(cu))_{u> 0}$ has the same law as $(\sqrt{c}Y(u))_{u> 0}$ for any $c>0$; (iv) for any $u>0$, $(Y(s)-Y(u))_{s\ge u}$ is independent of $\mathcal{F}_u^-$; (v) for any $u>0$, $(Y(s)-\frac{s}{u}Y(u))_{s\le u}$ is independent of $\mathcal{F}_u^+$; (vi) for any $s<r$ $(Y(u)-(\frac{u-s}{r-s}Y(r)+\frac{r-u}{r-s}Y(s)))_{u\in (s,r)}$ is independent of $\mathcal{F}_{s,r}$. Then there exists a modification of $Y$ that is equal to $\sigma B$ in law for some $\sigma\ge 0$, where $B$ is a standard one-dimensional Brownian motion. Observe that for this characterisation we only require $\xi>0$, we will comment later on why we need existence of $1+\eps$ moments for the main result of this paper. Also observe that by scaling, for any process $Y$ as in the statement of the proposition, $Y(\delta)$ is equal in distribution to $\sqrt{\delta} Y(1)$ for every $\delta$, and so tends to $0$ in probability as $\delta \to 0$. This proposition is very close to the main result of [@Wes93], which is essentially the same but requires square-integrability of the process $Y$. Indeed, we will prove the proposition by showing square-integrability and then appealing to [@Wes93]. We also remark that there is a similar characterisation of Brownian motion in [@BPR18 Theorem 1.9]; the major difference being item $(vi)$. In [@BPR18] we assumed that the process in $(vi)$ has the law of a scaled version of the original process. This is stronger than the statement here, which assumes nothing about the law. On the other hand, only finiteness of logarithmic moments was assumed in [@BPR18], which is (slightly) weaker than the moment assumption $(ii)$ above. For some motivation, let us first see the important corollary of this characterisation for the purposes of the present article. The proof of \[prop:char\_BM\] will follow immediately after. \[cor:sine\_avg\_gaussian\] Let $h^\H$ be a sample from $\Gamma^\H$, and define the process $Y$ via $$Y(u):=(h^\H, p_u) \text{ for } u\ge 0,$$ where the right hand side is as defined in \[prop:def\_sine\_avg\] and \[rmk:hpu\_process\]. Then $Y$ satisfies the conditions of \[prop:char\_BM\], and hence has a modification with the law of $\sigma$ times a Brownian motion for some $\sigma\ge 0$. We note that this result actually holds even if we only have $\xi>0$ in Assumption \[ass:ci\_dmp\], (i). This suggests that the answer to Question \[Q:xi\] is positive. Since $Y(u)$ is the $L^1$ limit of $(h^\H,p_u^{\delta,in})$ as $\delta\to 0$, and $(h^\H,p_u^{\delta,in})$ is centred for every $\delta$ and $u$, it follows that $Y$ is a centred process. So, it suffices to prove the conditions (i)-(vi) of \[prop:def\_sine\_avg\]. (i) Equality (a) from in the proof of \[prop:def\_sine\_avg\], plus Lemma \[lem:harmonic\_sines\], tells us that $$(h^\H,p_1)-(h^\H,p_{1-\delta})\to 0$$ in probability as $\delta\to 0$. 
Moreover by scale invariance (see (iii) below) we have that $|(h^\H,p_s)-(h^\H,p_t)|$ is equal in distribution to $\sqrt{s\vee t}\, |(h^\H,p_1)-(h^\H,p_{(s\wedge t)/(s \vee t)})|$. This gives the stochastic continuity. (ii) This holds with $\xi=1$ since $Y(u)$ is defined as a limit in $L^1$ for all $u$. (iii) (Scale invariance) We assume without loss of generality that $c>1$. First, we claim that $$\label{eq:SI_harm} (z\mapsto \varphi^{\mathsf D_{cu}}_\H(z), z \in \mathsf D_{cu})_{u\ge 0} \text{ and } (z\mapsto \varphi^{\sou}_\H(\sqrt{c}z) , z \in \mathsf D_{cu})_{u\ge 0}$$ have the same law as processes (of harmonic functions) in $u$, in the sense that the finite dimensional marginals of both sides have the same laws. The statement for one dimensional marginals is a special case of \[lem:harm\_ci\]. For the higher dimensional marginals, since the argument with $n$ points is very similar, we will just show equality in law for the joint distribution at two points $u < u'$. For this, we use uniqueness of the domain Markov decomposition to write $$( \varphi_{\H}^{\mathsf D_{cu}} , \varphi_{\H}^{\mathsf D_{cu'}} ) \overset{(d)}{=} (\varphi_{\H}^{\mathsf D_{cu}} , \varphi_{\H}^{\mathsf D_{cu}} + \varphi_{\mathsf D_{cu}}^{\mathsf D_{cu'}} ) \; \text{ and } \; ( \varphi_{\H}^{\mathsf D_{u}} , \varphi_{\H}^{\mathsf D_{u'}} ) \overset{(d)}{=} (\varphi_{\H}^{\mathsf D_{u}} , \varphi_{\H}^{\mathsf D_{u}} + \varphi_{\mathsf D_{u}}^{\mathsf D_{u'}} )$$ where $\varphi_{\mathsf D_{cu}}^{\mathsf D_{cu'}}$ is independent of $ \varphi_{\H}^{\mathsf D_{cu}}$ and $\varphi_{\mathsf D_{u}}^{\mathsf D_{u'}}$ is independent of $ \varphi_{\H}^{\mathsf D_{u}}$ . Using this independence, and Lemma \[lem:harm\_ci\]/the one dimensional marginal case again, we obtain . Now we complete the proof of scale invariance as follows. Fix $u>0$. By definition of the measures $p_u$, $$\begin{aligned} \left((h^\H, p_{cu})\right) & \overset{\eqref{eq:hpu_alt_def}}{=} & \left((\varphi_{\H}^{\frac{1}{\sqrt{cu}}\D^+},\; p_{2cu})\right)\\ & = & \left(\sqrt{2cu} \int_0^\pi \sin(\theta) \varphi^{\frac{1}{\sqrt{cu}}\D^+}_\H (\frac{e^{i\theta}}{\sqrt{2cu}}) \, d\theta\right) \\ &= &\left(\sqrt{c}\sqrt{2u} \int_0^\pi \sin(\theta) \varphi^{\frac{1}{\sqrt{u}}\D^+}_\H (\sqrt{c}\frac{e^{i\theta}}{\sqrt{2cu}}) \, d\theta\right) \\ & = & \left(\sqrt{c} (\varphi_{\H}^{\sou},\;p_{2u})\right) \\ & \overset{\eqref{eq:hpu_alt_def}}{=} & \left(\sqrt{c} (h^\H, p_u) \right) \end{aligned}$$ where we used \[eq:SI\_harm\] in the third equality. Applying the same string of equalities for finite dimensional marginals, we get the result. (iv) Fix $u\ge 0$ and observe that since $Y(s)=\lim_{\delta\downarrow 0}(h^\H,p_s^{\delta,out})=\lim_{\delta \downarrow 0} (\varphi_{\H}^{\sou},p_s^{\delta,out})$ for $s\le u$, $\mathcal{F}_u^-$ is independent of $h_\H^{\sou}$. This means that when we write (see \[lem:nested\_dmp\]) $$\varphi_{\H}^{\sor}=\varphi_{\H}^{\sou}+\varphi_{\sou}^{\sor}\; ; \;\; r\ge u,$$ we have that $\varphi_{\sou}^{\sor}$ is independent of $\mathcal{F}_u^-$. Then since $$Y(r) \overset{\eqref{eq:hpu_alt_def}}{=}(\varphi_{\H}^{\sor},p_{2r})=(\varphi_{\H}^{\sou},p_{2r})+(\varphi_{\sou}^{\sor},p_{2r})\overset{\eqref{eq:hpu_alt_def}}{=}Y(u)+(\varphi_{\sou}^{\sor},p_{2r}),$$ we reach the desired conclusion. (v) Very similar to (iv). (vi) Let us write $A_{r,s}:= \sos\setminus\overline{\sor}$. 
Reasoning as in the proof of (iv), we see that in the decomposition $$h^\H=h_\H^{A_{r,s}}+\varphi_\H^{A_{r,s}},$$ $h_\H^{A_{r,s}}$ is independent of $\mathcal{F}_{s,r}$. Hence, we must argue that $$\label{eq:harness_cond}(\varphi_\H^{A_{r,s}},p_u)=\frac{u-s}{r-s}Y(r)+\frac{r-u}{r-s}Y(s) \text{ for all } u\in (s,r).$$ Now, by \[lem:harmonic\_sines\] we know that the left hand side of is a.s.a linear function of $u\in (s,r)$, so we just need to prove that its limit as $u\downarrow s$ is equal to $Y(s)$, and as $u\uparrow r$ is equal to $Y(r)$. Let us prove the first limit, the second one being very similar. For this, write $$\lim_{u\downarrow s} (\varphi_\H^{A_{r,s}},p_u)=\lim_{u\downarrow s} (\varphi_\H^{\sos},p_u)+\lim_{u\downarrow s}(\varphi_{\sos}^{A_{r,s}}, p_u)=Y(s)+\lim_{u\downarrow s}(\varphi_{\sos}^{A_{r,s}}, p_u)$$ and observe that by \[ass:ci\_dmp\] (iv), $$\varphi_{\sos}^{A_{r,s}} \text{ is harmonic in } A_{r,s} \text{ and } \to 0 \text{ on } \partial(\sos)\cup(-\frac{1}{\sqrt{s}},-\frac{1}{\sqrt{r}})\cup(\frac{1}{\sqrt{r}},\frac{1}{\sqrt{s}}).$$ This implies that $|\varphi_{\sos}^{A_{r,s}}|$ is uniformly bounded in a neighbourhood of $\partial(\sos)$ in $\sos$, and hence by dominated convergence that $\lim_{u\downarrow s}(\varphi_{\sos}^{A_{r,s}}, p_u)=0.$ This almost follows from [@Wes93 Theorem 1], except for the square integrability condition. So first, we will prove that $$\label{eqn:second_moment} \mathbb{E}[|Y(u)|^2]<\infty \;\; \forall u\in [0,\infty).$$ To do this, pick some $n$ such that $2^{-n}\le \xi$, so that by assumption $\mathbb{E}[|Y(u)|^{2^{-n}}]<\infty$ for all $u$. We will prove that for any $m\ge 0$, $$\label{eq:induction} \mathbb{E}[|Y(u)|^{2^{-m}}]<\infty \;\; \forall u\in [0,\infty) \; \Rightarrow \mathbb{E}[|Y(u)|^{2^{-m+1}}]<\infty \;\; \forall u\in [0,\infty),$$ from which the result follows by induction, starting with $m=n$. So, let us take some $m\ge 0$ and assume that the left hand side of holds. Denote $\eta:=2^{-m}$ and first observe that $\mathbb{E}[|Y(2)-Y(1)|^\eta]<\infty$, since $|x+y|^\eta\le |x|^\eta + |y|^\eta$. By independence of $(Y(2)-Y(1))$ and $Y(1)$ (condition (iv) of \[prop:char\_BM\]), this implies that $\mathbb{E}[|Y(1)(Y(2)-Y(1))|^\eta]<\infty$. Now we apply condition (v) of \[prop:char\_BM\]. This tells us that we can write $Y(1)=Y(2)/2+Z$, where $Z$ is independent of $Y(2)$. Hence $$Y(1)(Y(2)-Y(1))=(\frac{Y(2)}{2}+Z)(\frac{Y(2)}{2}-Z)=\frac{Y(2)^2}{4} - Z^2$$ has a finite moment of order $\eta$. Applying \[lem:indep\_moments\], we obtain that $|Y(2)|^2$ has a finite moment of order $\eta$, and hence by scale invariance (condition (iii) of \[prop:char\_BM\]), that $\mathbb{E}[|Y(u)|^{2\eta}]<\infty$ for all $u\in [0,\infty)$. This completes the proof of the induction step, , and therefore of . From here, we can appeal to the characterisation in [@Wes93 Theorem 1] of stochastic processes with linear conditional expectation and quadratic conditional variance. This says that if $Y$ is a process as in \[prop:char\_BM\], that in addition - [is defined and stochastically continuous on $[0,\infty)$ with $Y(0)=0$,]{} - has $Y(u)$ square integrable for every $u$, - has $\mathbb{E}[Y(u)Y(s)]=\mathbb{E}[Y(u\wedge s)^2]=\sigma (u\wedge s)$ for some $\sigma\ge 0$ and all $u,s\in [0,\infty)$ then $Y$ must be $\sigma$ times a standard Brownian motion. 
[Note that by the discussion immediately after the statement of \[prop:char\_BM\], we can extend $Y$ to a stochastically continuous process on $[0,\infty)$ with $Y(0)=0$.]{} We also get the third point above by the assumption of Brownian scaling, plus the fact that the process is centred with independent increments. Hence [@Wes93 Theorem 1] provides the result. Gaussianity of circle averages {#sec:circ_avg_gaussian} ============================== In this section we work with a sample $h^\D$ from $\Gamma^\D$. For any $\eps>0$ we can define the circle average $h_\eps(0)$ at radius $\eps$ around $0$ via $$h^\D_\eps(0):= \varphi_\D^{B_0(\eps)}(0)$$ as in [@BPR18]. Our next goal is to relate these circle averages to the sine averages from Section \[sec:sine\_avgs\]. This will allow us to show (using \[cor:sine\_avg\_gaussian\]) that the circle average process possesses a modification that is continuous in $\eps$, and will in turn imply that $(h^\D_{\e^{-t}}(0))_{t\ge 0}$ (which has independent and stationary increments by conformal invariance and the domain Markov property) is a Brownian motion. From this it will follow that $h^\D_\eps(0)$ is Gaussian for any $\eps>0$. To begin, we will explain how the sine averages from \[sec:sine\_avgs\] can make sense for $h^D$ with some specific domains $D\ne \H$. Essentially, this is due to the domain Markov property, which allows us to relate $h^D$ with $h^\H$ in such a way that the sine average of one is the sine average of the other plus the sine average of a harmonic function. For example, let us start with $D=\D^+$. By the domain Markov property, we can decompose $h^\H$ in the upper unit semi disc $\D^+$ as the independent sum $$h^\H=h_\H^{\D^+}+\varphi_\H^{\D^+},$$ and we already know that: - for any $u\ge 1$, $(h^{\H},p_u^{\delta, in})\to (h^\H,p_u)$ in probability and in $L^1$ as $\delta\to 0$; - for any $u>1$, $(\varphi_{\H}^{\D^+},p_u^{\delta,in})\to (\varphi_{\H}^{\D^+},p_u) \text{ a.s.\ and in } L^1 \text{ as } \delta\to 0$, where $(\varphi_\H^{\D^+},p_u)$ is a.s. constant in $u>1$; - $(\varphi_{\H}^{\D^+},p_1^{\delta,in})$ converges to this constant value in probability and in $L^1$ as $\delta\to 0$ (using and the argument explained just after). For the first bullet point we have used \[prop:def\_sine\_avg\], and for the second, \[lem:harmonic\_sines\] plus the fact that $\varphi_{\H}^{\D^+}$ is almost surely harmonic in $\D_+$ and can be extended continuously to $0$ on $(-1,1)$ (\[lem:harm\_0\_boundary\]). This implies that for each $u\ge 1$, $$\lim_{\delta\to 0}(h_\H^{\D^+},p_u^{\delta, in})=:(h^{\D^+},p_u)$$ exists in probability and in $L^1$. Similarly, the joint limit can be defined for $(u_1,\cdots, u_n)$ with each $u_i\in [1,\infty)$, and the resulting process is equal in law to $(h^\H,p_u)_{u\ge 1}$ plus a (random) constant. On the other hand, we have that $(h_{\H}^{\D^+},p_1^{\delta,in})\to 0$ in $L^1$ and in probability as $\delta\downarrow 0$ (by the Dirichlet boundary condition assumption), so that $(h^{\D^+},p_1)=0$. Putting all this together with \[cor:sine\_avg\_gaussian\], we obtain the following: \[lem:bmdplus\] Let $h^{\D^+}$ be a sample from $\Gamma^{\D^+}$. Then for any $(u_1,\cdots, u_n)$ with $u_i\in [1,\infty)$ for $1\le i \le n$, the limit $$\lim_{\delta\downarrow 0}\left((h^{\D^+},p_{u_1}^{\delta,in}),\ldots, (h^{\D^+},p_{u_n}^{\delta,in})\right)=\left((h^{\D^+},p_{u_1}),\ldots,(h^{\D^+},p_{u_n}) \right)$$ exists in probability. 
Moreover, $(h^{\D^+},p_{1+t})_{t\ge 0}$ has the same finite dimensional distributions as some multiple (which is the same as that in \[cor:sine\_avg\_gaussian\]) of Brownian motion. Next, we make sense of sine averages for $h^{\D}$. Again we can use the domain Markov property, and decompose $$\label{eqn:hDdecomp} h^\D=h_\D^{\D^+}+\varphi_{\D}^{\D^+}.$$ However, deducing something from this is not quite so simple, since $\varphi_{\D}^{\D^+}$ does *not* extend continuously to $0$ on $(-1,1)$. For example, since $(\varphi_{\D}^{\D^+},p_u)$ should correspond to integrating $\varphi_{\D}^{\D^+}$ on a contour that *does* touch the real line, it is not immediately obvious that this integral is well defined. We can manage this using that (a) $\varphi_{\D}^{\D^+}$ is not too badly behaved, and (b) the density $\sin(\theta)$ converges to $0$ as $\theta\to \{0,\pi\}$. For this some quantitative estimates are required, and we summarise them in the following lemma: \[lem:phi\_der\] There exists a universal constant $C\in (0,\infty)$, such that for all $\eps>0$, $$\label{eqn:bound_sup_phi} \mathbb{E}[\sup_{w\in \D^+;\, \Im(w)>\eps}|\varphi_{\D}^{\D^+}(w)|]\le C \eps^{-1/\xi} \log(1/\eps)^{1/\xi}; \text{ and }$$ $$\label{eqn:bound_der_phi} \mathbb{E}[\sup_{r\in [0,1],\theta\in [0,\pi]; \, \Im(r\e^{i\theta})>\eps}|\frac{\partial}{\partial r}\varphi_{\D}^{\D^+}(r\e^{i\theta})|]\le C \eps^{-1-1/\xi}\log(1/\eps)^{1/\xi},$$ where $\xi>1$ is such that $\mathbb{E}[|(h^D,\phi)|^\xi]<\infty$ for all $D$ and $\phi\in C_c^\infty(D)$ (\[ass:ci\_dmp\](i)). It is a standard fact (see e.g., [@Eva98 §2.2, Theorem 7]) that for a universal $C'>0$, for any function $\varphi$ that is harmonic in $B_z(r)\subset \C$ and for any $\mathbf{v}$ with modulus $1$, $|\partial_{\mathbf{v}}\varphi(z)|\le (C'/r) \sup_{y\in B_z(r)}|\varphi(y)|$. Hence (\[eqn:bound\_der\_phi\]) follows from (\[eqn:bound\_sup\_phi\]). To prove (\[eqn:bound\_sup\_phi\]), let $w\in \D^+$ with $\Im(w)>\eps$ be arbitrary, and denote by $D_\eps$ the domain $\D^+\cap\{z: \Im(z)>\eps/2\}$. Then by harmonicity and \[lem:harm\_0\_boundary\], if $f_w(y)$ is the density at $y+i\eps/2$ of the exit position from $D_\eps$ for a Brownian motion started from $w$, we have that $$\varphi_{\D}^{\D^+}(w)= \int_{-1}^{1} f_w(y)\varphi_{\D}^{\D^+}(y+i \eps/2) \, dy$$ and so $$|\varphi_{\D}^{\D^+}(w)|\le \left(\int_{-1}^1 f_w(y)dy\right)^{1/\xi^*} \left(\int_{-1}^{1} f_w(y)|\varphi_{\D}^{\D^+}(y+i \eps/2)|^\xi \, dy\right)^{1/\xi}$$ where $\xi^*$ is such that $1/\xi+1/\xi^*=1$. Moreover, by domination with respect to a Cauchy density, there exists a constant $M$ not depending on $\eps>0$, such that $0\le f_w(y)\le M/\eps$ for all $y\in [-1,1]$ and $w$ with $\Im(w)>\eps$. Putting this together, along with the fact that $\int_{-1}^1 f_w(y) \, dy \le 1$, we obtain that $$\sup_{w\in \D^+;\, \Im(w)>\eps} |\varphi_{\D}^{\D^+}(w)|^\xi \le \frac{M}{\eps} \int_{-1}^{1} |\varphi_{\D}^{\D^+}(y+i \eps/2)|^\xi \, dy.$$ To conclude, we observe that by \[lem:moment\_bound\] $$\mathbb{E}[|\varphi_{\D}^{\D^+}(y+i \eps/2)|^\xi]\le C'' \log(1/\eps) \;\; \forall y\in [-1,1],$$ with constant $C''$ not depending on $\eps>0$, so that $$\mathbb{E}[\sup_{w\in \D^+;\, \Im(w)>\eps} |\varphi_{\D}^{\D^+}(w)|]\le\mathbb{E}[\sup_{w\in \D^+;\, \Im(w)>\eps} |\varphi_{\D}^{\D^+}(w)|^\xi]^{1/\xi}\le C \eps^{-1/\xi} \log(1/\eps)^{1/\xi}$$ for some universal constant $C$, as required. This allows us to deduce the following: \[lem:cont\_sin\_avg\] Let $h^\D$ be a sample from $\Gamma^\D$ and recall the decomposition (\[eqn:hDdecomp\]).
Then for each $(u_1,\cdots, u_n)$ with $u_i\in [1,\infty)$ for $1\le i \le n$ the limit $$\label{eqn:sa_hd_1} \lim_{\delta\downarrow 0}\left((h^{\D^+}_{\D},p_{u_1}^{\delta,in}),\ldots,(h^{\D^+}_{\D},p_{u_n}^{\delta,in})\right) =:\left((h^{\D^+}_\D,p_{u_1}),\ldots,(h^{\D^+}_\D,p_{u_n})\right)$$ exists in probability, and the resulting finite dimensional distributions are those of a multiple (which is the same as that in \[cor:sine\_avg\_gaussian\]) of Brownian motion. Furthermore, on an event of probability one, $$\label{eqn:sa_hd_2} \left((\varphi^{\D^+}_{\D},p_{u}^{\delta,in})\right)_{u\ge 1} \text{ has a pointwise (in u) limit } \left((\varphi^{\D^+}_\D,p_{u})\right)_{u\ge 1} \text{ as } \delta\to 0,$$ and this limit is a continuous function. Finally, for any $1<v<w<\infty$, there exists $M(v,w)$ such that, $$\label{eqn:sa_hd_3}\mathbb{E}[ \sup_{s,t\in [v,w]}\frac{|(\varphi_{\D}^{\D^+},p_s)-(\varphi_{\D}^{\D^+},p_t)|}{|s-t|}] \le M(v,w) .$$ In words, this tells us that the sine-average process of $h^\D$ (defined by joint limits of $(h^\D,p_u^{\delta, in})$ as $\delta\to 0$) makes sense and is a Brownian motion plus a nicely behaved continuous function whose derivative is bounded in expectation, . The role of this key lemma is to show that when we “average" the sine-average process over rotations (as will soon be made precise) we obtain a process with a continuous modification. The control given by is important here to ensure that we retain continuity after averaging, and it is for this that we need the existence of moments with order strictly greater than $1$. (We remark that we have also used it in several other places for simplicity). This is really the crux of the proof, since the resulting “averaged” process will actually turn out to be the circle average process for $h^\D$ around $0$ (recall from the introduction that establishing continuity of circle averages is the main step in our argument.) Since $h_{\D}^{\D^+}$ has the same law as $h^{\D^+}$, the statement concerning the limit follows from \[lem:bmdplus\]. To show that holds with probability one note that by Markov, for any $\xi^{-1}<a<1$, $$\mathbb{P}[\sup_{w\in \D^+;\, \Im(w)>\eps}|\varphi_{\D}^{\D^+}(w)| > \ve^{-a}]\le C \eps^{a-1/\xi} \log(1/\eps)^{1/\xi}$$ Thus applying Borel–Cantelli (to the sequence $\ve_n = 2^{-n}$) we conclude that a.s., for any $\xi^{-1}<a<1$, $$|\varphi_{\D}^{\D^+}(z)|\le \Im(z)^{-a}$$ for all $z$ with $\Im(z)$ sufficiently small. This implies (since $\sin(\arg(z))\Im(z)^{-a}\to 0$ as $\Im(z)\to 0$). Similarly, an application of Borel–Cantelli and allows us to deduce that, on an event of probability one, $F(u):=(\varphi_{\D}^{\D^+},p_u)$ is differentiable in $u$, and for some finite deterministic constants $\{M'(v,w)\}_{1<v<w<\infty}$, $$|F'(r)|\le M'(v,w) \int_0^\pi \sin(\theta) |\frac{\partial}{\partial r}\varphi_{\D}^{\D^+}(\e^{i\theta}/\sqrt{r})| \, d\theta \text{ for all } r\in [v,w]$$ From this and , follows in a straightforward manner. Now we will relate these quantities to circle averages, by averaging over rotations. Let $h^\D$ be a sample from $\Gamma^\D$ and for $\alpha \in [0,2\pi)$, let $h^{\D,\alpha}$ be the image of $h^\D$ under an anti-clockwise rotation by angle $\alpha$. That is, $(h^{\D,\alpha},\phi)_{\phi\in C_c^\infty(\D)}=(h^{\D},\phi\circ f_\alpha)_{\phi\in C_c^\infty(\D)}$ where $f_\alpha$ denotes the isometry $z\mapsto \e^{-i\alpha}z$. Then by conformal (specifically, rotation) invariance, $$\label{eq:law_ind_a} h^{\D,\alpha}\overset{(d)}{=}h^\D$$ for each fixed $\alpha$. 
Write $h^{\D^+}_{\D,\alpha}+\varphi^{\D^+}_{\D,\alpha}$ for the domain Markov domain decomposition of $h^{\D,\alpha}$ in $\D^+$. Now let $A$ be uniformly distributed on the interval $[0,2\pi]$ (independently from $h^\D$). Then we have that: - for each $(u_1,\cdots, u_n)$ with $u_i\in [1,\infty)$ for $1\le i \le n$ $$\lim_{\delta\downarrow 0}\left((h^{\D^+}_{\D,A},p_{u_1}^{\delta,in}),\ldots, (h^{\D^+}_{\D,A},p_{u_n}^{\delta,in})\right)=:\left((h^{\D^+}_{\D,A},p_{u_1}),\cdots, (h^{\D^+}_{\D,A},p_{u_n})\right)$$ exists a.s. and for any $s,t\ge 1$ $$\label{eqn:fourth_moment_circ} \mathbb{E}[|(h_{\D,A}^{\D^+},p_s)-(h_{\D,A}^{\D^+},p_t)|^4]\le c|s-t|^2$$ for some universal constant $c$ (because for each angle $\alpha$ the process $(h^{\D^+}_{\D,\alpha},p_s)_s$ is a fixed, i.e. not depending on $\alpha$, multiple of Brownian motion); - $((\varphi^{\D^+}_{\D,A},p_{u}^{\delta,in}))_{u\ge 1}$ has a pointwise limit $((\varphi^{\D^+}_{\D,A},p_{u}))_{u\ge 1}$ with probability one as $\delta\to 0$, and for any $1<v<w<\infty$, there exists $M(v,w)$ such that, $$\label{eqn:circ_der_bound}\mathbb{E}[ \sup_{s,t\in [v,w]}\frac{|(\varphi_{\D,A}^{\D^+},p_s)-(\varphi_{\D,A}^{\D^+},p_t)|}{|s-t|}] \le M(v,w) .$$ This allows us to reach the following conclusion. For every $u\in [1,\infty)$, the conditional expectation $$\mathbb{E}[(h^{\D,A},p_u) \, | \, h^\D ]:=\mathbb{E}[(h_{\D,A}^{\D^+},p_u)+(\varphi_{\D,A}^{\D^+},p_u) \, | \, h^\D ]$$ is well defined. This defines a stochastic process in $u$ which possesses an a.s. continuous modification. Since $(h_{\D,A}^{\D^+},p_u)$ and $(\varphi_{\D,A}^{\D^+},p_u)$ are random variables in $L^1(\P\times dA)$ (as can be seen using , by first taking expectation over the field given $A$, and then over $A$) the conditional expectations $$\mathbb{E}[(h_{\D,A}^{\D^+},p_u) \, | \, h^\D ] \text{ and } \mathbb{E}[(\varphi_{\D,A}^{\D^+},p_u) \, | \, h^\D ]$$ are well defined for any fixed $u$. By , the fact that conditioning is a contraction in $L^4$, and Kolmogorov’s continuity criterion, the first of these two stochastic processes has an a.s. continuous modification. To deal with the second process, observe that by , for any $1<v<w<\infty$, we have $$\begin{aligned} & \mathbb{E}\left[\sup_{s,t\in [v,w]} \frac{\left|\mathbb{E}[(\varphi_{\D,A}^{\D^+},p_t) \, | \, h^\D]-\mathbb{E}[ (\varphi_{\D,A}^{\D^+},p_s)\, | \, h^\D]\right|}{|s-t|}\right] \\ & \le \mathbb{E}\left[ \mathbb{E}[\sup_{s,t\in [v,w]}\frac{|(\varphi_{\D,A}^{\D^+},p_t) - (\varphi_{\D,A}^{\D^+},p_s)|}{|s-t|}\, | \, h^\D]\right] \le M(v,w).\end{aligned}$$ Hence the process $\mathbb{E}[(\varphi_{\D,A}^{\D^+},p_u) \, | \, h^\D ]$ in $u$ has a modification which is a.s. continuous. The connection to circle averages is the following. Recall that $h_\eps^\D(0)$ denotes the radius $\eps$ circle average of $h^\D$ around $0$. Recall that this is defined to be equal to $\varphi_{\D}^{\eps\D}(0)$ if $h^\D$ has domain Markov decomposition $h^{\eps\D}_\D+\varphi_{\D}^{\eps\D}$ in $\eps\D$. \[lem:cond\_sin\_av\_equals\_circ\_av\] For any $u\in [1,\infty)$, $\mathbb{E}[(h^{\D,A},p_u) \, | \, h^\D]=\sqrt{u}h^\D_{\frac{1}{\sqrt{u}}}(0)$ a.s. Fix $u\in [1,\infty)$. Since $(h^{\D,A},p_u^{\delta,in})\to (h^{\D,A},p_u)$ in probability and in $L^1$ as $\delta\to 0$, we have that $$\mathbb{E}[(h^{\D,A},p_u) \, | \, h^\D]=\mathbb{E}[\lim_{\delta\downarrow 0}(h^{\D,A},p_u^{\delta,in}) \, | \, h^\D]=\lim_{\delta\downarrow 0} \mathbb{E}[(h^{\D,A},p_u^{\delta,in})\, | \, h^\D]$$ where the rightmost limit holds in probability and in $L^1$. 
By definition of $A$, the right hand side is equal to $$\lim_{\delta\downarrow 0}\frac{1}{2\pi}\int_0^{2\pi} (h^{\D,\alpha},p_u^{\delta,in}) \, d\alpha = \lim_{\delta\downarrow 0}\frac{1}{2\pi}\int_0^{2\pi} (h^{\D},p_u^{\delta,in}\circ f_\alpha) \, d\alpha$$ where $f_\alpha(z)=e^{-i\alpha}z$ is rotation by $\alpha$. By linearity of $h^\D$ this is equal to $$\lim_{\delta\downarrow 0}(h^\D, \frac{1}{2\pi} \int_0^{2\pi} p_u^{\delta,in}\circ f_\alpha \, d\alpha)=\lim_{\delta\downarrow 0}(\varphi_{\D}^{\frac{1}{\sqrt{u}}\D},\frac{1}{2\pi}\int_0^{2\pi} p_u^{\delta,in}\circ f_\alpha \, d\alpha)+\lim_{\delta\downarrow 0}(h_{\D}^{\frac{1}{\sqrt{u}}\D},\frac{1}{2\pi}\int_0^{2\pi} p_u^{\delta,in}\circ f_\alpha \, d\alpha),$$ where the second term above goes to $0$ in probability as $\delta\to 0$ by the Dirichlet boundary condition assumption. Moreover, the function $\frac{1}{2\pi}\int_0^{2\pi} p_u^{\delta,in}\circ f_\alpha \, d\alpha$ is radially symmetric with total mass tending to $\sqrt{u}$ as $\delta\to 0$. By harmonicity, it then follows that $$\lim_{\delta\downarrow 0}(\varphi_{\D}^{\frac{1}{\sqrt{u}}\D},\frac{1}{2\pi}\int_0^{2\pi} p_u^{\delta,in}\circ f_\alpha \, d\alpha)= \sqrt{u} \varphi_{\D}^{\frac{1}{\sqrt{u}}\D}(0)=\sqrt{u}h^\D_{\frac{1}{\sqrt{u}}}(0)$$ a.s., as required. (We emphasise that the process in \[lem:cond\_sin\_av\_equals\_circ\_av\] above is not Brownian motion, but rather a time change of it). The corollary is the following: \[cor:circ\_avg\_cont\] The process $(h^\D_\eps(0))_{\eps\in (0,1]}$ possesses a continuous modification. \[prop:circ\_av\_bm\] The process $(h_{e^{-t}}^\D(0))_{t\ge 0}$ has a modification with the law of $(\sigma B_t)_{t\ge 0}$, where $\sigma\ge 0$ and $B$ is a standard one-dimensional Brownian motion. By the assumptions of conformal invariance and the domain Markov property, this process has independent increments, and it is also centred. By \[cor:circ\_avg\_cont\], it possesses a continuous modification. Since any continuous centred Lévy process must be a multiple of Brownian motion, this implies the result. \[cor:circ\_avg\_gaussian\] For any $D$ and $z\in D$, let $F_z^D$ be the conformal map from $D\to \D$ with $z\mapsto 0$ and $(F_z^D)'(z)\in \R_+$. Then the process $$\label{eqn:he} \hat{h}_{e^{-t}}^D(z)=\varphi_D^{(F_z^D)^{-1}(B_0(e^{-t}))}(z)$$ defined for $t\ge 0$, has a modification with the law of $\sigma$ times a Brownian motion. This follows from conformal invariance, \[ass:ci\_dmp\](iii). Conclusion of the proof ======================= Without loss of generality we assume that $D=\D$. For $z\in \D$ and $\eps = \eps(z) <d(z,\partial \D)$. Let $$\label{eq:rz} r_z(\eps):=\sup\{r\in [0,1]\, : (F_z^\D)^{-1}(B_0(r)) \subset B_z(\eps)\}.$$ Also set $h^\D_{\eps}(z)=\varphi_{\D}^{B_z(\eps)}(z)$ and define $\hat{h}^\D_{r_z(\eps)}(z)$ via and . For $\delta>0$, define $\eta_\delta$ to be a smooth radially symmetric function that approximates uniform measure on the unit circle as $\delta\to 0$. For concreteness, $\eta_\delta$ can be taken to be a smooth radially symmetric function equal to 1 on the annulus $\{z: 1 - \delta \le |z| \le 1 -\delta/2 \}$ that is 0 outside a $\delta/10$ neighbourhood of this annulus. We assume that each $\eta_{\delta}$ is normalised to have total integral one. For $\eps\in (0,1)$, further define $$\eta^\eps_\delta(\cdot)=\frac{1}{\eps^2} \eta_{\delta}(\frac{\cdot}{\eps})$$ Take $\phi\in C_c^\infty(\D)$. Recall that for Proposition \[prop:fourth\_moment\](1) we need to show that $(h^\D, \phi)$ has finite fourth moment. 
The idea is to show that $$\label{eq:fourth_moment_1} \int \hat{h}_{r_\eps(z)}^\D(z) \phi(z) \, dz \to (h^\D, \phi) \text{ in probability as } \eps\to 0$$ and that $$\label{eq:fourth_moment_two} \left(\int_{\D} \phi(z) \hat{h}^\D_{r_\eps(z)}(z)\, dz \right)^4 \text{ is uniformly integrable in } \eps$$ This means that $(\int_{\D} \phi(z) \hat{h}^\D_{r_\eps(z)}(z))^4$ converges in $L^1$ to $(\phi,h^\D)^4$, and in particular, that $(\phi,h^\D)^4$ is integrable. *Proof of .* We bound, for $\delta>0$: $$\begin{aligned} \label{eq:triangle} &\left| \int \hat{h}_{r_\eps(z)}^\D(z) \phi(z)\, dz - (h^\D,\phi)\right| \nonumber \\ \le & \left| \int (\hat{h}_{r_\eps(z)}^\D(z)-h^\D_{\eps}(z))\phi(z) \, dz \right| + \left| \int h^\D_{\eps}(z)\phi(z)\, dz - (h^\D, \phi*\eta^\eps_\delta)\right| + \left| (h^\D, \phi*\eta^\eps_\delta)-(h^\D,\phi)\right| \end{aligned}$$ We start by showing that the first term in goes to 0 in probability as $\eps\to 0$. For this, one can check explicitly that for every $\delta<d(z,\partial \D)$ we must have $r_z(\delta)\ge \delta/(\delta+\textrm{CR}(z,D))$, and therefore (by another calculation) that $(F_z^\D)^{-1}(B_0(r_z(\delta)))$ contains the ball of radius $\delta(1-\frac{\delta}{\delta+\frac{1}{2}\textrm{CR}(z,D)})$ around $z$. Hence, by conformal invariance and \[lem:nested\_dmp\], $h_\delta^\D(z)-\hat h_{r_z(\delta)}^\D(z)$ is distributed as $\varphi_{\D}^{D_\delta^z}(0)$, where for some $f(\delta)$ tending to $0$ as $\delta\to 0$ and every $z$ in the support of $\phi$, $D_\delta^z\subset \D$ contains the ball of radius $1-f(\delta)$ around 0. By , it then follows that $$\mathbb{E}[|h_\delta^\D(z)-\tilde{h}_{r_z(\delta)}^\D(z)|]\le \mathbb{E}[|\varphi_{\D}^{B_0(1-f(\delta))}(0)|]= \mathbb{E}[|h_{(1-f(\delta))}^\D(0)|],$$ and this tends to $0$ as $\delta\to 0$ by \[prop:circ\_av\_bm\]. By boundedness of $\phi$, this proves that the first term of goes to 0 in probability as $\eps\to 0$. We also have that the third term of goes to 0 in probability as $\eps\to 0$, for any fixed $\delta$. Indeed, $\phi*\eta_{\delta}^\eps \to \phi$ in $C_c^\infty(\D)$ as $\eps\to 0$ because $\eta_{\delta}$ is a smooth approximation to the identity for every $\delta$: see, eg. [@Eva98 §5.3]. Thus by \[ass:ci\_dmp\](i) (stochastic continuity), $(h^\D,\phi*\eta^\eps_\delta)\to (h^\D,\phi)$ in probability as $\eps\to 0$. So to show we are left to prove that the middle term of goes to $0$ in probability as $\delta\to 0$, *uniformly* in $\eps$. (That is, for any $c>0$ the probability that this term is bigger than $c$ goes to $0$ as $\delta\to 0$, uniformly in $\eps$.) To do this, we note that $\phi*\eta_\delta^\eps(z)=\int \phi(w) \eta_{\delta}^\eps(w-z) \, dw$ and so by linearity of $h^\D$, $$(h^\D,\phi*\eta^\eps_\delta)=\int_w (h^\D, \eta_{\delta}^\eps(w-\cdot))\phi(w) \, dw.$$ Moreover, by the Dirichlet boundary condition assumption and scale invariance, for every $w$ in the support of $\phi$ $$(h^\D,\eta_{\delta}^\eps(w-\cdot))-h^\D_{\delta}(w)\to 0$$ in probability and in $L^1$ as $\delta\to 0$, uniformly in $\eps$. Combined with the boundedness of $\phi$, this completes the proof. *Proof of .* For this, we will show that $\int_{\D} \phi(z) \hat{h}^\D_{r_\eps(z)}(z)\, dz$ is uniformly bounded in $L^6$. For $(z_1,\cdots, z_6)$ in $\operatorname{Support}(\phi)^6$, write $R=R(z_1,\cdots, z_6)$ for the largest $r$ such that the balls $B_{z_i}(r)$ are all disjoint. 
Then for $\eps<R$, by the domain Markov property and \[lem:nested\_dmp\], we have that $$\mathbb{E}[\prod_{i=1}^6 \hat h_{r_\eps(z_i)}^\D(z_i)]=\mathbb{E}[\prod_{i=1}^6 \hat h_R^\D(z_i)].$$ By repeated application of Hölder’s inequality, the term on the right hand side above is less than $\prod_{i=1}^6(\mathbb{E}[(h^\D_R(z_i))^6])^{1/6}$, and since each $h_R^\D(z_i)$ is Gaussian with variance less than some universal constant times $\log(1/R)$, we obtain that $$\mathbb{E}[\left(\int \hat{h}^\D_{r_\eps(z)}(z) \phi(z) \, dz \right)^6]= C(\phi) \left(1+ \iint_{D^6} |\log(R(z_1,\cdots, z_6))|^3 \, d\mathbf{z} \right) <\infty$$ where $C(\phi)$ is a finite constant depending on $\phi$ but not $\eps$. Since this bound is uniform in $\eps$, the proof is complete. Suppose that $\phi_n$ is a sequence of functions in $C_c^\infty(\D)$ converging to $\phi\in C_c^\infty(\D)$. Then by the previous part of this proof, $$\mathbb{E}[(h^\D,\phi_n)^4]=\lim_{\eps\to 0}\mathbb{E}[(\int_{\D} \phi(z) \hat{h}^\D_{r_\eps(z)}(z)\, dz )^4]$$ for each $n$, and this expectation is easily seen to be uniformly bounded in $n$ (using Hölder’s inequality and the fact that we know the marginal distributions of the $\hat{h}^\D$’s; as above). By the stochastic continuity assumption, we have that $(h^\D,\phi_n)\to (h^\D,\phi)$ in probability as $n\to \infty$. Putting this together with the uniform boundedness in $L^4$, we can deduce in particular that $(h^D,\phi_n)$ converges in $L^2$ to $(h^D,\phi)$ as $n\to \infty$. This implies the continuity of $K_2^D$ by Cauchy–Schwarz. The same arguments can be used to show that $(h^\D,\phi_n)$ is uniformly bounded in $L^4$ when $\phi_n$ is as in \[ass:ci\_dmp\](ii). This implies that the convergence of this assumption also holds in $L^2$. [^1]: Supported in part by EPSRC grant EP/L018896/1, the University of Vienna, and FWF grant “Scaling limits in random conformal geometry”. [^2]: Supported in part by NSERC 50311-57400 and University of Victoria start-up 10000-27458
---
abstract: 'To better understand the correlation between network topological features and the robustness of network controllability in a general setting, this paper suggests a practical approach to searching for optimal network topologies with given numbers of nodes and edges. Since theoretical analysis is impossible, at least at the present time, exhaustive search based on optimization techniques is employed, firstly for a group of small-sized networks that are realistically workable, where *exhaustive* means 1) all possible network structures with the given numbers of nodes and edges are computed and compared, and 2) all possible node-removal sequences are considered. An empirical necessary condition (ENC) is observed from the results of exhaustive search, which shrinks the search space to quickly find an optimal solution. ENC shows that the maximum and minimum in- and out-degrees of an optimal network structure should be almost identical, or within a very narrow range, i.e., the network should be extremely homogeneous. Edge rectification towards the satisfaction of the ENC is then designed and evaluated. Simulation results on large-sized synthetic and real-world networks verify the effectiveness of both the observed ENC and the edge rectification scheme. As more operations of edge rectification are performed, the network gets closer to exactly satisfying the ENC, and consequently the robustness of the network controllability is enhanced towards optimum.'
author:
- 'Yang Lou,  Lin Wang,  Kim Fung Tsang,  and Guanrong Chen, [^1][^2][^3] [^4]'
bibliography:
- 'ref.bib'
title: 'Towards Optimal Robustness of Network Controllability: An Empirical Necessary Condition'
---

[Y. Lou : Towards Optimal Robustness of Network Controllability]{}

network controllability, robustness, empirical necessary condition, node degree, optimization

Introduction {#sec:intro}
============

Complex networks have gained growing popularity and accelerating momentum since the late 1990s, becoming a self-contained discipline integrating network science, systems engineering, statistical physics, applied mathematics, social sciences and the like [@Barabasi2016NS; @Newman2010N; @Chen2014Book; @Chen2019Book]. The ultimate goal of understanding complex networks is to control them for utilization. In this regard, whether or not they can be controlled is essential, which leads to the fundamental concept of network controllability. Consequently, network controllability has become a focal issue in the studies of complex networks [@Liu2011N; @Yuan2013NC; @Posfai2013SR; @Menichetti2014PRL; @Motter15CHAOS; @Wang2016AUTO; @Liu2016RMP; @Wang2017RSPTA; @Wang2017SR; @Zhang2017TAC; @Xiang2019CSM], where the concept of *controllability* refers to the ability of a network to move from any initial state to any desired target state under an admissible control input within a finite duration of time. It was shown [@Liu2011N] that identifying the minimum number of external control inputs (equivalently, the number of driver nodes) to achieve full control of a directed network requires searching for a maximum matching of the network, which quantifies the network *structural controllability*. Practically, however, finding a maximum matching of a large-scale network is computationally time-consuming or even impossible. Along the same line of research, in [@Yuan2013NC], an efficient measure to assess the *state controllability* of a large-scale sparse network is suggested, based on the rank of the controllability matrix of the network.
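To make the matching-based measure concrete, the following minimal sketch computes the number of driver nodes of a directed network via the bipartite maximum-matching construction of [@Liu2011N]. It is written in Python with the networkx library; the bipartite representation and the Hopcroft–Karp matching routine are standard, while the function and variable names below are only illustrative.

``` python
# Minimal sketch: number of driver nodes N_D of a directed network via maximum
# matching, following the structural-controllability framework of [Liu2011N].
import networkx as nx

def num_driver_nodes(G: nx.DiGraph) -> int:
    # Bipartite representation: split every node i into an out-copy (i, '+') and
    # an in-copy (i, '-'); each directed edge i -> j becomes an undirected edge
    # between (i, '+') and (j, '-').
    B = nx.Graph()
    out_nodes = [(i, '+') for i in G.nodes()]
    B.add_nodes_from(out_nodes, bipartite=0)
    B.add_nodes_from(((j, '-') for j in G.nodes()), bipartite=1)
    B.add_edges_from(((i, '+'), (j, '-')) for i, j in G.edges())
    matching = nx.bipartite.hopcroft_karp_matching(B, top_nodes=out_nodes)
    matched = len(matching) // 2          # the dict records both endpoints of each edge
    return max(G.number_of_nodes() - matched, 1)

# Example: a sparse directed Erdos-Renyi network
G = nx.gnp_random_graph(100, 0.03, directed=True, seed=1)
print(num_driver_nodes(G), "driver nodes out of", G.number_of_nodes())
```

The resulting driver-node density $n_D = N_D/N$ is the quantity whose degradation under attacks is examined in the sequel.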
It has been quite a long time for people to understand the intrinsic relation between topology and controllability of a general directed network. In [@Posfai2013SR], it demonstrates that clustering and modularity have no discernible impact on the network controllability, while underlying degree correlations have certain effects. In [@Menichetti2014PRL], it reveals that random networks of any topology are controllable by an infinitesimal fraction of driver nodes, if both of its minimum in- and out-degrees are greater than two. The underlying hierarchical structure of such a network leads to an effective random upstream (or downstream) attack, which removes the hierarchical upstream (or downstream) node of a randomly-picked one, since this attack strategy would remove more hubs than a random attack strategy does [@Liu2012PO]. In [@Liu2012PO], a control centrality is defined to measure the importance of nodes, discovering that the upstream (or downstream) neighbors of a node are usually more (or less) important than the node itself. Interestingly, it is recently found that the existence of special motifs such as loops and chains is beneficial for enhancing the robustness of the network controllability [@Lou2018TCASI; @Chen2019TCASII; @Lou2019R]. The network controllability of some canonical graph models is studied and compared quite thoroughly in [@Wu2018JNS]. As for growing networks, the evolution of network controllability is investigated in [@Zhang2019PA]. Moreover, the controllability of multi-input/multi-output networked systems is studied in [@Wang2016AUTO; @Hao2018IJRNC], with necessary and sufficient conditions derived. A comprehensive overview of the subject is presented in a recent survey [@Xiang2019CSM]. On the other hand, random failures and malicious attacks on complex networks have become concerned issues today [@Holme2002PRE; @Shargel2003PRL; @Schneider2011PNAS; @Liu2012PO; @Bashan2013NP; @Xiao2014CPB]. To resist attacks, strong robustness is desirable and even necessary for a practical network. A measure for the network controllability is quantified by the number of external control inputs needed to recover or to retain the network controllability after the occurrence of an attack, while its robustness is quantified by a sequence of values that record the remaining levels of the network controllability after a sequence of attacks. To optimize the network robustness, one aims to enhance and maintain a highest possible *connectedness* of the network against various attacks [@Schneider2011PNAS]. Given the degree-preserving requirement or constraint (i.e., the degree of each node remains unchanged through the process of optimization), an edge-rewiring method is proposed in [@Liang2015CPL] to increase the number of edges between high-degree nodes, thus generating a new network with a highest $k$-shell component. In [@Chan2016DMKD], the structure of a network is modified by degree-preserving edge-rewiring, where spectral measures are used as the objective for optimization. By optimizing a specified spectral measure of the network through random edge-rewiring, the robustness of the resultant network is accordingly enhanced. It is however noticed that, although widely used as an estimator of the robustness for real-world networks, the correlation between spectral measures and the robustness remains unclear [@Yamashita2019COMPSAC]. 
Nevertheless, given a reliable predictive measure or indicator of the network robustness, optimization algorithms can be applied [@Liu2019ECCN; @Wang2019IS]; while if there are more than one predictive measures, multi-objective optimization schemes can be applied instead [@Gunasekara2018MOO]. In [@Zeng2012PRE], it is shown that both edge-robustness and node-robustness (i.e., the robustness against edge- and node-removals, respectively) can be enhanced simultaneously. A common observation is that heterogeneous networks with *onion-like* structure are robust against attacks [@Schneider2011PNAS; @Wu2011PRE; @Tanizawa2012PRE; @Hayashi2018SR]. The evolution of alternative attack and defense is studied in [@Ma2016PA], where attack means edge-removal and defense means edge-replenishment. The connectedness of the largest-sized connected cluster is the typically-used measure for such robustness [@Schneider2011PNAS]. It should be noted that, for the study to be presented in this article, although the robustness of network connectedness has a certain positive correlation with the robustness of network controllability, they have very different measures and objectives. Regarding the controllability of a complex network, it refers to a static property that reflects how well the network can be controlled. Yet, the robustness of network controllability is a dynamic process that reflects how well the network can maintain its controllability against destructive attacks by means of node-removal or edge-removal. Reportedly, intentional degree-based node-removal attacks in the sense of removing nodes with highest degrees are more effective than random attacks on network structural controllability over directed random-graph networks and also directed scale-free networks [@Pu2012PA]. In [@Wang2013EPL], the optimization of robustness of controllability is transformed into the transitivity maximization for control routes. In [@Liang2016EPJB], edge directionality is considered as the only operation to enhance the robustness of controllability, preserving the underlying topology meanwhile. In [@Chen2017PA], the change of controllability of random networks and scale-free networks in the processes of cascading failures is studied. For networks with different topologies, the results show that the robustness of network controllability will become stronger through allocating different control inputs and edge capacities. Both random and intentional edge-removal attacks have also been studied by many. In [@Nie2014PO], for example, it shows that the intentional edge-removal attack by removing highly-loaded edges is very effective in reducing the network controllability. It is further observed, in e.g. [@Buldyrev2010NAT], that intentional edge-based attacks are usually able to trigger cascading failures in scale-free networks but not necessarily in random-graph networks. These observations have motivated some recent in-depth analysis of the robustness of the network controllability [@Lou2018TCASI]. In this regard, both random and intentional attacks as well as both node-removal and edge-removal attacks were investigated. Specifically, for a random upstream (or downstream) attack that removes the upstream (or downstream) node of a randomly-picked one, the upstream and downstream relationship is determined by the underlying hierarchical structure of the network. This type of random attack has a non-uniform distribution, since the hub nodes are more likely being attacked in this scenario [@Liu2012PO]. 
In particular, it was found that redundant edges, which are not included in any of the maximum matchings, can be rewired or re-directed so as to possibly enlarge a maximum matching such that the needed number of driver nodes is reduced [@Xu2014CCDC; @Hou2013ISDEA]. Although the relations between network topology and network controllability have been investigated in some studies, there is no prominent theoretical indicator or performance index that can well describe the robustness of network controllability with a measures based on such relations. Under different attacks, the robustness of network controllability behaves differently. The nature of the attack methods leads to different measures of the *importance* of a node (or an edge) in a network. Generally, degree and betweenness are two commonly-used measures for the importance [@Pu2012PA]. This paper continues the above investigations to further explore the network topological properties that affect or even determine the optimal robustness of both state and structural controllability, against random node-removal attack. First, an exhaustive search is performed on a group of small-sized networks. Then, an empirical necessary condition (ENC) is observed and summarized. A simple yet effective edge rectification strategy, namely the random edge rectification (RER), is proposed for modifying the network topology to satisfy the ENC, so that the robustness of network controllability is enhanced. Similarly to [@Shi2013CSM], where the optimal network topology with best possible synchronizability is observed and summarized through extensive empirical experiments, the observed ENC is confirmed by extensive numerical simulations here, since it is impossible to theoretically prove, and probably no one could do so at this time. Finally, both ENC and RER are verified by optimizations of a number of synthetic and real-world networks with different properties. The rest of the paper is organized as follows. Section \[sec:robust\] reviews the network controllability and its robustness against various destructive attacks. Section \[sec:nec\] introduces the ENC and the RER. Section \[sec:exp\] investigates both ENC and RER by extensive numerical simulations, on both synthetic and real-world networks. Section \[sec:end\] concludes the investigation. Network Controllability and its Robustness {#sec:robust} ========================================== Network Controllability and Criteria {#sub:nc} ------------------------------------ Consider a linear time-invariant (LTI) networked system described by $ \dot{{\bf x}}=A{\bf x}+B{\bf u}$, where $A$ and $B$ are constant matrices of compatible dimensions, $\bf x$ is the state vector, and $\bf u$ is the control input. The system is *state controllable* if and only if the controllability matrix $[B\ AB\ A^2B\ \cdots A^{N-1}B]$ has a full row-rank, where $N$ is the dimension of $A$. The concept of *structural controllability* is a slight generalization dealing with two parameterized matrices $A$ and $B$, in which the parameters characterize the structure of the underlying networked system: if there are specific parameter values that can ensure the system described by the two parameterized matrices be state controllable, then the system is structurally controllable. In case the system is controllable, its state vector $\bf x$ can be driven from any initial state to any target state in the state space within finite time by a suitable control input $\bf u$. 
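As a purely illustrative rendering of the Kalman rank criterion just stated (not code from the paper), the following Python sketch assembles the controllability matrix of a small LTI system and checks whether it has full row rank; the toy matrices $A$ and $B$ are assumptions chosen only for the example.

```python
import numpy as np

def is_state_controllable(A: np.ndarray, B: np.ndarray) -> bool:
    """Kalman rank test: dx/dt = Ax + Bu is state controllable iff the
    controllability matrix [B, AB, A^2 B, ..., A^(N-1) B] has full row rank N."""
    N = A.shape[0]
    blocks = [B]
    for _ in range(N - 1):
        blocks.append(A @ blocks[-1])       # next block A^k B
    C = np.hstack(blocks)                   # N x (N*m) controllability matrix
    return np.linalg.matrix_rank(C) == N

# Toy example: a directed 3-node chain 1 -> 2 -> 3 controlled at node 1,
# with A[j, i] = 1 for the edge i -> j.
A = np.array([[0., 0., 0.],
              [1., 0., 0.],
              [0., 1., 0.]])
B = np.array([[1.], [0.], [0.]])            # a single input injected at node 1
print(is_state_controllable(A, B))          # True: the chain is controllable from its root
```

For large $N$, explicitly forming the powers of $A$ is numerically fragile, which is one reason the rank-based criterion recalled below is applied directly to the (sparse) adjacency matrix.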
The controllability of a network, or networked system, is measured by the density of the controlled nodes, $n_D$, defined by $$\label{eq:nd} n_D\equiv \frac{N_D}{N}\,,$$ where $N_D$ is the number of external control inputs (driver nodes) needed to retain the network controllability after the occurrence of an attack on the network, while the denominator $N$ is the current network size over which the controllability is evaluated. This measure $n_D$ allows networks of different sizes to be compared. The network size does not change during an edge-removal attack but is reduced by a node-removal attack. It is noted that the smaller the $n_D$ value is, the better the network controllability will be.

Generally, there are two ways to calculate the number $N_D$ of driver nodes, for structural controllability and exact (state) controllability, respectively. To introduce these two criteria, first recall from graph theory that in a directed network, a matching is a set of edges that do not share common start nodes or common end nodes. A maximum matching is a matching that contains the largest possible number of edges, and it cannot be further extended in the network. A node is matched if it is the end of an edge in the matching; otherwise, it is unmatched. A perfect matching is a (maximum) matching that matches all nodes in the network. According to the *minimum inputs theorem* [@Liu2011N], when a maximum matching is found, the number $N_D$ of driver nodes is determined by the number of unmatched nodes, i.e., $$\label{eq:sc} N_D=\text{max}\{1, N-|E^*|\},$$ where $|E^*|$ is the number of edges in the maximum matching $E^*$. If a network has a perfect matching, then the number of driver nodes is $N_D=1$ and the control input can be put at any node; otherwise, $N_D=N-|E^*|$ control inputs are needed, which should be put at the unmatched nodes. As for exact controllability, if a network is sparse, its number $N_D$ of driver nodes can be calculated by $N_D=\text{max}\{1, N-\text{rank}(A)\}$. Here, a network is considered to be sparse if the number of edges $M$ (i.e., the number of nonzero elements of the adjacency matrix) is much smaller than the possible maximum number of edges, $M_{max}=N\cdot (N-1)$. Usually, if $M/M_{max}\leq 0.05$, then the network is considered sparse.

Robustness of Network Controllability {#sub:rnc}
-------------------------------------

The robustness of network controllability is evaluated after nodes or edges are removed, one by one, yielding a sequence of values that reflects how robust (or vulnerable) a networked system is against destructive attacks. Different attack strategies result in different damage to the network topology and also to its controllability. An attack strategy is chosen according to the “importance” of nodes or edges in the network, but there are different concepts of importance in applications. In this paper, importance is measured with respect to the controllability of the network. Generally, there are two types of attacks, namely intentional and random node-removal attacks. An intentional attack aims at removing a node that is the most important for maintaining the network controllability, for example the node with the largest degree or betweenness. A random attack removes a randomly-picked node at each time step. In this paper, only *random node-removal* attacks are considered, while intentional node-removal can be similarly discussed.
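To make the two counting rules above concrete, the following Python/NetworkX sketch (an illustration written for this text, not the authors' implementation) computes $N_D$ both ways: structural controllability via a maximum matching computed on the usual bipartite "out-copy/in-copy" representation of the directed network, and exact controllability via the rank of the adjacency matrix. The random test graph and its parameters are assumptions chosen only for demonstration.

```python
import networkx as nx
import numpy as np

def n_driver_structural(G: nx.DiGraph) -> int:
    """Minimum inputs theorem: N_D = max{1, N - |E*|}, with E* a maximum
    matching of the directed network.  The matching is computed on the
    bipartite graph that has an 'out' copy and an 'in' copy of every node
    and one bipartite edge ('out', i)--('in', j) per directed edge i -> j."""
    B = nx.Graph()
    out_nodes = [("out", v) for v in G.nodes]
    B.add_nodes_from(out_nodes, bipartite=0)
    B.add_nodes_from((("in", v) for v in G.nodes), bipartite=1)
    B.add_edges_from((("out", u), ("in", v)) for u, v in G.edges)
    matching = nx.bipartite.hopcroft_karp_matching(B, top_nodes=out_nodes)
    matched_edges = len(matching) // 2       # the dict stores both directions
    return max(1, G.number_of_nodes() - matched_edges)

def n_driver_exact(G: nx.DiGraph) -> int:
    """Exact (state) controllability of a sparse network: N_D = max{1, N - rank(A)}."""
    A = nx.to_numpy_array(G)
    return max(1, G.number_of_nodes() - int(np.linalg.matrix_rank(A)))

# Hypothetical test case: a sparse directed random graph.
G = nx.gnp_random_graph(50, 0.05, directed=True, seed=1)
print(n_driver_structural(G), n_driver_exact(G))
```

Dividing either count by the current number of nodes gives the density $n_D$ of Eq. (\[eq:nd\]); the robustness evaluation below simply repeats such a computation on the network that remains after each removal.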
The comparison of robustness of network controllability among different networks will be performed by 1) observing the resultant curves of the controllability matrices, and 2) comparing the robustness measure $R_c$ [@Schneider2011PNAS; @Ruths2013CNIV; @Xiao2014CPB; @Chen2019TCASII], defined as follows: $$\label{eq:rc} R_c= \frac{1}{N-1} \sum_{i=1}^{N-1}f(i)\,,$$ where, as an extension of the robustness measure defined in [@Schneider2011PNAS], $f(i)$ can be the density of the required driver nodes [@Ruths2013CNIV; @Xiao2014CPB] or the rank of the comparison of controllability [@Chen2019TCASII], when a total of $i$ nodes are removed. In both cases, a smaller value of $R_c$ means better robustness against attacks. Empirical Necessary Condition for Optimal Topology {#sec:nec} ================================================== In this section, the relation between topological structure and robustness of controllability is first studied, by observing the attack simulations on some very small-sized networks. In this case, an exhaustive attack strategy can be applied. Then, the observations are summarized as the ENC, as presented by Eq. (\[eq:ki\]) in Section \[sub:enc\]. To rectify an arbitrarily given network towards the satisfaction of ENC, a simple rectification strategy is proposed in Section \[sub:erec\]. Exhaustive Attack {#sub:exat} ----------------- To understand a full picture of the controllability change under *all* possible node-removal attacks, an exhaustive attack to a set of small-sized networks is first simulated and evaluated. Given a network of $N$ nodes, there are $(N-1)!$ permutations of the node-removal sequence, e.g., $1,2,\ldots,N-1$, and $3,N-1,\ldots,5$, etc. Note that an intentional attack (e.g., a degree-based attack) is a specific case in such (permuted) sequences. By performing a node-removal sequence, it generates a resultant curve of controllability values (denoted by $\delta$), which is the controllability measure for the remaining network after a total of $i$ ($i=1,2,\ldots,N-1$) nodes were removed sequentially. The robustness of controllability of a network under the exhaustive attack is then obtained by averaging all these $(N-1)!$ curves, i.e., a mean curve denoted by $\bar{\delta}$ as follows: $$\label{eq:exh} \bar{\delta}= \frac{1}{(N-1)!} \sum_{k=1}^{(N-1)!}\delta_k\,,$$ where $\delta_k$ represents the $k$-th resultant curve of controllability values. The exhaustive attack strategy considers all the sequences equally. Thus, the mean resultant curve of the exhaustive attack is equivalent to the mean of random attacks when the number of repeated runs is large enough. Each node is considered of equal importance in the network controllability study, where each node $i$ ($i=1,2,\ldots,N$) has an equal probability to be removed at the $j$th ($j=1,2,\ldots,N-1$) step within any attack sequence. Table \[tab:exat\] shows the results of performing the exhaustive attack on small-sized networks, where the network size is set to $4$, $5$, and $6$ only. In the table, ‘PI’ represents the number of possible instances with given $N$ and $M$, after filtering out isomorphs. For example, when $N=5$ and $M=10$, there are $1665$ possible combinations to form a unique network instance. For each instance, all $(N-1)!$ attack sequences are implemented and recorded. In the table, ‘ENC’ represents the number of network instances that exactly satisfy the ENC (\[eq:ki\]) in Section \[sub:enc\]); ‘O’ represents the number of optimal robustness instances. 
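A hedged sketch of this evaluation for a toy network is given below: it enumerates all node-removal orderings with `itertools.permutations`, scores the remaining network after each removal with the rank-based criterion for $N_D$ (a simplification of the full setup described above), and averages the resulting curves as in Eq. (\[eq:exh\]); $R_c$ of Eq. (\[eq:rc\]) is then the mean of that curve. The 5-node directed loop used as input is an assumption chosen only for illustration.

```python
import itertools
import networkx as nx
import numpy as np

def n_d_density(G: nx.DiGraph) -> float:
    """Density of driver nodes n_D = N_D / N, using the exact-controllability
    count N_D = max{1, N - rank(A)} for simplicity."""
    N = G.number_of_nodes()
    A = nx.to_numpy_array(G)
    return max(1, N - int(np.linalg.matrix_rank(A))) / N

def exhaustive_attack_curve(G: nx.DiGraph) -> np.ndarray:
    """Mean controllability curve: entry i-1 is the average n_D measured
    after i nodes have been removed, averaged over every removal ordering."""
    curves = []
    for order in itertools.permutations(G.nodes):
        H = G.copy()
        curve = []
        for node in order[:-1]:              # remove N-1 nodes, one at a time
            H.remove_node(node)
            curve.append(n_d_density(H))
        curves.append(curve)
    return np.mean(curves, axis=0)

def robustness_Rc(curve: np.ndarray) -> float:
    """R_c = (1/(N-1)) * sum_i f(i); a smaller value means better robustness."""
    return float(np.mean(curve))

G = nx.DiGraph([(i, (i + 1) % 5) for i in range(5)])   # directed 5-node global loop
curve = exhaustive_attack_curve(G)
print(np.round(curve, 3), robustness_Rc(curve))
```

Only tiny networks are tractable this way, which is exactly why the exhaustive search reported in Table \[tab:exat\] is restricted to $N=4,5,6$.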
The topologies of the optimal instances summarized in Table \[tab:exat\] can be found in Figs. S1–S3 of the Supplementary Information (SI). A clear phenomenon can be observed from Table \[tab:exat\] and the optimal instances presented in the SI, as summarized in Fig. \[fig:circles\]: empirically, the set of optimal instances is a subset of the instances that exactly satisfy the ENC, which in turn is a subset of the full set of all possible network instances with the given values of $N$ and $M$.

  N   M    PI     ENC   O
  --- ---- ------ ----- ---
  4   4    22     1     1
  4   5    37     5     1
  4   6    47     11    2
  4   7    38     5     1
  4   8    27     2     1
  4   9    13     3     2
  4   10   5      3     2
  4   11   1      1     1
  5   5    108    1     1
  5   6    326    10    1
  5   7    667    47    2
  5   8    1127   69    2
  5   9    1477   26    1
  5   10   1665   5     1
  5   11   1489   26    1
  5   12   1154   70    2
  5   13   707    48    2
  5   14   379    12    1
  5   15   154    2     1
  5   16   61     5     3
  5   17   16     4     3
  5   18   5      3     2
  5   19   1      1     1
  6   6    582    1     1
  6   24   1043   4     2
  6   25   288    7     4
  6   26   76     8     5
  6   27   17     5     4
  6   28   5      3     2
  6   29   1      1     1

  : Results of performing the exhaustive attack on small-sized networks. PI represents the number of possible network instances with given $N$ and $M$; ENC represents the number of network instances that satisfy Eq. (\[eq:ki\]); O represents the number of instances with optimal robustness. The relationships of the PI, ENC, and O sets are shown in Fig. \[fig:circles\]. \[tab:exat\]

![\[Color online\] Given a network with $N$ nodes and $M$ edges, the relationships among 1) all the possible instances, 2) the instances that satisfy the ENC, and 3) the optimal instances.[]{data-label="fig:circles"}](fig/circles.pdf){width=".5\linewidth"}

Empirical Necessary Condition {#sub:enc}
-----------------------------

The exhaustive attack returns $(N-1)!$ controllability curves, and their mean curve is taken as the average robustness performance. An illustrative example is given in Fig. \[fig:cmp\_eg\], where the means and standard deviations are shown. The robustness measure $R_c$ of each network is then calculated according to Eqs. (\[eq:rc\]) and (\[eq:exh\]). In this example, network Net1 is recognized to have better robustness of controllability than Net2, since Net1 has a lower mean curve.

![\[Color online\] An illustrative example of robustness comparison. The network size is $N=5$. The blue circles represent the mean controllability curve of network Net1, and the red diamonds represent that of Net2. The shaded region of the same color indicates the range of all $4!=24$ curves. []{data-label="fig:cmp_eg"}](fig/cmp_eg.pdf){width=".8\linewidth"}

Since both the number of possible network instances and the number of attack sequences increase drastically as the network size increases, only very small-sized networks are examined, with results presented in Table \[tab:exat\]. For given $N$ and $M$, the number of instances with optimal robustness is much smaller than the number of all possible instances. Together with the observations reported in [@Lou2018TCASI; @Lou2019R; @Chen2019TCASII], it can be concluded that an optimal instance has the following characteristics: 1) it contains a directed global loop that connects all the $N$ nodes; 2) both the in- and out-degrees are extremely-homogeneously distributed, with extremely small differences, if any. Here, the first observation may be integrated into the second, since the in- and out-degrees of the nodes in a directed global loop are extremely-homogeneously distributed: each node has in-degree one and out-degree one.
This observation is also consistent with the observations reported earlier in [@Lou2018TCASI; @Lou2019R; @Chen2019TCASII], where it was found that multiple-loop and multiple-chain structures enhance the robustness of network controllability. Therefore, based on all these observations, for a directed network with $N$ nodes and $M$ edges, the network topology with optimal robustness of controllability (against exhaustive or random node attacks) should satisfy the following condition: $$\label{eq:ki} \floor{M/N}\leq k_{i}^{in/out}\leq\ceil{M/N}, i=1,2,\ldots,N,$$ where $k_{i}^{in/out}$ denotes both the in- and out-degrees, and, as standard notation, the floor function $\floor{x}$ returns the greatest integer less than or equal to $x$ while the ceiling function $\ceil{x}$ returns the least integer greater than or equal to $x$. Equation (\[eq:ki\]) states that the degree distribution of an optimal instance should be concentrated in a very narrow band. For example, as illustrated by Fig. \[fig:eg3\], given $N$ nodes and $N$ edges, the only instance satisfying Eq. (\[eq:ki\]) is a directed global loop (Fig. \[fig:eg3\](A)), where each node has in-degree one and out-degree one; any tree structure (Fig. \[fig:eg3\](B)) or reverse edge in a ring-shaped structure (Fig. \[fig:eg3\](C)) deviates from the global loop and hence significantly alters the extremely-homogeneous distribution of node degrees. This is also evident from Table \[tab:exat\], where for each case of $M=N=4,5,6$ there is only one instance satisfying the ENC, which is also the optimal instance.

![\[Color online\] Given equal numbers of nodes and edges, possible topologies include: (A) a directed global loop, (B) a tree, and (C) a ring-shaped network with a reverse edge. []{data-label="fig:eg3"}](fig/eg3.pdf){width=".95\linewidth"}

Empirically, it is observed that all instances with optimal robustness of controllability satisfy the ENC, but an instance satisfying the ENC is not necessarily optimal, as illustrated by Fig. \[fig:circles\]. Therefore, this is only a *necessary* condition. With the restriction of the ENC, the number of candidate instances to examine when searching for optimal instances is significantly reduced. For example, as can be seen from Table \[tab:exat\], given $N=5$ and $M=10$, the probability that a random network instance has optimal robustness is $1/1665$; by searching only the instances that satisfy the ENC, the probability increases to $1/5$. Thus, the probability of success is largely improved and the computational cost is significantly reduced. It can also be observed from Table \[tab:exat\] that, as $N$ and $M$ increase, the number of instances satisfying the ENC remains relatively small compared to the total number of possible instances. Thus, with the objective of searching for optimal instances, many instances that do not satisfy the ENC can be eliminated from the candidate pool. When $N$ and $M$ are not very small, the number of possible instances can be tremendously large, so the ENC provides an efficient means of improving the performance of searching for optimal instances from a large pool of candidates.

Edge Rectification {#sub:erec}
------------------

Since it is computationally impossible to review all the possible instances when $N$ and $M$ are large, a simple yet effective strategy, called random edge rectification (RER), is proposed here to rectify a synthetic or real-world network so that it satisfies the ENC.
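For reference, checking whether a given network satisfies the ENC of Eq. (\[eq:ki\]) is straightforward; the Python sketch below (illustrative only, with hand-picked toy graphs rather than examples from the paper) verifies the degree bounds and contrasts a directed global loop with a graph containing one off-loop edge.

```python
import math
import networkx as nx

def satisfies_enc(G: nx.DiGraph) -> bool:
    """Empirical necessary condition of Eq. (ki): every in- and out-degree
    must lie in [floor(M/N), ceil(M/N)], i.e. the degree sequence is as
    homogeneous as the integer constraints allow."""
    N, M = G.number_of_nodes(), G.number_of_edges()
    lo, hi = math.floor(M / N), math.ceil(M / N)
    return all(lo <= G.in_degree(v) <= hi and lo <= G.out_degree(v) <= hi
               for v in G.nodes)

# With M = N, a directed global loop passes the check ...
loop = nx.DiGraph([(i, (i + 1) % 6) for i in range(6)])
print(satisfies_enc(loop))                       # True

# ... while redirecting structure into an extra out-edge at node 0 fails it.
off_loop = nx.DiGraph([(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 2)])
print(satisfies_enc(off_loop))                   # False: node 0 has out-degree 2
```

The RER strategy introduced next rectifies a given network towards passing exactly this check.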
It is a variant network, one with the same $N$ and $M$ but different topology. On the other hand, it is also impossible to apply exhaustive attacks on a large-sized network, so a random attack is applied with many repeated runs instead. For any node $i$, if its in- or out-degree does not satisfy Eq. (\[eq:ki\]), edge rectification is needed. There are four possible edge rectification operations: 1. If $k_{i}^{out}<\floor{M/N}$, then find another node $k$ with out-degree greater than $\ceil{M/N}$, and randomly pick one of its out-edges, $A_{k,l}$. Delete this edge $A_{k,l}$ and add an edge $A_{i,l}$. This increases $k_{i}^{out}$ by one and decreases $k_{k}^{out}$ by one. 2. If $k_{i}^{out}>\ceil{M/N}$, then randomly pick one of its out-edges $A_{i,j}$, and find another node $k$ with out-degree less than $\floor{M/N}$. Delete this edge $A_{i,j}$ and add an edge $A_{k,j}$. This decreases $k_{i}^{out}$ by one and increases $k_{k}^{out}$ by one. 3. If $k_{i}^{in}<\floor{M/N}$, then find another node $k$ with in-degree greater than $\ceil{M/N}$, and randomly pick one of its in-edges $A_{l,k}$. Delete this edge $A_{l,k}$ and add an edge $A_{l,i}$. This increases $k_{i}^{in}$ by one and decreases $k_{k}^{in}$ by one. 4. If $k_{i}^{in}>\ceil{M/N}$, then randomly pick one of its in-edges $A_{j,i}$, and find another node $k$ with in-degree less than $\floor{M/N}$. Delete this edge $A_{j,i}$ and add an edge $A_{j,k}$. This decreases $k_{i}^{in}$ by one and increases $k_{k}^{in}$ by one. An execution of any of the above four rectifications is said to be an RER operation. Specifically, the RER strategy is defined as follows: pick a random operation of the four edge rectification operations, until the stop criterion is met. The stop criterion is either “the maximum number of RER operations is reached” or “the network has satisfied the ENC”. Given two networks, the one requiring less number of RER operations to exactly satisfy the ENC is said to be closer to satisfying ENC. Experimental Studies {#sec:exp} ==================== In this section, the RER strategy is applied to several different complex network topologies, including six synthetic and two real-world networks, to improve their robustness of controllability. The influences of RER is investigated on 1) modifying the networks’ degree distributions towards satisfaction of ENC, 2) enhancing the robustness of controllability, and 3) enhancing the robustness of connectedness. Six typical directed synthetic network models are adopted for simulation, namely the Erd[ö]{}s–R[é]{}nyi random graph (ER) [@Erdos1964RG], Newman–Watts small-world (SW) network [@Newman1999PLA], generic scale-free (SF) network [@Pu2012PA; @Goh2001PRL; @Sorrentino2007CH], *q*-snapback network (QSN) [@Lou2018TCASI; @Lou2019R], random triangle network (RTN) [@Chen2019TCASII], and random rectangle network (RRN) [@Chen2019TCASII]. Two real-world networks are used for verification, namely the email-Eu-core (EE)[^5] network and the Gnutella peer-to-peer (GP)[^6] network, which will be detailed in Section \[sub:rwn\]. In the following, the generation methods and parameters of the six synthetic networks are introduced. ### Erd[ö]{}s–R[é]{}nyi Random Graph Networks An ER network is generated as follows: - Start with $N$ isolated nodes. - Pick up all possible pairs of nodes from the $N$ given nodes, denoted by $i$ and $j$ ($i\neq j$, $i,j=1,2,...,N$), once and once only. 
Connect each pair of nodes by a directed edge with probability $p_{RG}\in[0,1]$, where the edge has the same probability directing from $i$ to $j$, or $j$ to $i$. Given the numbers of $N$ and $M$, let $p_{RG}=\frac{M}{N(N-1)}$. To exactly control the number of generated edges to be $M$, uniformly-randomly adding or removing edges can be performed. Here, when adding an edge, the direction can be random. ### Newman–Watts Small-world Networks An SW network is generated as follows: - Start with a directed $N$-node loop having $K$ connected nearest-neighbors on each side of each node. - Additional edges with random directions are added without removing any existing edges. Set $K=2$ in the following, namely, a node $i$ is connected to its two nearest neighbors on each side, with nodes $i-1$, $i+1$, $i-2$ and $i+2$, via edges $A_{i-1,i}$, $A_{i,i+1}$, $A_{i-2,i}$ and $A_{i,i+2}$. ### Scale-Free Networks An SF network is generated as follows: - Start with $N$ isolated nodes. - A weight $w_{i}=(i+\theta)^{-\sigma}$ is assigned to node $i$, with $\sigma\in[0,1)$ and $\theta\ll N$. - Two nodes $i$ and $j$ ($i\neq j$, $i,j=1,2,...,N$) are randomly picked from the pool with a probability proportional to the weights $w_i$ and $w_j$, respectively. Then, an edge $A_{ij}$ from $i$ to $j$ is added (if the two nodes are already connected, do nothing). - Repeat Step 3), until $M$ edges have been added. The resulting network has a power-law distribution $k^{-\gamma}$ with $\gamma=1+\frac{1}{\sigma}$, where $k$ is the degree variable, which is independent of $\theta$. Here, $\sigma$ is set to $0.999$, and thus $\gamma=2.001$. ### *q*-Snapback Networks Consider a *q*-snapback network (QSN) with only one layer $r_{QSN}$ for simplicity. This QSN is generated as follows: - Start with a directed chain of $N$ nodes, where each node $i$ ($i=1,2,...,N-1$) has an edge $A_{i,i+1}$. - For each node $i=r_{QSN}+1,\, r_{QSN}+2, \ldots, N$, it connects backward to the previously-appeared nodes $i-l\times r_{QSN}$ ($l=1, 2, \ldots, \floor{i/r_{QSN}}$), with the same probability $q\in[0,1]$. In the following experimental study, $r_{QSN}$ is set to $2$. Given $N=1000$ and $M=5000$, $q$ is estimated to be $0.008$ for fair comparisons. To exactly generate $M$ edges, uniformly-randomly edge-adding with random direction should be applied. ### Random Triangle Networks Triangular structure, which has been observed benefit to the robustness of controllability [@Lou2018TCASI] and network stability [@Schultz2014NJP; @Nitzbon2017NJP], is frequently observed in real-life situations. A directed random triangle network (RTN) is generated as follows: - Start with $N-3$ isolated nodes, with the other 3 nodes connected in a directed triangle. - Randomly pick up two nodes, $i$ and $j$, without edge $A_{ij}$ or $A_{ji}$ (otherwise, do nothing). Then, randomly pick up a node $k$ from all the neighbors of node $j$. If there is an edge $A_{jk}$, then add two edges $A_{ij}$ and $A_{ki}$; otherwise (e.g., with an edge $A_{kj}$), add two edges $A_{ji}$ and $A_{ik}$. - Repeat Step 2), until $M$ edges have been added. ### Random Rectangle Networks The above directed RTN is extended to a random rectangle network (RRT), as follows: - Start with $N-4$ isolated nodes, and the other 4 nodes are connected in a directed rectangle. - Randomly pick up three nodes, $i$, $j$ and $k$, without edges between any pair of them (otherwise, do nothing). Then, randomly pick up a node $w$ from the neighbors of node $k$. 
If there is an edge $A_{kw}$, then add edges $A_{wi}$, $A_{ij}$, and $A_{jk}$; otherwise (e.g., with an edge $A_{wk}$), add edges $A_{ki}$, $A_{ij}$, and $A_{jw}$. - Repeat Step 2), until $M$ edges have been added. Since at each time step, two edges are added into RTN, and three edged are added into RRN, uniformly-randomly adding or removing edges can be performed to control the number of edges exactly. In the simulation below, the network size is $N=1000$ with average out-degree $\langle K^{out}\rangle=5$, i.e., $M=5000$. To minimize the influence of stochasticity, for each configuration with the given $N$, $M$ and the number RER operations, referred to as a network *instance*, repeat $30$ independent runs. For each network, the random attack is performed $50$ times independently. Thus, each statistic datum is averaged from $1500$ runs. Simulation results with different network sizes of $N=\{500,2000\}$, and different average out-degrees of $\langle K^{out}\rangle=\{3,8\}$, are shown in Figs. S6–S13 of the SI. Towards Satisfaction of the ENC {#sub:t_enc} ------------------------------- ![\[Color online\] Number of RER operations to rectify a network to satisfy the ENC. The network configuration has $N=1000$ and $M=5000$; the number of repeated runs is $100$.[]{data-label="fig:rer_vs_network"}](fig/rer_vs_net.pdf){width=".7\linewidth"} Figure \[fig:rer\_vs\_network\] presents the boxplot of the needed number of RER operations for a network to exactly satisfy the ENC. In a boxplot, the blue box indicates that the central $50\%$ samples lie within this section; the red bar inside the box is the median; the upper and lower black bars are the greatest and least values, excluding outliers; and finally the red pluses represent the outliers. It can be seen from the figure that, for a network configuration with $N=1000$ and $M=5000$, ER, SW, QSN, RTN, and RRN require a median of rounds of $0.8\times10^4$ to $0.9\times10^4$ for rectification, while SF requires rounds of $1\times10^4$ to $1.1\times10^4$. SW is the closest to satisfying the ENC, while SF is the farthest. Referring to Fig. \[fig:nd(sc)\_vs\_pn\](A), SW shows the best robustness of controllability, while SF shows the worst. Or, simply put, a network with better robustness of controllability needs less number of RER operations towards satisfaction of the ENC. Given different values of $N$ and $M$, the needed number of RER operations for ER and SF can be found in Fig. S14 of the SI. Towards Optimal Robustness of Controllability {#sub:t_orc} --------------------------------------------- Now, the change of robustness of controllability is studied, as the number of RER operations changes. First, note from Fig. \[fig:rer\_vs\_network\] that it requires about ten thousands of RER operations for a network with $N=1000$ and $M=5000$ to satisfy the ENC. Then, to see the changes, the following four situations are compared: 1) no RER operation is implemented, 2) $1000$ RER operations are implemented, which are around one tenth of the number of the needed RER operations to exactly satisfy the ENC, 3) $5000$ RER operations are implemented, which are around a half of the number of operations to exactly satisfy the ENC, and 4) unlimited RER operations (denoted by Inf) until the ENC is exactly satisfied. Figure \[fig:nd(sc)\_vs\_pn\] shows that the robustness of structural controllability of the six networks is enhanced as the number of RER operations increases. In Fig. 
\[fig:nd(sc)\_vs\_pn\](A), case 1), shows different curves of controllability; case 2), the robustness of all networks is improved and the performance difference becomes smaller; cases 3) and 4), the robustness is significantly enhanced and the difference of curves becomes indistinguishable, for which although there is not guarantee that the rectified networks have optimal robustness of controllability, it is obvious that the robustness is significantly enhanced. Figures \[fig:ho\_vs\_pn\] and \[fig:hi\_vs\_pn\] show the change of heterogeneity of out- and in-degrees (HO and HI), against the proportion of removed nodes. It can be seen from the figures that both HO and HI increase as nodes are gradually removed. As can also be seen from Figs. \[fig:ho\_vs\_pn\](A) and \[fig:hi\_vs\_pn\](A), the original SF network has the highest HO and HI, suggesting that low HO and HI values imply better robustness of controllability. As the number of RER operations increases (Figs. \[fig:ho\_vs\_pn\](B,C) and \[fig:hi\_vs\_pn\](B,C)), both HO and HI are reduced, finally resulting in extremely-homogeneous networks (Figs. \[fig:ho\_vs\_pn\](D) and \[fig:hi\_vs\_pn\](D)), which show the best robustness of controllability. ![image](fig/ndsc_vs_pn.pdf){width="\linewidth"} ![image](fig/ho_vs_pn.pdf){width="\linewidth"} ![image](fig/hi_vs_pn.pdf){width="\linewidth"} Since connectedness is also important in the regard of network controllability and other issues, the proportion of node-removals needed to disconnect a network, namely, the minimum proportion of nodes to be removed in order to break the network into disjoint components, under *random* attacks, is examined next. As can be seen from Fig. \[fig:t\_vs\_network\](A), different networks show different behaviors against random attacks, where SF is the easiest to be disconnected among the six networks, while SW is the hardest. This phenomenon is consistent with what was presented in Fig. \[fig:rer\_vs\_network\](A), where SF shows the worst robustness against attacks, while SW has the best. In Fig. \[fig:t\_vs\_network\](B), the proportion of node-removals to disconnect all networks is increased, meaning that $1000$ RER operations improve the connectedness of all networks against attacks. Finally, in Fig. \[fig:t\_vs\_network\](C,D), the value of $P_N$ is further increased. Fig. \[fig:t\_vs\_network\] demonstrates that the robustness of network connectedness is improved as the number of RER operations increases. ![image](fig/pn_vs_net.pdf){width="\linewidth"} Extensive Simulations on ER and SF {#sub:er_n_sf} ---------------------------------- ![\[Color online\] Robustness of *structural* controllability of (A) ER, and (B) SF. $n_D$ represents the density of controlled-nodes calculated by Eq. (\[eq:nd\]); $P_N$ represents the proportion of removed nodes. (Robustness of *exact* controllability can be found in Fig. S5 of the SI.)[]{data-label="fig:er_sf_ndSC"}](fig/ersf_ndsc.pdf){width="\linewidth"} Next, only ER and SF are discussed. A small step length of RER increase is set, such that the subtle influences of RER on the robustness of both controllability and connectedness can be clearly evaluated. Figure \[fig:er\_sf\_ndSC\] shows the curves of the robustness of controllability of ER and SF, as the number of RER operations increases, namely, with RER $=\{0,100,200,500,1000,2000,5000,\text{Inf}\}$. The results are averaged on $100$ repeated runs. 
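For readers who wish to reproduce this qualitative behaviour, a minimal Python sketch of the out-degree RER operations of Section \[sub:erec\] is given below (the two in-degree operations are symmetric); the guards against duplicate edges and self-loops, the ER-like test graph, and the random seed are simplifications and assumptions added for this illustration, not details taken from the paper.

```python
import math
import random
import networkx as nx

def rer_step(G: nx.DiGraph, rng: random.Random) -> bool:
    """One random edge rectification step for the out-degree bounds of Eq. (ki).
    Moves a randomly chosen out-edge from a node whose out-degree exceeds
    ceil(M/N) to a node whose out-degree is below floor(M/N).  Returns False
    when no such pair of nodes (or no valid edge to move) exists."""
    N, M = G.number_of_nodes(), G.number_of_edges()
    lo, hi = math.floor(M / N), math.ceil(M / N)
    poor = [v for v in G.nodes if G.out_degree(v) < lo]
    rich = [v for v in G.nodes if G.out_degree(v) > hi]
    if not poor or not rich:
        return False
    i, k = rng.choice(poor), rng.choice(rich)
    # Keep M unchanged: skip targets that would create a duplicate edge or self-loop.
    candidates = [l for _, l in G.out_edges(k) if l != i and not G.has_edge(i, l)]
    if not candidates:
        return False
    l = rng.choice(candidates)
    G.remove_edge(k, l)          # delete A_{k,l} ...
    G.add_edge(i, l)             # ... and add A_{i,l}
    return True

rng = random.Random(0)
G = nx.gnm_random_graph(50, 250, directed=True, seed=0)   # N = 50, M = 250, M/N = 5
steps = 0
while rer_step(G, rng):
    steps += 1
out_degrees = dict(G.out_degree()).values()
print(steps, min(out_degrees), max(out_degrees))
```

Repeated application of such steps pushes the minimum and maximum out-degrees towards $\floor{M/N}$ and $\ceil{M/N}$, which is the trend visible in the degree-heterogeneity and degree-distribution plots discussed here.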
It is obvious that increasing the number of RER operations significantly enhances the robustness of controllability. The curves are distinguishable even when only $100$ RER operations are performed on the networks, showing that the operations have a clear impact.

![\[Color online\] Proportion of random node-removals (denoted by $P_N$) against the number of RER operations to disconnect a network: (A) ER, and (B) SF.[]{data-label="fig:er_sf_t_vs_r"}](fig/ersf_pn_vs_rer.pdf){width="\linewidth"}

Figure \[fig:er\_sf\_t\_vs\_r\] shows the proportion of random node-removals needed to disconnect a network, against the number of RER operations. The data are averaged over $100$ repeated runs. In this figure, a higher $P_N$ value means that more node-removals are needed to disconnect the network, namely, the network has better robustness of connectedness. For both ER and SF, the robustness of connectedness is improved as the number of RER operations increases.

![\[Color online\] Out-degree distribution changes as the number of RER operations increases: (A) ER, and (B) SF. (In-degree distribution can be found in Fig. S15 of the SI.)[]{data-label="fig:er_sf_pKo"}](fig/er_sf_pKo.pdf){width="\linewidth"}

As shown in Figs. \[fig:ho\_vs\_pn\] and \[fig:hi\_vs\_pn\], the RER operations make a network gradually become homogeneous. Fig. \[fig:er\_sf\_pKo\] shows the change of the out-degree distributions of ER and SF. Fig. \[fig:er\_sf\_pKo\](A) shows that the out-degree distribution of the original ER is Poisson, but as the number of RER operations increases, the out-degree distribution becomes concentrated at $M/N$ ($M/N=5$ in the figure). Fig. \[fig:er\_sf\_pKo\](B) shows that the original power-law distribution of SF also concentrates at $M/N$. An unlimited number of RER operations makes both ER and SF (and any other network) extremely homogeneous.

Two Real-world Networks {#sub:rwn}
-----------------------

  Network                      Description                                                                                                   $N$    $M$
  ---------------------------- ------------------------------------------------------------------------------------------------------------- ------ -------
  email-Eu-core (EE)           a directed network generated using email data from a European research institution                             1005   24929
  Gnutella peer-to-peer (GP)   a directed network generated from snapshots of the Gnutella peer-to-peer file sharing network in August 2002   6301   20777

  : Parameters and descriptions of the two real-world networks (the number of edges $M$ of EE is $25571$ in [@SNAP2014]; after discarding self-loops, it becomes $24929$). \[tab:rwn\]

![\[Color online\] Robustness of *structural* controllability of (A) EE, and (B) GP. $n_D$ represents the density of controlled nodes calculated by Eq. (\[eq:nd\]); $P_N$ represents the proportion of removed nodes.
[]{data-label="fig:rwn_ndsc_vs_pn"}](fig/rwn_ndsc_vs_pn.pdf){width="\linewidth"} ![\[Color online\] Proportion of random node-removals (denoted by $P_N$) against the number of RER operations to disconnect a network: (A) EE, and (B) GP.[]{data-label="fig:rwn_pn_vs_rer"}](fig/rwn_pn_vs_rer.pdf){width="\linewidth"} Consider two real-world networks, namely email-Eu-core (EE) network and Gnutella peer-to-peer (GP) network [@SNAP2014]. Their parameters and brief descriptions are presented in Table \[tab:rwn\]. The original EE and GP (with RER$=0$) are compared to the settings with RER $=\{100,1000,5000,10000,\text{Inf}\}$. The controllability curves are shown in Fig. \[fig:rwn\_ndsc\_vs\_pn\]. A phenomenon similar to the simulations on synthetic networks is observed: the robustness of controllability is improved as the number of RER operations increases. The controllability curves of the original EE and GP (with RER$=0$) are far away from the curves that exactly satisfy the ENC (with RER$=\text{Inf}$). This clearly shows that real-world networks are far away from having optimal robustness of controllability. In Fig. \[fig:rwn\_pn\_vs\_rer\], the boxplot presents the proportion of node-removals needed to break the network into disjoint components, for EE and GP respectively. It can be observed that the original real-world networks are quite fragile, but after rectified with RER operations, their robustness of connectedness is significantly improved. Conclusions {#sec:end} =========== This paper presents a search for the network configuration with optimal robustness of controllability against random node-removal attacks. Since analytical approach is impossible at least in this time, the exhaustive attack strategy that applies all possible attack sequences is applied. Since this too is an intractable attempt even for the numerical approach, the work is performed on some very small-size networks. This nevertheless yields clear determined patterns of optimal solutions, which suggests an empirical necessary condition (ENC), indicating that the optimal instance of network configuration should be extremely homogeneous. ENC rules out the network instances that would not be candidates having optimal robustness of controllability. A random edge rectification (RER) strategy is then proposed to rectify synthetic and real-world networks towards exact satisfaction of the ENC, which also provides a way to enhance the robustness of controllability. The observed ENC may be useful in designing future network models. The phenomenon observed in this paper has an important implication that real-world networks as well as the commonly-used synthetic models are actually far away from the topologies with optimal robustness of controllability. Future work along the same line may be extended to other scenarios of malicious attacks, e.g., edge-removal and intentional attacks. [^1]: Y. Lou, K. F. Tsang and G. Chen are with the Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China (e-mails: felix.lou@my.cityu.edu.hk; ee330015@cityu.edu.hk; eegchen@cityu.edu.hk). [^2]: L. Wang is with the Department of Automation, Shanghai Jiao Tong University, Shanghai 200240, China, and also with the Key Laboratory of System Control and Information Processing, Ministry of Education, Shanghai 200240, China (e-mail: wanglin@sjtu.edu.cn). [^3]: Supported by the Hong Kong ITF Grant CityU ITP/058/17LP, the National Natural Science Foundation of China under Grant No. 
61873167, and the Natural Science Foundation of Shanghai (No. 17ZR1445200). [^4]: (*Corresponding author: Guanrong Chen.*) [^5]: <https://snap.stanford.edu/data/email-Eu-core.html> [^6]: <https://snap.stanford.edu/data/p2p-Gnutella08.html>
{ "pile_set_name": "ArXiv" }
--- abstract: 'We observed, for the first time, solar neutrinos in the 1.0–1.5MeV energy range. We measured the rate of  solar neutrino interactions in Borexino to be (3.1$\pm$0.6$_{\rm stat}$$\pm$0.3$_{\rm syst}$) and provided a constraint on the  solar neutrino interaction rate of $<$7.9 (95% C.L.). The absence of the solar neutrino signal is disfavored at 99.97% C.L., while the absence of the  signal is disfavored at 98% C.L. This unprecedented sensitivity was achieved by adopting novel data analysis techniques for the rejection of cosmogenic , the dominant background in the 1–2MeV region. Assuming the MSW-LMA solution to solar neutrino oscillations, these values correspond to solar neutrino fluxes of  and $<$ (95% C.L.), respectively, in agreement with the Standard Solar Model. These results represent the first measurement of the  neutrino flux and the strongest constraint of the  solar neutrino flux to date.' author: - 'G. Bellini' - 'J. Benziger' - 'D. Bick' - 'S. Bonetti' - 'G. Bonfini' - 'D. Bravo' - 'M. Buizza Avanzini' - 'B. Caccianiga' - 'L. Cadonati' - 'F. Calaprice' - 'C. Carraro' - 'P. Cavalcante' - 'A. Chavarria' - 'D. D’Angelo' - 'S. Davini' - 'A. Derbin' - 'A. Etenko' - 'K. Fomenko' - 'D. Franco' - 'C. Galbiati' - 'S. Gazzana' - 'C. Ghiano' - 'M. Giammarchi' - 'M. Goeger-Neff' - 'A. Goretti' - 'L. Grandi' - 'E. Guardincerri' - 'S. Hardy' - Aldo Ianni - Andrea Ianni - 'D. Korablev' - 'G. Korga' - 'Y. Koshio' - 'D. Kryn' - 'M. Laubenstein' - 'T. Lewke' - 'E. Litvinovich' - 'B. Loer' - 'F. Lombardi' - 'P. Lombardi' - 'L. Ludhova' - 'I. Machulin' - 'S. Manecki' - 'W. Maneschg' - 'G. Manuzio' - 'Q. Meindl' - 'E. Meroni' - 'L. Miramonti' - 'M. Misiaszek' - 'D. Montanari' - 'P. Mosteiro' - 'V. Muratova' - 'L. Oberauer' - 'M. Obolensky' - 'F. Ortica' - 'K. Otis' - 'M. Pallavicini' - 'L. Papp' - 'L. Perasso' - 'S. Perasso' - 'A. Pocar' - 'J. Quirk' - 'R.S. Raghavan' - 'G. Ranucci' - 'A. Razeto' - 'A. Re' - 'A. Romani' - 'A. Sabelnikov' - 'R. Saldanha' - 'C. Salvo' - 'S. Schönert' - 'H. Simgen' - 'M. Skorokhvatov' - 'O. Smirnov' - 'A. Sotnikov' - 'S. Sukhotin' - 'Y. Suvorov' - 'R. Tartaglia' - 'G. Testera' - 'D. Vignaud' - 'R.B. Vogelaar' - 'F. von Feilitzsch' - 'J. Winter' - 'M. Wojcik' - 'A. Wright' - 'M. Wurm' - 'J. Xu' - 'O. Zaimidoroga' - 'S. Zavatarelli' - 'G. Zuzel' title: First evidence of  solar neutrinos by direct detection in Borexino --- Over the past 40 years solar neutrino experiments [@bib:rchem-cl; @bib:rchem-ga; @bib:kamiokande; @bib:sno; @bib:bxbe7] have proven to be sensitive tools to test both astrophysical and elementary particle physics models. Solar neutrino detectors have demonstrated that stars are powered by nuclear fusion reactions. Two distinct processes, the main  fusion chain and the sub-dominant  cycle, are expected to produce solar neutrinos with different energy spectra and fluxes. Until now only fluxes from the  chain have been measured: , , and, indirectly, . Experiments involving solar neutrinos and reactor anti-neutrinos [@bib:kamland] have shown that solar neutrinos undergo flavor oscillations. Results from solar neutrino experiments are consistent with the Mikheyev-Smirnov-Wolfenstein Large Mixing Angle (MSW-LMA) model [@bib:msw], which predicts a transition from vacuum-dominated to matter-enhanced oscillations, resulting in an energy dependent $\nu_e$ survival probability, $P_{ee}$. Non-standard neutrino interaction models formulate $P_{ee}$ curves that deviate significantly from MSW-LMA, particularly in the 1–4MeV transition region, see e.g. 
[@bib:nonstandard]. The mono-energetic 1.44MeV  neutrinos, which belong to the  chain and whose Standard Solar Model (SSM) predicted flux has one of the smallest uncertainties (1.2%) due to the solar luminosity constraint [@bib:ssm2011], are an ideal probe to test these competing hypotheses. The detection of neutrinos resulting from the  cycle has important implications in astrophysics, as it would be the first direct evidence of the nuclear process that is believed to fuel massive stars ($>$1.5$M_{\odot}$). Furthermore, its measurement may help to resolve the solar metallicity problem [@bib:metallicity; @bib:ssm2011]. The energy spectrum of neutrinos from the  cycle is the sum of three continuous spectra with end point energies of 1.19 (), 1.73 () and 1.74MeV (), close to the  neutrino energy. The total  flux is similar to that of the  neutrinos but its predicted value is strongly dependent on the inputs to the solar modeling, being 40% higher in the High Metallicity (GS98) than in the Low Metallicity (AGSS09) solar model [@bib:ssm2011]. Neutrinos interact through elastic scattering with electrons in the $\sim$278 ton organic liquid scintillator target of Borexino [@bib:bxdetectorpaper]. The electron recoil energy spectrum from  neutrino interactions in Borexino is a Compton-like shoulder with end point of 1.22MeV. High light yield and unprecedentedly low background levels [@bib:bxbe7; @bib:bxliquid] make Borexino the only detector presently capable of performing solar neutrino spectroscopy below 2MeV. Its potential has already been demonstrated in the precision measurement of the 0.862MeV  solar neutrino flux [@bib:bxbe7; @bib:daynight]. The detection of  and  neutrinos is even more challenging, as their expected interaction rates are $\sim$10 times lower, a few counts per day in a 100ton target. ![ Top: energy spectra of the events in the FV before and after the TFC veto is applied. The solid and dashed blue lines show the data and estimated  rate before any veto is applied. The solid black line shows the data after the procedure, in which the  contribution (dashed) has been greatly suppressed. The next largest background, , and the electron recoil spectra of the best estimate of the  neutrino rate and of the upper limit of the  neutrino rate are shown for reference. Rate values in the legend are quoted in units of counts/(day$\cdot$100metricton). Bottom: residual energy spectrum after best-fit rates of all considered backgrounds are subtracted. The electron recoil spectrum from  neutrinos at the best-fit rate is shown for comparison. []{data-label="fig:tfc"}](tfc.pdf "fig:"){width="\linewidth"} ![ Top: energy spectra of the events in the FV before and after the TFC veto is applied. The solid and dashed blue lines show the data and estimated  rate before any veto is applied. The solid black line shows the data after the procedure, in which the  contribution (dashed) has been greatly suppressed. The next largest background, , and the electron recoil spectra of the best estimate of the  neutrino rate and of the upper limit of the  neutrino rate are shown for reference. Rate values in the legend are quoted in units of counts/(day$\cdot$100metricton). Bottom: residual energy spectrum after best-fit rates of all considered backgrounds are subtracted. The electron recoil spectrum from  neutrinos at the best-fit rate is shown for comparison. []{data-label="fig:tfc"}](residuals.pdf "fig:"){width="\linewidth"} ![Experimental distribution of the pulse shape parameter (black). 
The best-fit distribution (black dashed) and the corresponding $e^-$ (red) and $e^+$ (blue) contributions are also shown.[]{data-label="fig:bdt"}](bdt.pdf){width="\linewidth"} We adopted novel analysis procedures to suppress the dominant background in the 1–2MeV energy range, the cosmogenic $\beta^+$-emitter  (lifetime: 29.4 min).  is produced in the scintillator by cosmic muon interactions with  nuclei. The muon flux through Borexino is $\sim$4300$\mu$/day, yielding a  production rate of $\sim$27. In 95% of the cases at least one free neutron is spalled in the  production process [@bib:c11cris], and then captured in the scintillator with a mean time of 255$\mu$s [@bib:bxmuon]. The  background can be reduced by performing a space and time veto after coincidences between signals from the muons and the cosmogenic neutrons [@bib:deutsch; @bib:pep-ctf], discarding exposure that is more likely to contain  due to the correlation between the parent muon, the neutron and the subsequent  decay (the Three-Fold Coincidence, TFC). The technique relies on the reconstructed track of the muon and the reconstructed position of the neutron-capture $\gamma$-ray [@bib:bxmuon]. The rejection criteria were chosen to obtain the optimal compromise between  rejection and preservation of fiducial exposure, resulting in a  rate of (2.5$\pm$0.3), (9$\pm$1)$\%$ of the original rate, while preserving 48.5% of the initial exposure. The resulting spectrum (Fig. \[fig:tfc\], top) corresponds to a fiducial exposure of 20409 ton$\cdot$day, consisting of data collected between January 13, 2008 and May 9, 2010. The  surviving the TFC veto is still a significant background. We exploited the pulse shape differences between $e^-$ and $e^+$ interactions in organic liquid scintillators [@bib:annihilation; @bib:positronium] to discriminate  $\beta^+$ decays from neutrino-induced $e^-$ recoils and $\beta^-$decays. A slight difference in the time distribution of the scintillation signal arises from the finite lifetime of ortho-positronium as well as from the presence of annihilation $\gamma$-rays, which present a distributed, multi-site event topology and a larger average ionization density than electron interactions. An optimized pulse shape parameter was constructed using a boosted-decision-tree algorithm [@bib:tmva], trained with a TFC-selected set of  events ($e^+$) and  events ($e^-$) selected by the fast  $\alpha$-$\beta$ decay sequence. We present results of an analysis based on a binned likelihood multivariate fit performed on the energy, pulse shape, and spatial distributions of selected scintillation events whose reconstructed position is within the fiducial volume (FV), i.e. less than 2.8m from the detector center and with a vertical position relative to the detector center between -1.8m and 2.2m. We confirmed the accuracy of the modeling of the detector response function used in the fit by means of an extensive calibration campaign with $\alpha$, $\beta$, $\gamma$ and neutron sources deployed within the active target [@bib:bxbe7]. The distribution of the pulse shape parameter (Fig. \[fig:bdt\]) was a key element in the multivariate fit, where decays from cosmogenic  (and ) were considered $e^+$ and all other species $e^-$. The energy spectra and spatial distribution of the external $\gamma$-ray backgrounds have been obtained from a full, Geant4-based Monte Carlo simulation, starting with the radioactive decays of contaminants in the detector peripheral structure and propagating the particles into the active volume. 
We validated the simulation with calibration data from a high-activity $^{228}$Th source [@maneschg] deployed in the outermost buffer region, outside the active volume. The non-uniform radial distribution of the external background was included in the multivariate fit and strongly constrained its contribution. Neutrino-induced $e^-$ recoils and internal radioactive backgrounds were assumed to be uniformly distributed. Fig. \[fig:rdist\] shows the radial component of the fit. ![ Experimental distribution of the radial coordinate of the reconstructed position within the FV (black). The best-fit distribution (black dashed) and the corresponding contributions from bulk events (red) and external $\gamma$-rays (blue) are also shown. []{data-label="fig:rdist"}](rdist.pdf){width="\linewidth"} We removed $\alpha$ events from the energy spectrum by the method of statistical subtraction [@bib:bxbe7]. We excluded from the fit all background species whose rates were estimated to be less than 5% of the predicted rate from  neutrinos in the energy region of interest. Furthermore, we constrained all rates to positive values. The thirteen species left free in the fit were the internal radioactive backgrounds , , , , , , and  (from  decay chain), electron recoils from , , and  solar neutrinos, and external $\gamma$-rays from , , and . We fixed the contribution from  solar neutrinos to the SSM predicted rate (assuming MSW-LMA with $\tan^2\theta_{12}$=0.47$^{+0.05}_{-0.04}$, $\Delta m^2_{12}$=eV$^2$ [@bib:pdg2010]) and the contribution from  neutrinos to the rate from the measured flux [@bib:sno]. We fixed the rate of the radon daughter  using the measured rate of  delayed coincidence events. Simultaneously to the fit of events surviving the TFC veto, we also fit the energy spectrum of events rejected by the veto, corresponding to the remaining 51.5% of the exposure. We constrained the rate for every non-cosmogenic species to be the same in both data sets, since only cosmogenic isotopes are expected to be correlated with neutron production. Fits to simulated event distributions, including all species and variables considered for the data fit, returned results for the  and  neutrino interaction rates that were unbiased and uncertainties that were consistent with frequentist statistics. These tests also yielded the distribution of best-fit likelihood values, from which we determined the p-value of our best-fit to the real data to be 0.3. Table \[tab:results-summary\] summarizes the results for the  and  neutrino interaction rates. The absence of the solar neutrino signal was rejected at 99.97% C.L. using a likelihood ratio test between the result when the  and  neutrino interaction rates were fixed to zero and the best-fit result. Likewise, the absence of a  neutrino signal was rejected at 98% C.L. Due to the similarity between the electron-recoil spectrum from CNO neutrinos and the spectral shape of , whose rate is $\sim$10 times greater, we can only provide an upper limit on the  neutrino interaction rate. The 95% C.L. limit reported in Table \[tab:results-summary\] has been obtained from a likelihood ratio test with the  neutrino rate fixed to the SSM prediction [@bib:ssm2011] under the assumption of MSW-LMA, (2.80$\pm$0.04), which leads to the strongest test of the solar metallicity. For reference, Fig. \[fig:dchi2\] shows the full $\Delta \chi^2$ profile for  and  neutrino interaction rates. 
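As an aside on the statistical machinery (a generic sketch, not the collaboration's analysis code): if Wilks' theorem applies, the likelihood-ratio statistic is asymptotically $\chi^2$-distributed, so a $\Delta\chi^2$ value can be translated into a confidence level, and vice versa, in a few lines of Python. The numbers below are illustrative textbook thresholds, not values taken from this fit.

```python
from scipy import stats

def delta_chi2_to_cl(delta_chi2: float, dof: int = 1) -> float:
    """Confidence level at which the tested hypothesis is disfavored,
    assuming the likelihood-ratio statistic follows a chi-square
    distribution with `dof` degrees of freedom (Wilks' theorem)."""
    return stats.chi2.cdf(delta_chi2, dof)

def cl_to_delta_chi2(cl: float, dof: int = 1) -> float:
    """Inverse mapping: the Delta chi^2 threshold for a given confidence level."""
    return stats.chi2.ppf(cl, dof)

print(delta_chi2_to_cl(3.84))    # ~0.95: the familiar one-parameter 95% C.L. threshold
print(cl_to_delta_chi2(0.9997))  # ~13.1: Delta chi^2 needed to disfavor a hypothesis at 99.97% C.L.
```

In practice the profiles in Fig. \[fig:dchi2\] come from the full multivariate fit described above, so they should be read directly rather than through this asymptotic shortcut.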
------- ----------------------------------------------- ------------------ ------------- $\nu$ Interaction rate Solar-$\nu$ flux Data/SSM \[\] \[\] ratio $3.1 \pm 0.6_{\rm stat} \pm$ 0.3$_{\rm syst}$ $1.6\pm0.3$ $1.1\pm0.2$ $<7.9$ ($<7.1_{\rm stat\,only}$) $<7.7$ $<1.5$ ------- ----------------------------------------------- ------------------ ------------- : The best estimates for the  and  solar neutrino interaction rates. For the results in the last two columns both statistical and systematic uncertainties are considered. Total fluxes have been obtained assuming MSW-LMA and using the scattering cross-sections from [@bib:BahcallRadiativeCorrection; @bib:pdg2010; @bib:erlerRadCorr] and a scintillator $e^-$ density of ton$^{-1}$. The last column gives the ratio between our measurement and the High Metallicity (GS98) SSM [@bib:ssm2011]. []{data-label="tab:results-summary"} --------------- ------------------ ----------------------- Background Interaction rate Expected rate \[\] \[\] $19^{+5}_{-3}$ $30\pm6$ [@bib:bxbe7] $55^{+3}_{-5}$ – $27.4\pm0.3$ $28\pm5$ $0.6\pm0.2$ $0.54\pm0.04$ $<2$ $0.31\pm0.04$ $<0.4$ – $<0.5$ $0.57\pm0.05$ Ext. $\gamma$ $2.5\pm0.2$ – --------------- ------------------ ----------------------- : The best estimates for the total rates of the background species included in the fit. The statistical and systematic uncertainties were added in quadrature. The expected rates for the cosmogenic isotopes ,  and  have been obtained following the methodology outlined in [@bib:bxb8]. The expected  rate was determined from the  measured coincidence rate, under the assumption of secular equilibrium. Ext. $\gamma$ includes the estimated contributions from ,  and  external $\gamma$-rays. []{data-label="tab:bkg"} The estimated  neutrino interaction rate is consistent with our measurement [@bib:bxbe7]. Table \[tab:bkg\] summarizes the estimates for the rates of the other background species. The higher rate of  compared to [@bib:bxbe7] is due to the exclusion of data from 2007, when the observed decay rate of  in the FV was smallest. Table \[tab:syst\] shows the relevant sources of systematic uncertainty. To evaluate the uncertainty associated with the fit methods we have performed fits changing the binning of the energy spectra, the fit range and the energy bins for which the radial and pulse-shape parameter distributions were fit. This has been done for energy spectra constructed from both the number of PMTs hit and the total collected charge in the event. Further systematic checks have been carried out regarding the stability of the fit over different exposure periods, the spectral shape of the external $\gamma$-ray background and electron recoils from  neutrinos, the fixing of  and  and  neutrinos to their expected values, and the exclusion of minor radioactive backgrounds (short-lived cosmogenics and decays from the  chain) from the fit. ![ $\Delta\chi^2$ profile obtained from likelihood ratio tests between fit results where the  and  neutrino interaction rates are fixed to particular values (all other species are left free) and the best-fit result. 
[]{data-label="fig:dchi2"}](dchi2.pdf){width="\linewidth"} Source \[%\] -------------------------------------------------------- ------------------ Fiducial exposure $^{+0.6}_{-1.1}$ Energy response $\pm4.1$  spectral shape $^{+1.0}_{-5.0}$ Fit methods $\pm5.7$ Inclusion of independent  estimate $^{+3.9}_{-0.0}$ $\gamma$-rays in pulse shape distributions $\pm2.7$ Statistical uncertainties in pulse shape distributions $\pm5$ Total systematic uncertainty $\pm10$ : Relevant sources of systematic uncertainty and their contribution in the measured  neutrino interaction rate. These systematics increase the upper limit in the  neutrino interaction rate by 0.8. []{data-label="tab:syst"} Table \[tab:results-summary\] also shows the solar neutrino fluxes inferred from our best estimates of the  and  neutrino interaction rates, assuming the MSW-LMA solution, and the ratio of these values to the High Metallicity (GS98) SSM predictions [@bib:ssm2011]. Both results are consistent with the predicted High and Low Metallicity SSM fluxes assuming MSW-LMA. Under the assumption of no neutrino flavor oscillations, we would expect a  neutrino interaction rate in Borexino of (4.47$\pm$0.05); the observed interaction rate disfavors this hypothesis at 97% C.L. If this discrepancy is due to $\nu_e$ oscillation to $\nu_\mu$ or $\nu_\tau$, we find =0.62$\pm$0.17 at 1.44MeV. This result is shown alongside other solar neutrino  measurements in Fig. \[fig:pee\]. The MSW-LMA prediction is shown for comparison. ![ Electron neutrino survival probability as a function of energy. The red line corresponds to the measurement presented in this letter. The  and  measurements of  given in [@bib:bxbe7] are also shown. The  measurements of  were obtained from [@bib:kamiokande; @bib:sno; @bib:bxb8], as indicated in the legend. The MSW-LMA prediction band is the $1\sigma$ range of the mixing parameters given in [@bib:pdg2010]. []{data-label="fig:pee"}](pee.pdf){width="\linewidth"} We have achieved the necessary sensitivity to provide, for the first time, evidence of the rare signal from  neutrinos and to place the strongest constraint on the  neutrino flux to date. This has been made possible by the combination of the extremely low levels of intrinsic background in Borexino, and the implementation of novel background discrimination techniques. This result raises the prospect for higher precision measurements of  and  neutrino interaction rates, if the next dominant background, , is further reduced by scintillator re-purification. The Borexino program is made possible by funding from INFN (Italy), NSF (USA), BMBF, DFG and MPG (Germany), NRC Kurchatov Institute (Russia), and MNiSW (Poland). We acknowledge the generous support of the Gran Sasso National Laboratories (LNGS). [00]{} B.T. Cleveland et al., Ap. J. [**496**]{}, 505 (1998); K. Lande and P. Wildenhain, Nucl. Phys. B (Proc. Suppl.) [**118**]{}, 49 (2003); R. Davis, Nobel Prize Lecture (2002). F. Kaether et al., Phys. Lett. B [**685**]{}, 47 (2010); W. Hampel et al. (GALLEX Collaboration), Phys. Lett. B [**447**]{}, 127 (1999); J.N. Abdurashitov et al. (SAGE collaboration), Phys. Rev. C [**80**]{}, 015807 (2009). K.S. Hirata et al. (KamiokaNDE Collaboration), Phys. Rev. Lett. [**63**]{}, 16 (1989); Y. Fukuda et al. (Super-Kamiokande Collaboration), Phys. Rev. Lett. [**81**]{}, 1562 (1998); J.P. Cravens et al. (SuperKamiokaNDE Collaboration), Phys. Rev. D [**78**]{}, 032002 (2008). Q.R. Ahmad et al. (SNO Collaboration), Phys. Rev. Lett. [**87**]{}, 071301 (2001); B. 
Aharmim et al. (SNO Collaboration), Phys. Rev. C [**75**]{}, 045502 (2007); B. Aharmim et al. (SNO Collaboration), Phys. Rev. C [**81**]{}, 055504 (2010); B. Aharmin et al. (SNO Collaboration), . C. Arpesella et al. (Borexino Collaboration), Phys. Lett. B [**658**]{}, 101 (2008); C. Arpesella et al. (Borexino Collaboration), Phys. Rev. Lett. [**101**]{}, 091302 (2008); G. Bellini et al. (Borexino Collaboration), Phys. Rev. Lett. [**107**]{}, 141302 (2011). S. Abe et al. (KamLAND Collaboration), Phys. Rev. Lett. [**100**]{}, 221803 (2008). S.P. Mikheyev and A.Yu. Smirnov, Sov. J. Nucl. Phys. [**42**]{}, 913 (1985); L. Wolfenstein, Phys. Rev. D [**17**]{}, 2369 (1978); P.C. de Holanda and A.Yu. Smirnov, JCAP [**0302**]{}, 001 (2003). A. Friedland et al., Phys. Lett. B [**594**]{}, 347 (2004); S. Davidson et al., JHEP [**0303**]{}, 011 (2003); P.C. de Holanda and A. Yu. Smirnov, Phys. Rev. D [**69**]{}, 113002 (2004); A. Palazzo and J.W.F. Valle, Phys. Rev. D [**80**]{}, 091301 (2009). A.M. Serenelli, W.C. Haxton and C. Peña-Garay, . S. Basu, ASP Conference Series [**416**]{}, 193 (2009). G. Alimonti et al. (Borexino Collaboration), Nucl. Instr. and Meth. A [**600**]{}, 568 (2009). G. Alimonti et al. (Borexino Collaboration), Nucl. Instr. and Meth. A [**609**]{}, 58 (2009). G. Bellini et al. (Borexino Collaboration), . C. Galbiati, A. Pocar, D. Franco, A. Ianni, L. Cadonati, and S. Schönert, Phys. Rev. C [**71**]{}, 055805 (2005). G. Bellini et al. (Borexino Collaboration), JINST [**6**]{}, P05005 (2011). M. Deutsch, [*“Proposal for a Cosmic Ray Detection System for the Borexino Solar Neutrino Experiment”*]{}, Massachusetts Institute of Technology, Cambridge, MA (1996). H. Back et al. (Borexino Collaboration), Phys. Rev. C [**74**]{}, 045805 (2006). Y. Kino et al., Jour. Nucl. Radiochem. Sci [**1**]{}, 63 (2000). D. Franco, G. Consolati, and D. Trezzi, Phys. Rev. C [**83**]{}, 015504 (2011). TMVA Users Guide, [http://tmva.sourceforge.net/docu/ TMVAUsersGuide.pdf](http://tmva.sourceforge.net/docu/ TMVAUsersGuide.pdf). W. Maneschg et al., . Review of Particle Physics, K. Nakamura et al. (Particle Data Group), J. Phys. G [**37**]{}, 075021 (2010). J.N. Bahcall, M. Kamionkowski and A. Sirlin, Phys. Rev. D [**51**]{}, 6146 (1995). J. Erler and M.J. Ramsey-Musolf, Phys. Rev. D [**72**]{}, 073003 (2005). G. Bellini et al. (Borexino Collaboration), Phys. Rev. D [**82**]{}, 033006 (2010).
--- author: - | M. J. Gillan$^{1,2,3}$, D. Alfè$^{1,2,3,4}$, P. J. Bygrave$^{5,6}$, C. R. Taylor$^{5}$ and F. R. Manby$^{5}$\ $^1$London Centre for Nanotechnology, Gordon St., London WC1H 0AH, UK\ $^2$Department of Physics and Astronomy, University College London\ Gower St., London WC1E 6BT, UK\ $^3$Thomas Young Centre, University College London\ Gordon St., London WC1H 0AH, UK\ $^4$Department of Earth Sciences, University College London\ Gower St., London WC1E 6BT, UK\ $^5$Centre for Computational Chemistry, School of Chemistry\ University of Bristol, Bristol BS8 1TS, UK\ $^6$Department of Chemistry, University of Southampton\ Highfield, Southampton SO17 1BJ, UK title: | Energy benchmarks for water clusters and ice structures\ from an embedded many-body expansion --- Introduction {#sec:intro} ============ Accurate energy benchmarks have long been important in calibrating methods for treating the energetics of molecular systems (see e.g. Refs. [@curtiss91; @lynch03; @jurecka06; @karton11]). We are concerned here with benchmarks for water systems, whose subtle hydrogen-bonding energetics has proved remarkably difficult to characterize [@schwegler04; @sit05; @santra08; @schmidt09; @wangj11; @santra11; @ma12]. Accurate energy benchmarks for small water clusters have often been used both to parameterize force fields [@burnham99; @bukowski07; @fanourgakis08; @kumar10; @wangy11; @wangy09; @babin12] and to assess electronic-structure methods [@santra08; @anderson06; @dahlke08; @wang10; @gillan12; @medders13]. However, cooperative many-body effects are very important in water [@ojamae94; @xantheas94; @lagutchenkov05; @santra07], and to characterize them fully we need benchmarks for larger aggregates of molecules, including condensed phases. We show here how recently developed embedding methods [@bygrave12] enable accurate energy benchmarks to be computed for large water clusters and ice structures using correlated wavefunction-based methods. The simplest wavefunction-based method for treating electron correlation is second-order Møller-Plesset theory (MP2) [@moller34; @szabo82; @helgaker00], which by good fortune is already quite accurate for water. The computational effort required by MP2 for a chosen basis set scales rapidly as $N^5$ with number of molecules $N$, but nevertheless benchmark MP2 energies within $\sim 0.1$ m$E_{\rm h}$/monomer of the complete basis-set (CBS) limit have been reported for clusters of up to about $20$ molecules. Exploratory MP2 calculations on ice structures have also been reported [@erba09; @hermann08]. The coupled-cluster technique at the CCSD(T) level (coupled cluster with single and double excitations and perturbative triples) [@szabo82; @helgaker00] is considerably more accurate than MP2, and is generally regarded as the “gold standard” for treating correlation. However, its challenging $N^7$ scaling has so far made it difficult to achieve a high degree of basis-set convergence for anything larger than the water hexamer [@olson07; @bates09], though ambitious attempts to apply it to large water aggregates have been reported [@yoo10]. We shall describe a method for computing MP2 and CCSD(T) benchmarks for large water aggregates which employs an embedded version of the widely used many-body expansion (MBE) [@xantheas94; @hankins70; @pedulla96]. 
In the standard form of MBE, the total energy $E_{\rm tot} ( 1, 2, \ldots N )$ of a system of $N$ monomers is expressed as: $$E_{\rm tot} ( 1, 2, \ldots N ) = \sum_i E^{(1)} ( i ) + \sum_{i < j} E^{(2)} ( i, j ) + \sum_{i < j < k} E^{(3)} ( i, j, k ) + \ldots \; . \label{eqn:MBE}$$ Here, the first term on the right is the sum of 1-body energies, where $E^{(1)} ( i )$ denotes the distortion energy of monomer $i$ in the absence of all the other monomers as a function of the relative coordinates specifying its geometry. Similarly, the second term is the sum of all the 2-body interaction energies, where $E^{(2)} ( i, j )$ is the energy of the dimer consisting of monomers $i$ and $j$ in the absence of all the other monomers minus the sum of 1-body energies $E^{(1)} ( i ) + E^{(1)} ( j )$. The higher-body terms are defined in a similar way, as explained in detail in many previous papers. A popular strategy [@bukowski07; @wangy11; @wangy09; @babin12; @medders13] for exploiting MP2 or CCSD(T) benchmarks to create force fields for water has been to use the benchmarks to create accurate parameterized representations of the low-order terms in the MBE, usually up to 3-body terms, and then to use a model for the monomer multipole moments and polarizabilities for the higher-body terms. Recent work [@babin12; @medders13] has made it clear that an accurate description of cooperative effects represented by these higher-body terms is important. The concept of “embedded” versions of MBE (we abbreviate to EMBE) has been discussed in a number of previous papers [@bygrave12; @manby12; @hirata05; @leverentz09; @wen12]. These versions have the same formal structure as eqn (\[eqn:MBE\]), but the $n$-body terms $E^{(n)}$ no longer refer to the energies of clusters of $n$ monomers in free space, but instead to the energies of these clusters embedded in an approximate representation of the potential due to all the other monomers in the system. In some molecular systems, the electron density distribution on each monomer changes substantially when the molecules form large aggregates. This is a strong effect in water [@coulson66; @silvestrelli99], where the dipole moment of the monomers increases from $1.86$ D in free space to $\sim 2.6$ D in ice, so that the interaction between a pair of molecules is appreciably affected by the presence of their neighbors. In the standard MBE, such effects are represented by higher-body terms in the series, but in EMBE the embedding potential causes them to appear already in the 1- and 2-body terms. The EMBE that we use here is the one reported recently by Bygrave *et al.* [@bygrave12], in which the embedding potential is a sum of the Coulomb field due to the electron densities of the monomers and a confining field arising from the Pauli repulsion due to these electron densities. A summary of this EMBE will be given in Sec. \[sec:techniques\]. Our strategy in the present work is to use the EMBE only for the correlation part of the energy, the Hartree-Fock part being computed accurately using other methods, as explained below. The same general idea underlies the incremental correlation method of Stoll and co-workers [@stoll92; @paulus06], a version of which has already been shown to be highly successful for the energetics of ice Ih [@hermann08]. We have two main aims in this work. First, we want to test the accuracy of EMBE truncated at 2-body level for the computation of the MP2 correlation energy of water systems. 
We do this by comparing the MP2 correlation energy of water clusters calculated directly by standard methods with values given by EMBE. The clusters used for these comparisons range from the 6-mer to the 16-mer, and we study both equilibrium configurations and configurations drawn from random thermal samples. We shall show that the 2-body-truncated EMBE approximation is remarkably accurate for MP2. However, CCSD(T) corrections to the correlation energy are not fully captured by truncation of the present form of EMBE at the 2-body level. The second aim of this work is to use EMBE to find out whether MP2 near the CBS limit gives an accurate account of the energetics of ice structures. We approach this question by computing the HF energy and MP2 correlation energy of the ice Ih, II and VIII crystal structures and comparing their sum with the results of experiment and benchmark quantum Monte Carlo (QMC) calculations. These ice structures are of great topical interest, because it has recently been shown that standard DFT methods have difficulty in accounting for their relative energies [@santra11]. We shall see that MP2 is surprisingly accurate for the energetics of the chosen ice structures, but that a full description requires the inclusion of beyond-2-body correlation, which is provided by CCSD(T). Techniques {#sec:techniques} ========== The total energy $E_{\rm tot} ( 1, 2, \ldots N)$ of an assembly of $N$ monomers is the sum of its Hartree-Fock energy and its correlation energy: $$E_{\rm tot} ( 1, 2, \ldots N ) = E_{\rm HF} ( 1, 2, \ldots N ) + E_{\rm corr} ( 1, 2, \ldots N ) \; .$$ Our EMBE techniques for computing $E_{\rm corr}$ and the completely separate techniques used for $E_{\rm HF}$ will be outlined in this Section. Correlation energy {#sec:correlation} ------------------ Consider first the correlation energy of a single monomer $i$. In standard MBE, this would be the correlation energy $E_{\rm corr}^{(1)} ( i )$ of the monomer in free space, with the collection of atomic positions specifying the monomer geometry denoted by the symbol $i$. However, the electronic state of monomer $i$ is changed by the presence of all the other monomers in the system, and the change can be approximately represented by defining the correlation energy ${\tilde{E}}_{\rm corr}^{(1)} ( i )$ of monomer $i$ in a suitably defined field due to the other monomers. This field depends on the collection of atomic positions of all the $N$ monomers except for $i$. The total embedded 1-body correlation energy is then the sum of all the ${\tilde{E}}_{\rm corr}^{(1)} ( i )$. In the same way, we can define the embedded 2-body correlation energy ${\tilde{E}}_{\rm corr}^{(2)} ( i, j )$ of dimer $( i, j )$, which depends on the field produced by all monomers except for $i$ and $j$. This is the total correlation energy ${\tilde{E}}_{\rm corr} ( i, j )$ of dimer $( i, j )$ in the field of all the other monomers minus the embedded 1-body correlation energies of $i$ and $j$: $${\tilde{E}}_{\rm corr}^{(2)} ( i, j ) = {\tilde{E}}_{\rm corr} ( i, j ) - {\tilde{E}}_{\rm corr}^{(1)} ( i ) - {\tilde{E}}_{\rm corr}^{(1)} ( j ) \; .$$ By extension, we can define embedded 3-body correlation energies ${\tilde{E}}_{\rm corr}^{(3)} ( i, j, k )$ and higher-body correlation energies.
The total correlation energy of the $N$-monomer system is thus decomposed according to the identity: $$E_{\rm corr} = \sum_i {\tilde{E}}_{\rm corr}^{(1)} ( i ) + \sum_{i < j} {\tilde{E}}_{\rm corr}^{(2)} ( i, j ) + \sum_{i < j < k} {\tilde{E}}_{\rm corr}^{(3)} ( i, j, k ) + \ldots \; \; .$$ Note that the definition of the terms in this identity is completely analogous to standard MBE for correlation energy, with the sole difference that the correlation energy of each $n$-mer is computed in the field representing the influence of all the other monomers. This EMBE is exact by construction, no matter what choice we make for the embedding potential, but its convergence properties will be strongly affected by this choice. Here we use the form of embedding potential due to Bygrave, Allan and Manby (BAM) [@bygrave12], which has been shown to work well for some molecular crystals. We shall show that it works so well for the MP2 correlation energy of water systems that the resulting EMBE can be truncated at the 2-body level without significant loss of accuracy. (We refer to 2-body-truncated EMBE in the following as EMBE-2.) The BAM embedding potential is constructed using iterative Hartree-Fock calculations on the embedded monomers. The iterative process is initiated by HF calculations on all $N$ monomers in their given geometry, each being treated as isolated in free space. This gives the HF ground-state electron density $\rho ( {\bf r} )$ for each monomer, which is represented in a basis of Gaussian functions centred on the atomic sites. The electron density of each monomer gives rise to a Coulomb field $V_{\rm coul}$ and a Pauli exchange-repulsion field $V_{\rm rep}$. The Coulomb field produced by each monomer is simply the sum of the fields due to the individual Gaussians. The approximation adopted for $V_{\rm rep}$ uses the fact [@wheatley90] that the exchange-repulsion energy between two molecules A and B can be quite accurately represented as $k S_{\rm AB}$, where $S_{\rm AB}$ is the overlap integral of their electron densities $\rho_{\rm A} ( {\bf r} )$ and $\rho_{\rm B} ( {\bf r} )$: $$S_{\rm AB} = \int d {\bf r} \, \rho_{\rm A} ( {\bf r} ) \rho_{\rm B} ( {\bf r} ) \; ,$$ and $k$ is a constant depending on the species involved. With this motivation, the exchange-repulsion potential due to a monomer having density $\rho ( {\bf r} )$ is assumed to be simply $k \rho ( {\bf r} )$. The initial approximation for the embedding field acting on any monomer $i$ is then the superposition of the fields $V_{\rm coul} + V_{\rm rep}$ coming from all the other monomers. The HF ground-state calculation on each monomer is now repeated, but this time in the initial approximation for the embedding potential. This yields a new electron distribution for each monomer, which is then used to recompute the potentials $V_{\rm coul}$ and $V_{\rm rep}$, and the process is repeated to self-consistency within a specified tolerance. The embedding potential constructed in this way is then used without further change in calculating the correlation energies of the monomers, dimers, etc..., from which the energies of the EMBE are extracted. We note the physical motivation underlying the BAM embedding potential. The substantial electron redistribution in water and some other systems caused by assembling monomers into larger aggregates can be expected to change the correlation energy. 
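To make the structure of the iterative construction just described more concrete, the following toy sketch mimics it with classical point dipoles: each "monomer" is re-solved in the current field of all the others, the fields are rebuilt from the updated monomers, and the cycle repeats until nothing changes within a tolerance. This is only a structural analogy, not the BAM implementation; in BAM the monomer-level step is a Hartree-Fock calculation and the fields are the Coulomb and $k \rho$ repulsion potentials described above, whereas here the monomer "state" is a single induced dipole moment and all numerical values are illustrative.

```python
# Toy illustration of an iterate-to-self-consistency embedding loop:
# mutually polarizing collinear point dipoles (all quantities in a.u.).
import numpy as np

def self_consistent_dipoles(z, p0=0.73, alpha=9.9, tol=1e-10, max_iter=200):
    """Collinear point dipoles at positions z (bohr) with permanent moment p0
    and polarizability alpha: each dipole is re-solved in the field of the
    others until the moments stop changing."""
    mu = np.full(len(z), float(p0))      # start from the isolated-monomer moments
    for _ in range(max_iter):
        field = np.zeros(len(z))
        for i in range(len(z)):
            for j in range(len(z)):
                if j != i:               # on-axis field of a collinear dipole: 2 mu / r^3
                    field[i] += 2.0 * mu[j] / abs(z[i] - z[j]) ** 3
        mu_new = p0 + alpha * field      # monomer-level "solve" in the current field
        if np.max(np.abs(mu_new - mu)) < tol:
            return mu_new                # self-consistency reached
        mu = mu_new
    raise RuntimeError("self-consistency loop did not converge")

# A short chain with roughly ice-like spacing (~5.2 bohr): the converged moments
# come out enhanced relative to p0, a toy analogue of the monomer dipole
# enhancement in condensed water mentioned earlier.
print(self_consistent_dipoles(np.array([0.0, 5.2, 10.4, 15.6])))
```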
The intrinsically many-body nature of this redistribution is accounted for by the self-consistent iterative procedure in BAM, which describes cooperative effects due to both long-range Coulomb polarization and short-range exchange-repulsion. This gives some reason to expect that the influence of cooperative electron redistribution on the correlation energy may be adequately described by truncating EMBE at the 2-body level. The BAM form of EMBE that we have outlined can be applied to both molecular clusters and molecular crystals, and we shall present the results of both types of calculation, performed using a development version of the [molpro]{} code [@molpro10; @molpro12]. The calculations on crystals require the use of periodic boundary conditions, and Ewald techniques are used to handle the long-range Coulomb parts of the embedding potentials, as described in Ref. [@bygrave12]. For periodic systems, the embedded 2-body correlation terms involving any given monomer in the primary cell should in principle be summed over all monomers and their images in all cells, but in practice we set a spatial cut-off radius $R_c$ beyond which correlation terms are neglected. Since the correlation energies are expected to fall off with distance $R$ as $1 / R^6$, extrapolation to the $R_c \rightarrow \infty$ limit is straightforward, as we show later. Both the calculation of the self-consistent embedding potentials and the computation of the 1- and 2-body embedded correlation energies can readily be distributed over parallel processors, and it is efficient to do so. Hartree-Fock energy {#sec:HF} ------------------- The accurate computation of $E_{\rm HF}$ for small and moderate clusters, performed here using [molpro]{} [@molpro10; @molpro12], is completely standard, and needs no further comment. The computation of the Hartree-Fock energy of crystals has a long history [@pisani80; @pisani96; @paier05; @gillan08; @guidon09; @paier09], but basis-set convergence of $E_{\rm HF}$ to the high tolerance of $\sim 0.2$ m$E_{\rm h}$ $\simeq 5$ meV/monomer that we attempt to achieve here for ice structures is still not routine, and we summarise here the plane-wave techniques that we have used. In Sec. \[sec:clusters\_ice\], we will report comparisons between [molpro]{} and plane-wave calculations of the HF binding energies of water clusters, which demonstrate that the plane-wave techniques do, indeed, achieve the required accuracy. Our plane-wave calculations of HF energy all employ the PAW (Projector Augmented Wave) technique [@blochl94; @kresse99] implemented in the [vasp]{} code [@kresse96]. The underlying theory is outlined in Ref. [@paier05]. As usual when applying plane-wave techniques to isolated molecules and clusters, the calculations are actually performed on the system in a large periodically repeated cell, whose volume $\Omega$ is systematically increased until convergence is achieved. It is explained in Ref. [@paier05] that when this approach is used for neutral molecules or clusters there are two dominant contributions to cell-size error, both of which fall off as $1 / \Omega$. The first is due to the dipole moment of the repeated system, and the second is a completely different contribution arising from the treatment of exchange in periodic boundary conditions. 
The two contributions can, if necessary, be separated using $k$-point sampling techniques [@monkhorst76], but in practice we simply recalculate the energy for a sequence of cubic repeated cells of increasing cell-length $L$ using $\Gamma$-point sampling. We will show in Sec. \[sec:clusters\_ice\] that convergence of the total energy of the clusters of interest with respect to $L$ to within $\sim 1$ meV or better can readily be achieved. However, even with perfect convergence with respect to $L$, the HF energy with PAW will not be identical to that computed with [molpro]{} at the CBS limit, for two reasons. First, the PAW calculations do not include relaxation of the core orbitals (the O(1s) orbitals in the present case). Second, the PAW treatment is not exact for the valence electrons in the core regions, since it uses only a finite number of projectors. However, our comparisons for clusters will demonstrate that the resulting errors are well within our specified tolerance. For the ice structures, we use exactly the same PAW techniques for HF energy, and in this case it is essential to achieve convergence with respect to $k$-point sampling. With the standard Monkhorst-Pack sampling [@monkhorst76], the error due to insufficient $k$-point sampling falls off as $1 / N_k$, where $N_k$ is the number of $k$-points in the full Brillouin zone. If we compute HF energies for a sequence of increasing $N_k$ values, we can then extrapolate to the $N_k \rightarrow \infty$ limit, and we find that convergence to this limit within better than $1$ meV/monomer is straightforward to achieve. Water clusters and ice structures {#sec:clusters_ice} ================================= We will start our tests of EMBE by studying the binding energies of four isomers of the hexamer (H$_2$O)$_6$, which have been accurately characterized in many previous papers [@santra08; @dahlke08; @olson07; @bates09], and for which highly converged HF and MP2 energies are readily computed by standard techniques. The hexamers will also allow us to test our plane-wave techniques for computing the HF energy, which we rely on later for the ice structures. We then present similar tests on equilibrium configurations of the (H$_2$O)$_8$, (H$_2$O)$_{12}$ and (H$_2$O)$_{16}$ clusters. Further tests on samples of non-equilibrium configurations of (H$_2$O)$_6$ and (H$_2$O)$_9$ follow. Having quantified the errors of EMBE, we will then turn to the energetics of ice Ih, II and VIII structures, where we can assess the accuracy of our HF and MP2 methods by comparing with energies from experiment and quantum Monte Carlo. Four isomers of the hexamer {#sec:hexamer} --------------------------- We compute the binding energies of the prism, cage, book and ring isomers of the 6-mer, shown in Fig. \[fig:hexamers\]. The atomic coordinates used for these calculations are taken from the work of Santra *et al.* (Supplementary Information) [@santra08], who obtained them by relaxing the structures with MP2 using aug-cc-pVTZ basis sets. The binding energies of the clusters and ice structures treated here are always referred to the energy of the appropriate number of isolated H$_2$O monomers in the equilibrium geometry reported in the definitive work by Partridge and Schwenke [@partridge97]. For all our cluster calculations, we use the correlation-consistent aug-cc-pVXZ basis sets [@dunning89; @kendall92] (we refer to them simply as AVXZ, with X the cardinality), approaching the CBS limit in the usual way by increasing the cardinality.
All our correlation energies are computed using the explicitly-correlated F12 technique [@werner07; @adler07] provided by [molpro]{}. All our calculations on embedded dimers employ counterpoise. According to the tests reported in Ref. [@gillan12], these technical choices are expected to deliver embedded 2-body correlation energies to within better than $20$ $\mu E_{\rm h}$ of the CBS limit. We report in Table \[tab:hexamer\] the directly calculated HF and MP2-correlation components $E ( \mathrm{HF} )$ and $E ( \Delta \mathrm{MP2(direct)} )$ of the total binding energy of the four isomers, and also the MP2 correlation energies $E ( \Delta \mathrm{MP2(EMBE\mhyphen 2)} )$ given by EMBE-2. (We use the notation $\Delta \mathrm{MP2}$ to refer to the correlation energy computed with MP2.) Comparing the $\Delta$MP2(direct) and $\Delta$MP2(EMBE-2) values, we see that the errors incurred by truncating EMBE are very small, the worst error in the total binding energy being $0.3$ m$E_{\rm h}$, which equates to $50$ $\mu E_{\rm h}$ ($1.4$ meV) per monomer. We also compare the total MP2 binding energies relative to that of the prism with the corresponding benchmark MP2 values reported recently in Ref. [@gillan12]. This is an interesting comparison, because it is well known that standard DFT approximations give erroneous trends in binding energy as we pass from the compact prism and cage structures to the extended book and ring structures [@santra08]. The results of Table \[tab:hexamer\] show that the EMBE errors are small compared with the variation of binding energy through the isomer sequence. It is well known that the more accurate CCSD(T) treatment of correlation does not change the energy ordering of the (H$_2$O)$_6$ isomers, but increases the difference of total energy between the prism and the ring from $1.9$ to $2.9$ m$E_{\rm h}$ [@gillan12; @bates09]. To test whether EMBE correctly predicts this change, we used it to compare the total binding energies with CCSD(T) and MP2, using AVTZ basis sets and F12. The AVTZ basis set is smaller than the AVQZ basis used for the MP2 calculations reported above, but is expected to suffice for the difference $E ( \delta \mathrm{CCSD(T)} ) \equiv E ( \mathrm{CCSD(T)} ) -E ( \mathrm{MP2} )$. Table \[tab:hexamer\] compares the $E ( \delta \mathrm{CCSD(T)} )$ values from EMBE with those from standard methods [@gillan12], and we see that EMBE gives good results for the more extended ring and book structures, but gives a somewhat overbinding $\delta \mathrm{CCSD(T)}$ shift for the compact cage and prism, so that the increase of prism-ring splitting due to $\delta \mathrm{CCSD(T)}$ is exaggerated. An important difference between MP2 and CCSD(T) is that the latter includes 3-body correlations, which are not accounted for by 2-body-truncated EMBE, and this is the likely cause of the errors. (To keep the matter in perspective, though, the worst of these errors is still less than $0.1$ m$E_{\rm h}$ per monomer.) In order to test the plane-wave techniques used later to compute the HF energies of the ice structures, we have calculated the HF binding energies of the (H$_2$O)$_6$ isomers using PAW and treating the hexamer in a large periodically repeated cell, as described in Sec. \[sec:HF\]. The cell length used was $30$ Å, and we have checked that this is large enough to reduce residual cell-size errors to less than $0.1$ m$E_{\rm h}$ in the total energy. 
The discrepancy between the PAW total binding energies from [vasp]{} (Table \[tab:hexamer\]) and the [molpro]{} values is at worst $0.16$ m$E_{\rm h}$, or $27$ $\mu E_{\rm h}$ ($0.7$ meV) per monomer, which is negligible for present purposes. The octamer, dodecamer and hexadecamer {#sec:8-12-16} -------------------------------------- The geometries we have used for our calculations on the 8-mer, 12-mer and 16-mer all have the fused-cube form, and are shown in Fig. \[fig:8-12-16mer\]. The atomic coordinates of the 8-mer and 12-mer were taken from the work of Wales and Hodges [@wales98], in which the basin-hopping algorithm [@wales03] was used together with the empirical TIP4P interaction model [@jorgensen83] to search for minimum-energy structures of a range of water clusters. The geometry used here for the 16-mer was the one obtained by Yoo *et al.* [@yoo10] by energy minimization at the MP2/AVTZ level, and is the same as the geometry used in the recent works by Góra *et al.* [@gora11] and Wang *et al.* [@wang13]. We report in Table \[tab:8-12-16mer\] the total MP2 binding energies of the clusters, and the HF and MP2 correlation components of these binding energies. As before, the HF energies are computed both using standard [molpro]{} calculations with AVQZ basis sets and using PAW with the [vasp]{} code. We give values for the correlation energies calculated with MP2-F12 and AVQZ basis sets both directly and using EMBE-2. For comparison, we show also the MP2 binding energies for the 16-mer reported recently in Ref. [@wang13]. (The reference geometry of the free H$_2$O monomer used in Ref. [@wang13] differs slightly from the Partridge-Schwenke reference that we use, and we have adjusted their energies accordingly.) As for the hexamers, the HF energies from PAW agree closely with those from [molpro]{}, the largest discrepancy being $0.5$ m$E_{\rm h}$ in the total energy for the 12-mer, which corresponds to $50$ $\mu E_{\rm h} \simeq 1.5$ meV per monomer. (The HF binding energy from Ref. [@wang13] given in the Table is also in close agreement.) The high accuracy of EMBE-2 for the correlation component $E ( \Delta \mathrm{MP2} )$ of the MP2 energy is shown by the very close agreement between the direct and EMBE-2 values for the octamer. It is further supported by the agreement with the total MP2 binding energy of the hexadecamer reported by Wang *et al.* [@wang13]. (The difference from their binding energy is perhaps slightly greater than expected, but is less than $0.1$ m$E_{\rm h}$ per monomer.) Thermal samples of the hexamer and nonamer {#sec:thermal-6-9} ------------------------------------------ The tests of EMBE that we have presented so far all refer to equilibrium structures. However, for many applications such structures may not be particularly relevant, and it is therefore interesting to test EMBE for more general configuration samples. In our recent work on benchmarking with QMC [@gillan12; @alfe13], we have shown that it is useful to work with random samples of configurations typical of thermal equilibrium. One type of system that we studied was water “nano-droplets”, i.e. clusters in thermal equilibrium, with evaporation prevented by a weak confining potential. We present here tests of EMBE for 20 configurations each of the hexamer and the nonamer in thermal equilibrium at a temperature of $200$ K. The techniques used to generate these thermal-equilibrium samples are described in detail in Ref. [@alfe13].
For small nano-droplets in thermal equilibrium, even at temperatures as low as $200$ K, the binding energy spontaneously fluctuates over a wide range, and we want to know whether EMBE makes significant errors in describing these fluctuations. Using exactly the same techniques as for the other clusters, we calculate the HF energy and the MP2 correlation energy directly for all the configurations, using AVQZ basis sets, with F12 used for $\Delta$MP2. Separately, we use EMBE-2 to compute $E ( \Delta \mathrm{MP2} )$, again with AVQZ and F12. Fig. \[fig:thermal\_parity\] shows parity plots of the total $E ( \mathrm{MP2} ) \equiv E ( \mathrm{HF} ) + E ( \Delta \mathrm{MP2} )$ binding energies computed in these two ways for the 6-mer and the 9-mer. The errors of EMBE are so small that they are almost imperceptible on these plots. The mean values of the EMBE errors, i.e. the mean deviations of MP2(EMBE) from MP2(direct) are $50$ and $70$ $\mu E_{\rm h}$ in the total binding energy. The rms values of these deviations of total energy are $75$ and $140$ $\mu E_{\rm h}$. The overall conclusion from all the foregoing tests on clusters is that the errors incurred by using 2-body-truncated EMBE for computing the MP2 correlation energy are no more than $100$ $\mu E_{\rm h}$ ($\simeq 3$ meV) per monomer, which is negligible for most practical purposes. Ice structures {#sec:ice} -------------- The normal form of ice at ambient conditions is the ice Ih structure [@petrenko99], in which each H$_2$O monomer forms hydrogen bonds with four first neighbors at O-O separations of $2.7$ Å, the second neighbors having the much longer separation of $4.5$ Å. Ice Ih is proton-disordered but is closely related to the proton-ordered ice XI structure, which is the stable low-pressure form at temperatures below $72$ K [@petrenko99]. With increasing pressure at low temperatures, ice transforms successively to the sequence of denser structures known as II, XV and VIII, the density of ice VIII at the pressure at which it first becomes stable being $\sim 70$ % greater than that of ice Ih (or ice XI) [@petrenko99]. The large increase of density through this sequence results entirely from shortening of the second-neighbor O-O distance, and in ice VIII each monomer is surrounded by eight neighbors at almost equal O-O separations of $\sim 2.8$ Å. Since four of these neighbors are not hydrogen-bonded to the central monomer, Coulomb interactions and exchange-repulsion should strongly destabilize ice VIII relative to ice Ih, but electron correlation will have the opposite effect, so that the relative energies of ice VIII and ice Ih will depend on a balance between Coulomb, exchange-repulsion and correlation energies. It is known that standard DFT approximations describe this balance rather poorly [@santra11], making the $\mathrm{VIII} - \mathrm{Ih}$ energy difference much too great. The question we address here is whether MP2 describes this balance correctly. The atomic coordinates for our ice calculations are exactly the same as those used in the QMC calculations of Ref. [@santra11]. We present first our EMBE calculations of correlation energy with MP2, for which we use AVQZ basis sets with F12. We performed a thorough test of $k$-point convergence for the ice VIII structure, by monitoring the total HF energy of the 8-molecule cell as the number of $k$-points was increased stepwise from $1$ to $1152$. We found that for $75$ $k$-points or more this total energy was converged to better than $30$ $\mu E_{\rm h}$ ($\sim 1$ meV).
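The extrapolations used in these convergence checks are simple linear fits in the appropriate inverse variable: $1/N_k$ for the $k$-point convergence just described, and $1/R_c^3$ for the cutoff convergence of the 2-body correlation energy discussed below. A minimal sketch of such a fit is given here; the function is generic, and the energies in the usage example are illustrative numbers only, not values from our calculations.

```python
# Minimal sketch: extrapolate a sequence of energies to the converged limit by
# a linear least-squares fit against an inverse convergence variable.
import numpy as np

def extrapolate_linear(inverse_variable, energies):
    """Fit E = E_inf + a*x, with x the inverse variable (e.g. 1/N_k or
    1/R_c^3); returns (E_inf, slope a)."""
    slope, intercept = np.polyfit(inverse_variable, energies, deg=1)
    return intercept, slope

# Illustrative numbers only (hartree per monomer at increasing k-point counts):
n_k = np.array([8, 27, 64, 125, 216])
e_hf = np.array([-76.0551, -76.0583, -76.0591, -76.0594, -76.0595])
e_inf, slope = extrapolate_linear(1.0 / n_k, e_hf)
print(e_inf, slope)
```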
For all the results that follow, the number of $k$-points was always large enough to ensure this degree of convergence. As noted in Sec. \[sec:correlation\], the embedded 2-body correlation energy includes all dimers within a specified cut-off radius $R_c$, and we must ensure that the MP2 correlation energy $E ( \Delta \mathrm{MP2} )$ is adequately converged with respect to $R_c$. Since correlation at long distances $r$ is dispersion, which falls off as $1 / r^6$, we can estimate the error $\delta E_{\rm corr}$ incurred by truncating the sum over 2-body correlation contributions at radius $R_c$ as: $$\delta E_{\rm corr} = 2 \pi n_{\rm mol} \int_{R_c}^\infty dr \, r^2 C_6 / r^6 \; ,$$ where $n_{\rm mol}$ is the number density of monomers and $C_6$ is the dispersion coefficient. The error $\delta E_{\rm corr}$ therefore falls off as $1 / R_c^3$, so that we can estimate the $R_c \rightarrow \infty$ limit by plotting the correlation energy against $1 / R_c^3$ and making a linear extrapolation. Our MP2 correlation energies for $R_c$ values ranging from $5$ to $10$ Å are plotted in this way for the three ice structures in Fig. \[fig:ice\_correl\_extrap\], together with linear least-squares fits for $R_c \ge 6.25$ Å, and we see from this that the uncertainty due to $R_c \rightarrow \infty$ extrapolation is no more than $\sim 0.1$ m$E_{\rm h}$/monomer. Our Hartree-Fock energies, together with the MP2 correlation energies obtained by $R_c \rightarrow \infty$ extrapolation, and the resulting MP2 cohesive energies are reported in Table \[tab:ice\], where we also give experimental values [@whalley84] and values from QMC calculations [@santra11]. The agreement of MP2 with these accurate benchmark values is surprisingly good, the deviations from experiment all being less than $0.25$ m$E_{\rm h}$ ($7$ meV) per monomer and the largest difference from the QMC energies being $0.31$ m$E_{\rm h}$ ($9$ meV) per monomer. It might be tempting to conclude from these comparisons that MP2 gives a complete and accurate description of ice energetics. However, it would be rash to do so without assessing the corrections given by the more accurate CCSD(T) approximation. We have followed the same procedure as for the hexamers (Sec. \[sec:hexamer\]), obtaining the difference $E ( \delta \mathrm{CCSD(T)} ) \equiv E ( \mathrm{CCSD(T)} ) - E ( \mathrm{MP2} )$ between the CCSD(T) and MP2 correlation energies by using EMBE-2 to compute the CCSD(T) and MP2 cohesive energies with AVTZ basis sets and F12. Our calculations of the difference $E ( \delta \mathrm{CCSD(T)} )$ were performed with cut-off distance $R_c$ equal to $7.5$ Å, which our tests indicate is large enough to ensure convergence to better than $0.1$ m$E_{\rm h}$ per monomer. Our values of the $\delta \mathrm{CCSD(T)}$ corrections to the correlation energy per monomer for the three ice structures (Table \[tab:ice\]) have the effect of stabilizing ice VIII and destabilizing ice Ih, and are so sizeable that the cohesive energies of the two structures become almost identical, so that the apparently excellent predictions of MP2 are completely overturned. A likely explanation for this outcome is that truncation of EMBE at the 2-body level is not an accurate enough approximation for CCSD(T). In fact, our EMBE calculations on the hexamer (Sec. \[sec:hexamer\]) showed that although EMBE-2 gives satisfactory $\delta \mathrm{CCSD(T)}$ corrections for the extended ring and book isomers, it gives significantly overbinding corrections for the compact cage and prism.
Since CCSD(T) includes beyond-2-body correlations but MP2 does not, EMBE-2 will not be accurate for $\delta \mathrm{CCSD(T)}$ corrections if such correlations are important. To investigate this further, we have assessed the significance of 3-body correlations by computing $\delta \mathrm{CCSD(T)}$ corrections to the 3-body energies of the H$_2$O trimers that occur in ice Ih and VIII. Our estimates are reported in the Appendix, where we show that the corrections due to 3-body correlation are very small in ice Ih (see also Ref. [@vonlilienfeld10]), but are substantial in ice VIII, being large enough to destabilize the latter by $\sim 1.2$ m$E_{\rm h}$ per monomer. This suffices to compensate almost entirely for the 2-body $\delta \mathrm{CCSD(T)}$ changes. We include in Table \[tab:ice\] our estimates for the 3-body $\delta \mathrm{CCSD(T)}$ corrections, and the resulting final CCSD(T) binding energies. The energy differences between ice VIII and the other two ice structures are once more in reasonable agreement with experiment and QMC, though we note that ice II has now become very slightly more stable than Ih. More work will be needed to refine the details, but it seems clear that our understanding of ice energetics is incomplete without 3-body correlation. Discussion and conclusions {#sec:discussion} ========================== A number of conclusions can be drawn from our results. First, the embedded many-body expansion (EMBE) truncated at the 2-body level provides a remarkably effective way of calculating the MP2 correlation energy of water systems. Our results imply that beyond-2-body terms in EMBE contribute almost nothing to the MP2 correlation energy. This is important, because MP2 is widely used as a reasonably good approximation to the energetics of water systems. A second important conclusion is that 2-body truncated EMBE for MP2 correlation, when combined with accurate methods for the Hartree-Fock energy, provides a rather straightforward way of computing the MP2 energies of ice structures close to the basis set limit. The MP2 values for both the cohesive energies and the relative energies of the Ih, II and VIII structures agree surprisingly well with accurate values from experiment and from quantum Monte Carlo calculations. The results make it clear that the small energy differences between the structures depend on a fine balance between Hartree-Fock and correlation energies, which individually vary quite substantially. Specifically, the destabilization of ice VIII relative to Ih by Coulombic and exchange-repulsion energies described by Hartree-Fock is to a large extent compensated by the restabilization due to correlation. However, our EMBE calculations with CCSD(T) show that the accuracy of MP2 for the relative energies of ice structures is not quite what it seems, and is partly due to a lucky cancellation of errors in the description of 2-body and beyond-2-body correlation. Our analysis implies that the energy difference between the Ih and VIII structures cannot be understood without many-body dispersion. The success of the 2-body-truncated EMBE for MP2 correlation is not unexpected. Since MP2 includes only double excitations, it accounts for correlation between electrons on single monomers and on pairs of monomers, but not for correlation between three or more monomers. 
In the standard (unembedded) many-body expansion for correlation energy, beyond-2-body contributions arise from the change of 2-body correlation energy due to polarization of monomers, and these effects are accounted for by EMBE truncated at the 2-body level. The genuine 3-body correlation effects that we have seen to be important in ice VIII are missed by MP2 but captured by CCSD(T). However, they are not captured by the form of EMBE used in the present work if we truncate at the 2-body level. More sophisticated projector-embedding techniques (see e.g. Ref. [@manby12]), which are expected to yield still more rapidly convergent EMBE expansions, will be capable of including such many-body correlation effects, even when truncated at 2-body level. There is quite a close connection between EMBE as we have used it here and the incremental correlation method pioneered by Stoll and co-workers [@stoll92; @paulus06]. In that method too, the total interaction energy is separated into its HF and correlation parts, and a many-body expansion is used to compute the correlation part. It was pointed out [@hermann07] that the standard (unembedded) many-body expansion applied to small water clusters converges more rapidly for the correlation energy than for the total interaction energy, and this insight formed the basis for the demonstration by Hermann and Schwerdtfeger [@hermann08] that a 1- and 2-body treatment of the correlation energy computed using MP2 and CCSD(T) gives an accurate account of the energetics of ice Ih. The EMBE approach is in principle more general, since it allows for many different forms of embedding, and this flexibility may well be important for future developments. It seems likely that MP2 and CCSD(T) calculations in the EMBE approximation will be useful for predicting the energetics of a range of other water systems. Examples might include the formation energies of ordered and disordered ice surfaces [@pan10] and the energetics of lattice defects in bulk ice [@dekoning06]. Recent work has demonstrated that 2-body-truncated EMBE can also be effective for other simple molecular systems such as CO$_2$ and HF [@bygrave12]. Beyond this, the energetics of mixtures should also be accessible, a particularly important example being gas hydrates, including the environmentally important clathrates of CH$_4$ and CO$_2$. Two important technical features of EMBE will greatly assist its application to more complex systems. The first is its very favourable computational scaling with system size. The effort needed for the computation of the embedding potential is proportional to the number of monomers in the cluster or in the unit cell of the crystal. For clusters, the calculation of correlation scales as $N^2$, if we include correlation between all monomer pairs. For a crystal, on the other hand, once we have adopted a cut-off radius $R_c$ for the pair correlations, the scaling of the correlation calculation is linear in the number of monomers in the unit cell. The example of gas hydrates is instructive here. The methane hydrates have typically $\sim 50$ molecules in the unit cell, but the linear scaling of the correlation energy means that correlation for these systems requires only $\sim 4$ times the computational effort needed by the ice structures treated here. This all means that if a standard method is used for the HF energy, then it is this that will dominate the overall scaling. The second beneficial feature of EMBE is that it is trivially parallelizable. 
Since the 2-body correlation energies are independent of each other, the wall-clock time needed for the total correlation energy can in principle be reduced to the time needed for a single dimer, provided enough processors are available. Before concluding, we draw attention to the calculation of forces in the EMBE framework. In the present work, we have calculated only energies, but clearly EMBE would become even more useful if it could be used to relax structures and to perform molecular dynamics simulation. The calculation of forces is an important problem for the future. In conclusion, we have shown that a simple form of embedded many-body expansion truncated at 2-body level gives an effective and accurate way of computing the MP2 correlation energy of large water clusters and ice structures with rather modest computational resources. Our calculations on three key ice structures show that MP2 gives a rather accurate account of their cohesive energies and their relative energies. However, the success of MP2 for ice structures is partly due to error cancellation, and a full understanding of their relative energies requires non-additive dispersion as described by CCSD(T). The application of the embedding techniques to a range of other condensed-phase molecular systems appears likely to be both feasible and fruitful. Acknowledgments {#acknowledgments .unnumbered} =============== CRT is supported by an EPSRC studentship. We thank Prof. K. D. Jordan for providing technical details of calculations on the water 16-mer performed by his group. [99]{} L. A. Curtiss, K. Raghavachari, G. W. Trucks, and J. A. Pople, J. Chem. Phys. **94**, 7221 (1991). B. J. Lynch and D. G. Truhlar, J. Phys. Chem. A **107**, 3898 (2003). P. Jurečka, J. Šponer, J. Černý, and P. Hobza, Phys. Chem. Chem. Phys. **8**, 1985 (2006). A. Karton, S. Daon, and J. M. L. Martin, Chem. Phys. Lett. **510**, 165 (2011). E. Schwegler, J. C. Grossman, F. Gygi, and G. Galli, J. Chem. Phys. **121**, 5400 (2004). P. H.-L. Sit and N. Marzari, J. Chem. Phys. **122**, 204510 (2005). B. Santra, A. Michaelides, M. Fuchs, A. Tkatchenko, C. Filippi, and M. Scheffler, J. Chem. Phys. **129**, 194111 (2008). J. Schmidt, J. VandeVondele, I.-F. W. Kuo, D. Sebastiani, J. I. Siepmann, J. Hutter, and C. J. Mundy, J. Phys. Chem. B **113**, 11959 (2009). J. Wang, G. Román-Pérez, J. M. Soler, E. Artacho, and M.-V. Fernández, J. Chem. Phys. **134**, 024516 (2011). B. Santra, J. Klimeš, D. Alfè, A. Tkatchenko, B. Slater, A. Michaelides, R. Car and M. Scheffler, Phys. Rev. Lett **107**, 185701 (2011). Z. Ma, Y. Zhang, and M. E. Tuckerman, J. Chem. Phys. **137**, 044506 (2012). C. J. Burnham, J. Li, S. S. Xantheas, and M. Leslie, J. Chem. Phys. **110**, 4566 (1999). R. Bukowski, K. Szalewicz, G. C. Groeneboom, and A. van der Avoird, Science **315**, 1249 (2007). G. S. Fanourgakis and S. S. Xantheas, J. Chem. Phys. **128**, 074506 (2008). R. Kumar, F.-F. Wang, G. R. Jenness, and K. D. Jordan, J. Chem. Phys. **132**, 014309 (2010). Y. Wang, X. Huang, B. C. Shepler, B. J. Braams, and J. M. Bowman, J. Chem. Phys. **134**, 094509 (2011). Y. Wang, B. C. Shepler, B. J. Braams, and J. M. Bowman, J. Chem. Phys. **131**, 054511 (2009). V. Babin, G. R. Medders, and F. Paesani, J. Phys. Chem. Lett. **3**, 3765 (2012). J. A. Anderson and G. S. Tschumper, J. Phys. Chem. A **110**, 7268 (2006). E. E. Dahlke, R. M. Olson, H. R. Leverentz, and D. G. Truhlar, J. Phys. Chem. A **112**, 3976 (2008). F.-F. Wang, G. Jenness, W. A. Al-Saidi, and K. D. Jordan, J. Chem. Phys. 
**132**, 134303 (2010). M. J. Gillan, F. R. Manby, M. D. Towler, and D. Alfè, J. Chem. Phys. **136**, 244105 (2012). G. R. Medders, V. Babin, and F. Paesani, J. Chem. Theory Comput. **9**, 1103 (2013). L. Ojamäe and K. Hermansson, J. Phys. Chem. **98**, 4271 (1994). S. S. Xantheas, J. Chem. Phys. **100**, 7523 (1994). A. Lagutchenkov, G. S. Fanourgakis, G. Niedner-Schatteburg, and S. S. Xantheas, J. Chem. Phys. **122**, 194310 (2005). B. Santra, A. Michaelides, and M. Scheffler, J. Chem. Phys. **127**, 184104 (2007). P. J. Bygrave, N. L. Allan, and F. R. Manby, J. Chem. Phys. **137**, 164102 (2012). C. Møller and M. S. Plesset, Phys. Rev. **46**, 618 (1934). A. Szabo and N. S. Ostlund, Modern Quantum Chemistry (McGraw Hill, New York, 1982). T. Helgaker, P. Jorgensen, J. Olsen, Molecular Electronic Structure Theory (Wiley, New York, 2000). A. Erba, S. Casassa, L. Maschio, and C. Pisani, J. Phys. Chem. B **113**, 2347 (2009). A. Hermann and P. Schwerdtfeger, Phys. Rev. Lett. **101**, 183005 (2008). R. M. Olson, J. L. Bentz, R. A. Kendall, M. W. Schmidt, and M. S. Gordon, J. Chem. Theory Comput. **3**, 1312 (2007). D. M. Bates and G. S. Tschumper, J. Phys. Chem. A, **113**, 3555 (2009). S. Yoo, E. Aprà, X. C. Zeng, and S. S. Xantheas, J. Phys. Chem. Lett. **1**, 3122 (2010). D. Hankins, J. W. Moskowitz, and F. H. Stillinger, J. Chem. Phys. **53**, 4544 (1970). J. M. Pedulla, F. Vila, and K. D. Jordan, J. Chem. Phys. **105**, 11091 (1996). F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. **8**, 2564 (2012). S. Hirata, M. Valiev, M. Dupuis, S. S. Xantheas, S. Sugiki, and H. Sekino, Mol. Phys. **103**, 2255 (2005). H. R. Leverentz and D. G. Truhlar, J. Chem. Theory Comput. **5**, 1573 (2009). S. Wen, K. Nanda, Y. Huang, and G. J. O. Beran, Phys. Chem. Chem. Phys. **14**, 7578 (2012). C. A. Coulson and D. Eisenberg, Proc. R. Soc. London Ser. A **291**, 445 (1966). P. L. Silvestrelli and M. Parrinello, Phys. Rev. Lett. **82**, 3308 (1999). H. Stoll, Phys. Rev. B **46**, 6700 (1992). B. Paulus, Phys. Rep. **428**, 1 (2006). R. Wheatley and S. Price, Mol. Phys. **69**, 507 (1990). H.-J. Werner, P. J. Knowles, G. Knizia, F. R. Manby, M. Schütz *et al.*, [molpro]{}, version 2010.1, a package of *ab initio* programs (2010). See also [http://www.molpro.net]{}. H.-J. Werner, P. J. Knowles, G. Knizia, F. R. Manby, and M. Schütz, WIREs Comput. Mol. Sci. **2**, 242 (2012). C. Pisani and R. Dovesi, Int. J. Quantum Chem. **17**, 501 (1980). C. Pisani, Quantum-Mechanical Ab-Initio Calculation of the Properties of Crystalline Materials, Lecture Notes in Chemistry, Vol. 67, Springer Verlag, Heidelberg (1996). J. Paier, R. Hirschl, M. Marsman, and G. Kresse, J. Chem. Phys. **122**, 234102 (2005). M. J. Gillan, D. Alfè, S. De Gironcoli, and F. R. Manby, J. Comput. Chem. **29**, 2098 (2008). M. Guidon, J. Hutter, and J. VandeVondele, J. Chem. Theory Comput. **5**, 3010 (2009). J. Paier, C. V. Diaconu, G. E. Scuseria, M. Guidon, J. VandeVondele, and J. Hutter, Phys. Rev. B **80**, 174114 (2009). P. E. Blöchl, Phys. Rev. B **5**, 17953 (1994). G. Kresse and D. Joubert, Phys. Rev. B **59**, 1758 (1999). G. Kresse and J. Furthmüller, Phys. Rev. B **54**, 11169 (1996). H. J. Monkhorst and J. D. Pack, Phys. Rev. B **13**, 5188 (1976). H. Partridge and D. W. Schwenke, J. Chem. Phys. **106**, 4618 (1997). T. H. Dunning, J. Chem. Phys. **90**, 1007 (1989). R. A. Kendall, T. H. Dunning, and R. J. Harrison, J. Chem. Phys. **96**, 6796 (1992). H.-J. Werner, T. B. Adler, and F. R. 
Manby, J. Chem. Phys. **126**, 164102 (2007). T. B. Adler, G. Knizia, and H.-J. Werner, J. Chem. Phys. **127**, 221106 (2007). D. J. Wales and M. P. Hodges, Chem. Phys. Lett. **286**, 65 (1998). D. J. Wales, *Energy Landscapes*, Cambridge University Press, Cambridge (2003). W. L. Jorgensen, J. Chandrasekhar, J. D. Madura, R. W. Impey, and M. L. Klein, J. Chem. Phys. **79**, 926 (1983). U. Góra, R. Podeszwa, W. Cencek, and K. Szalewicz, J. Chem. Phys. **135**, 224102 (2011). F.-F. Wang, M. J. Deible, and K. D. Jordan, J. Phys. Chem. A, DOI: 10.1021/jp404541c (2013). D. Alfè, A. P. Bartók, G. Csányi, and M. J. Gillan, J. Chem. Phys. **138**, 221102 (2013). V. F. Petrenko and R. W. Whitworth, *Physics of Ice*, Oxford University Press, Oxford (1999). E. Whalley, J. Chem. Phys. **81**, 4087 (1984). A. Hermann, R. P. Krawczyk, M. Lein, P. Schwerdtfeger, I. P. Hamilton, and J. J. P. Stewart, Phys. Rev. A **76**, 013202 (2007). D. Pan, L.-M. Liu, G. A. Tribello, B. Slater, A. Michaelides, and E. Wang, J. Phys. Condens. Matter, **22**, 074209 (2010). M. de Koning, A. Antonelli, A. J. R. da Silva, and A. Fazzio, Phys. Rev. Lett. **97**, 155501 (2006). O. A. von Lilienfeld and A. Tkatchenko, J. Chem. Phys. **132**, 234109 (2010). B. M. Axilrod and E. Teller, J. Chem. Phys. **11**, 299 (1943). A. J. Stone, *The Theory of Intermolecular Forces*, 2nd edition, Oxford University Press, Oxford (2013), Section 10.2. K. T. Tang, Phys. Rev. **177**, 108 (1969). T. Korona, M. Przybytek, and B. Jeziorski, Molec. Phys. **104**, 2303 (2006). Appendix: Three-body correlation in ice {#appendix-three-body-correlation-in-ice .unnumbered} ======================================= In Sec. \[sec:ice\], we noted reasons for thinking that the relative energies of the ice Ih and VIII structures cannot be fully understood without considering 3-body electron correlation (non-additive dispersion). Here, we provide evidence confirming this idea. Since MP2 consists of second-order perturbation theory starting from the HF ground state, it does not account for 3-body correlation. By contrast, CCSD(T) does describe such correlation, since it includes contributions from all orders of perturbation theory. The difference $E^{(3)} ( \delta \mathrm{CCSD(T)} ) \equiv E^{(3)} ( \mathrm{CCSD(T)} ) - E^{(3)} ( \mathrm{MP2} )$ between 3-body energies calculated with CCSD(T) and MP2 provides some measure of 3-body correlation. We show in Fig. \[fig:iceVIII\_nonamer\] the nonamer obtained by cutting out of the ice VIII crystal an H$_2$O monomer and its eight nearest neighbors. Four of these neighbors are hydrogen bonded to the central monomer as donors or acceptors (D1, D2, A1, A2 in the Figure, with central monomer labeled 0), and the other four are non-bonded (N1 - N4 in the Figure). The O-O distances from the central monomer to the eight neighbors are all rather close to the O-O distance of $2.7$ Å in ice Ih, so that the pentamer obtained by taking the central H$_2$O and its four hydrogen-bonded neighbors is almost the same as the pentamer that would be obtained by cutting out of the ice Ih crystal a monomer and its four neighbors. There are 84 trimers that can be formed by extracting all distinct triplets from the nonamer. We have computed $E^{(3)} ( \delta \mathrm{CCSD(T)} )$ for all these trimers using AVDZ basis sets with F12 for both MP2 and CCSD(T), and we use these $E^{(3)} ( \delta \mathrm{CCSD(T)} )$ values to assess 3-body correlation in ice. 
To reduce basis-set superposition error, we use full counterpoise, so that the total energy and its 1- and 2-body parts are all computed using the basis set of the entire trimer. Tests on a random sub-set of the 84 trimers show that repeating the calculations with AVTZ basis sets produces differences of only a few $\mu E_{\rm h}$ for each trimer. It is helpful to separate the 84 trimers into the following groups (we give examples of group members using the labeling of Fig. \[fig:iceVIII\_nonamer\]): - central monomer with one hydrogen-bonded neighbor and one non-bonded neighbor, the two neighbors being adjacent to each other (example: D1-0-N1); - central monomer with two hydrogen-bonded neighbors (examples: D1-0-D2, D1-0-A1, A1-0-A2); - central monomer with two non-bonded neighbors (example: N1-0-N2); - central monomer with one hydrogen-bonded neighbor and one non-bonded neighbor, the two neighbors being on opposite sides of the central monomer (example: A1-0-N2); - three neighbors of the central monomer, all lying on the same cube face (examples: D1-N1-A1, N1-D2-N2); - three neighbors of the central monomer, not all lying on the same cube face (example: A1-D1-D2). The sums of all the $E^{(3)} ( \delta \mathrm{CCSD(T)} )$ values in each of these six groups are reported in Table \[tab:3Bcorr\]. The contribution from group G1 dominates all the others, and groups G2 and G6 can safely be ignored. The energies in Table \[tab:3Bcorr\] can be used to estimate the contribution of 3-body correlation per monomer in ice VIII. A little thought shows that for each monomer in a large sample of ice VIII the number of trimers of type G1 is 12, which is the same as the number for the nonamer, so that we can use the G1 entry in Table \[tab:3Bcorr\] as it stands. The same is true of all the other entries, except for group G5, where a factor $1/2$ must be applied. Adding the contributions, we estimate that $E^{(3)} ( \delta \mathrm{CCSD(T)} )$ raises the energy of ice VIII by $1.2$ m$E_{\rm h}$ ($\simeq 32$ meV) per monomer. Table \[tab:3Bcorr\] indicates that for ice Ih the energy shift due to $E^{(3)} ( \delta \mathrm{CCSD(T)} )$ is negligible, since the only contributions that need to be considered in that structure are of type G2. It is natural to ask whether $E^{(3)} ( \delta \mathrm{CCSD(T)} )$ can be understood in the framework of the standard Axilrod-Teller-Muto (ATM) theory of 3-body dispersion [@axilrod43; @stone13]. According to ATM, the 3-body dispersion interaction $E_{\rm ATM}$ of three identical, spherically symmetric bodies is: $$E_{\rm ATM} = C_9 ( 1 + 3 \cos \gamma_1 \cos \gamma_2 \cos \gamma_3 ) / ( R_1 R_2 R_3 )^3 \; ,$$ where $R_i$ and $\gamma_i$ are the sides and angles of the triangle formed by the three bodies, and $C_9$ is a positive constant. To test whether this formula can account for the energies in Table \[tab:3Bcorr\], we choose the value of $C_9$ so as to reproduce exactly the 3-body correlation energy of group G1, and we then use this value to predict the contributions from the other groups. The required value of $C_9$ is $289$ a.u., and the Table gives the resulting ATM energies, which agree very well with our $E^{(3)} ( \delta \mathrm{CCSD(T)} )$ values. As predicted by the formula, most of the correlation energies are positive, except for group G4, where one of the angles in the triangle is $180^\circ$, so that the angular factor has the negative value $-2$. 
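To make the behaviour of the ATM expression concrete, the following short Python sketch evaluates it for two idealised trimer geometries. The distances and angles used here are hypothetical stand-ins (of the order of the O-O separations quoted earlier), not the actual trimer coordinates entering our fit; only the fitted value $C_9 = 289$ a.u. is taken from the text.

```python
import math

def atm_energy(C9, R1, R2, R3, g1_deg, g2_deg, g3_deg):
    """Axilrod-Teller-Muto 3-body dispersion energy of three identical,
    spherically symmetric bodies:
        E_ATM = C9 * (1 + 3 cos(g1) cos(g2) cos(g3)) / (R1 R2 R3)^3.
    With C9 in atomic units and distances in bohr, the result is in hartree."""
    g1, g2, g3 = (math.radians(g) for g in (g1_deg, g2_deg, g3_deg))
    angular = 1.0 + 3.0 * math.cos(g1) * math.cos(g2) * math.cos(g3)
    return C9 * angular / (R1 * R2 * R3) ** 3

C9 = 289.0  # a.u., the value fitted to the G1 trimers

# Hypothetical near-equilateral trimer (sides of roughly the ice O-O separation):
# the angular factor is 1 + 3*(1/2)^3 = 1.375, so the 3-body energy is positive.
print(atm_energy(C9, 5.2, 5.2, 5.2, 60.0, 60.0, 60.0))

# Hypothetical collinear trimer, as in group G4: the angular factor is
# 1 + 3*cos(180)*cos(0)*cos(0) = -2, so the 3-body energy is negative.
print(atm_energy(C9, 5.2, 5.2, 10.4, 0.0, 0.0, 180.0))
```

The second, near-linear geometry reproduces the sign change just noted for group G4.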
For groups G2 and G3, the tetrahedral angle ($109.5^\circ$) is close to the value of $117^\circ$ at which the angular factor passes through zero, and this is one of the reasons why 3-body correlation is expected to be so small in ice Ih. (The other reason is that one side of the triangle is very long.) To check whether our fitted value of $C_9$ is reasonable, we use the approximate relation between $C_9$ and the 2-body $C_6$ dispersion coefficient due to Tang [@tang69], according to which $C_9 \simeq \frac{3}{4} \alpha C_6$, where $\alpha$ is the molecular polarizability. (Needless to say, all these quantities are tensors, but since the polarizability of H$_2$O is nearly isotropic we commit only small errors by treating them as scalars.) Using the values $\alpha = 9.9$ a.u. and $C_6 = 46$ a.u. [@korona06], we estimate $C_9 \simeq 342$ a.u., which is fairly close to our fitted value of $289$ a.u. The values of the 3-body CCSD(T) corrections $E^{(3)} ( \delta \mathrm{CCSD(T)} )$ to the binding energies of the ice Ih and II structures given in Table \[tab:ice\] of the main text were estimated using the ATM formula. prism cage book ring ----------------------------------- ---------- ---------- ---------- ---------- HF([molpro]{}) $-41.60$ $-41.80$ $-43.70$ $-45.65$ HF([vasp]{}) $-41.44$ $-41.92$ $-43.65$ $-45.74$ $\Delta$MP2(direct) $-32.14$ $-31.85$ $-29.53$ $-26.25$ $\Delta$MP2(EMBE-2) $-32.21$ $-31.85$ $-29.35$ $-25.94$ rel MP2(EMBE-2) $0.00$ $0.15$ $0.76$ $2.22$ rel MP2(ref. [@bates09]) $0.00$ $0.10$ $0.53$ $1.93$ $\delta \mathrm{CCSD(T)}$(direct) $0.27$ $0.58$ $0.88$ $1.13$ $\delta \mathrm{CCSD(T)}$(EMBE-2) $-0.24$ $0.07$ $0.63$ $1.07$ : Components of the total binding energies (m$E_{\rm h}$ units) of four isomers of the H$_2$O hexamer computed with MP2 and CCSD(T) near the complete basis-set limit. HF([molpro]{}) and HF([vasp]{}) are Hartree-Fock binding energies from [molpro]{} and from PAW calculations with [vasp]{}, and $\Delta$MP2 is the correlation part of total binding energy, with $\Delta$MP2(direct) computed directly from MP2 calculations on the entire cluster and $\Delta$MP2(EMBE-2) computed using the embedded MBE truncated at the 2-body level. Rel MP2 indicates values of MP2 total binding energies relative to that of the prism. Coupled-cluster corrections to the total binding energies $\delta \mathrm{CCSD(T)}$ are differences between binding energies from CCSD(T) and MP2.[]{data-label="tab:hexamer"} 8-mer 12-mer 16-mer --------------------- ----------- ----------- ----------- HF([molpro]{}) $-68.35$ $-107.85$ $-145.12$ HF([vasp]{}) $-68.00$ $-107.32$ $-144.73$ HF(Wang) $-$ $-$ $-144.65$ $\Delta$MP2(direct) $-41.31$ $-$ $-$ $\Delta$MP2(EMBE-2) $-41.28$ $-68.94$ $-114.95$ MP2(direct) $-109.66$ $-$ $-$ MP2(EMBE-2) $-109.63$ $-176.79$ $-260.07$ MP2(Wang) $-$ $-$ $-261.57$ : Total binding energies (m$E_{\rm h}$ units) of near-equilibrium configurations of the H$_2$O 8-mer, 12-mer and 16-mer computed with MP2 near the complete basis-set limit. HF([molpro]{}) and HF([vasp]{}) are the Hartree-Fock components of the binding energy from [molpro]{} calculations and from PAW calculations with [vasp]{}; $\Delta$MP2(direct) and $\Delta$MP2(EMBE-2) are correlation energies calculated directly with MP2 on the entire cluster and with MP2 EMBE-2 calculations. MP2(direct) and MP2(EMBE-2) result from addition of $\Delta$MP2(direct) and $\Delta$MP2(EMBE-2) to HF([molpro]{}). 
HF(Wang) and MP2(Wang) values are from benchmark calculations of Wang *et al.* [@wang13], adjusted for different choice of reference geometry of isolated monomer. []{data-label="tab:8-12-16mer"} Ih II VIII ------------------------- ------------------ ------------------ ------------------ HF([vasp]{}) $-10.40$ $-10.07$ $-6.84$ $\Delta$MP2(EMBE-2) $-11.97$ $-12.17$ $-14.60$ MP2 $-22.37$ $-22.24$ $-21.44$ $\delta$CCSD(T)(EMBE-2) $0.27$ $-0.16$ $-0.87$ 3-body $\delta$CCSD(T) $0.01$ $0.30$ $1.22$ CCSD(T) $-22.09$ $-22.10$ $-21.09$ $E ( \mathrm{DMC} )$ $-22.23 \pm 0.2$ $-22.38 \pm 0.2$ $-21.13 \pm 0.2$ $E ( \mathrm{expt} )$ $-22.42$ $-22.38$ $-21.20$ : Binding energies per monomer (m$E_{\rm h}$ units) of the ice Ih, II and VIII structures at their equilibrium volumes. HF([vasp]{}) is Hartree-Fock part of binding energy computed with PAW using the [vasp]{} code, $\Delta \mathrm{MP2}$(EMBE-2) is MP2 correlation energy from EMBE-2 technique, MP2 is sum of HF([vasp]{}) and $\Delta \mathrm{MP2}$(EMBE-2), $\delta \mathrm{CCSD(T)}$ (EMBE-2) is coupled-cluster correction from EMBE-2, 3-body $\delta$CCSD(T) is the additional 3-body CCSD(T) correction, and CCSD(T) is the sum of MP2 binding energy and coupled-cluster corrections. DMC is benchmark binding energy from quantum Monte Carlo [@santra11], and experimental values are from Ref. [@whalley84], with zero-point vibrational energies subtracted.[]{data-label="tab:ice"} group $n_{\rm G}$ $E^{(3)} ( \delta \mathrm{CCSD(T)} )$ (m$E_{\rm h}$) ATM (m$E_{\rm h}$) ------- ------------- ------------------------------------------------------ -------------------- G1 $12$ $1.01$ $1.01$ G2 $6$ $0.01$ $0.04$ G3 $6$ $0.09$ $0.04$ G4 $4$ $-0.07$ $-0.08$ G5 $24$ $0.22$ $0.22$ G6 $32$ $0.07$ $0.05$ Total $84$ $1.22$ $1.17$ : Three-body correlation energies contributed by different groups of trimers in ice VIII ($n_{\rm G}$ is number of trimers in each group), calculated as the difference $E^{(3)} ( \delta \mathrm{CCSD(T)} )$ between CCSD(T) and MP2 values of trimer 3-body energy. Also given is the 3-body correlation energy predicted by the Axilrod-Teller-Muto formula with coefficient $C_9 = 289$ a.u.[]{data-label="tab:3Bcorr"} ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ![ The four isomers of the H$_2$O hexamer whose binding energies we compute using MP2 and CCSD(T): (a) prism; (b) cage; (c) book; (d) ring. Red and grey spheres represent O and H atoms, with connecting lines showing hydrogen bonds. []{data-label="fig:hexamers"}](prism_clip.eps "fig:"){width="0.5\linewidth"} ![ The four isomers of the H$_2$O hexamer whose binding energies we compute using MP2 and CCSD(T): (a) prism; (b) cage; (c) book; (d) ring. Red and grey spheres represent O and H atoms, with connecting lines showing hydrogen bonds. 
[]{data-label="fig:hexamers"}](cage_clip.eps "fig:"){width="0.5\linewidth"} ![ The four isomers of the H$_2$O hexamer whose binding energies we compute using MP2 and CCSD(T): (a) prism; (b) cage; (c) book; (d) ring. Red and grey spheres represent O and H atoms, with connecting lines showing hydrogen bonds. []{data-label="fig:hexamers"}](book_clip.eps "fig:"){width="0.5\linewidth"} ![ The four isomers of the H$_2$O hexamer whose binding energies we compute using MP2 and CCSD(T): (a) prism; (b) cage; (c) book; (d) ring. Red and grey spheres represent O and H atoms, with connecting lines showing hydrogen bonds. []{data-label="fig:hexamers"}](ring_clip.eps "fig:"){width="0.5\linewidth"} ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [cc]{} ![ Geometries of the octamer (a), dodecamer (b), and hexadecamer (c) water clusters whose binding energies we compute using MP2. Connecting lines show hydrogen bonds. []{data-label="fig:8-12-16mer"}](8mer_clip.eps "fig:"){width="0.4\linewidth"} & ![ Geometries of the octamer (a), dodecamer (b), and hexadecamer (c) water clusters whose binding energies we compute using MP2. Connecting lines show hydrogen bonds. []{data-label="fig:8-12-16mer"}](12mer_clip.eps "fig:"){width="0.5\linewidth"}\ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ![ Parity plots characterizing accuracy of EMBE for MP2 binding energy of random thermal samples of 20 configurations each of the H$_2$O hexamer (left panel) and nonamer (right panel). Horizontal and vertical axes show total binding energy (m$E_{\rm h}$ units) relative to free monomers computed using standard MP2 and using MP2 with EMBE truncated at 2-body level. []{data-label="fig:thermal_parity"}](he_etr_h_dm_aq_df_f12_embe_1-20_pp.eps "fig:"){width="0.5\linewidth"} ![ Parity plots characterizing accuracy of EMBE for MP2 binding energy of random thermal samples of 20 configurations each of the H$_2$O hexamer (left panel) and nonamer (right panel). 
Horizontal and vertical axes show total binding energy (m$E_{\rm h}$ units) relative to free monomers computed using standard MP2 and using MP2 with EMBE truncated at 2-body level. []{data-label="fig:thermal_parity"}](no_etr_h_dm_aq_df_f12_embe_1-20_pp.eps "fig:"){width="0.5\linewidth"} ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ ![ The correlation part of the binding energy per monomer for the ice structures Ih, II and VIII computed with the embedded many-body expansion truncated at 2-body level as function of cut-off radius $R_c$ (see text). Correlation energy is computed with MP2-F12 using AVQZ basis-sets. Straight lines are least-squares fits to the points having $R_c \ge 6.25$ Å (i.e. omitting the point at $1 / R_c^3 = 0.008$ Å). []{data-label="fig:ice_correl_extrap"}](ice_Ih_II_VIII_rc_1-5.eps){width="0.8\linewidth"} ![ Water nonamer cut from ice VIII crystal structure. Labels indicate central monomer (0), donor (D1, D2) and acceptor (A1, A2) hydrogen-bonded neighbors and non-bonded neighbors (N1 - N4). []{data-label="fig:iceVIII_nonamer"}](iceVIII_nonamer_clip.eps){width="0.7\linewidth"}
{ "pile_set_name": "ArXiv" }
--- abstract: 'We calculate the first Hochschild cohomology group of quantum matrices, the quantum general linear group and the quantum special linear group in the generic case when the deformation parameter is not a root of unity. As a corollary, we obtain information about twisted Hochschild homology of these algebras.' author: - 'S Launois and T H Lenagan [^1]' title: '[The first Hochschild cohomology group of quantum matrices and the quantum special linear group]{}' --- .5cm [*2000 Mathematics subject classification:*]{} 16E40, 16W35, 17B37, 17B40, 20G42 .5cm [*Key words:*]{} Quantum matrices, quantum special linear group, derivation, Hochschild cohomology, twisted Hochschild homology. Introduction {#introduction .unnumbered} ============ There has been interest recently in calculating Hochschild homology and cohomology for certain quantum groups and quantum algebras, see, for example, papers by Hadfield and Krähmer, [@hadKtheory; @hadkra], and Brown and Zhang, [@bz]. In this paper, we begin to study the Hochschild cohomology of the algebra of quantum matrices, $\oqmn$, in the generic case where $q$ is not a root of unity. To be more specific, we calculate the first Hochschild cohomology, $\mathrm{HH}^1(\oqmn)$, of $\oqmn$: in other words, we calculate the derivations of $\oqmn$. Once this has been done, we are also able to calculate $ \mathrm{HH}^1$ for the quantum general linear group, $\oqgln$, and the quantum special linear group, $\oqsln$. Alev and Chamarie, [@alevchamarie], have calculated $\mathrm{HH}^1(\oqmtwo)$ directly by using the commutation relations for $\oqmtwo$. It seems impossible to follow this route in the general case: the commutation relations one would have to deal with are far too involved. Thus, we have taken another approach to the problem, by using Cauchon’s theory of deleting derivations. Even via this approach, the calculations are necessarily very technical. However, the idea is relatively easy to follow. The starting point is a result of Osborn and Passman, [@op], that describes the derivations of a quantum torus. In particular, they show that the first Hochschild cohomology group of the quantum torus with $n^2$ generators is a free module of rank $n^2$ over the centre of the quantum torus. The key to transfering this result to $\oqmn$ is Cauchon’s theory of deleting derivations, introduced in [@cauchoneff; @c2]. The algebra $\oqmn$ is presented in a natural way as an iterated Ore extension in $n^2$ steps. In $(n-1)^2$ of these steps a nontrivial skew derivation is involved. The quantum torus of rank $n^2$ is a localisation of a quantum affine space of dimension $n^2$. This quantum affine space is an iterated Ore extension in $n^2$ steps and no skew derivations are involved in any of the steps. Cauchon shows that one can construct a chain of algebras, starting from $\oqmn$ and finishing with a quantum affine space of dimension $n^2$. At each stage in the construction of this chain of algebras, the two adjacent algebras are equal up to the inversion of the powers of an element; and so information can be passed along the chain. However, at $(n-1)^2$ of the stages, the newly constructed algebra can be presented as an iterated Ore extension using one fewer skew derivation. This process can be reversed, and then at $(n-1)^2$ stages a skew derivation is re-introduced into the presentation of the algebra as an iterated Ore extension. 
Informally, in reintroducing a skew derivation to the presentation, one loses a derivation from the first Hochschild cohomology group. Thus, by the time one has re-introduced all $(n-1)^2$ skew derivations and recovered $\oqmn$, there remain $n^2 - (n-1)^2 = 2n-1$ derivations in $\mathrm{HH}^1(\oqmn)$; in other words, $\mathrm{HH}^1(\oqmn)$ is free of rank $2n-1$ over the centre of $\oqmn$. The technical problems arise due to two main problems. First, the formulae involved in the deleting and re-introducing skew derivations process are awkward to deal with. Secondly, the centres change along the way. In the last section, we apply our main result to compute the first Hochschild cohomology group of the quantum groups $\oqgln$ and $\oqsln$. Regarding the Hochschild homology of $\oqsln$, Feng and Tsygan have shown, [@ft], that $\mathrm{HH}_k(\oqsln)=0$ for all $k \geq n$, whereas the global dimension of $\oqsln$ is $n^2-1$. In other words, there is a “dimension drop” phenomenon in the Hochschild homology of $\oqsln$. To deal with this problem, Hadfield and Krähmer, [@hadKtheory; @hadkra], have shown that one should use the twisted Hochschild homology defined by Kustermans, Murphy and Tuset, [@kmt], rather than classical Hochschild homology. The twisted Hochschild homology of $\oqsln$ depends on an automorphism of $\oqsln$. When $\sigma$ is the modular automorphism associated to the Haar functional of $\oqsln$ ([@ks Section 11.3]), Hadfield and Krähmer have shown that the twisted Hochschild homology group of degree $n^2-1$ is reduced to the base field $K$; that is, $\mathrm{HH}^{\sigma}_{n^2-1}(\oqsln)=K$, so that the “dimension drop” phenomenon disappears. This result was recently generalised to any connected complex semisimple algebraic group $G$ by Brown and Zhang, [@bz]. In the last section of this paper, thanks to a (twisted) Poincaré duality between the twisted Hochschild homology associated to the modular automorphism and the Hochschild cohomology of $\oqsln$, [@hadkra; @vdb], we derive new information on the twisted Hochschild homology of $\oqsln$: roughly speaking, we show that, when $G$ is a connected complex semisimple algebraic group of type $A$, the rank of the algebraic group $G$ appears as a twisted homological invariant of the quantised coordinate ring of $G$. In an earlier paper, [@ll], we have calculated the automorphism group of $\oqmmn$ in the case that $m\neq n$. Partial results were obtained for the square case $\oqmn$, but technicalities prevented a resolution of the problem in this case. In a subsequent paper, we intend to use the results obtained in this paper to finish the calculation of the automorphism group of $\oqmn$. The deleting derivations algorithm in the algebra of quantum matrices. ====================================================================== In this section, we present briefly the deleting-derivations algorithm and use it to construct a tower of algebras from the algebra of quantum matrices to a quantum torus. This tower will be used in the next section to obtain the derivations of the algebra of quantum matrices from the derivations of the quantum torus. The algebra of quantum matrices. 
-------------------------------- Throughout this paper, we use the following conventions.\ $ $\ $\bullet$ The cardinality of a finite set $I$ is denoted by $|I|$.\ $\bullet$ ${ [ \hspace{-0.65mm} [}a,b {] \hspace{-0.65mm} ]}:= \{ i\in{\mathbb N} \mid a\leq i\leq b\}$.\ $\bullet$ $K$ denotes a field of characteristic 0 and $K^*:=K\setminus \{0\}$.\ $\bullet$ **$q\in K^*$ is not a root of unity**.\ $\bullet$ $n$ denotes a positive integer with $n>1$.\ $\bullet$ $R=\oqmn$ is the quantisation of the ring of regular functions on $n \times n$ matrices with entries in $K$; it is the $K$-algebra generated by the $n \times n $ indeterminates $Y_{{i,\alpha}}$, for $1 \leq i, \alpha \leq n$, subject to the following relations:\ $$\begin{array}{ll} Y_{i, \beta}Y_{i, \alpha}=q^{-1} Y_{i, \alpha}Y_{i ,\beta}, & (\alpha < \beta); \\ Y_{j, \alpha}Y_{i, \alpha}=q^{-1}Y_{i, \alpha}Y_{j, \alpha}, & (i<j); \\ Y_{j,\beta}Y_{i, \alpha}=Y_{i, \alpha}Y_{j,\beta}, & (i <j, \alpha > \beta); \\ Y_{j,\beta}Y_{i, \alpha}=Y_{i, \alpha} Y_{j,\beta}-(q-q^{-1})Y_{i,\beta}Y_{j,\alpha}, & (i<j, \alpha <\beta). \end{array}$$ It is well-known that $R$ can be presented as an iterated Ore extension over $K$, with the generators $Y_{{i,\alpha}}$ adjoined in lexicographic order. Thus the ring $R$ is a Noetherian domain; its skew-field of fractions is denoted by $F$. The deleting derivations algorithm and some related algebras. {#subsectionStrate0} ------------------------------------------------------------- First, recall, see [@c2], that the theory of deleting derivations can be applied to the iterated Ore extension $R=K[Y_{1,1}][Y_{1,2};\sigma_{1,2}]\dots [Y_{n,n};\sigma_{n,n},\delta_{n,n}]$ (where the indices are increasing for the lexicographic order $\leq$). The corresponding deleting derivations algorithm is called the standard deleting derivations algorithm. Before recalling its construction, we need to introduce some notation. - The lexicographic ordering on $\mathbb{N}^2$ is denoted by $\leq_s$. This order is often referred to as the standard ordering on $\mathbb{N}^2$. Recall that $({i,\alpha}) \leq_s (j,\beta)$ if and only if $ [(i < j) \mbox{ or } (i=j \mbox{ and } \alpha \leq \beta )]$. - Set $E=\left({ [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2 \cup \{(n,n+1)\} \right) \setminus \{(1,1)\}$. - Let $(j,\beta) \in E$ with $(j,\beta) \neq (n,n+1)$. The least element (relative to $\leq_s$) of the set $\left\{ ({i,\alpha}) \in E \mbox{ $\mid$ }(j,\beta) <_s ({i,\alpha}) \right\}$ is denoted by $(j,\beta)^{+}$. As described in [@c2], the standard deleting derivations algorithm constructs, for each $r \in E$, a family $\{Y_{{i,\alpha}}^{(r)}\}$, for ${({i,\alpha}) \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2}$, of elements of $F:={\mathrm{Frac}}(R)$, defined as follows.\ $ $ 1. If $r=(n,n+1)$, then $Y_{{i,\alpha}}^{(n,n+1)}=Y_{{i,\alpha}}$ for all $({i,\alpha}) \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2$.\ $ $ 2. Assume that $r=(j,\beta) \in E$ with $(j,\beta) \neq (n,n+1)$, and that the $Y_{{i,\alpha}}^{(r^{+})}$ for $({i,\alpha}) \in { [ \hspace{-0.65mm} [}1,n{] \hspace{-0.65mm} ]}^2$ are already constructed. 
Then, it follows from [@cauchoneff Théorème 3.2.1] that each $Y_{j,\beta}^{(r^+)} \neq 0$ and that, for all $({i,\alpha}) \in { [ \hspace{-0.65mm} [}1,n{] \hspace{-0.65mm} ]}^2$, we have $$Y_{{i,\alpha}}^{(r)}=\left\{ \begin{array}{ll} Y_{{i,\alpha}}^{(r^{+})}-Y_{i,\beta}^{(r^{+})} \left(Y_{j,\beta}^{(r^{+})}\right)^{-1} Y_{j,\alpha}^{(r^{+})} & \mbox{ if } i<j \mbox{ and } \alpha < \beta \\ Y_{{i,\alpha}}^{(r^{+})} & \mbox{ otherwise.} \end{array} \right.$$ As in [@cauchoneff], for all $(j,\beta) \in E$, the subalgebra of ${\mathrm{Frac}}(R)$ generated by the indeterminates $Y_{{i,\alpha}}^{(j,\beta)}$, with $({i,\alpha}) \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2$, is denoted by $R^{(j,\beta)}$. Also, $\overline{R}$ denotes the subalgebra of ${\mathrm{Frac}}(R)$ generated by the indeterminates obtained at the end of this algorithm; that is, $\overline{R}$ is the subalgebra of ${\mathrm{Frac}}(R)$ generated by the $T_{{i,\alpha}}:=Y_{{i,\alpha}}^{(1,2)}$ for each $({i,\alpha}) \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2$.\ Recall [@cauchoneff Theorem 3.2.1] that, for all $(j,\beta) \in E$, the algebra $R^{(j,\beta)}$ can be presented as an iterated Ore extension over $K$, with the generators $Y_{{i,\alpha}}^{(j,\beta)}$ adjoined in lexicographic order. Thus the algebra $R^{(j,\beta)}$ is a Noetherian domain.\ For all $(j,\beta) \in E$, the multiplicative system generated by the indeterminates $T_{{i,\alpha}}$, for $({i,\alpha}) \geq (j,\beta)$ with $i >1$ and $\alpha >1$, is denoted by $S_{(j,\beta)}$. As $T_{{i,\alpha}}=Y_{{i,\alpha}}^{(j,\beta)}$, for all $({i,\alpha}) \geq (j,\beta)$ with $i >1$ and $\alpha >1$, the set $S_{(j,\beta)}$ is a multiplicative system of regular elements of $R^{(j,\beta)}$. Moreover, the $T_{{i,\alpha}}$ such that $({i,\alpha}) \geq (j,\beta)$ with $i >1$ and $\alpha >1$ are normal in $R^{(j,\beta)}$. Hence, $S_{(j,\beta)}$ is an Ore set in $R^{(j,\beta)}$; so that one can form the localisation $$U_{(j,\beta)}:= R^{(j,\beta)}S_{(j,\beta)}^{-1}.$$ Clearly, the set of monomials of the form $(Y_{1,1}^{(j,\beta)})^{^{\gamma_{1,1}}} (Y_{1,2}^{(j,\beta)})^{^{\gamma_{1,2}}} \dots(Y_{n,n}^{(j,\beta)})^{^{\gamma_{n,n}}}$, with $\gamma_{i,\alpha} \in \mathbb{N} $ if $(i,\alpha) < (j,\beta)$ or $i=1$ or $\alpha=1$, and $\gamma_{i,\alpha} \in \mathbb{Z} $ otherwise, is a PBW basis of $U_{(j,\beta)}$. Further, recall from [@c2 Theorem 2.2.1] that $\Sigma_{(j,\beta)}:=\{(T_{j,\beta})^k \mid k \in \mathbb{N} \}$ is an Ore set in both $R^{(j,\beta)}$ and $R^{(j,\beta)^+}$, and that $$R^{(j,\beta)}\Sigma_{(j,\beta)}^{-1}=R^{(j,\beta)^+}\Sigma_{(j,\beta)}^{-1}.$$ Hence, we obtain the following result. \[lemmaUjbeta\] $R^{(j,1)}=R^{(j,2)}$ and $U_{(j,1)}=U_{(j,2)}$.\ Let $\beta > 1$. Then $R^{(j,\beta)}\Sigma_{(j,\beta)}^{-1}=R^{(j,\beta)^+}\Sigma_{(j,\beta)}^{-1}$ and $U_{(j,\beta)}=U_{(j,\beta)^+}\Sigma_{(j,\beta)}^{-1}$. Let $N \in \mathbb{N}^*$ and let $\Lambda=(\Lambda_{i,j})$ be a multiplicatively antisymmetric $N\times N$ matrix over $K^*$; that is, $\Lambda_{i,i}=1$ and $\Lambda_{j,i}=\Lambda_{i,j}^{-1}$ for all $i,j \in { [ \hspace{-0.65mm} [}1,N {] \hspace{-0.65mm} ]}$. The corresponding quantum affine space is denoted by $K_{\Lambda}[T_1,\dots,T_N]$; that is, $K_{\Lambda}[T_1,\dots,T_N]$ is the $K$-algebra generated by the $N$ indeterminates $T_1,\dots,T_N$ subject to the relations $T_i T_j =\Lambda_{i,j} T_j T_i $ for all $i,j \in { [ \hspace{-0.65mm} [}1,N {] \hspace{-0.65mm} ]}$. 
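The index bookkeeping underlying the deleting derivations algorithm recalled above can be illustrated with the following short Python sketch. It is an illustration only: it manipulates indices, not the elements $Y_{{i,\alpha}}^{(r)}$ themselves, which live in ${\mathrm{Frac}}(R)$. For a given $n$ it lists the set $E$, the successor map $(j,\beta)^{+}$, and the entries modified at each step; the number of steps with a nontrivial update is $(n-1)^2$, matching the count used in the Introduction.

```python
from itertools import product

def index_set(n):
    """The index set E = ([[1,n]]^2 union {(n, n+1)}) minus {(1,1)}, listed in
    the standard (lexicographic) order <=_s."""
    E = [(i, a) for i, a in product(range(1, n + 1), repeat=2) if (i, a) != (1, 1)]
    E.append((n, n + 1))
    return sorted(E)

def successor(r, E):
    """(j, beta)^+ : the least element of E strictly greater than r (None for (n, n+1))."""
    larger = [s for s in E if s > r]
    return min(larger) if larger else None

def modified_entries(r):
    """Entries (i, alpha) whose value changes when passing from Y^{(r^+)} to Y^{(r)}:
    those with i < j and alpha < beta, where r = (j, beta)."""
    j, beta = r
    return [(i, a) for i in range(1, j) for a in range(1, beta)]

n = 3
E = index_set(n)
nontrivial = 0
for r in reversed(E):          # walk the algorithm from (n, n+1) down to (1, 2)
    if r == (n, n + 1):
        print(r, ": initialisation, Y^{(n,n+1)} = Y, nothing is modified")
        continue
    changed = modified_entries(r)
    nontrivial += bool(changed)
    print(r, "successor:", successor(r, E), "modified entries:", changed)
print("steps with a nontrivial update:", nontrivial, "= (n-1)^2 =", (n - 1) ** 2)
```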
In [@c2 Section 2.2], Cauchon has shown that $\overline{R}$ can be viewed as the quantum affine space generated by the indeterminates $T_{{i,\alpha}}$ for $({i,\alpha}) \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2$, subject to the following relations. $$\begin{array}{ll} T_{i, \beta}T_{i, \alpha}=q^{-1}T_{i, \alpha}T_{i ,\beta}, & (\alpha < \beta); \\ T_{j, \alpha}T_{i, \alpha}=q^{-1}T_{i, \alpha}T_{j, \alpha}, & (i<j); \\ T_{j,\beta}T_{i, \alpha}=T_{i, \alpha}T_{j,\beta}, & (i <j, \alpha > \beta); \\ T_{j,\beta}T_{i, \alpha}=T_{i, \alpha} T_{j,\beta}, & (i<j, \alpha <\beta). \end{array}$$ Hence, $\overline{R}=K_{\Lambda}[T_{1,1},T_{1,2},\dots,T_{n,n}]$, where $\Lambda$ denotes the $n^2 \times n^2$ matrix defined as follows. Set $$A:=\left( \begin{array}{ccccc} 0 & 1 & 1 & \dots & 1 \\ -1 & 0 & 1 & \dots & 1 \\ \vdots & \ddots &\ddots&\ddots &\vdots \\ -1 & \dots & -1 & 0 & 1 \\ -1& \dots& \dots & -1 & 0 \\ \end{array} \right) \in \mathcal{M}_{n}(\mathbb{Z}),$$ and $$B:=\left( \begin{array}{ccccc} A & I & I & \dots & I \\ -I & A & I & \dots & I \\ \vdots & \ddots &\ddots&\ddots &\vdots \\ -I & \dots & -I & A & I \\ -I& \dots& \dots & -I & A \\ \end{array} \right) \in \mathcal{M}_{n^2}(\mathbb{Z}),$$ where $I$ denotes the identity matrix of $\mathcal{M}_n(\mathbb{Z})$. Then $\Lambda $ is the $n^2 \times n^2$ matrix whose entries are defined by $\Lambda_{k,l} =q^{b_{k,l}}$ for all $k,l \in { [ \hspace{-0.65mm} [}1, n^2 {] \hspace{-0.65mm} ]}$. Now, observe that $$U_{(2,2)}=K_{\Lambda}[T_{1,1},T_{1,2},\dots,T_{1,n},T_{2,1},T_{2,2}^{\pm 1},\dots,T_{2,n}^{\pm 1}, \dots, T_{n,1},T_{n,2}^{\pm 1},\dots,T_{n,n}^{\pm 1}].$$ In other words, $$U_{(2,2)}=\overline{R}S^{-1},$$ where $S=S_{(2,2)}$ is the multiplicative system generated by the $T_{{i,\alpha}}$ with $i>1$ and $\alpha > 1$.\ In order to investigate the Lie algebra of derivations of $R$, we also need to introduce the following algebras.\ For all $(j,\beta) \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2$ with $j=1$ or $\beta =1$, the multiplicative system generated by those $T_{{i,\alpha}}$ such that $({i,\alpha}) > (j,\beta)$ and either $i=1$ or $\alpha=1$ is denoted by $\mathcal{S}_{(j,\beta)}$. Clearly, $\mathcal{S}_{(j,\beta)}$ is an Ore set in $U_{(2,2)}$. Set $$V_{(j,\beta)}:=U_{(2,2)}\mathcal{S}_{(j,\beta)}^{-1},$$ and observe that $V_{(n,1)}=U_{(2,2)}$. As the set of monomials $ T_{1,1}^{\gamma_{1,1}} T_{1,2}^{\gamma_{1,2}} \dots T_{n,n}^{\gamma_{n,n}}$, with $\gamma_{i,\alpha} \in \mathbb{N} $ if $i=1$ or $\alpha=1$, and $\gamma_{i,\alpha} \in \mathbb{Z} $ otherwise, is a PBW basis of $U_{(2,2)}$, it is easy to check that the set of monomials $ T_{1,1}^{\gamma_{1,1}} T_{1,2}^{\gamma_{1,2}} \dots T_{n,n}^{\gamma_{n,n}}$, with $\gamma_{i,\alpha} \in \mathbb{N} $ if $(i,\alpha) \leq (j,\beta)$ and either $i=1$ or $\alpha=1$, and $\gamma_{i,\alpha} \in \mathbb{Z} $ otherwise, is a PBW basis of $V_{(j,\beta)}$ Finally, set $V_{(1,0)}:=\qtor$, where $\qtor$ denotes the quantum torus associated to the quantum affine space $\overline{R}$; that is, the localisation of $\overline{R}$ with respect of the multiplicative system generated by all the $T_{{i,\alpha}}$. Recall that the set of monomials $\{ T_{1,1}^{\gamma_{1,1}} T_{1,2}^{\gamma_{1,2}} \dots T_{n,n}^{\gamma_{n,n}} \}$, with $\gamma_{i,\alpha} \in \mathbb{Z} $, forms a PBW basis of $\qtor$. 
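As a quick sanity check on the matrix $B$ defined above, the following Python sketch builds $B$, verifies that it is antisymmetric (so that $\Lambda_{k,l}=q^{b_{k,l}}$ is indeed multiplicatively antisymmetric), and reads off a few exponents to compare with the relations satisfied by the $T_{{i,\alpha}}$, using the convention $T_k T_l = q^{b_{k,l}} T_l T_k$ and the linear index $k=(i-1)n+\alpha$. The sketch is illustrative only.

```python
import numpy as np

def build_B(n):
    """The n^2 x n^2 integer matrix B: blocks A on the diagonal, +I above the
    diagonal and -I below it, where A is zero on its diagonal, +1 above and -1 below."""
    A = np.triu(np.ones((n, n), dtype=int), 1) - np.tril(np.ones((n, n), dtype=int), -1)
    I = np.eye(n, dtype=int)
    B = np.zeros((n * n, n * n), dtype=int)
    for r in range(n):
        for c in range(n):
            B[r * n:(r + 1) * n, c * n:(c + 1) * n] = A if r == c else (I if r < c else -I)
    return B

def idx(i, alpha, n):
    """0-based linear index of T_{i,alpha}, i.e. k = (i-1)*n + alpha."""
    return (i - 1) * n + (alpha - 1)

n = 3
B = build_B(n)
assert (B.T == -B).all()  # B is antisymmetric, so Lambda_{k,l} = q^{b_{k,l}}
                          # is multiplicatively antisymmetric

# Sample exponents b_{k,l}, compared with the relations listed above:
print(B[idx(1, 1, n), idx(1, 2, n)])  # same row, alpha < beta        -> 1
print(B[idx(1, 2, n), idx(2, 2, n)])  # same column, i < j            -> 1
print(B[idx(1, 2, n), idx(2, 1, n)])  # i < j, alpha > beta: commute  -> 0
print(B[idx(1, 1, n), idx(2, 2, n)])  # i < j, alpha < beta: commute  -> 0
```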
Our proof will use the tower of algebras: $$\begin{aligned} \lefteqn{R=U_{(n,n+1)} \subset U_{(n,n)} \subset \dots \subset U_{(2,3)} \subset U_{(2,2)}=V_{(n,1)} \subset V_{(n-1,1)}} \nonumber\\ &&\subset \dots \subset V_{(2,1)} \subset V_{(1,n)} \subset \dots \subset V_{(1,0)}=\qtor \label{tower} \end{aligned}$$ Quantum minors and the centres of $\oqmn$, $\qtor$ and $U_{(2,2)}$. {#sectioncentre} ------------------------------------------------------------------- The algebra $\oqmn$ has a special element, the [*quantum determinant*]{}, denoted by $\detq$, and defined by $$\detq := \sum_{\sigma}\, (-q)^{l(\sigma)}Y_{1,\sigma(1)}\cdots Y_{n,\sigma(n)},$$ where the sum is taken over the permutations of $\{1, \dots, n\}$ and $l(\sigma)$ is the usual length function on such permutations. The quantum determinant is a central element of $\oqmn$, see, for example, [@pw Theorem 4.6.1]. If $I$ and $\Gamma$ are $t$-element subsets of $\{1, \dots, n\}$, then the quantum determinant of the subalgebra of $\oqmn$ generated by $Y_{i,\alpha}$, with $i\in I$ and $\alpha \in \Gamma$, is denoted by $[I\mid \Gamma]$. The elements $[I\mid \Gamma]$ are the [*quantum minors*]{} of $\oqmn$. In order to describe the centres of $\qtor$ and $U_{(2,2)}$, we introduce the following quantum minors of $R$. For $1\leq i \leq 2n-1$, let $b_i$ be the quantum minor defined as follows. $$b_i:= \left\{ \begin{array}{ll} \left[1, \dots , i \mid n-i+1 , \dots , n \right] & \mbox{ if }1 \leq i \leq n \\ \left[i-n+1, \dots , n \mid 1 , \dots , 2n-i \right] & \mbox{ if } n < i \leq 2n-1 \end{array} \right.$$ For convenience, we set $b_0=b_{2n}=1$. Note that these $b_i$ are a priori elements of $R$. However, it turns out that they also belong to the quantum torus $\qtor$, as the following result shows. \[normalT\] For $1 \leq i \leq 2n-1$, we have $$b_i= \left\{ \begin{array}{ll} T_{1,n-i+1}T_{2,n-i+2}\dots T_{i,n} & \mbox{\rm if } 1 \leq i \leq n \\ T_{i-n+1,1}T_{i-n+2,2}\dots T_{n,2n-i} & \mbox{\rm if } n \leq i \leq 2n-1 \\ \end{array} \right.$$ This follows from [@c2 Proposition 5.2.1] (see also [@ll Lemma 2.2]). The centre of an algebra $A$ is denoted by $Z(A)$. Set $\Delta_i:=b_i b_{n+i}^{-1}$ for all $i\in \{1,\dots,n\}$. Notice that $\Delta_n=\mathrm{det}_q$. It follows from Lemma \[normalT\] that the $\Delta_i$ belong to the quantum torus $\qtor$: in fact, the $\Delta_i$ are also central. The following result is established in [@ll Theorem 2.4]. \[centreqtor\] $Z(\qtor)=K[\Delta_1^{\pm 1}, \dots , \Delta_n^{\pm 1}]$. It is useful to record for later use the expression for the $\Delta_i$ in terms of the $T_{i,\alpha}$. \[delta=T\] $\Delta_i = T_{1,n-i+1}T_{2,n-i+2}\dots T_{i,n} T_{i+1,1}^{-1}T_{i+2,2}^{-1}\dots T_{n, n-i}^{-1}$, for $1\leq i\leq n$. This follows easily from Lemma \[normalT\], noting the commutation relations between the $T_{i,\alpha}$. We finish this section by describing the centre of the algebra $U_{(2,2)}$. First, observe that $Z(U_{(2,2)}) \subseteq Z(\qtor)=K[\Delta_1^{\pm 1}, \dots , \Delta_n^{\pm 1}]$, since $\qtor$ is a localisation of $U_{(2,2)}$. Next, by using the PBW-basis of $U_{(2,2)}$ together with the expressions for the $\Delta_i$ as products of certain $T_{i,\alpha}$ coming from Lemma \[delta=T\], we obtain the following result. \[centreU\] $Z(U_{(2,2)})=K[\Delta_n]=K[\mathrm{det}_q]$. Derivations =========== Recall that $R$ denotes the algebra of $n \times n $ generic quantum matrices. Our aim in this section is to investigate ${{\rm Der}}(R)$, the Lie algebra of derivations of $R$. 
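Before analysing derivations, we record a small computational check of the centrality statements of the previous subsection. The Python sketch below relies on the standard fact, taken here as given rather than proved, that since $q$ is not a root of unity a monomial $T^{\underline{\gamma}}$ is central in the quantum torus $\qtor$ exactly when $B\underline{\gamma}=0$, where $B$ is the integer matrix introduced above; it then verifies that the exponent vectors of $\Delta_1,\dots,\Delta_n$ given by Lemma \[delta=T\] satisfy this condition for a small value of $n$.

```python
import numpy as np

def build_B(n):
    # same construction of B as in the previous sketch
    A = np.triu(np.ones((n, n), dtype=int), 1) - np.tril(np.ones((n, n), dtype=int), -1)
    I = np.eye(n, dtype=int)
    B = np.zeros((n * n, n * n), dtype=int)
    for r in range(n):
        for c in range(n):
            B[r * n:(r + 1) * n, c * n:(c + 1) * n] = A if r == c else (I if r < c else -I)
    return B

def idx(i, alpha, n):
    return (i - 1) * n + (alpha - 1)

def delta_exponent(i, n):
    """Exponent vector of Delta_i = T_{1,n-i+1} ... T_{i,n} T_{i+1,1}^{-1} ... T_{n,n-i}^{-1}."""
    gamma = np.zeros(n * n, dtype=int)
    for k in range(1, i + 1):
        gamma[idx(k, n - i + k, n)] += 1      # the factors of b_i
    for k in range(1, n - i + 1):
        gamma[idx(i + k, k, n)] -= 1          # the inverted factors of b_{n+i}
    return gamma

n = 4
B = build_B(n)
for i in range(1, n + 1):
    gamma = delta_exponent(i, n)
    # T^gamma is central in the quantum torus iff every q-commutation exponent
    # vanishes, i.e. B @ gamma = 0 (q is not a root of unity).
    assert not (B @ gamma).any()
print("Delta_1, ..., Delta_n all pass the centrality test")
```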
Let $D$ be a derivation of $R$. First, as there exists a multiplicative system $\Sigma$ of $R$ such that $R\Sigma^{-1}=\qtor=V_{(1,0)}$, see [@cauchoneff Theorem 3.3.1], the derivation $D$ extends (uniquely) to a derivation of the quantum torus $\qtor$. It follows from [@op Corollary 2.3] that $D$ can be written as $$D=\mathrm{ad}_x + \theta,$$ where $x \in \qtor=V_{(1,0)}$ and $\theta$ is a derivation of $\qtor$ such that $\theta(T_{i,\alpha})=z_{i,\alpha}T_{i,\alpha}$ with $z_{i,\alpha} \in Z(\qtor)$ for all $(i,\alpha) \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2$. For $\underline{\gamma} \in \mathbb{Z}^{n^2}$, set $$T^{\underline{\gamma}}:=T_{1,1}^{\gamma_{1,1}}T_{1,2}^{\gamma_{1,2}} \dots T_{n,n}^{\gamma_{n,n}}.$$ As the set of monomials $\{T^{\underline{\gamma}}\}_{\underline{\gamma} \in \mathbb{Z}^{n^2}}$ forms a PBW basis of $\qtor$, one can write $$x= \sum_{\underline{\gamma} \in \mathcal{E}} c_{\underline{\gamma}} T^{\underline{\gamma}},$$ where $\mathcal{E}$ is a finite subset of $\mathbb{Z}^{n^2}$ and $c_{\underline{\gamma}}\in K$. Moreover, as $\mathrm{ad}_x=\mathrm{ad}_{x+z}$ for all $z \in Z(\qtor)$, one can assume that, for all $\underline{\gamma} \in \mathcal{E}$, the monomial $ T^{\underline{\gamma}}$ does not belong to $Z(\qtor)$. Next, recall that an element $y=\sum_{\underline{\gamma} \in \mathbb{Z}^{n^2}} y_{\underline{\gamma}} T^{\underline{\gamma}} \in \qtor$ is central if and only if $T^{\underline{\gamma}} \in Z(\qtor)$ for each $\underline{\gamma} \in \mathbb{Z}^{n^2}$ such that $y_{\underline{\gamma}} \neq 0$. Denote by $\mathcal{F}$ the set of all $\underline{\gamma} \in \mathbb{Z}^{n^2}$ such that $T^{\underline{\gamma}} \in Z(\qtor)$. Then, for all $({i,\alpha}) \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2$, we can write $z_{{i,\alpha}}$ in the form $$z_{{i,\alpha}}= \sum_{\underline{\gamma} \in \mathcal{F}} z_{{i,\alpha},\underline{\gamma}} T^{\underline{\gamma}},$$ with $z_{{i,\alpha},\underline{\gamma}} \in K$. Let $0 \leq \beta \leq n$. Then $x \in V_{(1,\beta)}$. The proof is by induction on $\beta$. The case $\beta=0$ follows from the above discussion, because $V_{(1,0)} = \qtor$. Hence, assume that $\beta \geq 1$. It follows from the inductive hypothesis that $$x= \sum_{\underline{\gamma} \in \mathcal{E}} c_{\underline{\gamma}} T^{\underline{\gamma}},$$ where $\mathcal{E}$ is a finite subset of the set $\{ \underline{\gamma} \in \mathbb{Z}^{n^2} \mid \gamma_{1,1} \geq 0, \dots, \gamma_{1,\beta-1} \geq 0 \mbox{ and } T^{\underline{\gamma}} \notin Z(\qtor)\}$. We need to prove that $\gamma_{1,\beta} \geq 0$. Observe that, by construction, $ V_{(1,\beta)}$ is obtained from $R$ by a sequence of localisations. Thus, $D$ extends to a derivation of $V_{(1,\beta)}$. Let $(i,\alpha) \neq (1,\beta)$. Then $D(T_{{i,\alpha}}) \in V_{(1,\beta)}$, since $T_{{i,\alpha}} \in V_{(1,\beta)}$; that is, $$\begin{aligned} \label{v1beta} xT_{{i,\alpha}}-T_{{i,\alpha}}x+z_{{i,\alpha}}T_{{i,\alpha}}\in V_{(1,\beta)}.\end{aligned}$$ Set $$x_+:= {\sum_{\substack{\underline{\gamma} \in \mathcal{E}\\ \gamma_{1,\beta} \geq 0}}} c_{\underline{\gamma}} T^{\underline{\gamma}}, \qquad x_-:= {\sum_{\substack{\underline{\gamma} \in \mathcal{E}\\ \gamma_{1,\beta} < 0}}} c_{\underline{\gamma}} T^{\underline{\gamma}}.$$ We need to prove that $x_-=0$. 
It follows from (\[v1beta\]) that $$u:=x_-T_{{i,\alpha}}-T_{{i,\alpha}}x_-+z_{{i,\alpha}}T_{{i,\alpha}}\in V_{(1,\beta)}.$$ Now, $$\begin{aligned} \label{ubasis} u = {\sum_{\substack{\underline{\gamma} \in \mathcal{E}\\ \gamma_{1,\beta} < 0}}} (q^{-exp({i,\alpha},\underline{\gamma},+)} -q^{-exp({i,\alpha},\underline{\gamma},-)})c_{\underline{\gamma}} T^{\underline{\gamma}+\varepsilon_{{i,\alpha}}}+\sum_{\underline{\gamma} \in \mathcal{F}} q^{-exp({i,\alpha},\underline{\gamma},+)} z_{{i,\alpha},\underline{\gamma}}T^{\underline{\gamma}+\varepsilon_{{i,\alpha}}} \end{aligned}$$ where $$exp({i,\alpha},\underline{\gamma},-):=\sum_{k=1}^{i-1}\gamma_{k,\alpha} +\sum_{k=1}^{\alpha-1}\gamma_{i,k}\, , \qquad exp({i,\alpha},\underline{\gamma},+):=\sum_{k=i+1}^{n}\gamma_{k,\alpha} +\sum_{k=\alpha+1}^{n}\gamma_{i,k}$$ and $\varepsilon_{{i,\alpha}}$ is the element of $\mathbb{Z}^{n^2}$ that has $1$ in the $({i,\alpha})$ position and zero elsewhere. As we have assumed that the monomial $ T^{\underline{\gamma}}$ does not belong to $Z(\qtor)$ for all $\underline{\gamma} \in \mathcal{E}$, we have: $$\mbox{for all }\underline{\gamma} \in \mathcal{E}, \mbox{and for all }\underline{\gamma}' \in \mathcal{F}, \underline{\gamma}+\varepsilon_{{i,\alpha}} \neq \underline{\gamma}' + \varepsilon_{{i,\alpha}}.$$ Hence, (\[ubasis\]) gives the expression of $u$ in the PBW basis of $\qtor$. On the other hand, as $u$ belongs to $V_{(1,\beta)}$, we obtain $$u = \sum_{\underline{\gamma} \in \mathcal{E}'} x_{\underline{\gamma}}T^{\underline{\gamma}},$$ where $\mathcal{E}'$ is a finite subset of $\{ \underline{\gamma} \in \mathbb{Z}^{n^2} \mid \gamma_{1,1} \geq 0, \dots, \gamma_{1,\beta} \geq 0\}$. Comparing the two expressions of $u$ in the PBW basis of $\qtor$ leads to $q^{-exp({i,\alpha},\underline{\gamma},+)} -q^{-exp({i,\alpha},\underline{\gamma},-)}=0$ for all $\underline{\gamma} \in \mathcal{E}$ such that $\gamma_{1,\beta} < 0$ and $c_{\underline{\gamma}} \neq 0$. Hence, $$x_-T_{{i,\alpha}}-T_{{i,\alpha}}x_- = {\sum_{\substack{\underline{\gamma} \in \mathcal{E}\\ \gamma_{1,\beta} < 0}}} (q^{-exp({i,\alpha},\underline{\gamma},+)} -q^{-exp({i,\alpha},\underline{\gamma},-)}) c_{\underline{\gamma}} T^{\underline{\gamma}+\varepsilon_{{i,\alpha}}}=0$$ for all $({i,\alpha}) \neq (1,\beta)$. In other words, $x_-$ commutes with those $T_{{i,\alpha}} $ such that $({i,\alpha}) \neq (1,\beta)$. Now, recall from Lemma \[delta=T\] that $\Delta_{n+1-\beta} = T_{1,\beta}T_{2,\beta+1} \dots T_{n+1-\beta,n}T_{n+2-\beta,1}^{-1}T_{n+3-\beta,2}^{-1} \dots T_{n,\beta-1}^{-1} $ is central in $\qtor$. Hence, $x_-$ also commutes with $T_{1,\beta}$. This implies that $x_-\in Z(\qtor)$; so that $x_-$ can be written as $$x_-= \sum_{\underline{\gamma} \in \mathcal{F}} d_{\underline{\gamma}} T^{\underline{\gamma}}.$$ Hence, $x_-=0$, because $\mathcal{E} \cap \mathcal{F}= \emptyset$; so that $x=x_+ \in V_{(1,\beta)}$, as desired. The following result is proved by using similar arguments. Let $2\leq j\leq n$. Then $x \in V_{(j,1)}$. In particular, $ x\in V_{(n,1)} = U_{(2,2)}$. The derivation $D$ of $R$ extends to a derivation of $U_{(2,2)}$, since $U_{(2,2)}$ is obtained from $R$ by a sequence of localisations; so $D(T_{{i,\alpha}}) \in U_{(2,2)}$ for all $({i,\alpha}) \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2$. 
Hence $$xT_{{i,\alpha}}-T_{{i,\alpha}}x+z_{{i,\alpha}}T_{{i,\alpha}}= D(T_{{i,\alpha}}) \in U_{(2,2)}.$$ As we have proved that $x \in U_{(2,2)}$, this implies that $z_{{i,\alpha}}T_{{i,\alpha}} \in U_{(2,2)}$ for all $({i,\alpha}) \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2$. If $i \geq 2$ and $\alpha \geq 2$, then $T_{i,\alpha }$ is invertible in $U_{(2,2)}$, so that $z_{{i,\alpha}} \in U_{(2,2)} \cap Z(\qtor) = Z(U_{(2,2)})$. However, $Z(U_{(2,2)})=K[\Delta_n]$ by Lemma \[centreU\]; so $z_{{i,\alpha}} \in K[\Delta_n] \subseteq R$ in this case. In the other cases, at this stage in the proof we can only prove a weaker result. Assume that $i=1$ and $\alpha >1$. Then $z_{1,\alpha}T_{1,\alpha} \in U_{(2,2)}$. On the other hand, as $z_{1,\alpha}$ belongs to the centre of the quantum torus $\qtor$, one can write $z_{1,\alpha}$ as follows: $$z_{1,\alpha}=P(\Delta_1,\dots,\Delta_n) \in K[\Delta_1^{\pm 1}, \dots , \Delta_n^{\pm 1}].$$ Now, using the expressions of the $\Delta_i$ as products of $T_{j,\beta}^{\pm 1}$ coming from Lemma \[delta=T\], we obtain $$\begin{aligned} \label{zcentral} z_{1,\alpha}=\sum_{\underline{\gamma} \in \mathcal{Z}}z_{1,\alpha,\underline{\gamma}} T^{\underline{\gamma}},\end{aligned}$$ where $\mathcal{Z}$ denotes the set of those $\underline{\gamma}=(\gamma_{1,1},\gamma_{1,2},\dots,\gamma_{n,n}) \in \mathbb{Z}^{n^2}$ such that 1. $\gamma_{1,1} =\gamma_{2,2}=\dots= \gamma_{n,n} $ 2. $\gamma_{1,\beta}= \gamma_{2,\beta+1} = \dots = \gamma_{n-\beta+1,n}= -\gamma_{n-\beta+2,1}=\dots= -\gamma_{n,\beta-1}$ for all $\beta \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}$, and $z_{1,\alpha,\underline{\gamma}} \in K$ for all $\underline{\gamma} \in \mathcal{Z}$. Hence, $$z_{1,\alpha}T_{1,\alpha}=\sum_{\underline{\gamma} \in \mathcal{Z}}z'_{{i,\alpha},\underline{\gamma}} T^{\underline{\gamma}+\varepsilon_{1,\alpha}} \in U_{(2,2)},$$ where $z'_{1,\alpha,\underline{\gamma}}=\qdot z_{1,\alpha,\underline{\gamma}}$ for all $\underline{\gamma} \in \mathcal{Z}$. As the monomials $T_{1,1}^{\gamma_{1,1}} T_{1,2}^{\gamma_{1,2}} \dots T_{n,n}^{\gamma_{n,n}}$, where $\gamma_{j,\beta} \in \mathbb{N} $ when either $j=1$ or $\beta=1$, and $\gamma_{j,\beta} \in \mathbb{Z} $ otherwise, form a PBW basis of $U_{(2,2)}$, we obtain $z'_{1,\alpha,\underline{\gamma}}=0$ whenever either $\gamma_{1,1} < 0$, or $\gamma_{1, \beta } \neq 0$ for some $\beta \neq 1,\alpha$, or $\gamma_{1,\alpha} \notin \{ -1 , 0\}$. Hence we easily deduce from (\[zcentral\]) and Lemma \[delta=T\] that there exist polynomials $P_{1,\alpha},Q_{1,\alpha} \in K[\Delta_n]$ such that $$z_{1,\alpha}=Q_{1,\alpha}(\Delta_n)\Delta_{n+1-\alpha}^{-1}+ P_{1,\alpha}(\Delta_n).$$ Similar computations for $z_{i,1}$, for $i >1$, and for $z_{1,1}$ lead to the following result. \[propositionu22\] 1. $x \in U_{(2,2)}$. 2. Let $(i,\alpha) \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2$. Then there exist polynomials $P_{{i,\alpha}},Q_{{i,\alpha}} \in K[\Delta_n]$ such that $$z_{i,\alpha}= \left\{ \begin{array}{ll} Q_{{i,\alpha}}(\Delta_n)\Delta_{n+1-\alpha}^{-1}+P_{{i,\alpha}}(\Delta_n) & \mbox{\rm if }i=1,\\ Q_{{i,\alpha}}(\Delta_n)\Delta_{i-1}+P_{{i,\alpha}}(\Delta_n) & \mbox{\rm if }\alpha=1,\\ P_{{i,\alpha}}(\Delta_n) & \mbox{\rm otherwise.} \end{array} \right.$$ (Here we use the convention $\Delta_0=b_0b_n^{-1}=\Delta_n^{-1}$.) Next, we have to deal with a second kind of localisation that involves inverting an element which is not normal. This is done in several steps. \[lemmau23\] 1. $x \in U_{(2,3)}$. 2. 
$z_{1,1}+z_{2,2}=z_{1,2}+z_{2,1}$. 3. $z_{1,1},z_{1,2},z_{2,1}$ and $z_{2,2}$ belong to $Z(R)=K[\Delta_n]$. 4. $D(Y_{{i,\alpha}}^{(2,3)})=\mathrm{ad}_x(Y_{{i,\alpha}}^{(2,3)})+z_{{i,\alpha}} Y_{{i,\alpha}}^{(2,3)}$ for all $({i,\alpha}) \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2$. $\bullet$ [**Step 1: we prove that $x \in U_{(2,3)}$.**]{} In order to simplify the notation, set $Z_{{i,\alpha}}:=Y_{{i,\alpha}}^{(2,3)}$ for all $({i,\alpha}) \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2$. Moreover, for all $\underline{\gamma} \in \mathcal{E}:= \mathbb{N}^{n} \times (\mathbb{N} \times \mathbb{Z}^{n-1}) \times \dots \times (\mathbb{N} \times \mathbb{Z}^{n-1}) \subset \mathbb{Z}^{n^2} $, set $$Z^{\underline{\gamma}}:=Z_{1,1}^{\gamma_{1,1}}Z_{1,2}^{\gamma_{1,2}} \dots Z_{n,n}^{\gamma_{n,n}}.$$ It follows from Proposition \[propositionu22\] that $x $ belongs to $U_{(2,2)}$. Using the notation of the previous section, it follows from Lemma \[lemmaUjbeta\] that $$U_{(2,2)}=U_{(2,3)}\Sigma_{(2,2)}^{-1};$$ so that $x$ can be written as $$x= \sum_{\underline{\gamma} \in \mathcal{E}} c_{\underline{\gamma}} Z^{\underline{\gamma}},$$ with $c_{\underline{\gamma}}\in K$. Set $$x_+:= {\sum_{\substack{\underline{\gamma} \in \mathcal{E}\\ \gamma_{2,2} \geq 0}}} c_{\underline{\gamma}} Z^{\underline{\gamma}}, \qquad x_-:= {\sum_{\substack{\underline{\gamma} \in \mathcal{E}\\ \gamma_{2,2} < 0}}} c_{\underline{\gamma}} Z^{\underline{\gamma}},$$ with $ c_{\underline{\gamma}} \in K$. Assume that $x_- \neq 0$. Denote by $B$ the subalgebra of $U_{(2,2)}$ generated by the $Z_{{i,\alpha}}$ with $({i,\alpha}) \neq (2,2)$ and the $Z_{{i,\alpha}}^{-1}$ with $i \geq 2$ and $\alpha \geq 2 $ but $({i,\alpha}) \neq (2,2)$. Hence $U_{(2,2)}=U_{(2,3)}\Sigma_{(2,2)}^{-1}$ is a left $B$-module with basis $\{Z_{2,2}^{l}\}_{l \in \mathbb{Z}}$, so that there are elements $b_{l} \in B$ such that $$x_-= \sum_{l =l_0}^{-1} b_{l} Z_{2,2}^{l}$$ with $l_0 <0$ and $b_{l_0}\neq 0$. (Observe that this makes sense because we have assumed that $x_- \neq 0$.) The derivation $D$ of $R$ extends to a derivation of $U_{(2,3)}$, since $U_{(2,3)}$ is obtained from $R$ by a sequence of localisations; so $D(Z_{1,1}) \in U_{(2,3)}$. Now, $Z_{1,1} = T_{1,1} + T_{1,2}T_{2,2}^{-1}T_{2,1} = T_{1,1} + Z_{1,2}Z_{2,2}^{-1}Z_{2,1}$; so that $$\begin{aligned} \label{zinU23} x_-Z_{1,1}-Z_{1,1}x_-+z_{1,1}Z_{1,1} + (z_{1,2}+z_{2,1}-z_{1,1}-z_{2,2})Z_{1,2}Z_{2,2}^{-1}Z_{2,1} &\in& U_{(2,3)}.\end{aligned}$$ Now $$Z_{2,2}^{-k}Z_{1,1} =Z_{1,1}Z_{2,2}^{-k}+ q(q^{2k}-1) Z_{1,2}Z_{2,1}Z_{2,2}^{-k-1}$$ for each positive integer $k$. Hence, $$\begin{aligned} \lefteqn{x_-Z_{1,1}- Z_{1,1}x_- +z_{1,1}Z_{1,1} + (z_{1,2}+z_{2,1}-z_{1,1}-z_{2,2}) Z_{1,2}Z_{2,2}^{-1}Z_{2,1} } \nonumber \\ & = & \sum_{ l =l_0}^{-1} b'_{l}Z_{2,2}^{l} + \sum_{l =l_0}^{-1} q(q^{-2l}-1)b_{l}Z_{1,2}Z_{2,1}Z_{2,2}^{l-1}\nonumber\\ &&\qquad - \,(z_{1,2}+z_{2,1}-z_{1,1}-z_{2,2}) Z_{1,2}Z_{2,2}^{-1}Z_{2,1} +z_{1,1}Z_{1,1} \in U_{(2,3)}.\label{step1z11}\end{aligned}$$ It follows from Proposition \[propositionu22\] that $z_{1,1} \mathrm{det}_q$, $ z_{1,2} b_{n-1}$ and $ z_{2,1}b_{n+1} $ belong to $R \subset U_{(2,3)}$. On the other hand, it follows from [@c2 Proposition 5.2.1] that $\mathrm{det}_q = (Z_{1,1}Z_{2,2}-q Z_{1,2}Z_{2,1})Z_{3,3} \dots Z_{n,n}$, while $b_{n-1}=Z_{1,2}Z_{2,3} \dots Z_{n-1,n}$ and $b_{n+1}=Z_{2,1} \dots Z_{n,n-1}$. Hence each of $z_{1,1} (Z_{1,1}Z_{2,2}-q Z_{1,2}Z_{2,1})$, $z_{1,2} Z_{1,2}$ and $z_{2,1}Z_{2,1}$ belongs to $U_{(2,3)}$. 
As $z_{2,2} \in R$, by Proposition \[propositionu22\], we obtain $$(z_{1,2}+z_{2,1}-z_{1,1}-z_{2,2})Z_{1,2}Z_{2,1}(Z_{1,1}Z_{2,2}-q Z_{1,2}Z_{2,1}) \in U_{(2,3)}.$$ Multiplying (\[step1z11\]) on the right by $(Z_{1,1}Z_{2,2}-q Z_{1,2}Z_{2,1})Z_{2,2}$ leads to: $$\begin{aligned} \sum_{ l =l_0}^{-1} b'_{l}(Z_{1,1}Z_{2,2}-q Z_{1,2}Z_{2,1})Z_{2,2}^{l+1} + \sum_{ l =l_0}^{-1} q(q^{-2l}-1)b_{l}Z_{1,2}Z_{2,1}(Z_{1,1}Z_{2,2}-q Z_{1,2}Z_{2,1})Z_{2,2}^{l} \in U_{(2,3)}.\end{aligned}$$ In other words, $$\sum_{ l =l_0+1}^{1} b''_{l}Z_{2,2}^{l} - q^2 (q^{-2l_0}-1) b_{l_0}Z_{1,2}^2Z_{2,1}^2Z_{2,2}^{l_0} \in U_{(2,3)}.$$ As $U_{(2,3)}$ is a left $B$-module with basis $\{Z_{2,2}^{l}\}_{l \in \mathbb{N}}$, this implies that $b_{l_0}=0$, a contradiction. Hence $x_-=0$ and $x=x_+ \in U_{(2,3)}$, as desired.\ $\bullet$ [**Step 2: we prove that $z_{1,1}+z_{2,2}=z_{1,2}+z_{2,1}$.**]{} As $x_-=0$ and $z_{1,1}(Z_{1,1}Z_{2,2}-qZ_{1,2}Z_{2,1}) \in U_{(2,3)}$, we deduce from (\[zinU23\]) that $$y:=(z_{1,2}+z_{2,1}-z_{1,1}-z_{2,2})Z_{1,2}Z_{2,1}(Z_{1,1}Z_{2,2}-q Z_{1,2}Z_{2,1}) \in U_{(2,3)} Z_{2,2}.$$ So $y$ is an element of $U_{(2,3)}$ which $q$-commutes with $Z_{1,1}$ and which belongs to $U_{(2,3)} Z_{2,2}$. We show next that this forces $y=0$, so that $z_{1,1}+z_{2,2}=z_{1,2}+z_{2,1}$, as desired.\ Since $U_{(2,3)}$ is a left $B$-module with basis $\{Z_{2,2}^{l}\}_{l \in \mathbb{N}}$, one can write $y=\sum_{l \in \mathbb{N}}y_l Z_{2,2}^l$ with $y_l \in B$ equal to zero except for at most a finite number of them. As $y$ belongs to $U_{(2,3)} Z_{2,2}$, it is easy to show that $y_0=0$, so that $$y={\sum_{\substack{l \in \mathbb{N}\\ l \geq 1}}} y_l Z_{2,2}^l.$$ On the other hand, as $y$ $q$-commutes with $Z_{1,1}$, there exists $a \in \mathbb{Z}$ such that $Z_{1,1}y=q^a y Z_{1,1}$. In other words, $${\sum_{\substack{l \in \mathbb{N}\\ l \geq 1}}}Z_{1,1}y_l Z_{2,2}^l = {\sum_{\substack{l \in \mathbb{N}\\ l \geq 1}}} q^ay_l Z_{2,2}^lZ_{1,1}.$$ As $Z_{2,2}^l Z_{1,1}=Z_{1,1}Z_{2,2}^l +q(q^{-2l}-1)Z_{1,2}Z_{2,1}Z_{2,2}^{l-1}$ for every positive integer $l$, we get $$\begin{aligned} {\sum_{\substack{l \in \mathbb{N}\\ l \geq 1}}}Z_{1,1}y_l Z_{2,2}^l = {\sum_{\substack{l \in \mathbb{N}\\ l \geq 1}}} q^ay_l Z_{1,1} Z_{2,2}^l +{\sum_{\substack{l \in \mathbb{N}\\ l \geq 1}}} q^{a+1} (q^{-2l}-1)y_l Z_{1,2} Z_{2,1} Z_{2,2}^{l-1}.\end{aligned}$$ Assume that $y \neq 0$ and let $l_0$ be minimal such that $y_{l_0}\neq 0$. Observe that $l_0 \geq 1$. As $U_{(2,3)}$ is a left $B$-module with basis $\{Z_{2,2}^{l}\}_{l \in \mathbb{N}}$, we deduce from the previous equality that we should have $ 0=q^{a+1}(q^{-2l_0}-1)y_{l_0} Z_{1,2} Z_{2,1}$, a contradiction since $l_0 \geq 1$ and $q$ is not a root of unity. So $y=0$, as desired.\ $\bullet$ [**Step 3: we prove that $z_{1,1}, z_{1,2}, z_{2,1}$ and $z_{2,2}$ belong to $Z(R)$.**]{} It follows from Proposition \[propositionu22\] that $$\begin{array}{cc} z_{1,1} = Q_{1,1}\Delta_n^{-1}+P_{1,1}(\Delta_n) &\qquad z_{1,2}=Q_{1,2}(\Delta_n)\Delta_{n-1}^{-1}+P_{1,2}(\Delta_n)\\ z_{2,1}=Q_{2,1}(\Delta_n)\Delta_{1}+P_{2,1}(\Delta_n) &z_{2,2} =P_{2,2}(\Delta_n) \end{array}$$ where $Q_{1,1} \in K$ and $Q_{{i,\alpha}},P_{{i,\alpha}} \in K[\Delta_n]$ otherwise. 
As $z_{1,1}+z_{2,2}=z_{1,2}+z_{2,1}$, we obtain $$Q_{1,1}\Delta_n^{-1}+P_{1,1}(\Delta_n) +P_{2,2}(\Delta_n) = Q_{1,2}(\Delta_n)\Delta_{n-1}^{-1}+Q_{2,1}(\Delta_n)\Delta_{1}+ P_{1,2}(\Delta_n)+P_{2,1}(\Delta_n).$$ Recalling that the monomials $\Delta_1^{i_1}\dots \Delta_n^{i_n}$, with $i_k \in \mathbb{Z}$, are linearly independent, we obtain $$Q_{1,1}=Q_{1,2}(\Delta_n)=Q_{2,1}(\Delta_n)=0,$$ so that $z_{1,1} = P_{1,1}(\Delta_n)$, $z_{1,2}= P_{1,2}(\Delta_n)$, $z_{2,1}= P_{2,1}(\Delta_n)$. Hence $z_{1,1}$, $z_{1,2}$ and $z_{2,1}$ belong to $K[\Delta_n]=Z(R)$, and we have already observed that $z_{2,2} = P_{2,2}(\Delta_n) \in K[\Delta_n]=Z(R)$.\ $\bullet$ [**Step 4: we prove that $D(Z_{{i,\alpha}})=\mathrm{ad}_x(Z_{{i,\alpha}})+z_{{i,\alpha}} Z_{{i,\alpha}}$ for all $({i,\alpha}) \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2$.**]{} If $({i,\alpha}) \neq (1,1)$, then $Z_{{i,\alpha}}=T_{{i,\alpha}}$ and so the result is obvious. Next, consider the case $({i,\alpha})=(1,1)$. Note that $Z_{1,1}=T_{1,1}+T_{1,2}T_{2,2}^{-1}T_{2,1}$. Hence, $$\begin{aligned} D(Z_{1,1})& = & D\left( T_{1,1}+T_{1,2}T_{2,2}^{-1}T_{2,1} \right) \\ & = & \mathrm{ad}_x(T_{1,1}) +z_{1,1}T_{1,1}\\ &~& + \, \mathrm{ad}_x\left( T_{1,2}T_{2,2}^{-1}T_{2,1} \right) + (z_{1,2}-z_{2,2}+z_{2,1}) T_{1,2}T_{2,2}^{-1}T_{2,1} \\ & = & \mathrm{ad}_x(Z_{1,1}) +z_{1,1}Z_{1,1} + (z_{1,2}-z_{2,2}+z_{2,1}-z_{1,1}) T_{1,2}T_{2,2}^{-1}T_{2,1} \end{aligned}$$ Now it follows from the second step that $z_{1,2}-z_{2,2}+z_{2,1}-z_{1,1}=0$. Hence, $$\begin{aligned} D(Z_{1,1})& = & \mathrm{ad}_x(Z_{1,1}) +z_{1,1} Z_{1,1},\end{aligned}$$ as desired. The next two lemmas continue the process of descending down the tower of algebras (\[tower\]). Although the proofs superficially look the same as the proof of the previous lemma, there are subtle differences in each proof; so we find it necessary to include the full proofs. \[lemmaROW2\] Let $\beta \in { [ \hspace{-0.65mm} [}2,n {] \hspace{-0.65mm} ]}$. 1. $x \in U_{(2,\beta+1)}$. (Here we use the convention $U_{(2,n+1)}:=U_{(3,1)}$.) 2. For all $\alpha < \beta$, we have $z_{1,\alpha}+z_{2,\beta}=z_{1,\beta}+z_{2,\alpha}$. 3. $z_{1,\beta} \in Z(R)$. 4. $D(Y_{{i,\alpha}}^{(2,\beta+1)}) =\mathrm{ad}_x(Y_{{i,\alpha}}^{(2,\beta+1)}) +z_{{i,\alpha}} Y_{{i,\alpha}}^{(2,\beta+1)}$ for all ${i,\alpha}\in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2$.\ (Here we use the convention $Y_{{i,\alpha}}^{(2,n+1)}:=Y_{{i,\alpha}}^{(3,1)}$.) The proof is by induction on $\beta$. The case $\beta=2$ has been dealt with in the previous lemma. Now, assume that $\beta \geq 2$ and that the lemma has been proved for $\beta$. In order to simplify the notation, set $Z_{{i,\alpha}}:=Y_{{i,\alpha}}^{(2,\beta+1)}$ for all $({i,\alpha}) \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2$. Moreover, for all $\underline{\gamma} \in \mathcal{E} := \mathbb{N}^{n} \times (\mathbb{N}^{\beta-1} \times \mathbb{Z}^{n+1-\beta}) \times (\mathbb{N} \times \mathbb{Z}^{n-1}) \times \dots \times (\mathbb{N} \times \mathbb{Z}^{n-1}) $, set $$Z^{\underline{\gamma}}:=Z_{1,1}^{\gamma_{1,1}}Z_{1,2}^{\gamma_{1,2}} \dots Z_{n,n}^{\gamma_{n,n}}.$$ $ $ We now proceed in five steps.\ $\bullet$ [**Step 1: we prove that $x \in U_{(2,\beta+1)}$.**]{} It follows from the inductive hypothesis that $x $ belongs to $U_{(2,\beta)}$. 
Using the notation of previous sections, we have: $$U_{(2,\beta)}=U_{(2,\beta+1)}\Sigma_{2,\beta}^{-1},$$ so that $x$ can be written as follows: $$x= \sum_{\underline{\gamma} \in \mathcal{E}} c_{\underline{\gamma}} Z^{\underline{\gamma}},$$ with $c_{\underline{\gamma}} \in {\mathbb C}$. Set $$x_+:= {\sum_{\substack{\underline{\gamma} \in \mathcal{E}\\ \gamma_{2,\beta} \geq 0}}} c_{\underline{\gamma}} Z^{\underline{\gamma}}, \qquad x_-= {\sum_{\substack{\underline{\gamma} \in \mathcal{E}\\ \gamma_{2,\beta} < 0}}} c_{\underline{\gamma}} Z^{\underline{\gamma}}.$$ Assume that $x_- \neq 0$. Denote by $B$ the subalgebra of $U_{(2,\beta)}$ generated by the $Z_{{i,\alpha}}$ with $({i,\alpha}) \neq (2,\beta)$ and the $Z_{{i,\alpha}}^{-1}$ with $i \geq 2$ and $\alpha \geq 2$ but $(i,\alpha)> (2,\beta)$. Then $U_{(2,\beta)}=U_{(2,\beta+1)}\Sigma_{2,\beta}^{-1}$ is a left $B$-module with basis $\{Z_{2,\beta}^{l}\}_{l \in \mathbb{Z}}$; so that there are elements $b_{l} \in B$ such that $$x_-= \sum_{l =l_0}^{-1} b_{l} Z_{2,\beta}^{l}$$ with $l_0 <0$ and $ b_{l_0}\neq 0$. (Observe that this makes sense because we have assumed that $x_- \neq 0$.) The derivation $D$ of $R$ extends to a derivation of $U_{(2,\beta+1)}$, since $U_{(2,\beta+1)}$ is obtained from $R$ by a sequence of localisations; so $D(Z_{1,\beta-1}) \in U_{(2,\beta+1)}$. This implies that $$\begin{aligned} \lefteqn{ x_-Z_{1,\beta-1}-Z_{1,\beta-1}x_-+z_{1,\beta-1}Z_{1,\beta-1} }\nonumber\\ && \qquad +(z_{1,\beta}+z_{2,\beta-1}-z_{1,\beta-1}-z_{2,\beta}) Z_{1,\beta}Z_{2,\beta}^{-1}Z_{2,\beta-1} \in U_{(2,\beta+1)}.\label{A} \end{aligned}$$ Now, $$Z_{2,\beta}^{-k}Z_{1,\beta-1}=Z_{1,\beta-1}Z_{2,\beta}^{-k} +q(q^{2k}-1) Z_{1,\beta}Z_{2,\beta-1}Z_{2,\beta}^{-k-1}$$ for each positive integer $k$. Hence, $$\begin{aligned} x_-Z_{1,\beta-1} & - & Z_{1,\beta-1}x_- +z_{1,\beta-1}Z_{1,\beta-1} + (z_{1,\beta}+z_{2,\beta-1}-z_{1,\beta-1}-z_{2,\beta}) Z_{1,\beta}Z_{2,\beta}^{-1}Z_{2,\beta-1} \\ & = & \sum_{ l =l_0}^{-1} b'_{l}Z_{2,\beta}^{l} + \sum_{ l =l_0}^{-1}q(q^{-2l}-1) b_{l}Z_{1,\beta}Z_{2,\beta-1}Z_{2,\beta}^{l-1}\\ & - & (z_{1,\beta}+z_{2,\beta-1}-z_{1,\beta-1}-z_{2,\beta})Z_{1,\beta}Z_{2,\beta}^{-1}Z_{2,\beta-1} +z_{1,\beta-1}Z_{1,\beta-1} \in U_{(2,\beta+1)}.\end{aligned}$$ It follows from the inductive hypothesis that $z_{1,\beta-1} \in R \subset U_{(2,\beta+1)}$. Thus we obtain $$\begin{aligned} \lefteqn{\sum_{ l =l_0}^{-1} b'_{l}Z_{2,\beta}^{l} + \sum_{ l =l_0}^{-1} q(q^{-2l}-1)b_{l}Z_{1,\beta}Z_{2,\beta-1}Z_{2,\beta}^{l-1} }\nonumber\\ &&\qquad - (z_{1,\beta}+z_{2,\beta-1}-z_{1,\beta-1}-z_{2,\beta}) Z_{1,\beta}Z_{2,\beta}^{-1}Z_{2,\beta-1} \in U_{(2,\beta+1)}.\label{u23step1}\end{aligned}$$ It follows from the inductive hypothesis and Proposition \[propositionu22\] (and Lemma \[lemmau23\] when $\beta =2$) that $z_{1,\beta-1}$, $z_{1,\beta} b_{n-\beta+1}$, $z_{2,\beta-1} $ and $z_{2,\beta}$ belong to $R \subset U_{(2,\beta+1)}$. On the other hand, it follows from [@c2 Proposition 5.2.1] that $b_{n-\beta+1} = Z_{1,\beta} Z_{2,\beta+1} \dots Z_{n-\beta+1,n}$. Hence, $z_{1,\beta} Z_{1,\beta}$ belongs to $U_{(2,\beta+1)}$.
Thus, $$(z_{1,\beta}+z_{2,\beta-1}-z_{1,\beta-1}-z_{2,\beta})Z_{1,\beta}Z_{2,\beta-1}\in U_{(2,\beta+1)}.$$ Multiplying (\[u23step1\]) on the right by $Z_{2,\beta}$ leads to $$\begin{aligned} \sum_{ l =l_0}^{-1} b'_{l}Z_{2,\beta}^{l+1} + \sum_{ l =l_0}^{-1}q(q^{-2l}-1) b_{l}Z_{1,\beta}Z_{2,\beta-1}Z_{2,\beta}^{l} \in U_{(2,\beta+1)}.\end{aligned}$$ In other words, $$\sum_{ l =l_0+1}^{0} b''_{l}Z_{2,\beta}^{l} +q(q^{-2l_0}-1) b_{l_0}Z_{1,\beta}Z_{2,\beta-1}Z_{2,\beta}^{l_0} \in U_{(2,\beta+1)}.$$ As $U_{(2,\beta+1)}$ is a left $B$-module with basis $\{Z_{2,\beta}^{l}\}_{l \in \mathbb{N}}$, this implies that $b_{l_0}=0$, a contradiction. Hence $x_-=0$ and $x=x_+ \in U_{(2,\beta+1)}$, as desired.\ $\bullet$ [**Step 2: we prove that $z_{1,\beta-1}+z_{2,\beta}=z_{1,\beta}+z_{2,\beta-1}$.**]{} As $x_-=0$ and $z_{1,\beta-1}Z_{1,\beta-1} \in U_{(2,\beta+1)}$ by the inductive hypothesis, we deduce from (\[A\]) that $$y:=(z_{1,\beta}+z_{2,\beta-1}-z_{1,\beta-1}-z_{2,\beta})Z_{1,\beta}Z_{2,\beta-1} \in U_{(2,\beta+1)} Z_{2,\beta}.$$ Thus, $y$ is an element of $U_{(2,\beta+1)}$ which $q$-commutes with $Z_{1,\beta-1}$ and which belongs to $U_{(2,\beta+1)} Z_{2,\beta}$. As in the proof of Lemma \[lemmau23\] (Step 2), some easy calculations show that this forces $y=0$, so that $$z_{1,\beta-1}+z_{2,\beta}=z_{1,\beta}+z_{2,\beta-1},$$ as desired.\ $\bullet$ [**Step 3: we prove that, for all $\alpha < \beta$, we have $z_{1,\alpha}+z_{2,\beta}=z_{1,\beta}+z_{2,\alpha}$.**]{} First, when $\alpha = \beta -1$, the result follows from Step 2. Next, for $\alpha < \beta -1$, it follows from the inductive hypothesis that $$z_{1,\alpha}+z_{2,\beta-1}=z_{1,\beta-1}+z_{2,\alpha}.$$ Further, it follows from Step 2 that $$z_{1,\beta-1}+z_{2,\beta}=z_{1,\beta}+z_{2,\beta-1}.$$ Combining these two equalities leads to the desired result.\ $\bullet$ [**Step 4: we prove that $z_{1,\beta}$ belongs to $Z(R)$.**]{} It follows from Proposition \[propositionu22\] that $z_{1,\beta} = Q_{1,\beta}(\Delta_n)\Delta_{n+1-\beta}^{-1}+P_{1,\beta}(\Delta_n)$, for some polynomials $Q_{1,\beta}(\Delta_n), P_{1,\beta}(\Delta_n)\in K[\Delta_n]$. Further, it follows from the inductive hypothesis and Proposition \[propositionu22\] (and Lemma \[lemmau23\] when $\beta =2$) that $z_{1,\beta-1}=P_{1,\beta-1}(\Delta_n)$, $z_{2,\beta-1}=P_{2,\beta-1}(\Delta_n) $ and $z_{2,\beta}=P_{2,\beta}(\Delta_n)$, where each $P_{{i,\alpha}} \in K[\Delta_n]$. As $z_{1,\beta-1}+z_{2,\beta}=z_{1,\beta}+z_{2,\beta-1}$, we obtain $$P_{1,\beta-1}(\Delta_n) +P_{2,\beta}(\Delta_n) = Q_{1,\beta}(\Delta_n)\Delta_{n+1-\beta}^{-1} +P_{1,\beta}(\Delta_n)+P_{2,\beta-1}(\Delta_n).$$ Recalling that the monomials $\Delta_1^{i_1}\dots \Delta_n^{i_n}$ with $i_k \in \mathbb{Z}$ are linearly independent, we get that $$Q_{1,\beta}(\Delta_n)=0;$$ so that $z_{1,\beta} = P_{1,\beta}(\Delta_n)$ belongs to $K[\Delta_n]=Z(R)$.\ $\bullet$ [**Step 5: we prove that $D(Z_{{i,\alpha}})=\mathrm{ad}_x(Z_{{i,\alpha}})+z_{{i,\alpha}} Z_{{i,\alpha}}$ for all $({i,\alpha}) \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2$.**]{} First, if $i \geq 2$ or $\alpha \geq \beta$, then $Z_{{i,\alpha}}= Y_{{i,\alpha}}^{(2,\beta)^+}=Y_{{i,\alpha}}^{(2,\beta)}$, so that the result easily follows from the inductive hypothesis. Next, assume that $i=1 $ and $\alpha < \beta$, so that $Z_{1,\alpha}= Y_{1,\alpha}^{(2,\beta+1)}=Y_{1,\alpha}^{(2,\beta)}+Z_{1,\beta}Z_{2,\beta}^{-1}Z_{2,\alpha} $. 
Hence we deduce from the inductive hypothesis that $$\begin{aligned} D(Z_{1,\alpha})& = & D \left(Y_{1,\alpha}^{(2,\beta)}+Z_{1,\beta}Z_{2,\beta}^{-1}Z_{2,\alpha} \right) \\ & = & \mathrm{ad}_x(Y_{1,\alpha}^{(2,\beta)}) +z_{1,\alpha}Y_{1,\alpha}^{(2,\beta)}\\ &&\qquad +\, \mathrm{ad}_x \left( Z_{1,\beta}Z_{2,\beta}^{-1}Z_{2,\alpha} \right) + (z_{1,\beta}-z_{2,\beta}+z_{2,\alpha}) Z_{1,\beta}Z_{2,\beta}^{-1}Z_{2,\alpha} \\ & = & \mathrm{ad}_x(Z_{1,\alpha}) +z_{1,\alpha}Z_{1,\alpha} + (z_{1,\beta}-z_{2,\beta}+z_{2,\alpha}-z_{1,\alpha}) Z_{1,\beta}Z_{2,\beta}^{-1}Z_{2,\alpha} \end{aligned}$$ Now it follows from Step 3 that $z_{1,\alpha}+z_{2,\beta}=z_{1,\beta}+z_{2,\alpha}$, that is, $z_{1,\beta}-z_{2,\beta}+z_{2,\alpha}-z_{1,\alpha}=0$. Hence, $$\begin{aligned} D(Z_{1,\alpha})& = & \mathrm{ad}_x(Z_{1,\alpha}) +z_{1,\alpha} Z_{1,\alpha},\end{aligned}$$ as desired. \[lemmafinalstep\] Let $(j,\beta) \in E$ with $(j,\beta) \geq (3,1)$. Then 1. $x \in U_{(j,\beta)}$. 2. For all $(k,\delta) < (j,\beta)$, $i< k$ and $\alpha < \delta$, we have $z_{{i,\alpha}}+z_{k,\delta}=z_{i,\delta}+z_{k,\alpha}$. 3. $D(Y_{{i,\alpha}}^{(j,\beta)})=\mathrm{ad}_x(Y_{{i,\alpha}}^{(j,\beta)})+z_{{i,\alpha}} Y_{{i,\alpha}}^{(j,\beta)}$ for all ${i,\alpha}\in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2$. We prove this result by induction on $(j,\beta)$. The case $(j,\beta) = (3,1)$ follows from Lemma \[lemmaROW2\]. Assume that the result is established for $(3,1) \leq (j,\beta) < (n,n+1)$, and let $(j,\beta)^+$ be the smallest element of $E$ greater than $(j,\beta)$. In order to simplify the notation, we set $Z_{{i,\alpha}}:=Y_{{i,\alpha}}^{(j,\beta)^+}$ for all $({i,\alpha}) \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2$. Moreover, for all $\underline{\gamma} \in \mathcal{E}:=\mathbb{N}^{(j-1)n}\times (\mathbb{N}^{\beta-1} \times \mathbb{Z}^{n+1-\beta}) \times (\mathbb{N} \times \mathbb{Z}^{n-1}) \times \dots \times (\mathbb{N} \times \mathbb{Z}^{n-1}) \subset \mathbb{Z}^{n^2}$, set $$Z^{\underline{\gamma}}:=Z_{1,1}^{\gamma_{1,1}}Z_{1,2}^{\gamma_{1,2}} \dots Z_{n,n}^{\gamma_{n,n}}.$$ We now proceed in four steps.\ $\bullet$ [**Step 1: we prove that $x \in U_{(j,\beta)^+}$.**]{} It follows from the inductive hypothesis that $x $ belongs to $U_{(j,\beta)}$. We distinguish between two cases. If $\beta=1$, then $U_{(j,\beta)^+} = U_{(j,\beta)}$; so that the induction step is obvious in this case. Now, assume that $\beta > 1$. In this case, using the notation of the previous section, $$U_{(j,\beta)}=U_{(j,\beta)^+}\Sigma_{j,\beta}^{-1},$$ so that $x$ can be written as $$x= \sum_{\underline{\gamma} \in \mathcal{E}} c_{\underline{\gamma}} Z^{\underline{\gamma}},$$ with $c_{\underline{\gamma}}\in{\mathbb C}$. Set $$x_+:= {\sum_{\substack{\underline{\gamma} \in \mathcal{E}\\ \gamma_{j,\beta} \geq 0}}} c_{\underline{\gamma}} Z^{\underline{\gamma}},\qquad x_-= {\sum_{\substack{\underline{\gamma} \in \mathcal{E}\\ \gamma_{j,\beta} < 0}}} c_{\underline{\gamma}} Z^{\underline{\gamma}}.$$ Assume that $x_- \neq 0$. Denote by $B$ the subalgebra of $U_{(j,\beta)}=U_{(j,\beta)^+}\Sigma_{j,\beta}^{-1}$ generated by the $Z_{{i,\alpha}}$ with $({i,\alpha}) \neq (j,\beta)$ and the $Z_{{i,\alpha}}^{-1}$ such that $i \geq 2$ and $\alpha \geq 2$ but $({i,\alpha}) > (j,\beta)$. Then $U_{(j,\beta)}=U_{(j,\beta)^+}\Sigma_{j,\beta}^{-1}$ is a left $B$-module with basis $\{Z_{j,\beta}^{l}\}_{l \in \mathbb{Z}}$; so that there are elements $b_{l} \in B$ such that $$x_-= \sum_{l =l_0}^{-1} b_{l}Z_{j,\beta}^{l}$$ with $l_0 <0$ and $b_{l_{0}}\neq 0$. (Observe that this makes sense because we have assumed that $x_- \neq 0$.)
The derivation $D$ of $R$ extends to a derivation of $U_{(j,\beta)^+}$, since $U_{(j,\beta)^+}$ is obtained from $R$ by a sequence of localisations; so $D(Z_{j-1,\beta-1}) \in U_{(j,\beta)^+}$. This implies that $$\begin{aligned} \lefteqn{ x_-Z_{j-1,\beta-1}-Z_{j-1,\beta-1}x_-+z_{j-1,\beta-1}Z_{j-1,\beta-1} }\nonumber\\ &&\qquad +(z_{j-1,\beta}+z_{j,\beta-1} -z_{j-1,\beta-1}-z_{j,\beta}) Z_{j-1,\beta}Z_{j,\beta}^{-1}Z_{j,\beta-1} \in U_{(j,\beta)^+}.\label{B}\end{aligned}$$ Now, $$Z_{j,\beta}^{-k}Z_{j-1,\beta-1} =Z_{j-1,\beta-1}Z_{j,\beta}^{-k} +q(q^{2k}-1) Z_{j-1,\beta}Z_{j,\beta-1}Z_{j,\beta}^{-k-1}$$ for all positive integers $k$. Hence, $$\begin{aligned} \lefteqn{ x_-Z_{j-1,\beta-1} - Z_{j-1,\beta-1}x_- +z_{j-1,\beta-1}Z_{j-1,\beta-1} }\\ &&+ (z_{j-1,\beta}+z_{j,\beta-1}-z_{j-1,\beta-1}-z_{j,\beta}) Z_{j-1,\beta}Z_{j,\beta}^{-1}Z_{j,\beta-1} \\ & = & \sum_{ l =l_0}^{-1} b'_{l}Z_{j,\beta}^{l} + \sum_{ l =l_0}^{-1} q(q^{-2l}-1)b_{l} Z_{j-1,\beta}Z_{j,\beta-1}Z_{j,\beta}^{l-1}\\ &&\quad -\, (z_{j-1,\beta}+z_{j,\beta-1}-z_{j-1,\beta-1}-z_{j,\beta}) Z_{j-1,\beta}Z_{j,\beta}^{-1}Z_{j,\beta-1} \\ && \qquad +z_{j-1,\beta-1}Z_{j-1,\beta-1} \in U_{(j,\beta)^+}.\end{aligned}$$ Now, observe that $z_{j-1,\beta-1} \in R \subset U_{(j,\beta)^+}$. Indeed, if $\beta > 2$, then it follows from Proposition \[propositionu22\] that $z_{j-1,\beta-1}$ also belongs to $R \subset U_{(j,\beta)^+}$. On the other hand, if $\beta =2$, then it follows from the inductive hypothesis that $z_{j-1,1}+z_{1,2}=z_{1,1}+z_{j-1,2}$. As each of $z_{1,1}$, $z_{1,2}$ and $z_{j-1,2}$ belongs to $R\subset U_{(j,\beta)^+}$ by Lemma \[lemmau23\] and Proposition \[propositionu22\], it follows that $z_{j-1,1} \in R \subset U_{(j,\beta)^+}$. As $z_{j-1,\beta-1} \in R \subset U_{(j,\beta)^+}$, we obtain $$\begin{aligned} \lefteqn{ \sum_{ l =l_0}^{-1} b'_{l}Z_{j,\beta}^{l} + \sum_{ l =l_0}^{-1} q(q^{-2l}-1) b_{l} Z_{j-1,\beta}Z_{j,\beta-1}Z_{j,\beta}^{l-1} }\nonumber\\ &&\qquad - (z_{j-1,\beta}+z_{j,\beta-1}-z_{j-1,\beta-1}-z_{j,\beta}) Z_{j-1,\beta}Z_{j,\beta}^{-1}Z_{j,\beta-1} \in U_{(j,\beta)^+}.\label{stepjbeta}\end{aligned}$$ It follows from Proposition \[propositionu22\] that $z_{j-1,\beta}$ and $z_{j,\beta}$ belong to $R \subset U_{(j,\beta)^+}$; so each of $z_{j-1,\beta-1}$, $z_{j-1,\beta}$ and $z_{j,\beta}$ also belongs to $U_{(j,\beta)^+}$. We now distinguish between two cases to prove that $$(z_{j-1,\beta}+z_{j,\beta-1}-z_{j-1,\beta-1}-z_{j,\beta})Z_{j-1,\beta}Z_{j,\beta-1}\in U_{(j,\beta)^+}.$$ (Note that it only remains to show that $z_{j,\beta-1}Z_{j-1,\beta}Z_{j,\beta-1}\in U_{(j,\beta)^+}$.)\ $\bullet\bullet$ First, if $\beta=2$, then it follows from Proposition \[propositionu22\] that $z_{j,\beta-1} b_{n+j-1} \in R \subset U_{(j,\beta)^+}$. On the other hand, it follows from [@c2 Proposition 5.2.1] that $b_{n+j-1} = Z_{j,1} Z_{j+1,2} \dots Z_{n,n-j+1}$. Hence $z_{j,\beta-1} Z_{j,\beta-1}$ belongs to $U_{(j,\beta)^+}$ since $Z_{j+1,2}$, ..., $Z_{n,n-j+1}$ are invertible in $U_{(j,\beta)^+}$. Thus, $$(z_{j-1,\beta}+z_{j,\beta-1}-z_{j-1,\beta-1}-z_{j,\beta})Z_{j-1,\beta}Z_{j,\beta-1}\in U_{(j,\beta)^+},$$ as claimed.\ $\bullet\bullet$ If $\beta > 2$, then $\beta-1 \geq 2$, and so it follows from Proposition \[propositionu22\] that $z_{j,\beta-1} \in R \subset U_{(j,\beta)^+}$.
Thus, $$(z_{j-1,\beta}+z_{j,\beta-1}-z_{j-1,\beta-1}-z_{j,\beta})Z_{j-1,\beta}Z_{j,\beta-1}\in U_{(j,\beta)^+},$$ as claimed.\ So, in each case, $(z_{j-1,\beta}+z_{j,\beta-1}-z_{j-1,\beta-1}-z_{j,\beta})Z_{j-1,\beta}Z_{j,\beta-1}\in U_{(j,\beta)^+}$, and thus multiplying (\[stepjbeta\]) on the right by $Z_{j,\beta}$ leads to: $$\begin{aligned} \sum_{ l =l_0}^{-1} b'_{l}Z_{j,\beta}^{l+1} + \sum_{ l =l_0}^{-1}q(q^{-2l}-1) b_{l}Z_{j-1,\beta}Z_{j,\beta-1}Z_{j,\beta}^{l} \in U_{(j,\beta)^+}.\end{aligned}$$ In other words, we have $$\sum_{ l =l_0+1}^{0} b''_{l}Z_{j,\beta}^{l} +q(q^{-2l_0}-1) b_{l_0}Z_{j-1,\beta}Z_{j,\beta-1}Z_{j,\beta}^{l_0} \in U_{(j,\beta)^+}.$$ As $U_{(j,\beta)^+}$ is a left $B$-module with basis $\{Z_{j,\beta}^{l}\}_{l \in \mathbb{N}}$, this implies that $b_{l_0}=0$, a contradiction. Hence $x_-=0$ and $x=x_+ \in U_{(j,\beta)^+}$, as desired.\ $\bullet$ [**Step 2: we prove that $z_{j-1,\beta-1}+z_{j,\beta}=z_{j-1,\beta}+z_{j,\beta-1}$.**]{} As $x_-=0$ and $z_{j-1,\beta-1}Z_{j-1,\beta-1} \in U_{(j,\beta)^+}$ by the above study, we deduce from (\[B\]) that $$y:=(z_{j-1,\beta}+z_{j,\beta-1}-z_{j-1,\beta-1}- z_{j,\beta})Z_{j-1,\beta}Z_{j,\beta-1} \in U_{(j,\beta)^+} Z_{j,\beta}.$$ Thus, $y$ is an element of $U_{(j,\beta)^+}$ which $q$-commutes with $Z_{j-1,\beta-1}$ and which belongs to $U_{(j,\beta)^+} Z_{j,\beta}$. As in the proof of Lemma \[lemmau23\] (Step 2), some easy calculations show that this forces $y=0$, so that $$z_{j-1,\beta-1}+z_{j,\beta}=z_{j-1,\beta}+z_{j,\beta-1},$$ as desired.\ $\bullet$ [**Step 3: we prove that $z_{{i,\alpha}}+z_{k,\delta}=z_{i,\delta}+z_{k,\alpha}$, for all $(k,\delta) < (j,\beta)^+$, with $i< k$ and $\alpha < \delta$.**]{} In order to do this, let $(k,\delta) < (j,\beta)^+$, with $i< k$ and $\alpha < \delta$. If $(k,\delta)<(j,\beta)$, it follows from the inductive hypothesis that $z_{{i,\alpha}}+z_{k,\delta}=z_{i,\delta}+z_{k,\alpha}$, as required. Now we assume that $(k,\delta)=(j,\beta)$. First, if $({i,\alpha})=(j-1,\beta-1)$, then we have just proved in Step 2 that $z_{{i,\alpha}}+z_{j,\beta}=z_{i,\beta}+z_{j,\alpha}$, as required. Next, assume that $i<j-1$ and $\alpha=\beta-1$. Then it follows from the inductive hypothesis that $$z_{i,\beta-1}+z_{j-1,\beta}=z_{i,\beta}+z_{j-1,\beta-1}.$$ Moreover, we have already shown that $z_{j-1,\beta}+z_{j,\beta-1}=z_{j-1,\beta-1}+z_{j,\beta}$. Hence, $$z_{i,\beta-1}+z_{j,\beta}=z_{i,\beta}+z_{j,\beta-1},$$ as required. Similar arguments show that $$z_{j-1,\alpha}+z_{j,\beta}=z_{j-1,\beta}+z_{j,\alpha},$$ for all $\alpha < \beta$. Assume now that $i < j-1$ and $\alpha < \beta-1$. It follows from the inductive hypothesis that $$z_{{i,\alpha}}+z_{j-1,\beta}=z_{i,\beta}+z_{j-1,\alpha}.$$ Moreover, we have already shown that $$z_{j-1,\alpha}+z_{j,\beta}=z_{j-1,\beta}+z_{j,\alpha}.$$ Combining these two equations leads to $$z_{{i,\alpha}}+z_{j,\beta}=z_{i,\beta}+z_{j,\alpha},$$ as desired.\ $\bullet$ [**Step 4: we prove that $D(Z_{{i,\alpha}})=\mathrm{ad}_x(Z_{{i,\alpha}})+z_{{i,\alpha}} Z_{{i,\alpha}}$ for all $({i,\alpha}) \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2$.**]{} First, if $i \geq j$ or $\alpha \geq \beta$, then $Z_{{i,\alpha}}= Y_{{i,\alpha}}^{(j,\beta)^+}=Y_{{i,\alpha}}^{(j,\beta)}$; so that the result easily follows from the inductive hypothesis. Now assume that $i < j$ and $\alpha < \beta$, so that $Z_{{i,\alpha}}= Y_{{i,\alpha}}^{(j,\beta)^+}=Y_{{i,\alpha}}^{(j,\beta)} +Z_{i,\beta}Z_{j,\beta}^{-1}Z_{j,\alpha} $. 
Hence, we deduce from the inductive hypothesis (and the previous case) that $$\begin{aligned} D(Z_{{i,\alpha}})& = & D \left(Y_{{i,\alpha}}^{(j,\beta)}+Z_{i,\beta}Z_{j,\beta}^{-1}Z_{j,\alpha} \right) \\ & = & \mathrm{ad}_x(Y_{{i,\alpha}}^{(j,\beta)}) +z_{{i,\alpha}}Y_{{i,\alpha}}^{(j,\beta)}\\ & + & \mathrm{ad}_x\left( Z_{i,\beta}Z_{j,\beta}^{-1}Z_{j,\alpha} \right) + (z_{i,\beta}-z_{j,\beta}+z_{j,\alpha}) Z_{i,\beta}Z_{j,\beta}^{-1}Z_{j,\alpha} \\ & = & \mathrm{ad}_x(Z_{{i,\alpha}}) +z_{{i,\alpha}}Z_{{i,\alpha}} + (z_{i,\beta}-z_{j,\beta}+z_{j,\alpha}-z_{{i,\alpha}}) Z_{i,\beta}Z_{j,\beta}^{-1}Z_{j,\alpha} \end{aligned}$$ Now it follows from Step 3 that $z_{i,\beta}-z_{j,\beta}+z_{j,\alpha}-z_{{i,\alpha}}=0$. Hence, $$\begin{aligned} D(Z_{{i,\alpha}})& = & \mathrm{ad}_x(Z_{{i,\alpha}}) +z_{{i,\alpha}} Z_{{i,\alpha}},\end{aligned}$$ as desired. \[corollaryzcentral\] The element $z_{{i,\alpha}}$ belongs to $Z(R)=K[\Delta_n]$ for all $({i,\alpha}) \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2$. We already know from Proposition \[propositionu22\] that $z_{{i,\alpha}} \in Z(R)$ when $i \geq 2$ and $\alpha \geq 2$. Further, it follows from Lemma \[lemmaROW2\] that $z_{{i,\alpha}} \in Z(R)$ when $i=1$. Finally, let $i \geq 2$. It follows from Lemma \[lemmafinalstep\] that $z_{i,1}=z_{1,1}+z_{i,2}-z_{1,2}$. Thus, $z_{i,1}\in Z(R)$, since the three elements on the right side of this equation belong to $Z(R)$. Any derivation $D$ of $R =\oqmn = K[Y_{i,\alpha}]$ can be written as $D = {\mathrm ad}_x + \theta$, where $x\in R$ and $\theta$ is a derivation of $R$ such that $\theta(Y_{i,\alpha}) = z_{i,\alpha}Y_{i,\alpha}$ for some $z_{i,\alpha} \in K[\Delta]$ satisfying $z_{i,\alpha} + z_{k,\delta} = z_{i,\delta} + z_{k,\alpha}$ whenever $i<k$ and $\alpha<\delta$. \[finalcorollary\] This is the case $(n,n+1)$ of Lemma \[lemmafinalstep\]. We now seek to describe the possibilities for the derivation $\theta$ occuring in the previous result. It is easy to verify that there are $2n$ derivations of $R$ given by $D_{i,*}, D_{*,\alpha}$, for $1\leq i,\alpha\leq n$, where $$D_{i,*}(Y_{j,\beta}) = \delta_{ij}Y_{i,\beta} \quad{\rm ~and~} \quad D_{*,\alpha}(Y_{j,\beta}) = \delta_{\alpha\beta}Y_{j,\alpha}.$$ In other words, $D_{i,*}$ fixes row $i$ and kills all the other rows, while $D_{*,\alpha}$ fixes column $\alpha$ and kills all other columns. We show that $\theta$ above can be described in terms of these row and column derivations. However, note that these derivations are not independent, since $\sum D_{i,*} = \sum D_{*,\alpha}$; so we begin by defining $2n-1$ derivations which span the same space, but which are independent. Set $$D_j = \left\{ \begin{array}{ll} D_{*,n+1-j} & \mbox{for~} 1\leq j\leq n-1 \\ D_{j-n+1,*} & \mbox{for~} n+1\leq j\leq 2n-1 \end{array} \right.$$ while $$D_n = D_{1,*} + D_{*,1} - \sum_{i=1}^n\, D_{i,*} \quad ( = D_{1,*} + D_{*,1} - \sum_{\alpha=1}^n\, D_{*,\alpha}).$$ It is easy to see that the $K$-span of $\{D_j \mid 1\leq j\leq 2n-1\}$ is the same as the $K$-span of $\{D_{i,*}, D_{*,\alpha}\mid 1\leq i,\alpha \leq n\}$. 
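For instance, for $n=2$ these definitions unwind as follows (a direct illustration of the formulas above, involving nothing beyond them): $$D_1=D_{*,2}\,,\qquad D_2=D_{1,*}+D_{*,1}-D_{1,*}-D_{2,*}=D_{*,1}-D_{2,*}\,,\qquad D_3=D_{2,*}\,,$$ so that on the generators $D_1(Y_{i,2})=Y_{i,2}$ and $D_1(Y_{i,1})=0$, $D_3(Y_{2,\alpha})=Y_{2,\alpha}$ and $D_3(Y_{1,\alpha})=0$, while $D_2(Y_{1,1})=Y_{1,1}$, $D_2(Y_{2,2})=-Y_{2,2}$ and $D_2(Y_{1,2})=D_2(Y_{2,1})=0$. The single relation $\sum_i D_{i,*}=\sum_\alpha D_{*,\alpha}$ is visible here as $D_{1,*}+D_{2,*}=D_{*,1}+D_{*,2}$.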
Note that:\ $\bullet$ If $j \in { [ \hspace{-0.65mm} [}1, n-1 {] \hspace{-0.65mm} ]}$, then $D_j(Y_{{i,\alpha}})=Y_{{i,\alpha}}$ if $\alpha=n+1-j$, and $D_j(Y_{{i,\alpha}})=0$ otherwise.\ $\bullet$ $D_n(Y_{1,1})=Y_{1,1}$, $D_n(Y_{{i,\alpha}})=-Y_{{i,\alpha}}$ if $i \geq 2$ and $\alpha \geq 2$, and $D_n(Y_{{i,\alpha}})=0$ otherwise.\ $\bullet$ If $j \in { [ \hspace{-0.65mm} [}n+1, 2n-1 {] \hspace{-0.65mm} ]}$, then $D_j(Y_{{i,\alpha}})=Y_{{i,\alpha}}$ if $i=j-n+1$, and $D_j(Y_{{i,\alpha}})=0$ otherwise.\ It follows from Corollary \[finalcorollary\] that any derivation $D $ of $R$ can be written as follows: $$D=\mathrm{ad}_x + z_{1,n} D_1+ \dots +z_{1,2} D_{n-1}+z_{1,1} D_n +z_{2,1} D_{n+1} \dots +\mu_{n,1} D_{2n-1},$$ with $x \in R$ and $z_{1,1}, \dots ,z_{1,n}, z_{2,1}, \dots ,z_{n,1} \in Z(R)$.\ Recall that the Hochschild cohomology group in degree 1 of $R$, denoted by $\mathrm{HH}^1(R)$, is defined by: $$\mathrm{HH}^1(R):= {{\rm Der}}(R)/ \mathrm{InnDer}(R),$$ where $ \mathrm{InnDer}(R):=\{\mathrm{ad}_x \mid x \in R\}$ is the Lie algebra of inner derivations of $R$. It is well known that $\mathrm{HH}^1(R)$ is a module over $\mathrm{HH}^0(R):=Z(R)$. The following result makes this latter structure precise. \[theoremDerMat\] 1. Every derivation $D $ of $R$ can be uniquely written as $$D=\mathrm{ad}_x + \mu_1 D_1+\dots +\mu_{2n-1} D_{2n-1},$$ with $\mathrm{ad}_x \in \mathrm{InnDer}(R)$ and $\mu_1,\dots,\mu_{2n-1} \in Z(R)=K[\Delta_n]$. 2. $\mathrm{HH}^1(R)$ is a free $Z(R)$-module of rank $2n-1$ with basis $(\overline{D_1},\dots,\overline{D_{2n-1}})$. It just remains to prove that, if $x \in R$ and $\mu_1,\dots,\mu_{2n-1} \in Z(R)$ with $\mathrm{ad}_x + \mu_1 D_1+\dots +\mu_{2n-1} D_{2n-1}=0$, then $\mu_1=\dots=\mu_{2n-1}=0$ and $\mathrm{ad}_x=0$. Set $\theta:= \mu_1 D_1+\dots +\mu_{2n-1} D_{2n-1} $, so that $\mathrm{ad}_x + \theta =0 $. The derivation $\theta$ of $R$ extends uniquely to a derivation $\tilde{\theta}$ of the quantum torus $\qtor$. Naturally, we still have $\mathrm{ad}_x +\tilde{\theta}=0$. Further, straightforward computations show that $$\tilde{\theta}(T_{{i,\alpha}})= \left\{ \begin{array}{ll} \mu_n T_{1,1} & \\ \mu_{n+1-\alpha} T_{1,\alpha} & \mbox{ if } \alpha \geq 2 \\ \mu_{n+i-1} T_{i,1} & \mbox{ if } i \geq 2 \\ (\mu_{n+1-\alpha}+ \mu_{n+i-1} - \mu_{n}) T_{i,\alpha} & \mbox{ otherwise.} \\ \end{array} \right.$$ Hence $\tilde{\theta} $ is a central derivation of $\qtor$, in the terminology of [@op]. Thus we deduce from [@op Corollary 2.3] that $\mathrm{ad}_x=0=\theta $. Evaluating $\theta$ on $Y_{1,\alpha}$ with $\alpha \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}$, and on $Y_{i,1}$ with $i \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}$ leads to $\mu_1=\dots=\mu_{2n-1}=0$, as desired. As a corollary of Theorem \[theoremDerMat\], we obtain some new information on the twisted homology of quantum matrices. We refer to [@hadkra] and references therein for definition and properties of the twisted homology. In [@hadkra], the authors have shown using results of [@vdb] that the “dimension drop” in Hochschild homology is overcome by passing to twisted Hochschild homology. More precisely, they have shown that $$\mathrm{HH}_{n^2}(\oqmn,\oqmn_{\sigma}) \simeq K[\Delta_n],$$ where $\sigma$ denotes the automorphism of $\oqmn$ defined by $$\sigma(Y_{{i,\alpha}})= q^{2(n+1-i-\alpha)}Y_{{i,\alpha}},$$ for all $({i,\alpha}) \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}$. 
In fact, it follows from Theorem \[theoremDerMat\] and [@hadkra Proposition 2.1] that not only $\mathrm{HH}_{n^2}(\oqmn,\oqmn_{\sigma})$ is nonzero, but also $\mathrm{HH}_{n^2-1}(\oqmn,\oqmn_{\sigma})$ is nonzero. More precisely, recall from [@hadkra Proposition 2.1] that $\oqmn$ has the (twisted) Poincaré duality property, so that $\mathrm{HH}_{n^2-1}(\oqmn,\oqmn_{\sigma})$ is isomorphic as a vector space to $\mathrm{HH}^{1}(\oqmn)$. Hence we deduce from Theorem \[theoremDerMat\] the following result. \[twisted\] $\mathrm{HH}_{n^2-1}(\oqmn,\oqmn_{\sigma}) \neq 0$. On Hochschild cohomology and twisted Hochschild homology of $\oqgln$ and $\oqsln$. ================================================================================== In this section, we describe the derivations of $\oqgln$ and $\oqsln$. As a consequence, we show that the Hochschild cohomology group in degree $1$ and the twisted Hochschild homology group in degree $n^2-2$ of $\oqsln$ are both finite-dimensional vector spaces of dimension $2n-2$. Derivations of $\oqgln$. ------------------------ The quantisation of the ring of regular functions on $GL_n(K)$ is denoted by $\oqgln$; recall that it is the localisation of $\oqmn$ at the powers of the central element $\Delta_n$. It is well-known that $\oqgln$ is a Noetherian domain that is endowed with a Hopf algebra structure. As $\oqgln$ is a localisation of $\oqmn$, the derivations $D_1,\dots,D_{2n-1}$ of $\oqmn$ defined in the discussion before Theorem \[theoremDerMat\] extend uniquely to derivations of $\oqgln$ that are still denoted by $D_1,\dots,D_{2n-1}$. \[theoremDerGL\] 1. Every derivation $D $ of $\oqgln$ can be uniquely written as follows: $$D=\mathrm{ad}_x + \mu_1 D_1+\dots +\mu_{2n-1} D_{2n-1},$$ with $\mathrm{ad}_x \in \mathrm{InnDer}(\oqgln)$ and $\mu_1,\dots,\mu_{2n-1} \in Z(\oqgln)=K[\Delta_n^{\pm 1}]$. 2. $\mathrm{HH}^1(\oqgln)$ is a free $Z(\oqgln)$-module of rank $2n-1$ with basis $(\overline{D_1},\dots,\overline{D_{2n-1}})$. Let $D$ be a derivation of $\oqgln$. Then there exists $k \in \mathbb{N}$ such that, for all $({i,\alpha}) \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2$, $$\Delta_n^kD(Y_{{i,\alpha}})=D(Y_{{i,\alpha}})\Delta_n^k \in \oqmn.$$ It is easy to check that $\Delta_n^k.D$ resticts to a derivation of $\oqmn$. Hence, it follows from Theorem \[theoremDerMat\] that there exist $\mu_1, \dots,\mu_{2n-1}\in K[\Delta_n]$ and $x \in \oqmn$ such that $$\Delta_n^k.D= \mathrm{ad}_x + \mu_1 D_1+\dots +\mu_{2n-1} D_{2n-1}.$$ As $\Delta_n $ is central, we obtain $$D= \mathrm{ad}_{\Delta_n^{-k}x} + \mu_1\Delta_n^{-k} D_1+\dots +\mu_{2n-1}\Delta_n^{-k} D_{2n-1},$$ as desired. It just remains to prove that, if $x \in \oqgln$ and $\mu_1,\dots,\mu_{2n-1} \in Z(\oqgln)$ with $\mathrm{ad}_x + \mu_1 D_1+\dots +\mu_{2n-1} D_{2n-1}=0$, then $\mu_1=\dots=\mu_{2n-1}=0$ and $\mathrm{ad}_x=0$. Set $D:= \mathrm{ad}_x + \mu_1 D_1+\dots +\mu_{2n-1} D_{2n-1}$. Let $k \in \mathbb{N}$ such that $x \Delta_n^k \in \oqmn$ and $\mu_i \Delta_n^k \in \oqmn$ for all $i \in { [ \hspace{-0.65mm} [}1, 2n-1 {] \hspace{-0.65mm} ]}$. Then $\Delta_n^k D$ induces a derivation of $\oqmn$ such that $0=\Delta_n^k D=\mathrm{ad}_{x\Delta_n^k} + \mu_1 \Delta_n^k D_1 + \dots + \mu_{2n-1}\Delta_n^k D_{2n-1}$. As all the $\mu_i \Delta_n^k$ belong to $K[\Delta_n]=Z(\oqmn)$, we deduce from Theorem \[theoremDerMat\] that $\Delta_n^k.\mathrm{ad}_{x}=\mathrm{ad}_{\Delta_n^kx}=0$ and $\mu_i \Delta_n^k=0$ for all $i \in { [ \hspace{-0.65mm} [}1, 2n-1 {] \hspace{-0.65mm} ]}$. 
Naturally, this forces $\mathrm{ad}_{x}=0$ and $\mu_i= 0$ for all $i \in { [ \hspace{-0.65mm} [}1, 2n-1 {] \hspace{-0.65mm} ]}$, as required. Following the same reasoning as in the discussion before Corollary \[twisted\], we obtain the following result regarding the twisted Hochschild homology of $\oqgln$. $\mathrm{HH}_{n^2-1}(\oqgln,\oqgln_{\sigma}) \neq 0$. Derivations of $\oqsln$. ------------------------ In this section, we first consider the case where $n \geq 3$. (The case $n=2$ needs a slighty different treatment for technical reasons.) The quantisation of the ring of regular functions on $SL_n(K)$ is denoted by $\oqsln$; recall that $$\oqsln:= \oqmn / {\langle {\Delta_n-1}\rangle}.$$ We set $X_{{i,\alpha}}:=Y_{{i,\alpha}}+{\langle {\Delta_n-1}\rangle}$ for all $({i,\alpha}) \in { [ \hspace{-0.65mm} [}1,n {] \hspace{-0.65mm} ]}^2$. It is well-known that $\oqsln$ is a Noetherian domain whose centre is reduced to scalars. Observe that, for all $i \in { [ \hspace{-0.65mm} [}1,n-1 {] \hspace{-0.65mm} ]}\cup { [ \hspace{-0.65mm} [}n+1 , 2n-1 {] \hspace{-0.65mm} ]}$, the derivation $D_i +\frac{1}{n-2} D_n$ of $\oqmn$ satisfies $\left(D_i +\frac{1}{n-2} D_n\right)(\Delta_n)=0$. Hence it induces a derivation of $\oqsln$ that we denote by $D'_i$. \[theoremDerSL\] 1. Every derivation $D'$ of $\oqsln$ can be uniquely written as follows: $$D'=\mathrm{ad}_y + \mu'_1 D'_1+\dots+\mu'_{n-1} D'_{n-1} +\mu'_{n+1}D'_{n+1}+\dots +\mu'_{2n-1} D'_{2n-2},$$ with $\mathrm{ad}_y \in \mathrm{InnDer}(\oqsln)$ and $\mu'_1,\dots,\mu'_{n-1}, \mu'_{n+1},\dots ,\mu'_{2n-1} \in Z(\oqsln)=K$. 2. $\mathrm{HH}^1(\oqsln)$ is a finite-dimensional vector space of dimension $2n-2$ with basis $(\overline{D'_1},\dots,\overline{D'_{n-1}}, \overline{D'_{n+1}},\dots,\overline{D'_{2n-1}})$. Let $D'$ be a derivation of $\oqsln$. Naturally, one can extend $D'$ to a derivation of $\oqsln[t^{\pm 1}]$ by setting $D'(t)=0$. Now, recall from [@ls Proposition] that there exists a unique isomorphism $\varphi : \oqsln[t^{\pm 1}] \rightarrow \oqgln$ such that $\varphi(X_{{i,\alpha}})=Y_{{i,\alpha}}$ if $i >1$, $\varphi(X_{1,\alpha})=Y_{1,\alpha}\Delta_n^{-1}$, and $\varphi(t)=\Delta_n$. As $D'$ is a derivation of $\oqsln[t^{\pm 1}]$, one can transfer it via $\varphi$ in order to obtain a derivation of $\oqgln$. More precisely, it is easy to check that $D:=\varphi \circ D' \circ \varphi^{-1}$ is a derivation of $\oqgln$ such that $D(\Delta_n)=0$. Hence, it follows from the proof of Theorem \[theoremDerGL\] that there exist $k \in \mathbb{N}$, $\mu_1,\dots,\mu_{2n-1} \in K[\Delta_n]$ and $x \in \oqmn$ such that $D=\Delta_n^{-k} \mathrm{ad}_x+\Delta_n^{-k}\mu_1 D_1+\dots+\Delta_n^{-k}\mu_{2n-1} D_{2n-1}$. Moreover, since $D(\Delta_n)=0$, we must have $\mu_1 + \dots +\mu_{n-1}+\mu_{n+1}+\dots +\mu_{2n-1}-(n-2)\mu_n=0$. Hence $D=\Delta_n^{-k} \mathrm{ad}_x+\Delta_n^{-k}\mu_1 D''_1+\dots+\Delta_n^{-k}\mu_{n-1} D''_{n-1}+\Delta_n^{-k}\mu_{n+1} D''_{n+1} +\dots +\Delta_n^{-k}\mu_{2n-1} D''_{2n-2}$, where $D''_i=D_i +\frac{1}{n-2}D_n$ for all $i \in { [ \hspace{-0.65mm} [}1,n-1 {] \hspace{-0.65mm} ]}\cup { [ \hspace{-0.65mm} [}n+1, 2n-1 {] \hspace{-0.65mm} ]}$. 
Hence, $$\begin{aligned} D(Y_{1,1})&=& \Delta_n^{-k} \mathrm{ad}_x(Y_{1,1}) +\Delta_n^{-k}\frac{1}{n-2} (\mu_1 + \dots +\mu_{n-1}+\mu_{n+1}+\dots +\mu_{2n-1}) Y_{1,1} \\D(Y_{1,\alpha}) &=& \Delta_n^{-k} \mathrm{ad}_x(Y_{1,\alpha})+\Delta_n^{-k}\mu_{n+1-\alpha} Y_{1,\alpha} \quad\mbox{~for } \alpha \geq 2 \\D(Y_{i,1}) &=& \Delta_n^{-k} \mathrm{ad}_x(Y_{i,1})+\Delta_n^{-k}\mu_{n+i-1} Y_{i,1} \quad\mbox{~for } i \geq 2 \end{aligned}$$ and $$\begin{aligned} \lefteqn{D(Y_{i,\alpha}) = \Delta_n^{-k} \mathrm{ad}_x(Y_{{i,\alpha}})}\\ && +\;\Delta_n^{-k}\left( \mu_{n+1-\alpha}+ \mu_{n+i-1} - \frac{1}{n-2} (\mu_1 + \dots +\mu_{n-1}+\mu_{n+1}+\dots +\mu_{2n-1}) \right) Y_{i,\alpha} \end{aligned}$$ when $i\geq 2$ and $\alpha \geq 2$. Set $y:=\varphi^{-1}(x)$, and write $y=\sum_{l \in \mathbb{Z}} y_l t^l$ with $y_l \in \oqsln$ equal to 0 except for a finite number of values of $l$. Also, for all $i \in { [ \hspace{-0.65mm} [}1,n-1 {] \hspace{-0.65mm} ]}\cup { [ \hspace{-0.65mm} [}n+1 , 2n-1 {] \hspace{-0.65mm} ]}$, we set $\varphi^{-1}(\mu_i)=\sum_{l \in \mathbb{Z}} \mu_{i,l} t^l$ with $\mu_{i,l} \in \oqsln$ equal to 0 except for a finite number of values of $l$. Now, $\varphi^{-1}(\mu_i)$ is central in $\oqsln[t^{\pm 1}]$, since $\mu_i$ is central in $\oqmn$; and so $\varphi^{-1}(\mu_i) \in K[t^{\pm 1}]$. Hence, for all $i,l$, $\mu_{i,l} \in K$. Then, straightforward computations show that $$D'= \mathrm{ad}_{y_k} + \mu_{1,k} D'_1+\dots+\mu_{n-1,k} D'_{n-1}+\mu_{n+1,k} D'_{n+1} +\mu_{2n-1,k} D'_{2n-2}.$$ We show this when $({i,\alpha})=(1,1)$, the other cases are proved in a similar manner. In this case, $D'(X_{1,1})=\varphi^{-1} \circ D (Y_{1,1}\Delta_n^{-1})$; that is, $$D'(X_{1,1})= \varphi^{-1} \left(\Delta_n^{-k-1} \mathrm{ad}_x(Y_{1,1}) +\Delta_n^{-k-1}\frac{1}{n-2} (\mu_1 + \dots +\mu_{n-1}+\mu_{n+1}+\dots +\mu_{2n-1}) Y_{1,1} \right)$$ $$=\sum_{l \in \mathbb{Z}} \left[ \mathrm{ad}_{y_l}(X_{1,1}) +\frac{1}{n-2} (\mu_{1,l} + \dots +\mu_{n-1,l}+\mu_{n+1,l}+\dots +\mu_{2n-1,l}) X_{1,1} \right] t^{l-k}.$$ Now, as $\oqsln[t^{\pm 1}]=\oplus_{l \in \mathbb{Z}} \oqsln t^l$ and $D'(X_{1,1}) \in \oqsln$, we deduce from the previous equality that $$\begin{aligned} D'(X_{1,1}) &=& \mathrm{ad}_{y_k}(X_{1,1}) +\frac{1}{n-2} (\mu_{1,k} + \dots +\mu_{n-1,k}+\mu_{n+1,k}+\dots +\mu_{2n-1,k}) X_{1,1}\\ &=& \mathrm{ad}_{y_k}(X_{1,1}) + \mu_{1,k} D'_1(X_{1,1})+\dots+\mu_{n-1,k} D'_{n-1}(X_{1,1})\\ && \qquad +\mu_{n+1,k} D'_{n+1}(X_{1,1})+\dots +\mu_{2n-1,k} D'_{2n-2}(X_{1,1}),\end{aligned}$$ as desired. To finish, let us mention that the decomposition of $D'$ is unique because of the uniqueness of the decomposition of $D$ in $\oqgln$. Note that the automorphism $\sigma$ of $\oqmn$ defined in the discussion before Corollary \[twisted\] induces an automorphism of $\oqsln$, still denoted by $\sigma$, since $\sigma(\Delta_n)= \Delta_n$. Following the same reasoning as in the discussion before Corollary \[twisted\], we obtain the following result regarding the twisted Hochschild homology of $\oqsln$. $\mathrm{HH}_{n^2-2}(\oqsln,\oqsln_{\sigma})$ is a finite dimensional vector space of dimension $2n-2$. When $n=2$, the derivations $D_1 - D_3$ and $D_2$ of $\oqmn$ satisfy $\left(D_1 - D_3\right)(\Delta_n)=0=D_2(\Delta_n)$. Hence, they induce two derivations of $\co_q(SL_2)$ that are denoted by $D'_1$ and $D'_2$. Then, by using arguments similar to those in the proof of Theorem \[theoremDerSL\], one can prove the following result. 1. 
Every derivation $D'$ of $\co_q(SL_2)$ can be uniquely written as follows $$D'=\mathrm{ad}_y + \mu'_1 D'_1+\mu'_{2} D'_{2},$$ with $\mathrm{ad}_y \in \mathrm{InnDer}(\co_q(SL_2))$ and $\mu'_1,\mu'_{2} \in Z(\co_q(SL_2))=K$. 2. $\mathrm{HH}^1(\co_q(SL_2))$ is a two-dimensional vector space with basis $(\overline{D'_1},\overline{D'_{2}})$. 3. $\mathrm{HH}_{2}(\co_q(SL_2),\co_q(SL_2)_{\sigma})$ is a two-dimensional vector space. Notice that Hadfield and Krähmer have computed the twisted Hochschild homology of $\co_q(SL_2)$ in [@hadKtheory]. However, there is a misprint in [@hadKtheory Theorem 1.1] in the dimension of $\mathrm{HH}_{2}(\co_q(SL_2),\, _{\sigma^{-1}}\co_q(SL_2)) \simeq \mathrm{HH}_{2}(\co_q(SL_2),\co_q(SL_2)_{\sigma})$, as the authors have confirmed. [MMMM]{} J Alev and M Chamarie, [*Dérivations et automorphismes de quelques algèbres quantiques*]{}, Comm Algebra 20 (6) (1992), 1787-1802 K A Brown and J J Zhang, [*Dualising complexes and twisted Hochschild (co)homology for noetherian Hopf algebras*]{}, posted at math.RA/0603732 G Cauchon, [*Effacement des dérivations et spectres premiers des algèbres quantiques*]{}, J Algebra 260 (2003), 476-518. G Cauchon, [*Spectre premier de $\oq(M_n(k))$ image canonique et séparation normale*]{}, J Algebra 260 (2003), 519–569 P Feng and B Tsygan, [*Hochschild and cyclic homology of quantum groups*]{}, Comm Math Phys 140 (1991), no 3, 481-521 T Hadfield and U Krähmer, [*Twisted Homology of Quantum $SL(2)$*]{}, K-Theory 34, (2005), 327-360. T Hadfield and U Krähmer, [*On the Hochschild homology of quantum $SL(N)$*]{}, posted at math.QA/0509254, C R Math Acad Sci Paris, Ser I 343 (2006), 9-13 A Klimyk and K Schmüdgen, [*Quantum groups and their representations*]{}, Texts and Monographs in Physics, Springer-Verlag, Berlin, 1997 J Kustermans, G J Murphy and L Tuset, [*Differential calculi over quantum groups and twisted cyclic cocycles*]{}, J Geom Phys 44 (2003), no 4, 570-594 S Launois and T H Lenagan, [*Primitive ideals and automorphisms of quantum matrices*]{}, posted at math.RA/0511409, to appear in Algebras and Representation Theory T Levasseur and J T Stafford, [*The quantum coordinate ring of the special linear group*]{}, J Pure Appl Algebra 86 (1993), no 2, 181-186 J M Osborn and D S Passman, [*Derivations of skew polynomial rings*]{}, J Algebra 176 (1995), 417-448 B Parshall and J Wang, [*Quantum linear groups*]{}, Mem Amer Math Soc 89 (1991), no. 439 M Van den Bergh, [*A relation between Hochschild homology and cohomology for Gorenstein rings*]{}, Proc Amer Math Soc 126 (1998), no. 5, 1345-1348; [*Erratum*]{}, Proc Amer Math Soc 130 (2000), no. 9, 2809-2810 S Launois:\ School of Mathematics, University of Edinburgh,\ James Clerk Maxwell Building, King’s Buildings, Mayfield Road,\ Edinburgh EH9 3JZ, Scotland\ E-mail : stephane.launois@ed.ac.uk\ T H Lenagan:\ School of Mathematics, University of Edinburgh,\ James Clerk Maxwell Building, King’s Buildings, Mayfield Road,\ Edinburgh EH9 3JZ, Scotland\ E-mail: tom@maths.ed.ac.uk [^1]: This research was supported by a Marie Curie Intra-European Fellowship within the $6^{\mbox{th}}$ European Community Framework Programme and by Leverhulme Research Interchange Grant F/00158/X
--- author: - 'Pietro Galli,' - Kevin Goldstein - and Jan Perz bibliography: - 'Anharmonic.bib' title: On anharmonic stabilisation equations for black holes --- Introduction ============ Among the efforts to systematise the construction of non-supersymmetric black hole solutions in four-dimensional $N=2$ supergravity one can discern two intersecting lines of research: on the one hand the generalisation [@Galli:2009bj; @Galli:2010mg] of Denef’s formalism [@Denef:2000nb], applicable to stationary extremal black holes, and the H-FGK approach [@Mohaupt:2011aa; @Meessen:2011aa] for static extremal and non-extremal solutions on the other. In distinct ways each arrives at a set of relationships, which we shall call stabilisation equations, between duality-covariant combinations of physical degrees of freedom and ansätze for spatial functions $H^M\!({\mathbf{x}})$. These relationships remain unchanged for various types of black holes, which means that all black hole solutions (supersymmetric, extremal, non-extremal) in a given model take the same form in terms of the functions $H$ and only the functions themselves vary. For supersymmetric extremal solutions, functions $H$ are known to be harmonic, with poles corresponding to physical magnetic and electric charges carried by the black hole [@Sabra:1997kq; @Behrndt:1997ny]. In the context of the H-FGK formalism a harmonic ansatz has been used also for non-supersymmetric, static, spherically symmetric extremal black holes, whereas for their non-extremal counterparts a hyperbolic (exponential) ansatz has been employed [@Mohaupt:2010fk; @Galli:2011fq; @Meessen:2012su; @Bueno:2012jc; @GOPS:2012]. In this short note we examine the exhaustiveness of these ansätze in the static, spherically symmetric case, i.e. with the metric of the form $$\label{eq:StaticMetric} {\mathrm{d}}s^2 = -{\mathrm{e}}^{2U(\tau)}{\mathrm{d}}t^2 + {\mathrm{e}}^{-2U(\tau)}\left(\frac{r_0^4}{\sinh^4(r_0\tau)}{\mathrm{d}}\tau^2 + \frac{r_0^2}{\sinh^2(r_0\tau)}({\mathrm{d}}\theta^2 + \sin^2\!\theta\,{\mathrm{d}}\phi^2)\right),$$ providing in the process some portions of a dictionary between the generalised formalism of Denef and the H-FGK formulae. Non-superysmmetric extremal black holes ======================================= In [@Galli:2010mg], to which we refer the reader for the description of the general setup and whose numerical conventions we follow (occasionally adopting some of the notation from the H-FGK literature), the generating single-center underrotating solution [@Bena:2009ev] for the metric warp factor $U({\mathbf{x}})$ and the complex scalars $z^a({\mathbf{x}})$ from $n_\mathrm{v}$ vector multiplets in models with cubic prepotentials has been recast in the form of stabilisation equations $$2\operatorname{Im}\left({\mathrm{e}}^{-U-{\mathrm{i}}\alpha}\,\Omega^M\!(z,\bar{z})\right) = H^M\!({\mathbf{x}}) {{\,}},$$ where $\Omega(z,\bar{z})$ is the covariantly holomorphic symplectic section (period vector) of special geometry, $\alpha$ is a phase and the single superscript $M$ is understood to run over $2(n_\mathrm{v}+1)$ components, otherwise indexed with subscripts and superscripts ${}^{0,a}$ and $_{0,a}$. 
$H$ was written in [@Galli:2010mg] as a sum of harmonic functions and a ratio of harmonic functions (note the minus sign in the zeroth magnetic component): $$\label{eq:HwithJ} (H^M) = \left(h^0-p^0\tau,0;0,h_a+q_a\tau\right) + \left(0,0;\frac{b+J\tau^2\cos\theta}{h^0-p^0\tau},0\right),$$ where $\tau$ is a radial coordinate and the anharmonic part persists also in the absence of rotation ($J = 0$), when the solution reduces to that of [@Cardoso:2007ky; @Gimon:2007mh]. Although the quotient form of $H$ was later confirmed by [@Bossard:2012xs], one could nonetheless wonder whether the anharmonic part is necessary (as opposed to being an artifact of the specific rewriting with the particular coefficients used) and whether the solutions that seem to require it do not carry NUT charge (which would render them only locally asymptotically flat). To answer these questions we solve the spherically symmetric, static case of the $t^3$ model[^1] for the charge configuration $(Q^M) = (0,p^1;q_0,0)$, dual to that in eq.  for $n_\mathrm{v} = 1$. It is easiest to start with the equation (2.27) of [@Meessen:2011aa],[^2] which corresponds to the equation of motion for the warp factor: $$\label{eq:EOMforU} \frac{1}{2}\frac{\partial\log{\mathrm{e}}^{-2U}}{\partial H^M}(\ddot H^M - r_0^2 H^M) + \left(\frac{\dot H^M H_M}{2{\mathrm{e}}^{-2U}}\right)^2 = 0 {{\,}},$$ where the dot denotes differentiation with respect to $\tau$, the index $M$ has been lowered with the symplectic form $\left(\begin{smallmatrix}0 & -1\\1 & 0\end{smallmatrix}\right)$ and where $${\mathrm{e}}^{-2U} = \sqrt{-\tfrac{10}{3}(H^1)^3 H_0 - (H^0 H_0)^2 - 2 H^0 H^1 H_0 H_1 + \tfrac13(H^1 H_1)^2 + \tfrac{8}{45} H^0 (H_1)^3}{{\,}}.$$ As remarked in [@Meessen:2011aa], when $r_0 = 0$ (extremal black holes) and upon assuming, $$\label{eq:HdotH} \dot H^M H_M = 0{{\,}},$$ (\[eq:EOMforU\]) reduces to $$\label{eq:simpleEOM} \frac{\partial\log{\mathrm{e}}^{-2U}}{\partial H^M}\ddot H^M = 0{{\,}},$$ which can be solved by harmonic functions $\ddot H^M = 0$. The harmonic function solution sets each term in (\[eq:simpleEOM\]) to zero individually. One may however relax the assumption (\[eq:HdotH\]), setting $H_1 = 0$, taking only the two functions corresponding to non-vanishing charges to be harmonic with arbitrary coefficients ($H^1 = A^1 + B^1\tau$, $H_0 = A_0 + B_0\tau$) and leaving $H^0$ unspecified. Eq.  then becomes $$(A_0 + B_0\tau)^2 H^0\ddot H^0 - \frac12\left(B_0 H^0 - (A_0 + B_0\tau)\dot H^0\right)^2=0{{\,}},$$ a model-dependent differential equation for $H^0(\tau)$, whose solution reads $$\label{eq:H0ext} H^0 = \pm\left(c_1\sqrt{A_0 + B_0\tau} + \frac{c_2}{\sqrt{A_0 + B_0\tau}}\right)^2{{\,}},$$ with constants of integration $c_1, c_2$. The remaining equations of motion fix the coefficients as either $$c_1 = 0{{\,}}, {{\qquad}}B_0 = -q_0{{\,}}, {{\qquad}}B^1 = p^1{{\,}},$$ in exact analogy with eq. , or $$c_1 = 0{{\,}}, {{\qquad}}c_2 = 0{{\,}}, {{\qquad}}B_0 = 0{{\,}}, {{\qquad}}B^1 = 0{{\,}},$$ which leads to a (doubly extremal) solution with constant scalars. The other parameters and the overall sign in are determined by the asymptotic boundary conditions. 
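As a cross-check, not part of the derivation, one can verify symbolically that the ansatz (\[eq:H0ext\]) indeed solves the differential equation above; the overall sign drops out because the equation is homogeneous of degree two in $H^0$. The short SymPy sketch below does this for the upper sign, with variable names simply mirroring the constants of the text.

```python
# Sketch: symbolic check that H^0 of eq. (eq:H0ext) solves
#   (A_0+B_0 tau)^2 H^0 d^2H^0/dtau^2 - (1/2)(B_0 H^0 - (A_0+B_0 tau) dH^0/dtau)^2 = 0.
# Consistency check only; names follow the text.
import sympy as sp

tau, A0, B0, c1, c2 = sp.symbols('tau A_0 B_0 c_1 c_2', positive=True)

f = A0 + B0*tau                             # the harmonic function H_0
H0 = (c1*sp.sqrt(f) + c2/sp.sqrt(f))**2     # ansatz, upper sign

lhs = f**2*H0*sp.diff(H0, tau, 2) - sp.Rational(1, 2)*(B0*H0 - f*sp.diff(H0, tau))**2

print(sp.simplify(lhs))                     # prints 0: the ansatz solves the equation
```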
In particular, for the non-constant solution (we suppress the superscript $1$ on the single scalar $z = \Omega^1/\Omega^0$): $$\operatorname{sgn}(H^0) = -\operatorname{sgn}(\operatorname{Re}z_\infty){{\,}}, {{\qquad}}c_2^2 = \left\lvert\frac{\operatorname{Re}z_\infty}{\operatorname{Im}z_\infty}\right\rvert.$$ Non-extremal black holes ======================== For $r_0 \neq 0$ and with the additional assumption $\dot H^M H_M = 0$, eq.  reduces to $$\frac{\partial\log{\mathrm{e}}^{-2U}}{\partial H^M}(\ddot H^M - r_0^2 H^M) = 0{{\,}},$$ which can be solved by hyperbolic functions $\ddot H^M = r_0^2 H^M$. Searching for a more general solution we take, similarly to the extremal case above, $H_1 = 0$, $H^1 = A^1\cosh(r_0\tau) + \frac{B^1}{r_0}\sinh(r_0\tau)$ and $H_0 = A_0\cosh(r_0\tau) + \frac{B_0}{r_0}\sinh(r_0\tau)$. $H^0$ is then determined from $$\begin{split} &\left(A_0\cosh(r_0\tau) + \tfrac{B_0}{r_0}\sinh(r_0\tau)\right)^2 H^0 (\ddot H^0 - r_0^2 H^0)\\ &-\frac12\left[\Bigl(r_0 A_0\sinh(r_0\tau) + B_0\cosh(r_0\tau)\Bigr) H^0 - \left(A_0\cosh(r_0\tau) + \tfrac{B_0}{r_0}\sinh(r_0\tau)\right)\dot H^0\right]^2=0\raisetag{10ex} \end{split}$$ and turns out to be $$H^0 = \pm\frac{\left(c_1\cosh(r_0\tau) + \frac{c_2}{r_0}\sinh(r_0\tau)\right)^2}{A_0\cosh(r_0\tau)+\frac{B_0}{r_0}\sinh(r_0\tau)}{{\,}}.$$ Numerical tests indicate that the analytical solution for the coefficients $$\begin{gathered} B_0 = c_2 A_0{{\,}}, {{\qquad}}B^1 = c_2 A^1{{\,}},\\ \begin{split} & c_2 = \pm c_1\left( 75 (A^1)^4 (A_0)^2 (p^1)^2 - 45 c_1^4 A^1 A_0 (p^1)^2 + 45 c_1^4 (A^1)^2 p^1 q_0 + 25 (A^1)^6 (q_0)^2 + 9 c_1^8 r_0^2\right.\\ &\left.{} + 60 (A^1)^3 A_0 c_1^4 r_0^2 + 100 (A^1)^6 (A_0)^2 r_0^2\right)^\frac{1}{2}\Big/\left(4 c_1^4+10 A_1^3 A_0\right)\raisetag{4ex} \end{split}\end{gathered}$$ is the only admissible solution. Such coefficients lead to a constant scalar, which must take the extremal attractor value. It follows that $c_1=0$, so ultimately $H^0 = 0$, the solution is given purely in terms of hyperbolic functions (and compatible with the condition $\dot H^M H_M = 0$). Discussion and conclusions ========================== In spite of their different origins, the non-supersymmetric extension of Denef’s approach and the H-FGK formalism both match the scalar degrees of freedom with the vector part of the action in the same way, one that respects duality covariance. The corresponding non-differential stabilisation equations have consequently (up to the differences in conventions) identical form. The fact that the $H$-functions differ stems from the specific additional assumptions made in the H-FGK literature, namely that $\dot H^M H_M = 0$ and that the rest of eq.  vanishes term by term. The condition $\dot H^M H_M = 0$ in the BPS context is synonymous with the absence of NUT charge [@Bellorin:2006xr]. For the non-supersymmetric extremal solution discussed here this cannot be the case, since all the equations of motion are satisfied with the static metric , whose NUT charge is $0$. Indeed, ref. [@Galli:2010mg], eq. (3.28) showed that the spatial Hodge dual of the spatial exterior derivative of the one-form $\omega$ encoding the relevant part of the metric depends on two terms, $$\boldsymbol{\operatorname{\star}_0}{\boldsymbol{{\mathrm{d}}}}\omega = {\langle {{\boldsymbol{{\mathrm{d}}}}H}, {H} \rangle} - 2{\mathrm{e}}^{-2U}\boldsymbol{\eta}{{\,}},$$ the first of which directly generalises $\dot H^M H_M$. 
(The second term measures the non-closure of the fake electromagnetic field strength two-form introduced therein.) We see that for the left-hand side to be zero it suffices that rather than each part vanishes, as happens for BPS solutions, the two terms only cancel each other, as in the extremal example discussed above.[^3] It is worth pointing out that the inverse harmonic part of the functions $H$ is essential for the non-trivial behaviour of the real parts of $z^a$, usually referred to as axions. We have checked that the constant $c_2$ (or $c$ in [@Cardoso:2007ky], $B$ in [@Gimon:2007mh] and $b$ in eq. ), originating here from the product $H^0 H_0$, cannot be consistently extracted from the other constants when $H^M$ are purely harmonic (the system equations that one would write does not admit any solution), even if none of them were a priori vanishing. The non-extremal case remains less lucid. The existence of non-hyperbolic solutions has been postulated in [@Mohaupt:2012tu], but the non-hyperbolic part of the natural generalization of the extremal anharmonic solution in our example turned out to be zero. Arguably however, by setting some of the $H^M$ to be harmonic or hyperbolic functions we might not yet have searched for the most general extremal or non-extremal solution. We greatly benefitted from discussions with Prof. P. Meessen and Prof. T. Ortín. PG wishes to thank the University of the Witwatersrand for hospitality. The work of PG has been supported in part by grants FIS2008-06078-C03-02 and FIS2011-29813-C02-02 of Ministerio de Ciencia e Innovación (Spain) and ACOMP/2010/213 from Generalitat Valenciana. The work of KG is supported in part by the National Research Foundation. Any opinion, findings and conclusions or recommendations expressed in this material are those of the authors and therefore the NRF do not accept any liability with regard thereto. JP’s work has been supported by the ERC Advanced Grant no. 226455 (SUPERFIELDS). Comparison of conventions ========================= Some of the original symbols have been replaced with those used here to make the meaning of the expressions clearer. Comparison with the respective papers provides a dictionary. $\hat\Omega = {\mathrm{e}}^{-U-{\mathrm{i}}\alpha}\Omega(z,\bar{z})$. ref. [@Galli:2010mg] ref. [@Meessen:2011aa] (H-FGK) here ----------------------------- ------------------------------------------------------------------- ------------------------------------------------------------------------------ ------------------------------------------------------------------- metric signature $(-,+,+,+)$ $(+,-,-,-)$ $(-,+,+,+)$ $\tau\in$ $(0,\infty)$ $(0,-\infty)$ $(0,\infty)$ physical scalars $z^a$ $Z^i$ $z^a$ vector super- and subscript $I = 0,a$ $\Sigma = 0,i$ not used single index not used $M = {}^\Sigma,{}_\Sigma$ $M$ $H$-functions $2\operatorname{Im}\hat\Omega = \mathcal{J}$ $\operatorname{Im}\hat\Omega^M = H^M$ $2\operatorname{Im}\hat\Omega^M = H^M$ symplectic form $\left(\begin{smallmatrix}0 & -1\\1 & 0\end{smallmatrix}\right)$ $\left(\begin{smallmatrix}0 & 1\\-1 & 0\end{smallmatrix}\right)$ $\left(\begin{smallmatrix}0 & -1\\1 & 0\end{smallmatrix}\right)$ warp factor ${\mathrm{e}}^{-2U} = {\mathrm{i}}\hat\Omega^M\bar{\hat\Omega}_M$ ${\mathrm{e}}^{-2U} = -\frac{{\mathrm{i}}}{2}\hat\Omega^M\bar{\hat\Omega}_M$ ${\mathrm{e}}^{-2U} = {\mathrm{i}}\hat\Omega^M\bar{\hat\Omega}_M$ poles of BPS $H$ $\Gamma\tau$ $-\frac{Q^M}{\sqrt{2}}\tau$ $Q^M\tau$ Note the symplectic form hidden in the expression for the warp factor. 
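A quick consistency check of the table (the particular identification used here is ours and is of course not the only possible one): relating the two conventions by $\tau_{\text{H-FGK}}=-\tau$ and $\hat\Omega_{\text{H-FGK}}=\sqrt{2}\,\hat\Omega$, the H-function, pole and warp-factor rows become mutually compatible, $$H^M_{\text{H-FGK}}=\operatorname{Im}\hat\Omega^M_{\text{H-FGK}}=\tfrac{1}{\sqrt{2}}\,H^M\sim\tfrac{Q^M}{\sqrt{2}}\,\tau=-\tfrac{Q^M}{\sqrt{2}}\,\tau_{\text{H-FGK}}\,,\qquad -\tfrac{{\mathrm{i}}}{2}\,\hat\Omega^M_{\text{H-FGK}}\bar{\hat\Omega}^{\text{H-FGK}}_M={\mathrm{i}}\,\hat\Omega^M\bar{\hat\Omega}_M={\mathrm{e}}^{-2U}\,,$$ where the last equality uses the opposite signs of the symplectic forms employed to lower the index.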
In this paper by “stabilisation equations” we mean $\operatorname{Im}\hat\Omega \propto H$, whereas the H-FGK papers use that term for the relations between the real and imaginary parts of $\hat\Omega$: $\operatorname{Re}\hat\Omega = \operatorname{Re}\hat\Omega(\operatorname{Im}\hat\Omega)$. [^1]: Normalization: $\Omega_0 = \frac{5}{6}(\Omega^1)^3/(\Omega^0)^2$. (Here, unlike in [@Galli:2010mg], $\Omega_0$ stands for one of the components of $\Omega$.) [^2]: This equation can also be derived from a further generalisation of Denef’s formalism to non-extremal solutions. Although this derivation does not appear in the literature, we do not include the rather technical details here since they are not directly relevant to our discussion. [^3]: Cf. also [@GMO:2012] for the discussion of gauge dependence of the condition $\dot H^M H_M = 0$.
--- abstract: 'Quantum noise in a model of singly resonant frequency doubling including phase mismatch and driving in the harmonic mode is analyzed. The general formulae about the fixed points and their stability as well as the squeezing spectra calculated linearizing around such points are given. The use of a nonlinear normalization allows to disentangle in the spectra the dynamic response of the system from the contributions of the various noisy inputs. A general “reference” model for one-mode systems is developed in which the dynamic aspects of the problem are not contaminated by static contributions from the noisy inputs. The physical insight gained permits the elaboration of general criteria to optimize the noise suppression performance. With respect to the squeezing in the fundamental mode the optimum working point is located near the first turning point of the dispersive bistability induced by cascading of the second order nonlinear response. The nonlinearities induced by conventional crystals appear enough to reach it being the squeezing ultimately limited by the escape efficiency of the cavity. In the case of the harmonic mode both, finite phase mismatch and/or harmonic mode driving allow for an optimum dynamic response of the system something not possible in the standard phase matched Second Harmonic Generation. The squeezing is then limited by the losses in the harmonic mode, allowing for very high degrees of squeezing because of the non-resonant nature of the mode. This opens the possibility of very high performances using artificial materials with resonantly enhanced nonlinearities. It is also shown how it is possible to substantially increase the noise reduction and at the same time to more than double the output power for parameters corresponding to reported experiments.' address: 'Instituto de Estructura de la Materia, CSIC, Serrano 123, 28006 Madrid, Spain.' author: - 'C. Cabrillo[@email], J. L. Rold[á]{}n, P. Garc[í]{}a-Fern[á]{}ndez' title: Quantum noise reduction in singly resonant optical devices --- Introduction ============ [\[I\]]{} Second Harmonic Generation (SHG) has nowadays quite a long tradition as a mean of squeezed light generation [@Per88; @Siz90; @Kur93; @Pas94; @Ral95; @Tsu95; @You96]. The preferred experimental setup has been the doubly resonant configuration as, at least in principle, permits arbitrarily large squeezing. However, such scheme has been hampered by the technical difficulties arising from keeping the resonance in both modes simultaneously. Thus, in spite of the development of very ingenious stabilizing procedures [@Kur93], for the moment it has been only possible to maintain the double resonance for a few seconds. Certainly, this kind of experimental delicacy can hardly surprise when dealing with the generation of non-classical states of light. In view of such difficulties, some experimental efforts have been recently redirected to singly resonant configurations [@Pas94; @Ral95; @Tsu95]. Although, the maximum noise suppression is then limited to a 90% [@Pas94], the efforts resulted in very stable intense squeezed light sources with degrees of squeezing even surpassing those reported in the doubly resonant counterparts [@Tsu95]. This evolution highlights the importance of reducing to a minimum the technical demanding of new proposals in a so experimentally challenging field. 
At the same time, singly resonant Optical Parametric Oscillation (OPO), the most successful method to squeeze the vacuum [@Pol92; @Bre95], has been generalized to singly resonant Optical Parametric Amplification (OPA), i.e. a laser driving in the harmonic mode has been added, again showing an extraordinary stability at quite high noise suppression values in the fundamental mode [@Sch96a]. Although the squeezed beams are in this case much less intense than the in SHG counterpart, this setup permits a control of the phase of the squeezed quadrature, something which allowed a spectacular demonstration using quantum tomography, of the different kinds of squeezed states [@Bre97]. In view of this experimental success it seems timely to extend the quantum mechanical model beyond the pure phase matched cases. More specifically, we address here quantum noise reduction in an extension of the conventional singly resonant SHG to include also a coherent input in the harmonic mode as well as phase mismatch between the interacting waves. On the other hand, an increasingly number of papers is being devoted to study quantum noise in systems combining different kinds of nonlinearities (see, for instance, [@Sun95; @Mar95a; @Mar95b; @Cab97; @Khe98] for some recent contributions). In particular, the combination of $\chi^{(2)}$ with Kerr-like $\chi^{(3)}$ nonlinearities in c.w. cavity systems has been quite extensively studied [@Cab97; @Tom84; @Tom86; @Gar89; @Cab92a; @Cab92b; @Cab93; @Kry96; @Khe97]. Even exact full quantum results have been obtained showing, for instance, the emergence of tristability not present in the classical counterpart [@Khe97]. With respect to the squeezing performance the results appear as very promising at least in degenerate doubly resonant configurations [@Cab97; @Cab93]. The simplest system from the implementation point of view, combining this two kinds of nonlinearities that the authors can think of is precisely a singly resonant second order nonlinear system with phase mismatched interacting waves as then, by virtue of the cascading effect, an effective Kerr-like third order nonlinearity appears. In order to make the search of strong noise reduction through the parameter space affordable we are bounded to the standard linearization procedures, the only capable of yielding analytical results. Inside the linear approximation perfect squeezing is possible at dynamic instabilities. We make use of this fact to find optimum working points showing up maximum squeezing. They are, however, an artifact of the method as the linear approximation breaks down at the instabilities. What matters for the practical implementation of new squeezed light sources (our ultimate goal) are the optimum paths through the parameter space approaching such points. Let us explain a little more what we mean. Fixed a particular parameter there will be a set of values of the remaining parameters (including the frequency as such) tuned up to yield the maximum noise reduction. When the chosen parameter is varied an optimum path is defined by the set of parameter values maximizing the noise reduction at any stage of the variation. The real optimum working point will be somewhere along these paths before reaching an instability. Thus, these paths would guide the experimentalist towards the optimum working point in the real experimental setup. 
The essence of our approach to finding such paths will be to isolate the dynamic aspect of the squeezing behavior from the static contributions to the noise coming from the different inputs. A simple, adequate normalization will disentangle the two aspects of the quantum noise behavior. This simplifies the analysis sufficiently to allow a characterization of the optimum paths. Another crucial issue when dealing with the squeezing performance of a system is precisely the choice of the most relevant parameter for comparing the different configurations with each other or with the reported related experiments. There is no universal criterion to determine the squeezing efficiency of a given device. An efficient setup regarding power consumption, i.e., when compared at fixed input power, could well be deceptive when compared at the same output power, and perhaps inadequate for some spectroscopic applications. However, within the state of the art of the present squeezed light generators, the main concern is to improve the squeezing figures themselves, other considerations such as the power consumption being of relative importance. From this perspective, probably the parameter of utmost importance as far as c.w. resonant systems are concerned is the energy load inside the cavity. Indeed, the usual causes of squeezing degradation, such as blue-light-induced red absorption, come from an excessive mean photon number inside the cavity capable of significantly degrading the material optical response at the relevant frequencies. These considerations will lead us to define another normalization, this time useful for the evaluation of the squeezing efficiency with respect to the intra-cavity photon number. The outline of the article is as follows. In section \[QMM\] the quantum mechanical model is presented. In section \[LEE\] the evolution equations are linearized, the fixed points of the system obtained and their stability studied. Section \[SS\] gives all the formulae regarding quantum noise spectra in the system. In section \[SP\] a general approach to one-mode systems is developed which allows the definition of general criteria to characterize the optimum paths; it is then applied to the specific case addressed here. Finally, the limits of the model and possible implementations are thoroughly discussed in section \[DC\], concluding the article with a summary of the most relevant results obtained. Quantum mechanical model ======================== [\[QMM\]]{} The system we want to address consists of a second order nonlinear medium coupling two modes of frequency $\omega$ (fundamental) and $2 \omega$ (harmonic) respectively and placed inside a ring cavity resonant only with the fundamental mode. We will also assume just one input-output mirror of finite reflectivity. The effect of phase mismatch when only the fundamental mode is driven has been experimentally studied in [@Whi96a], where bistability induced by cascading was demonstrated.
The classical evolution equation of the fundamental mode, $\alpha$, as given in [@Whi96a], reads $$\label{eq:claeqa} \frac{d\alpha}{dt} = -\left [ \gamma + i\delta + \nu K(\Delta k) |\alpha|^{2} \right ]\alpha + \sqrt{2 \gamma_{c}}\, \alpha_{in} \,.$$ The nonlinear coupling depends on the wave vector mismatch $\Delta k = k(2\omega)-k(\omega)$ as $K(\Delta k) = 2 \int_{0}^{L_{m}} \int_{0}^{z} u^{*}(\Delta k,z) u(\Delta k, z^{\prime}) d z^{\prime} d z / L_{m}^{2}$, where $L_{m}$ is the length of the nonlinear medium, $u(k,z)$ is the spatial dependence of the resonator mode and $\nu$ is proportional to the second order nonlinear susceptibility (see below). Splitting $K$ into its real and imaginary parts, Eq. (\[eq:claeqa\]) can be recast as $$\label{eq:claeqa2} \frac{d\alpha}{dt} = -\left [ \gamma + \mu |\alpha|^{2} + i(\delta+\Gamma |\alpha|^{2}) \right ]\alpha + \sqrt{2 \gamma_{c}} \alpha_{in} \,.$$ For a plane wave geometry \[eq:mu&Ga\] $$\begin{aligned} \label{eq:mu} \mu\;\equiv\;\nu K_{r} & = &\nu \left ( {\rm sinc}\frac{\Delta k L_{m}} {2} \right )^{2} \\ \label{eq:Ga} \Gamma\;\equiv\;\nu K_{i} & = &\frac{2\nu} {\Delta k L_{m}} \left[{\rm sinc} \frac{\Delta k L_{m}} {2} \cos \frac{\Delta k L_{m}} {2} -1 \right]\,,\end{aligned}$$ where $K_{r}$ and $K_{i}$ denote the real and imaginary part of $K(\Delta k)$. In this way the nonlinear dynamics is divided into a nonlinear absorption (the up-conversion of photons) and a nonlinear dispersion (the cascading effect). The behavior of both parameters with $\Delta k$ is plotted in Fig. \[fg:fg\]. Notice that for any finite $\Delta k$ the nonlinear dispersion is also finite, periodically completely dominating (null frequency doubling). ![The dependence of $K_{i}$ and $K_{r}$ with respect to the phase mismatch.[]{data-label="fg:fg"}](fkgk.eps) Quantization of Eq. (\[eq:claeqa2\]) is then accomplished independently for each effect. Regarding the nonlinear absorption we use the two-photon model proposed in [@Col91], while nonlinear dispersion is accounted for by a fourth order Hamiltonian, $H = (\hbar \Gamma/2)\, a^{\dagger \,2} a^{2}$, as in the standard theory of the optical Kerr effect. It represents a Hamiltonian modification of the two-photon absorption model so that the quantum mechanical equation reads $$\label{eq:quant} \frac{d a}{dt} = -\left [ \gamma + i \delta + (\mu + i \Gamma)a^{\dagger}a \right ]a + 2\sqrt{\mu}\,a^{\dagger}b_{in}+\sqrt{2 \gamma_{c}}\, a_{in} + \sqrt{2\gamma_{s}}\,w_{in}\,,$$ where Latin characters denote the annihilation operators for the corresponding classical (Greek characters) modes. Two extra terms not present in the classical analog appear, namely, a white noise input, $w_{in}$, accounting for the fluctuations induced by the scattering and the absorption in the crystal ($\gamma_{s} = \gamma-\gamma_{c}$), and a parametric “gain” term coming from the classically empty incoming harmonic mode, $b_{in}$. Eq. (\[eq:quant\]) is complemented with the boundary conditions [@Pas94] \[eq:boundary\] $$\begin{aligned} \label{eq:boundarya} a_{out} & = & \sqrt{2\gamma_c}\, a - a_{in} \,,\\ \label{eq:boundaryb} b_{out} & = & \sqrt{\mu}\,a^{2} - b_{in} \,,\end{aligned}$$ from which the output spectra can be computed. Input fields are assumed to be in coherent states. In particular, allowing a coherent state different from the vacuum for the incoming harmonic mode we generalize the system to the case of driving both modes.
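For readers who want to reproduce Fig. \[fg:fg\], the plane-wave coefficients (\[eq:mu\])–(\[eq:Ga\]) are easy to evaluate numerically. The following Python sketch is an illustration added here (it is not part of the original analysis) and assumes $\nu = 1$, so that the plotted quantities coincide with $K_{r}$ and $K_{i}$; note that `numpy.sinc` uses the normalized convention $\sin(\pi x)/(\pi x)$, hence the rescaling.

```python
import numpy as np
import matplotlib.pyplot as plt

def sinc(x):
    # sin(x)/x with the x = 0 limit handled; np.sinc is sin(pi x)/(pi x).
    return np.sinc(x / np.pi)

def K_r(dkL):
    # Eq. (eq:mu) with nu = 1: nonlinear absorption (up-conversion).
    return sinc(dkL / 2.0) ** 2

def K_i(dkL):
    # Eq. (eq:Ga) with nu = 1: nonlinear dispersion (cascading).
    return (2.0 / dkL) * (sinc(dkL / 2.0) * np.cos(dkL / 2.0) - 1.0)

dkL = np.linspace(0.05, 6.0 * np.pi, 2000)   # avoid the dkL = 0 division
plt.plot(dkL / np.pi, K_r(dkL), label="K_r")
plt.plot(dkL / np.pi, K_i(dkL), label="K_i")
plt.xlabel(r"$\Delta k L_m / \pi$")
plt.legend()
plt.show()
```

At $\Delta k L_{m} = 2\pi$ the routine returns $K_{r}=0$ and $K_{i}=-1/\pi$, the values used later for the cascaded Kerr-like limit.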
In the case $\Gamma = \delta = 0$, the squeezing properties as well as the applicability to quantum nondemolition measurements of this system have been studied in detail in [@Sch95]. The definitions used for the creation operators give the following relations with the usual experimental parameters (see appendix in [@Sch95]): the input and output powers are $P_{\omega,in/out} = \hbar \omega \langle a^{\dagger}_{in/out} a_{in/out} \rangle$ and $P_{2\omega,in/out}= \hbar 2 \omega \langle b^{\dagger}_{in/out} b_{in/out}\rangle$; the circulating power is $\hbar \omega \langle a^{\dagger} a \rangle / \tau$, where $\tau$ is the round-trip time, and the single-pass power-conversion efficiency (in W$^{-1}$) is $2 \tau^{2} \nu/ \hbar \omega$. Linearized evolution equations and linear stability analysis ============================================================ [\[LEE\]]{} Defining fluctuation operators as \[eq:fluctope\] $$\begin{aligned} a & = &\alpha +\delta a \,, \label{eq:fluctopea}\\ a_{in,out} & = & \alpha_{in,out} +\delta a_{in,out} \,, \label{eq:fluctopeain}\\ b_{in,out} & = & \beta_{in,out}+\delta b_{in,out} \,, \label{eq:fluctopebin}\end{aligned}$$ a linearization of Eqs. (\[eq:quant\]) and (\[eq:boundary\]) yields $$\begin{aligned} \label{eq:linquant} \frac{d\,\delta a}{dt} & = & -\left [ \gamma + i \delta + 2(\mu + i \Gamma)|\alpha|^{2} \right]\delta a + \left [ 2\sqrt{\mu}\beta_{in} - (\mu+i \Gamma)\alpha^{2}\right ] \delta a^{\dagger} \nonumber \\ & + & 2\sqrt{\mu}\,\alpha^{*}\delta b_{in}+ \sqrt{2 \gamma_{c}}\,\delta a_{in} + \sqrt{2\gamma_{s}}\,w_{in}\,,\end{aligned}$$ and \[eq:linboundary\] $$\begin{aligned} \delta a_{out} & = & \sqrt{2 \gamma_{c}}\,\delta a - \delta a_{in}\,, \label{eq:linboundarya} \\ \delta b_{out} & = & 2 \alpha \sqrt{\mu}\, \delta a - \delta b_{in}\,, \label{eq:linboundaryb}\end{aligned}$$ where $\alpha_{in,out},\beta_{in,out}$ are the mean values of the corresponding input and output modes and $\alpha$ is a stable fixed point of the classical counterpart of Eq. (\[eq:quant\]), i.e. $$\label{eq:class} \frac{d \alpha}{dt} = -\left [ \gamma + i \delta + (\mu + i \Gamma) |\alpha|^{2} \right ]\alpha + 2\sqrt{\mu}\,\alpha^{*}\beta_{in}+\sqrt{2 \gamma_{c}}\,\alpha_{in} \,.$$ Equating to zero the l.h.s. of Eq. (\[eq:class\]), a “state equation” for the fixed points is obtained, namely, $$\label{eq:state} \alpha = \frac{\sqrt{2 \gamma_{c}}\left \{ \left[ \gamma+\mu n -i (\delta + \Gamma n) \right ] \alpha_{in}+ 2 \sqrt{\mu} \,\beta_{in}\, \alpha_{in}^{*} \right \}} {(\gamma + \mu n)^{2}+(\delta+\Gamma n)^{2}-4 \mu |\beta_{in}|^{2}} \,,$$ with $n = |\alpha|^{2}$. Let $\theta,\, \phi$ and $\varphi$ be the phases of $\alpha,\, \alpha_{in}$ and $\beta_{in}$ respectively. Then, dividing both sides of Eq. (\[eq:state\]) by $e^{i\varphi/2}$ $$\begin{aligned} \label{eq:statea} |\alpha|e^{i(\theta-\varphi/2)} \left [ (\gamma + \mu n)^{2} + (\delta+\Gamma n)^{2} - 4 \mu |\beta_{in}|^{2} \right ] \: = \: & & \nonumber \\ |\alpha_{in}| \sqrt{ 2 \gamma_{c}} \left [ \left ( \gamma + \mu n- i (\delta+\Gamma n) \right ) e^{i(\phi-\varphi/2)}+ 2 \sqrt{\mu}\,|\beta_{in}|e^{-i(\phi-\varphi/2)} \right ] \,. & &\end{aligned}$$ Taking the squared modulus on both sides, a quintic equation for $n$ is obtained $$\begin{aligned} \label{eq:neqn} 0 & = & n \left [ (\gamma + \mu n)^{2} + (\delta+\Gamma n)^{2} - 4 \mu |\beta_{in}|^{2} \right ]^{2} - 2 \gamma_{c}|\alpha_{in}|^{2} \left\{ (\gamma + \mu n)^{2} + (\delta+\Gamma n)^{2} + 4 \mu |\beta_{in}|^{2} + \right . \nonumber \\ & & \left .
4 \sqrt{\mu}\,|\beta_{in}| \left[ (\gamma+\mu n) \cos(2 \phi-\varphi)+ (\delta+\Gamma n) \sin(2 \phi -\varphi) \right] \right\} \,.\end{aligned}$$ The real and imaginary parts of Eq. (\[eq:statea\]) determine $\sin(\theta-\varphi/2)$ and $\cos(\theta-\varphi/2)$ as functions of the solutions of Eq. (\[eq:neqn\]) \[eq:phases\] $$\begin{aligned} \cos(\theta-\varphi/2) & = & \frac{|\alpha_{in}|}{|\alpha|} \sqrt{2\gamma_{c}}\, \frac{(\gamma + \mu n+2\sqrt{\mu}\,|\beta_{in}|)\cos (\phi-\varphi/2)+(\delta+\Gamma n) \sin (\phi-\varphi/2)} {(\gamma+\mu n)^{2}+(\delta+\Gamma n)^{2}-4 \mu |\beta_{in}|^{2}} \label{eq:cos} \\ \sin(\theta-\varphi/2) & = & \frac{|\alpha_{in}|}{|\alpha|} \sqrt{2 \gamma_{c}}\, \frac{(\gamma + \mu n-2\sqrt{\mu}\,|\beta_{in}|)\sin (\phi-\varphi/2)-(\delta+\Gamma n) \cos (\phi-\varphi/2)} {(\gamma+\mu n)^{2}+(\delta+\Gamma n)^{2}-4 \mu |\beta_{in}|^{2}} \,.\end{aligned}$$ \[eq.sin\] Eq. (\[eq:neqn\]) allows for numerical calculation of the fixed points given the input fields. It can also be interpreted as a linear equation for $|\alpha_{in}|^{2}$, i.e. $$\label{eq:ain} 2 \gamma_{c}|\alpha_{in}|^{2} = \frac{n \left [ (\gamma + \mu n)^{2} + (\delta+\Gamma n)^{2} - 4 \mu |\beta_{in}|^{2} \right ]^{2}} {|\gamma + \mu n + 2 \sqrt{\mu} |\beta_{in}| e^{i (2 \phi-\varphi)}|^{2} + 4 \sqrt{\mu}\,|\beta_{in}| (\delta+\Gamma n) \sin(2 \phi -\varphi)} \,.$$ The positivity of the r.h.s. is not always guaranteed and therefore a real positive $n$ is not possible for every value of the parameters. Notice, however, that in the cases in which this happens a simultaneous change of the sign of $\delta$ and $\Gamma$ yields a consistent set of parameter values. As we shall see, this fact will have useful consequences regarding the analysis of the quantum noise behavior in the system. The stability of the fixed points is governed by the real parts of the eigenvalues of the drift matrix associated with the linearized evolution equation (\[eq:linquant\]). Very simple algebra yields $$\label{eq:eigen} \lambda_{\pm}=-(\gamma+2 \mu \,n) \pm \sqrt{|(\mu+ i \Gamma)\alpha^{2}-2 \sqrt{\mu}\,\beta_{in}|^{2}- (\delta+2\Gamma\,n)^{2}}\,.$$ Provided that the real parts of both eigenvalues are negative, the fixed point is stable. With respect to the phase-matched SHG case ($\Gamma =0$ and $\beta_{in}=0$, always stable), although $\Gamma$ and $\delta$ each tend to stabilize the dynamics on their own, in combination they are able to destabilize the system. A finite $\beta_{in}$, on the other hand, can promote instability depending on its relative phase with respect to $\alpha$, the case $\theta - \varphi/2 =\pm \pi/2$ maximizing the effect. All of these new instabilities, however, correspond to zero eigenvalues without a finite imaginary part. In other words, contrary to the doubly resonant SHG there is no Hopf bifurcation and, consequently, no self-pulsing solution. Squeezing spectra ================= [\[SS\]]{} For a given quadrature of the electric field, $X_{\theta_{s}}^{out}(t) \equiv a_{out}(t) e ^{-i \theta_{s}}+ a_{out}^{\dagger}(t) e^{i\theta_{s}}$, the squeezing spectrum is simply the noise spectrum of such a quantity, i.e.
$$\begin{aligned} \label{eq:Sw} S(\omega) & = & C \int_{-\infty}^{\infty} \langle \delta X_{\theta_{s}}^{out}(t) \delta X_{\theta_{s}}^{out}(t+\tau) \rangle \, e^{-i\omega \tau} d \tau \nonumber \\ & = & C \int_{-\infty}^{\infty} \langle \delta X_{\theta_{s}}^{out}(\omega) \delta X_{\theta_{s}}^{out}(-\omega^{\prime}) \rangle d \omega^{\prime} \,,\end{aligned}$$ where $C$ is some normalization constant and the averages are assumed stationary. As a function of the annihilation and creation operators Eq. (\[eq:Sw\]) is rewritten as $$\label{eq:Sw2} S(\omega) = C \left [ \langle \delta a^{\dagger}_{out}(\omega) \delta a_{out}(-\omega)\rangle + {\rm Re}\{\exp(-i2\theta_{s}) \langle \delta a_{out}(\omega) \delta a_{out}(-\omega) \rangle \} \right ]\,,$$ where use has been made of the stationarity of the average and Re denotes the real part. From this expression it is evident that the noise is minimized, and therefore the squeezing effect maximized, for a quadrature phase such that $$\label{eq:Sw3} S(\omega) = C \left [ \langle \delta a^{\dagger}_{out}(\omega) \delta a_{out}(-\omega)\rangle - \left | \langle \delta a_{out}(\omega) \delta a_{out}(-\omega) \rangle \right | \right ]\,,$$ corresponding to a phase $$\label{eq:sqphase} \theta_{m} = \frac{\nu(\omega) - \pi}{2} \,,$$ where $\nu(\omega)$ is the phase of $\langle \delta a_{out}(\omega) \delta a_{out}(-\omega) \rangle$. The spectrum of the conjugate quadrature (i.e. with a phase $\nu(\omega)/2)$ corresponds to a plus sign in Eq. (\[eq:Sw3\]) and by virtue of the Heisenberg principle shows an excess noise above the vacuum. Taking $C=1$ (corresponding to vacuum noise units) and splitting Eq. (\[eq:Sw3\]) into a vacuum noise component plus a normally ordered part we finally arrive at $$\label{eq:Swf} S_{-,+}(\omega) = 1 + \langle : \delta a^{\dagger}_{out}(\omega) \delta a_{out}(-\omega) : \rangle \mp \left | \langle : \delta a_{out}(\omega) \delta a_{out}(-\omega) : \rangle \right | \,,$$ for both the squeezing and the “stretching” spectra. After tedious but simple algebra, the spectra of the fundamental and second harmonic modes can be written as \[eq:spectrum\] $$\begin{aligned} S^{a}_{-,+}(\omega) & = & 1 + 4 \gamma_{c} |B| \frac{N_{-,+}}{D}\,, \label{eq:spectruma} \\ S^{b}_{-,+}(\omega) & = & 1 + 8 \mu n |B| \frac{N_{-,+}}{D}\,, \label{eq:spectrumb} \end{aligned}$$ where $B =2\sqrt{\mu} \,\beta_{in} -(\mu +i \Gamma) \alpha^{2}$ and \[eq:Sc\] $$\begin{aligned} N_{-,+} & = & 2 |B| (\gamma + 2 \mu n) \mp \sqrt{\left [ (\gamma + 2 \mu n)^{2} - (\delta + 2 \Gamma n)^{2} + |B|^{2} +\omega^{2} \right ]^{2} + 4 (\gamma + 2 \mu n)^{2} (\delta + 2 \Gamma n)^{2}}\,, \label{eq:ScN}\\ D & = & \left[(\gamma+2\mu n)^{2} + (\delta + 2 \Gamma n)^{2} - |B|^{2} - \omega^{2} \right ]^{2} + 4 (\gamma+2\mu n)^{2} \omega^{2}\,. \label{eq:ScD}\end{aligned}$$ The correlations defining the squeezing phase $\nu(\omega)$ are given by \[eq:corr\] $$\begin{aligned} \label{eq:corra} \langle \delta a_{out}(\omega)\delta a_{out}(-\omega) \rangle & = & 4\gamma_{c} B \left [\omega^{2} + |B|^{2} + (\gamma + 2 \mu n)^{2} - (\delta + 2 \Gamma n)^{2} + i \, 2 (\gamma + 2 \mu n) (\delta + 2 \Gamma n)\right ]/D \,,\\ \label{eq:corrb} \langle \delta b_{out}(\omega)\delta b_{out}(-\omega) \rangle & = & 8\mu\,\alpha^{2} B \left [\omega^{2} + |B|^{2} + (\gamma + 2 \mu n)^{2} - (\delta + 2 \Gamma n)^{2} + i\, 2 (\gamma + 2 \mu n) (\delta + 2 \Gamma n)\right ]/ D \,.\end{aligned}$$ The trigonometric equations for the corresponding phases are quite complicated and rather useless.
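Numerically, though, fixed points, stability and spectra are all straightforward to evaluate. The following Python sketch is an illustration we add here (not part of the original paper); it restricts itself to the SHG-like case $\beta_{in}=0$, in which the quintic (\[eq:neqn\]) collapses to a cubic in $n$, and uses arbitrary parameter values chosen only for demonstration. It finds the intracavity photon number for a given drive, checks the stability of each fixed point through Eq. (\[eq:eigen\]) and evaluates the optimally phased spectrum $S^{a}_{-}(0)$ of Eq. (\[eq:spectruma\]).

```python
import numpy as np

# Illustrative parameters (arbitrary units); beta_in = 0 throughout.
gamma, gamma_c = 1.0, 0.9             # total and output-coupling decay rates
mu, Gamma, delta = 0.02, -0.05, 0.1   # nonlinear absorption/dispersion, detuning
alpha_in2 = 30.0                      # |alpha_in|^2 of the coherent drive

# Fixed points: with beta_in = 0, Eq. (eq:neqn) reduces to
#   n [ (gamma + mu n)^2 + (delta + Gamma n)^2 ] = 2 gamma_c |alpha_in|^2,
# a cubic in n that we solve with numpy.
coeffs = [mu**2 + Gamma**2,
          2.0 * (gamma * mu + delta * Gamma),
          gamma**2 + delta**2,
          -2.0 * gamma_c * alpha_in2]
n_roots = [r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-9 and r.real > 0]

def eigenvalues(n):
    # Eq. (eq:eigen) with |B| = sqrt(mu^2 + Gamma^2) n  (beta_in = 0).
    B = np.sqrt(mu**2 + Gamma**2) * n
    root = np.sqrt(complex(B**2 - (delta + 2.0 * Gamma * n) ** 2))
    return -(gamma + 2.0 * mu * n) + root, -(gamma + 2.0 * mu * n) - root

def S_a_minus(n, omega):
    # Eq. (eq:spectruma) with N_-, D from Eqs. (eq:ScN)-(eq:ScD).
    B = np.sqrt(mu**2 + Gamma**2) * n
    g, d = gamma + 2.0 * mu * n, delta + 2.0 * Gamma * n
    N = 2.0 * B * g - np.sqrt((g**2 - d**2 + B**2 + omega**2) ** 2
                              + 4.0 * g**2 * d**2)
    D = (g**2 + d**2 - B**2 - omega**2) ** 2 + 4.0 * g**2 * omega**2
    return 1.0 + 4.0 * gamma_c * B * N / D

for n in n_roots:
    lp, lm = eigenvalues(n)
    stable = max(lp.real, lm.real) < 0
    print(f"n = {n:.3f}, stable = {stable}, S_-^a(0) = {S_a_minus(n, 0.0):.3f}")
```

With these particular values a single stable fixed point is found, showing a moderate amount of squeezing at zero frequency.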
However, an interesting consequence can directly be drawn from Eqs. (\[eq:corr\]), namely, for detunings such that $\delta+2 \Gamma n =0$ the phases are independent of $\omega$, equaling those of $B$ and $\alpha^{2} B$ respectively. Squeezing performance ===================== [\[SP\]]{} In the previous sections we have developed the general raw formulae regarding quantum noise in the system. This section is devoted to the analysis of the quantum noise behavior implied by them and in particular to proceeding with our program of finding optimum quantum noise reduction. However, before any specific assessment of the quantum noise performance we will elaborate a little more on the formulae (mainly by adequate normalizations) in order to gain physical insight and ease our task. As a by-product we shall obtain general results going well beyond the specifics of the system addressed here. General results concerning one-mode systems ------------------------------------------- Let us begin by defining a nonlinear and a total decay rate as $\gamma_{nl} \equiv 2 \mu n$ and $\gamma_{t} \equiv \gamma +\gamma_{nl}$ respectively. We shall scale the evolution with this total decay rate, defining a dimensionless time $\tau \equiv \gamma_{t}\,t$. In the spectra (\[eq:spectrum\]) the only dependence on $\theta$ is through $B$, disappearing for $\beta_{in}=0$. It is also possible to restrict this dependence to such a term directly in Eq. (\[eq:linquant\]) and Eq. (\[eq:linboundary\]) by means of appropriate phase shifts of the modes. Altogether this accounts for $$\begin{aligned} \label{eq:normeq} \frac{d\,\delta c}{d \tau} & = & -\left [1 + i \Delta \right]\delta c + \tilde{B}\,\delta c^{\dagger} \nonumber \\ & + & \sqrt{2 \tilde{\gamma}_{nl}}\;\delta d_{in} + \sqrt{2 \tilde{\gamma}_{c}}\;\delta c_{in} + \sqrt{2\tilde{\gamma}_{s}}\,s_{in}\,,\end{aligned}$$ where the tilde denotes division by $\gamma_{t}$, $\Delta = \tilde{\delta} + 2 \tilde{\Gamma} n $, $\tilde{B} = 2\sqrt{\tilde{\mu}}\,\delta_{in} - (\tilde{\mu}+i \tilde{\Gamma})n$ and the modes are redefined as \[eq:nfields\] $$\begin{aligned} c & \equiv & a \, e ^{-i \theta}\,, \label{eq:c}\\ c_{in,out} & \equiv & \frac{a_{in,out}}{\sqrt{\gamma_{t}}} \, e ^{-i \theta}\,, \label{eq:cinout}\\ d_{in,out} & \equiv & \frac{b_{in,out}}{\sqrt{\gamma_{t}}} \, e ^{-i 2 \theta}\,, \label{eq:dinout}\\ s_{in} & \equiv & \frac{w_{in}}{\sqrt{\gamma_{t}}} \, e ^{-i \theta}\,. \label{eq:sin} \end{aligned}$$ In agreement with the previous notation $\delta_{in}$ denotes the mean value of $d_{in}$. The boundary conditions of the new modes are \[eq:nlinboundary\] $$\begin{aligned} \delta c_{out} & = & \sqrt{2 \tilde{\gamma}_{c}}\,\delta c - \delta c_{in}\, \label{eq:nlinboundaryc} \\ \delta d_{out} & = & \sqrt{2 \tilde{\gamma}_{nl}}\, \delta c - \delta d_{in}\,. \label{eq:nlinboundaryd}\end{aligned}$$ For coherent states, the correlations of the new input modes remain white noise, but in the scaled time $\tau$. We shall refer to the previous formulae as the tilde normalization. The evolution equation (\[eq:normeq\]) encodes the dynamic response of the intracavity system to a series of noisy input channels ($\delta d_{in}$, $\delta c_{in}$ and $s_{in}$). Quantum mechanical consistency, i.e., conservation of equal-time commutators, imposes a fluctuation-dissipation relation which under this normalization reads $$\label{eq:add1} \tilde{\gamma}_{nl}+\tilde{\gamma}_{c}+\tilde{\gamma}_{s}=1\,.$$ The evolution equation (\[eq:normeq\]) along with Eqs.
(\[eq:nlinboundary\]) are now written in such a way that the input-output couplings are real-valued as in the standard input-output formalism [@Coll84; @Gar91]. This is a completely general result. Provided the linearized theory is well defined in the sense of preserving equal-time commutators, we only need an adequate set of phase shifts of the input channels (a trivial unitary transformation preserving commutators) making the couplings real-valued in order to obtain a theory formally equal to the standard input-output formalism, simply because this is the theory that preserves the equal-time commutators when the couplings are real-valued. Thus, for any system with only one effective mode there is a formulation in which the intracavity field follows $$\label{eq:geneqnorm} \frac{d\,\delta c}{d \tau} = -\left [1 + i \Delta \right]\delta c + \tilde{B}\,\delta c^{\dagger} + \sum_{n=1}^{N} \sqrt{2 \tilde{\gamma}_{n}} \,\delta c_{in}^{n}\,,$$ with $$\label{eq:genadd1} \sum_{n=1}^{N} \tilde{\gamma}_{n} = 1 \,.$$ The frequency scale $\gamma_{t}$ defining the dimensionless time $\tau$ is just the real part of the factor multiplying $\delta c$ after the phase shifts. $N-1$ of the input channels will have a time-reversed counterpart corresponding to the outgoing channels fulfilling $$\label{eq:gennlinboundary} \delta c_{out}^{n} = \sqrt{2 \tilde{\gamma}_{n}}\,\delta c - \delta c_{in}^{n}\,.$$ The remaining input channel will account for the irreversible losses. The corresponding spectra are related to the intracavity spectra by $$\label{eq:genspectra} S_{-,+}^{n}(\tilde{\omega}) = 1 + :S_{-,+}^{n}(\tilde{\omega}): \; = \; 1 + 2 \tilde{\gamma}_{n} :S_{-,+}(\tilde{\omega}): \,,$$ where $:S_{-,+}(\tilde{\omega}):$ denotes the intracavity spectra and we have made use of the proportionality of normally ordered intracavity and outgoing correlations [@Gar91]. The spectra $S_{-,+}^{n}(\tilde{\omega})$ coincide with the spectra of the original formulation as the new outgoing modes are just a phase shift of the originals. Let us now define a sort of “reference” system with only one time-reversible input channel, i.e., $$\label{eq:refeq} \frac{d\,\delta c}{d \tau} = -\left [1 + i \Delta \right]\delta c + \tilde{B}\,\delta c^{\dagger} + \sqrt{2}\, \delta c_{in}^{ref}\,,$$ and $$\label{eq:refbc} \delta c_{out}^{ref} = \sqrt{2}\, \delta c - \delta c_{in}^{ref} \,.$$ Obviously $:S_{-,+}^{ref}(\tilde{\omega}): \; = 2 :S_{-,+}(\tilde{\omega}):$ so that we finally get $$\label{eq:Scentral} S_{-,+}^{n}(\tilde{\omega}) = 1 + \tilde{\gamma}_{n} :S_{-,+}^{ref}(\tilde{\omega}): \,.$$ This is the central result of this section. Let us elaborate a little on its interpretation. Squeezing in a given output channel means that for a certain range of phase shifts the corresponding quadratures show an intensity of their fluctuations below that of the associated incoming channel (assumed in a coherent state). In view of Eqs. (\[eq:gennlinboundary\]), the amplitude of the outgoing fluctuations is a coherent superposition of the intracavity and the incoming fluctuations. Squeezing is possible if an adequate correlation between $\delta c$ and the relevant input channel is established. But the intracavity field is nothing other than the dynamic response of the intracavity system to the incoming channels. The input channels are uncorrelated and so are the dynamic responses of the intracavity system to them. A given input channel can consequently correlate only with the dynamic response to itself.
The presence of any other input channel can only degrade the effect. The great advantage of the tilde normalization is that it makes this fact explicit. Indeed, Eq. (\[eq:Scentral\]) expresses the output spectra as the dynamic response of the system to an isolated input channel, i.e., $:S_{-,+}^{ref}(\tilde{\omega}):$, scaled down by the “static” contribution to the noise owing to the presence of extra input channels. The scale factor $\tilde{\gamma}_{n}$ is just the ratio between the coupling constant of the chosen output channel and the sum of all of them. Eq. (\[eq:Scentral\]) greatly simplifies our task of finding the optimum path to maximum noise reduction as we can center our efforts on the simple reference system described by Eqs. (\[eq:refeq\]) and (\[eq:refbc\]). Even more interestingly, the results concerning the reference system will be of general applicability to any one-mode system, including as such any multiply resonant system under adiabatic elimination of all the modes but one. The normally ordered spectra of the reference system are easily calculated as $$\label{eq:SN} :S^{ref}_{-,+}(\tilde{\omega}):\; = 4 |\tilde{B}| \frac{ 2|\tilde{B}| \mp \sqrt{(1+\tilde{\omega}^{2}+|\tilde{B}|^{2}-\Delta^{2})^{2} +4 \Delta^{2}}}{(1-\tilde{\omega}^{2}-|\tilde{B}|^{2}+\Delta^{2})^{2} +4 \tilde{\omega}^{2}} \,.$$ Our first step is to determine if the dynamic response is capable of a total noise suppression. Perfect squeezing can only occur at a dynamic instability. Equating to zero the l.h.s. of Eq. (\[eq:eigen\]) (the only possibly unstable eigenvalue) and after proper normalization, an equation determining the instability can be written as $$\label{eq:refins} 1 + \Delta^{2} = |\tilde{B}|^{2} \,.$$ Written in this way an interesting parallelism with the standard OPO below threshold shows up, i.e. an instability appears when the modulus of the “losses” coefficient equals that of the “parametric” coefficient, a sort of natural extension of the condition for the instability in the conventional OPO, for which the coefficients are real. Inserting the instability condition (\[eq:refins\]) in $:S^{ref}_{-}(\tilde{\omega}):$ results in $$\label{eq:Si} :S^{ref}_{I}(\tilde{\omega}):\; = 4 |\tilde{B}| \frac{ 2|\tilde{B}|-\sqrt{4 |\tilde{B}|^{2} + \tilde{\omega}^{2}(\tilde{\omega}^{2}+4)}}{ \tilde{\omega}^{2}(\tilde{\omega}^{2}+4)} \,.$$ Applying L’Hôpital’s rule with respect to $\tilde{\omega}^{2}$, $:S^{ref}_{I}(\tilde{\omega}):$ equals -1 at $\tilde{\omega}=0$, that is, perfect squeezing is obtained at the instability, again in parallel with the OPO. In other words, the dynamic response of the system, assuming that the condition of Eq. (\[eq:refins\]) is reachable, is capable of a complete suppression of quantum noise. Spectrum (\[eq:SN\]) is simple enough to permit analytical optimization. Taking the partial derivative of $:S^{ref}_{-}(\tilde{\omega}):$ with respect to $\tilde{\omega}$ and equating it to zero, $\tilde{\omega}=0$ appears as the optimum point whatever the values of $\Delta$ and $|\tilde{B}|$. The same applies to $\Delta = 0$ when taking the partial derivative with respect to $\Delta$. Notice that this last condition also implies a squeezing phase independent of the frequency. The optimized noise obtained by imposing these two conditions simplifies to $$\label{eq:Sopt} :S_{opt}:\; = -\frac{4 |\tilde{B}|}{(1+|\tilde{B}|)^{2}}\,,$$ with a minimum at the instability $|\tilde{B}| = 1$ approached monotonically.
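Because the reference spectrum (\[eq:SN\]) is so compact, the statements above can be checked in a few lines. The sketch below is an added illustration (not from the original paper): it evaluates $:S^{ref}_{-}:$ at $\tilde{\omega}=\Delta=0$ for increasing $|\tilde{B}|$, confirming that it reproduces Eq. (\[eq:Sopt\]) and tends to $-1$, i.e. perfect squeezing within the linearized theory, as the instability $|\tilde{B}|=1$ is approached.

```python
import numpy as np

def S_ref_minus(w, Delta, B):
    # Normally ordered reference spectrum, Eq. (eq:SN), lower sign.
    num = 2.0 * B - np.sqrt((1.0 + w**2 + B**2 - Delta**2) ** 2 + 4.0 * Delta**2)
    den = (1.0 - w**2 - B**2 + Delta**2) ** 2 + 4.0 * w**2
    return 4.0 * B * num / den

def S_opt(B):
    # Eq. (eq:Sopt): optimized noise at w = Delta = 0.
    return -4.0 * B / (1.0 + B) ** 2

for B in [0.2, 0.5, 0.9, 0.99, 0.999]:
    print(B, S_ref_minus(0.0, 0.0, B), S_opt(B))
```

The two columns coincide for every $|\tilde{B}|$ and approach $-1$ monotonically, as stated in the text.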
These conditions ($\tilde{\omega}=0$, $\Delta=0$ and $|\tilde{B}| = 1$) will help us in finding optimum paths. In particular, moving $|\tilde{B}|$ from zero to one while maintaining $\Delta=\tilde{\omega}=0$ defines an optimum path reaching the instability for the “reference” model. An optimum path is defined solely by the squeezing spectrum, leaving aside the “stretching” one. It is important to study also the accompanying excess noise on the conjugate quadrature, for it could invalidate in practice the optimum path if this excess noise is unbearably high. The minimal excess noise production imposed by the Heisenberg principle corresponds to $S_{-}(\omega)S_{+}(\omega)=1$. It is a perfect complementary relation between quadratures: the deamplification of fluctuations in a given quadrature must equal the amplification of fluctuations in the conjugate. In that case we are dealing with a Minimum Uncertainty State (MUS) for those quadratures, the text-book definition of a squeezed state. Adding 1 to Eq. (\[eq:SN\]) and after some minor algebra $$\label{eq:SS} S_{-,+}^{ref}(\tilde{\omega}) = \frac{\left ( 2 |\tilde{B}| \pm \sqrt{(\tilde{\omega}^{2}+|\tilde{B}|^{2}+1-\Delta^{2})^{2} +4 \Delta^{2}} \right )^{2}} {(1-\tilde{\omega}^{2}-|\tilde{B}|^{2}+\Delta^{2})^{2} +4 \tilde{\omega}^{2}} \,.$$ Straightforward algebra leads to $S_{-}^{ref}(\tilde{\omega}) S_{+}^{ref}(\tilde{\omega}) =1$. The excess noise is thus minimal, again in parallel with the standard OPO system with real coefficients. Standard normalization ---------------------- As discussed in the introduction we are here principally interested in the squeezing behavior with respect to the photon number $n$. Unfortunately the tilde normalization is inappropriate for such a task as the frequency scale depends on $n$ itself. It is far more convenient to use $\gamma^{-1}$ as the time scale instead of $\gamma_{t}^{-1}$ and to normalize the photon number as $m = \nu n/\gamma$. In complete parallelism with the tilde normalization we then have $$\begin{aligned} \label{eq:normeqhat} \frac{d\,\delta c}{d \tau} & = & -\left [1 + 2 m K_{r} + i (\hat{\delta} + 2 m K_{i}) \right]\delta c + \left [\sqrt{K_{r}}\,\eta_{in} - (K_{r}+i K_{i}) m \right ]\, \delta c^{\dagger} \nonumber \\ & + & 2 \sqrt{m K_{r}}\;\delta d_{in} + \sqrt{2 \hat{\gamma}_{c}}\;\delta c_{in} + \sqrt{2\hat{\gamma}_{s}}\,s_{in}\,,\end{aligned}$$ where the hat denotes division by $\gamma$ and $$\label{eq:etain} \eta_{in} \equiv \frac{2 \sqrt{\nu}}{\gamma}\,\beta_{in}e^{-i2\theta}\,,$$ which represents the harmonic mode input amplitude normalized to the value at the standard OPO threshold. The spectra (\[eq:spectrum\]) now become \[eq:hatspectrum\] $$\begin{aligned} S^{a}_{-,+}(\hat{\omega}) & = & 1 + 4 \hat{\gamma}_{c} |\hat{B}| \frac{\hat{N}_{-,+}}{\hat{D}}\,, \label{eq:hatspectruma} \\ S^{b}_{-,+}(\hat{\omega}) & = & 1 + 8 K_{r}m |\hat{B}| \frac{\hat{N}_{-,+}}{\hat{D}}\,, \label{eq:hatspectrumb} \end{aligned}$$ with \[eq:hatSc\] $$\begin{aligned} \hat{N}_{-,+} & = & 2 |\hat{B}| (1 + 2 K_{r} m) \mp \sqrt{\left [ (1 + 2 K_{r} m)^{2} - (\hat{\delta} + 2 K_{i} m)^{2}+ |\hat{B}|^{2} +\hat{\omega}^{2} \right ]^{2} + 4 (1 + 2 K_{r} m)^{2} (\hat{\delta} +2 K_{i} m)^{2}}\,, \label{eq:hatScN}\\ \hat{D} & = & \left [(1+2 K_{r} m)^{2}+(\hat{\delta}+ 2 K_{i} m)^{2} - |\hat{B}|^{2} - \hat{\omega}^{2} \right ]^{2}+ 4 (1+2 K_{r} m)^{2} \hat{\omega}^{2} \,, \label{eq:hatScD}\end{aligned}$$ and $\hat{B} = \sqrt{K_{r}}\,\eta_{in} - (K_{r}+i K_{i}) m$.
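As a quick consistency check of the central result (\[eq:Scentral\]), one can verify numerically that the hat-normalized spectrum (\[eq:hatspectruma\]) coincides with $1+\tilde{\gamma}_{c}:S^{ref}_{-}:$ once $|\tilde{B}|$, $\Delta$ and $\tilde{\omega}$ are rebuilt from the hat-normalized quantities. The sketch below is our own illustration; the parameter values are arbitrary and need not correspond to a stable, physically driven fixed point, since the identity being checked is purely algebraic.

```python
import numpy as np

def S_ref_minus(w, Delta, B):
    # Normally ordered reference spectrum, Eq. (eq:SN), lower sign.
    num = 2.0 * B - np.sqrt((1.0 + w**2 + B**2 - Delta**2) ** 2 + 4.0 * Delta**2)
    den = (1.0 - w**2 - B**2 + Delta**2) ** 2 + 4.0 * w**2
    return 4.0 * B * num / den

# Illustrative hat-normalized parameters (SHG-like case, eta_in = 0).
m, Kr, Ki, delta_hat, eta_c, w_hat = 2.5, 0.7, -0.25, 0.3, 0.9, 0.4
B_hat = -(Kr + 1j * Ki) * m                  # hat{B} with eta_in = 0
g, d = 1.0 + 2.0 * Kr * m, delta_hat + 2.0 * Ki * m

# Eq. (eq:hatspectruma), lower sign:
N = 2.0 * abs(B_hat) * g - np.sqrt((g**2 - d**2 + abs(B_hat)**2 + w_hat**2) ** 2
                                   + 4.0 * g**2 * d**2)
D = (g**2 + d**2 - abs(B_hat)**2 - w_hat**2) ** 2 + 4.0 * g**2 * w_hat**2
S_hat = 1.0 + 4.0 * eta_c * abs(B_hat) * N / D

# Central result (eq:Scentral): tilde quantities are the hat quantities
# divided by gamma_t/gamma = 1 + 2 Kr m, and tilde gamma_c = eta_c / (gamma_t/gamma).
gt = g
S_tilde = 1.0 + (eta_c / gt) * S_ref_minus(w_hat / gt, d / gt, abs(B_hat) / gt)

print(S_hat, S_tilde)   # both numbers agree to machine precision
```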
We shall refer to the above formulae as the hat normalization. Referring the frequency to the cavity decay constant, with the subsequent renormalization of the system parameters, i.e. the hat normalization, is quite a standard procedure in the literature. Some care must be taken when studying the quantum noise behavior as a function of $n$ (or $m$). It is not a free parameter of the problem, as the input fields or the phase mismatch would be, but is in a nonlinear relation with them. We need, therefore, to check that the proposed values of $n$ are indeed a solution of Eq. (\[eq:neqn\]). Fortunately the spectra (\[eq:spectrum\]) do not depend on the overall sign of $\delta + 2 \Gamma n$ and therefore the conclusions reached in section \[LEE\] about the existence of $|\alpha_{in}|$ permit a safe variation of $n$ in search of strong noise reduction, provided the corresponding fixed points are stable. Squeezing at the fundamental mode --------------------------------- Applying Eq. (\[eq:Scentral\]) to the fundamental mode $$\label{eq:Satilde} S^{a}_{-,+}(\tilde{\omega}) = 1 + \tilde{\gamma}_{c} :S^{ref}_{-,+}(\tilde{\omega}): \,.$$ It is clear that the best performance corresponds to $\gamma_{nl} = 0$, that is, either $n=0$ or $\mu = 0$, as then $\tilde{\gamma}_{c}$ maximizes to $\eta \equiv \gamma_{c}/(\gamma_{c}+\gamma_{s})$ (the escape efficiency of the cavity). The case $n=0$ corresponds to the very well known case of squeezed vacuum generation. For $\mu =0$ and finite $n$ the system is formally equivalent to a resonant optical Kerr effect system whose quantum noise behavior has been amply studied previously [@Rey89]. The condition (\[eq:refins\]) reduces for $\mu =0$ to $\delta = -2 n \Gamma \pm \sqrt{n^{2}\Gamma^{2}-\gamma^{2}}$, the well known turning points of optical dispersive bistability [@Rey89], but with the nonlinear dispersion induced by cascading. Indeed, such cascading induced bistability has been experimentally demonstrated in [@Whi96a]. Rewriting it within the hat normalization the condition reads $$\label{eq:insKerr} \hat{\delta}_{\pm} = -2 m K_{i} \pm \sqrt{m^{2} K_{i}^{2} - 1} \,,$$ where $K_{i}=-1/\pi$ as it is evaluated at $\Delta k L_{m} = 2 \pi$. Since $\tilde{\gamma}_{c}$ is independent of $\Delta k$ and $n$, the optimum path with respect to $m$ (the only remaining free parameter) is determined solely by the reference system. It corresponds to increasing $m$ up to $m=\pi$ (where the condition (\[eq:insKerr\]) is reached) while maintaining $\hat{\delta} = 2 m / \pi$ and $\omega = 0$. Fig. \[fg:SaKerr\] displays the evolution of both the maximum squeezing and the maximum excess noise following such a path for three values of the escape efficiency, namely, 0.9, 0.99 and the ideal 1. The noise is expressed in dB with respect to the vacuum noise. A Heisenberg limited excess noise appears in such a case as a mirror image of the squeezing. The instability is signaled by the divergence in the excess noise. Above it, the curves shown are not physical as they correspond to unstable fixed points. The case $\eta = 0.99$ in Fig. \[fg:SaKerr\] shows an excellent behavior with an almost Heisenberg limited excess noise up to near the instability. ![Noise spectra at zero frequency of the fundamental mode following an optimum path for three escape efficiencies of the cavity including the ideal case $\eta = 1$. The curves above the divergences are not physical.[]{data-label="fg:SaKerr"}](SaKerr.eps) Fig.
\[fg:SaKcomp\] illustrates the idea of optimum path by comparing the $\eta=0.99$ plot of Fig. \[fg:SaKerr\] against various cases with fixed values of $\hat{\delta}$. Below $m=\pi$, for a given $m$ the maximum squeezing is obtained when $\Delta = 0$, as expected. Above $m=\pi$ it is not possible to reach the minimum noise of $1-\eta$ while fulfilling $\Delta = 0$. ![Comparison among noise spectra at zero frequency (fundamental mode) following various paths in the parameter space. The optimum one corresponds to $\Delta = 0$. Above the divergences the results are not physical.[]{data-label="fg:SaKcomp"}](SaKcomp.eps) With $\mu =0$ the formulae simplify enough to allow a simple expression for the squeezing phase. More specifically, from $B=-i\Gamma \alpha^{2}$, $\theta_{m}=\theta + \pi/2$. On the other hand, substituting Eq. (\[eq:state\]) in Eq. (\[eq:boundarya\]) results in $\sqrt{2 \gamma_{c}}\alpha_{out} = \alpha (\gamma_{c}-\gamma_{s} + i \Gamma n)$, giving a squeezing phase relative to that of the output field of $$\frac{\pi}{2} - \arctan \left (\frac{\Gamma n}{\gamma_{c}-\gamma_{s}} \right )\,.$$ At the instability $\Gamma n = \gamma$ and for low $\gamma_{s}$ it approaches $45^{o}$. There is a possibility of an extra control of the squeezing phase, not present in the conventional Kerr effect system, by making use of the harmonic mode. Taking a finite $\mu$ but low enough so that $\mu n \ll \gamma$ and at the same time a $\beta_{in}$ high enough to imply $2\sqrt{\mu} \beta_{in} \approx \gamma$, we will still have $\gamma_{nl} \approx \gamma$ while the squeezing phase relative to the output field will depend on both the modulus and the relative phase between the input fields. In practice, however, maintaining $\mu n$ very low could imply an exceedingly high $\beta_{in}$ in order to have a $2\sqrt{\mu}\beta_{in}$ intense enough for a significant influence on the final phases. Of course, the instability point would accordingly depend on $\beta_{in}$. There is no hope of any behavior similar to that reported in [@Cab97], as a competition between second and third order nonlinearities needs $\mu \neq 0$, opening the fundamental mode to the fluctuations of the input harmonic mode with strong deleterious effects. At most, some remnants of the enhanced efficiency coming from the competition between nonlinearities can be observed for low $\gamma_{nl}$. Then, as shown in Fig. \[fg:Sa\], the best working point is not necessarily located at $\mu = 0$, i.e. maximum squeezing is obtained with a finite mismatch. ![Squeezing in the fundamental mode at zero frequency as a function of the phase mismatch for a low intracavity photon number.[]{data-label="fg:Sa"}](Sa.eps) Squeezing at the harmonic mode ------------------------------ For the harmonic mode Eq. (\[eq:Scentral\]) yields $$\label{eq:Sbtilde} S^{b}_{-,+}(\tilde{\omega}) = 1 + \tilde{\gamma}_{nl} :S^{ref}_{-,+}(\tilde{\omega}): \,.$$ Now, the situation is the complete opposite: the performance is favored by a finite $\mu$, in order to have a non-zero $\gamma_{nl}$, and by a large $n$, in order to bring the ratio $\tilde{\gamma}_{nl}$ close to one. In fact, under ideal conditions of perfect dynamic noise suppression and no absorption and scattering losses ($\gamma_{s}=0$), the squeezing in the two modes is complementary in the sense of $$\label{eq:abcomple} S^{a}_{-}+S^{b}_{-} = 2 -\frac{\gamma_{c}}{\gamma_{t}}-\frac{\gamma_{nl}}{\gamma_{t}} \;=\; 1\,,$$ a direct consequence of the fluctuation-dissipation relation (\[eq:genadd1\]).
This complementarity has been previously reported for the doubly resonant degenerate parametric oscillator [@Fab90]. The maximum squeezing available for the harmonic mode, whatever the dynamic response of the system, is easily obtained by setting $:S^{ref}_{-,+}(\tilde{\omega}):$ to -1 in Eq. (\[eq:Sbtilde\]), that is, $$\label{eq:ilimit} S_{M} = 1 - \frac{2 m K_{r}}{1+2 m K_{r}} \; = \; \frac{1}{1+2 m K_{r}} \,.$$ This static contribution to the noise is now nonlinear in the sense that it depends on the phase mismatch and $m$. An immediate consequence of Eq. (\[eq:ilimit\]) is the possibility of an arbitrarily large quantum noise reduction for any finite value of $K_{r}$. The 1/9 limit of the conventional phase-matched SHG is therefore due to a failure of the setup to maximize the dynamic response of the system. Let us then focus first on the SHG-like case with $\beta_{in}=0$, as it includes the above mentioned conventional setup (the experiments in [@Pas94] and [@Tsu95]). The instability points are now given by (directly in the hat normalization) $$\label{eq:hatinsb} \hat{\delta}_{\pm} = - 2 m K_{i} \pm \sqrt{m^{2} (K_{i}^{2}-3 K_{r}^{2}) - 4 K_{r} m - 1} \,.$$ Both kinds of nonlinearities (dispersive and absorptive) are in this case necessary, as the factor $K_{i}^{2}-3 K_{r}^{2}$ needs to be positive to allow $\hat{\delta}_{\pm}$ to be real. The phase-matched case is therefore excluded. ![The value of $K_{i}^{2}-3 K_{r}^{2}$ as a function of the phase mismatch.[]{data-label="fg:g-3f"}](Ki2-3Kr2.eps) Fig. \[fg:g-3f\] shows $K_{i}^{2}-3 K_{r}^{2}$ as a function of the phase mismatch and indeed just above $\pi$ it is positive. Optimum approaches to (\[eq:hatinsb\]) are now more difficult to evaluate than in the fundamental mode as both the dynamic processing of the noise and the static contribution from the noise inputs (encoded in $\tilde{\gamma}_{nl}$) depend on $m$ and $\Delta k$. With respect to $m$ it is clear that the static part is optimized at $m \rightarrow \infty$. This limit can be approached letting $\tilde{\omega} = 0$ and $\hat{\delta} = - 2 K_{i} m$ (i.e. $\Delta = 0$). $|\tilde{B}|$ reduces in this case of $\beta_{in}=0$ to $m \sqrt{K_{r}^{2}+K_{i}^{2}}/(1+2 m K_{r})$, showing a monotonically increasing behavior with respect to $m$ from 0 to the maximum (at $m \rightarrow \infty$) $$\label{eq:Bmax} |\tilde{B}| = \frac{1}{2}\left [1+\left (\frac{K_{i}}{K_{r}}\right )^{2} \right ]^{1/2} \,.$$ Notice that for $K_{i}^{2}=3 K_{r}^{2}$ it consistently equals 1. We thus have both $\tilde{\gamma}_{nl}\rightarrow 1$ and the fastest approach of $|\tilde{B}|$ to 1 when $m \rightarrow \infty$. Therefore, the squeezing along an optimum path with respect to $\Delta k$ is given by substituting Eq. (\[eq:Bmax\]) in the spectrum (\[eq:Sopt\]) and then the obtained $:S_{opt}:$ in Eq. (\[eq:Sbtilde\]). Obviously, in a real experiment $m$ can be large but always finite. Let us take as a “large” $m$ one giving an $S_{M}$ around 20 dB, as in the case $\eta = 0.99$ of Fig. \[fg:SaKerr\]. This corresponds to $m=50$. Figure \[fg:Sbkm50\] displays $S^{b}_{-,+}$ as a function of the phase mismatch in such a case. To illustrate the modulation exerted by $S_{M}$ we have taken this time $\hat{\delta}$ equal to the real part of Eq. (\[eq:hatinsb\]) plus a very small number. In this way the plot remains valid for the whole range of $\Delta k L_{m}$. Where Eq. (\[eq:hatinsb\]) is complex the condition $\Delta =0$ is almost fulfilled, and above the instability the noise reduction follows $S_{M}$.
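The window of phase mismatch in which the instability (\[eq:hatinsb\]) exists, and the corresponding static limit $S_{M}$, are easy to map out numerically. The sketch below is an illustration we add; it uses the plane-wave $K_{r}$ and $K_{i}$ of Eqs. (\[eq:mu\])–(\[eq:Ga\]) with $\nu=1$ and $m=50$, and prints where the first instability window above $\pi$ opens together with the value of $S_{M}$ there.

```python
import numpy as np

def sinc(x):
    return np.sinc(x / np.pi)

def K_r(dkL):
    return sinc(dkL / 2.0) ** 2

def K_i(dkL):
    return (2.0 / dkL) * (sinc(dkL / 2.0) * np.cos(dkL / 2.0) - 1.0)

m = 50.0
dkL = np.linspace(0.05, 4.0 * np.pi, 4000)

# Discriminant of Eq. (eq:hatinsb): the instability exists where it is >= 0.
disc = m**2 * (K_i(dkL)**2 - 3.0 * K_r(dkL)**2) - 4.0 * K_r(dkL) * m - 1.0
window = disc >= 0.0
S_M_dB = 10.0 * np.log10(1.0 / (1.0 + 2.0 * m * K_r(dkL)))   # Eq. (eq:ilimit)

print("instability window opens near Delta k L_m =", dkL[window][0] / np.pi, "* pi")
print("S_M at the window opening:", S_M_dB[window][0], "dB")
```

Consistent with Fig. \[fg:g-3f\], the window opens slightly above $\Delta k L_{m}=\pi$, where $S_{M}$ is still around $-15$ dB for $m=50$.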
Again the pernicious effect of the instability regarding the excess noise has a very short range. For comparison $S_{M}$ is also depicted. ![Noise spectra at zero frequency of the harmonic mode following a nearly optimum path with respect to the phase mismatch for the SHG like case.[]{data-label="fg:Sbkm50"}](Sbkm50.eps) The optimum path with respect to $m$ is much more complicated to find because of the intricate dependence of $K_{i}$ and $K_{r}$ on the phase mismatch. Figure \[fg:Sboptbin0\] has been generated by numerically finding the minima of $S_{-}^{b}(0)$ while scanning the range of $m$. For comparison the phase-matched case is also depicted, showing an asymptotic behavior towards $-10 \log 9$. For low values of $m$ the effect of $\tilde{\gamma}_{nl}$ overwhelms the dynamic response so that the best value corresponds to maximizing $K_{r}$. As soon as the two curves depart from each other the dynamic response dominates the behavior and the minimum noise is at the instability, as in Fig. \[fg:Sbkm50\]. At this stage the optimum path begins to follow the instability all the time. It should then be taken as a mathematical limit. However, in view of Fig. \[fg:Sbkm50\], before reaching it, bearable values of the excess noise are accessible with a slight diminution of the squeezing. ![Squeezing in the harmonic mode along an optimum path with respect to the normalized intracavity photon number ($m$) for the SHG like case compared with the phase-matched SHG case.[]{data-label="fg:Sboptbin0"}](Sboptbin0.eps) It is worth mentioning that a squeezing as large as 48% induced by cascading has very recently been reported [@Kas97]. The cascading was due, however, to a detuning of the pump mode in a triply resonant non-degenerate OPO with a much lower finesse for the pump mode, rather than to phase mismatch. Under such conditions a cascaded $\chi^{(3)}$ is also induced, leading ideally to perfect squeezing in the pump mode. Although a finite mismatch allows one to reach $S_{M}$, the overall optimum working point corresponding to $\Delta k = 0$ is out of reach. The question then arises of whether it is possible to fulfill Eq. (\[eq:refins\]) at $\Delta k = 0$. A glance at the definition of $B$ suggests it should be possible by adding a driving to the harmonic mode. In such a case $\tilde{\gamma}_{nl}$ simplifies to $2m/(1+2m)$, obviously independent of $\eta_{in}$. Constructing an optimum path with respect to the harmonic input then reduces to setting $\Delta=\tilde{\omega}=0$. Phase matching along with $\Delta = 0$ implies $K_{r} = 1$, $K_{i} = 0$ and $\hat{\delta} = 0$, so that the instability condition (\[eq:refins\]) simplifies to $$\label{eq:etains} 1 + 2 m = |\eta_{in} - m| \,,$$ a perfectly achievable condition. We can further optimize by choosing the phase of $\eta_{in}$ adequately to bring $\tilde{B}$ as close to one as possible. The extreme cases correspond to $\eta_{in}$ real, i.e., $\eta_{in}= - (1 + m)$ and $\eta_{in}= 1 + 3 m$. From Eqs. (\[eq:phases\]) and (\[eq:ain\]) it is easy to check that they correspond respectively to $\phi-\varphi/2 = \pi$ and $\phi-\varphi/2=0$. The negative case maximizes $\tilde{B}$. It has been previously reported in [@Sch95]. Taking the squared modulus of Eq. (\[eq:boundaryb\]), the negative case appears to promote the harmonic output power while the converse holds for the positive one.
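At phase matching and $\Delta=0$ the discussion above reduces to elementary arithmetic: $|\tilde{B}|=|\eta_{in}-m|/(1+2m)$, $\tilde{\gamma}_{nl}=2m/(1+2m)$ and $S^{b}_{-}=1+\tilde{\gamma}_{nl}:S^{ref}_{-}:$. The following sketch is an illustration we add (with $m=50$, as in Fig. \[fg:Sbin50\]); it confirms that at the two instability values $\eta_{in}=-(1+m)$ and $\eta_{in}=1+3m$ the zero-frequency noise reaches the static limit $S_{M}=1/(1+2m)$ (a limit of the linearized theory), while at $\eta_{in}=m$ it returns to the vacuum level.

```python
import numpy as np

m = 50.0
S_M = 1.0 / (1.0 + 2.0 * m)                       # Eq. (eq:ilimit) at K_r = 1

def S_b_minus(eta_in):
    # Phase-matched case, Delta = omega = 0: only |B|~ and gamma_nl~ matter.
    B = abs(eta_in - m) / (1.0 + 2.0 * m)         # |tilde B|
    g_nl = 2.0 * m / (1.0 + 2.0 * m)              # tilde gamma_nl
    return 1.0 - g_nl * 4.0 * B / (1.0 + B) ** 2  # Eqs. (eq:Sbtilde), (eq:Sopt)

for eta in [-(1.0 + m), m, 1.0 + 3.0 * m]:
    print("eta_in =", eta, " S_-^b(0) =", 10.0 * np.log10(S_b_minus(eta)), "dB")
print("static limit S_M:", 10.0 * np.log10(S_M), "dB")
```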
The squeezing phase is also easy to calculate in this case. In particular, given the correlation (\[eq:corrb\]), $\nu(\omega)$ is determined by the phase of $\alpha^{2} B$ (independent of $\omega$ as $\Delta =0$), something proportional to $$(\eta_{in}-m) e^{i 4 \theta} \,.$$ The corresponding squeezing phases are $\theta_{m} = 2\theta + \pi$ for the negative case, while for the positive case it changes from $\theta_{m} = 2\theta + \pi$ to $\theta_{m} = 2\theta + \pi/2$ at $\eta_{in}=m$. On the other hand, the output harmonic amplitude is proportional to (see Eq. (\[eq:boundaryb\])) $$\label{eq:bout} b_{out} \propto (\eta_{in} - 2 m) e^{i (2 \theta + \pi)} \,.$$ Consequently, the relative squeezing phase for the negative $\eta_{in}$ is $\pi$, i.e., amplitude squeezing. The positive case is more complicated. It remains equal to $\pi$ (amplitude squeezing) up to $\eta_{in}=m$. Above this value it changes to $\pm \pi/2$ depending on the sign of $\eta_{in}/2 - m$, yielding in either case phase squeezing. At first glance, it appears there is a sudden change from amplitude to phase squeezing when the input phases are fixed to $\phi-\varphi/2=0$ and $|\eta_{in}|$ passes through $m$. It is not so, however, as at this point $B=0$ and the state collapses to a coherent state with no squeezing. The situation is clearly depicted in figure \[fg:Sbin50\] where $S_{-,+}^{b}(0)$ are displayed as a function of $\eta_{in}$ assumed real. The r.h.s. of the plot corresponds to $\phi-\varphi/2 = 0$ while the l.h.s. corresponds to $\phi-\varphi/2 = \pi$, and for negative ordinates it should be considered as an optimum path with respect to $\eta_{in}$. The behavior is completely symmetric with respect to $\eta_{in}=m$, where both the squeezing and the excess noise equal that of the vacuum. ![Noise spectra at zero frequency (harmonic mode) following an optimum path with respect to the normalized input harmonic amplitude ($\eta_{in}$). The curves are not physical above the divergences.[]{data-label="fg:Sbin50"}](Sbin50.eps) The optimum path with respect to $m$ is now given by $S_{M}$ at $K_{r}=1$. Figure \[fg:SM\] is the equivalent of Fig. \[fg:Sboptbin0\] for the new situation. It represents the maximum efficiency, as far as quantum noise reduction is concerned, that the system can yield in any way with respect to $m$. The improvement with respect to the standard phase-matched SHG as well as to the optimized SHG is certainly high. ![Maximum squeezing (harmonic mode) as a function of the normalized intracavity photon number ($m$) in a singly resonant second order nonlinear device. For comparison the phase-matched and optimized SHG cases are also shown.[]{data-label="fg:SM"}](SM.eps) Discussion and conclusions ========================== [\[DC\]]{} The present work has two main purposes. On one side, to gain physical insight into the origins of quantum noise in singly resonant systems. On the other, to explore their potential as squeezed light sources. In such a task we have used a model including all the relevant physics we wanted to address but simple enough to be tractable. The results shown in the previous sections certainly reveal a high potential of the studied configurations. An evaluation of the limits of the model in reproducing the real physical situation, as well as a discussion of possible implementations, seems, therefore, in order. One obvious idealization of the model is to assume perfectly coherent inputs, neglecting the excess noise of real lasers, something expected to have deleterious effects at low frequencies.
White and coworkers [@Whi96b] have developed an analytical approach to this problem resulting in an impressive agreement with the experiments. As expected, the excess noise completely destroys the squeezing at low frequencies. In their experiments, however, the deleterious effect was restricted to only 7 MHz by adding a mode cleaner to the system, the spectrum coinciding with the ideal one outside this range. Even better, in [@Bre97] the laser noise was shot-noise-limited down to 1 MHz, again using an external mode cleaner. Considering the assumption of coherent states for the input modes as sensible, as well as a value of $m$ around 3 (we will see below that this appears to be the case), our main concern about the fundamental-mode results summarized in Fig. \[fg:SaKerr\] refers to the feasibility of the chosen escape efficiencies. The ratio $\gamma_{c}/(\gamma_{c}+\gamma_{s})$ is difficult to maximize in a resonant mode because, by its own resonant nature, $\gamma_{c}$ must be rather low. Thus, in [@Whi96a] it was only 0.52, while in [@Pas94] it was 0.36. Even in [@Kur93], a doubly resonant system specifically designed to squeeze the fundamental mode, the escape efficiency was around 0.9, limiting the maximum squeezing achievable to 90% (in practice, a 52% noise reduction was reached). It appears, then, that nowadays $\eta=0.99$ should rather be taken as an ideal illustrative case. In contrast, the ultimate limit for the noise suppression in the harmonic mode (Eq. (\[eq:ilimit\])) is pushed up by the fundamental mode photon number, opening a way to bypass the usual untouchable limit imposed by the escape efficiency of the cavity (as in the fundamental mode). Therefore, the squeezing in the harmonic mode can be arbitrarily large under the ideal assumption that the energy load inside the cavity can also be arbitrarily large. However, this is not totally true, as the model does not take into account the losses in the harmonic mode which necessarily limit the degree of noise suppression. We can estimate this limitation by assuming the absorption in a single pass through the nonlinear material to be equivalent to the effect of a beam splitter with the adequate reflectivity. Taking an absorption of 0.6%/cm as in [@Kur93] and a length of 1 cm, the equivalent reflectivity would be $6\times 10^{-3}$. The spectrum after the beam splitter is given by $S_{out}=1 + T :S_{in}:$. Setting $:S_{in}: = -1$ and $T = 1-R$, the ultimate squeezing achievable is precisely $R = 6\times 10^{-3}$, i.e. - 22 dB. In other words, the chosen value of $m=50$ in Figs. \[fg:Sbkm50\] and \[fg:Sbin50\] represents more or less the maximum the model can stand without the inclusion of the harmonic mode losses. Of course, we still cannot assume $m=50$ as a realistic limit for state-of-the-art devices, as $m$ depends not only on the intra-cavity photon number but also on the ratio $\nu/\gamma$ between the nonlinearity and the losses. This ratio must be high enough in order to prevent a degradation of the nonlinear optical response in the system, as discussed in the introduction. Besides, this ratio scales down the power available in the external sources. In view of these complications, probably the most reliable way of setting the physical scale of $m$ is to compare the results with the reported experiments. In [@Tsu95] the quoted noise reduction was -5.2 dB. Setting to zero $\Delta k$ and $\beta_{in}$ in Eq. (\[eq:hatspectrumb\]), corresponding to phase-matched SHG, a -5.2 dB squeezing results at $m=2.5$, far from the $m=50$ limit.
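Both numerical estimates of this paragraph are easy to retrace. The sketch below is an added illustration: it evaluates the loss-limited bound $S_{out}=1+T:S_{in}:$ for $R=6\times 10^{-3}$, and the phase-matched SHG squeezing at zero frequency and zero detuning written in the tilde normalization, which gives about $-5.1$ dB at $m=2.5$, consistent with the $-5.2$ dB quoted above to within rounding.

```python
import numpy as np

# Loss-limited bound: S_out = 1 + T*(:S_in:) with :S_in: = -1 and T = 1 - R.
R = 6.0e-3
print("loss limit:", 10.0 * np.log10(1.0 - (1.0 - R)), "dB")   # = 10 log10(R)

def S_b_phase_matched(m):
    # Eq. (eq:hatspectrumb) at Delta k = 0, beta_in = 0, zero detuning and
    # zero frequency, rewritten via the tilde normalization:
    # |B|~ = m/(1+2m), gamma_nl~ = 2m/(1+2m).
    B = m / (1.0 + 2.0 * m)
    return 1.0 - (2.0 * m / (1.0 + 2.0 * m)) * 4.0 * B / (1.0 + B) ** 2

for m in [2.0, 2.5, 3.0]:
    print("m =", m, " squeezing =", 10.0 * np.log10(S_b_phase_matched(m)), "dB")
```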
Fortunately, the limit (\[eq:ilimit\]) grows quite quickly for low $m$ (see Fig. \[fg:SM\]). Thus, 10 dB of noise suppression are reached at $m = 5$, not such an unthinkable value. However, -15 dB of noise reduction requires $m=15$, while a -20 dB figure is at the $m=50$ limit, an order of magnitude higher. New nonlinear materials seem the only possibility for such high squeezing degrees. A promising route consists in the use of resonant nonlinearities in asymmetric quantum wells (AQWs). Huge nonlinearities have been demonstrated in frequency doubling experiments, and even a tuning of the nonlinearity with a d.c. field [@Sir92]. Obviously, the absorption is also enhanced by the resonance. This can be a problem, as the ratio $\nu/\gamma$ could in the end not be increased. To assess this possibility requires quite a detailed analysis outside the scope of the present work. We can foresee, however, a promising advantage in the fact that the losses in the harmonic mode have little influence on the performance. By maintaining a strong two-photon resonance but relaxing the one-photon counterpart (tuning with a d.c. field or by an adequate energy level engineering), the nonlinearity would certainly be enhanced while the losses at the fundamental mode would not increase so strongly, thus enhancing $\nu/\gamma$. With only one passage of the harmonic mode through the cavity, and taking into account that a very thin layer of material is capable of SHG [@Sir92], the corresponding deleterious effect cannot be very large. An even more exciting possibility comes from the recent experimental demonstrations of absorption inhibition in AQWs induced by quantum interference [@Fai96; @Schm97; @Fai97]. The absorption transparency and the resonant enhancement can be combined using an adequate quantum well engineering, leading to very efficient frequency doublers (see [@Schm96], where precisely a scheme only resonant at the harmonic mode is proposed). These are certainly promising perspectives, but we should not dismiss the improvements available within the range of present nonlinear crystal performance. Let us then focus around $m=2.5$. As shown in the previous section the best strategy corresponds to driving both modes with relative phases $\varphi-\phi/2 = \pi/2$ (negative $\eta_{in}$) and $\Delta=0$. ![Noise spectra at zero frequency (harmonic mode) for $\Delta k = \hat{\delta} = 0$ as function of the normalized intracavity photon number at various “distances” from the dynamic instability.[]{data-label="fg:sqm5"}](Sbm5.eps) In Fig. \[fg:sqm5\] the noise behavior up to $m=5$ is displayed for various “distances” to the instability (\[eq:etains\]). Even at half the instability $\eta_{in}$ value, the squeezing at $m=2.5$ grows from -5.1 dB (69%) to -7.2 dB (80%). The excess noise, on the other hand, rapidly increases at low $m$ but it also saturates quickly to bearable values. The improvement, although nothing spectacular, is quite substantial. In [@Sch95] it was not reckoned so because the noise suppression was studied as a function of the input power. Given its nonlinear relation with $m$, the improvement is much slower with respect to this variable. Besides the squeezing, the output power is also enhanced. Taking a negative $\eta_{in}$ in Eq. (\[eq:bout\]), the output power results in $$\label{eq:Pw} P_{out} \propto (2 m + |\eta_{in}|)^{2} \,,$$ and thus the harmonic mode input contributes constructively to it. As shown in Fig. \[fg:Pw\], at half way to the instability the power is nearly doubled.
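The figures quoted in this paragraph follow directly from the formulae of section \[SP\]. The sketch below (our illustration, again at phase matching with $\Delta=\tilde{\omega}=0$) tabulates the static limit $S_{M}=1/(1+2m)$ and evaluates the squeezing and the relative output power at $m=2.5$ for $\eta_{in}=0$ and for $\eta_{in}$ at half the instability value $-(1+m)$, reproducing the $-5.1$ dB, $-7.2$ dB and near-doubling of the power mentioned above.

```python
import numpy as np

def S_b(m, eta_in):
    # Phase matching, Delta = omega = 0 (tilde-normalized formulae).
    B = abs(eta_in - m) / (1.0 + 2.0 * m)
    return 1.0 - (2.0 * m / (1.0 + 2.0 * m)) * 4.0 * B / (1.0 + B) ** 2

for m in [5.0, 15.0, 50.0]:
    print("m =", m, " S_M =", 10.0 * np.log10(1.0 / (1.0 + 2.0 * m)), "dB")

m = 2.5
for eta in [0.0, -(1.0 + m) / 2.0]:
    S = S_b(m, eta)
    P_rel = (2.0 * m + abs(eta)) ** 2 / (2.0 * m) ** 2   # Eq. (eq:Pw) vs eta_in = 0
    print("eta_in =", eta, " squeezing =", 10.0 * np.log10(S), "dB",
          " noise reduction =", 1.0 - S, " power gain =", P_rel)
```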
Although from the theoretical point of view the injection of a coherent signal in the harmonic mode looks quite harmless, the experimental implementation is not trivial. However, the remarkable achievements in [@Sch96a; @Bre97] with the OPA strongly support the feasibility of the idea. ![The harmonic output power corresponding to the cases of Fig. \[fg:sqm5\].[]{data-label="fg:Pw"}](Pw.eps) Finally, a word of caution about the design of the device. It is important to avoid the onset of oscillations out of the fundamental mode (the so-called subharmonic pumped OPO [@Sch96b; @Sch97]), something capable of destroying the noise reduction [@Whi97]. At first glance, finite values of $\eta_{in}$ would favor the effect by promoting the down conversion. But it is not necessarily so, as the down conversion is encouraged only for a given range of the relative phase between the two driving fields. Thus, for the negative $\eta_{in}$ case studied above, the harmonic output power being maximized, the down conversion is minimized. To conclude, let us summarize the most relevant results. Firstly, for any system with only one effective mode we have given a systematic approach capable of isolating the processing of quantum noise by the dynamic response of the system. This dynamic processing is maximized at zero frequency, zero generalized nonlinear detuning ($\Delta$ as defined in section \[SP\]) and at a dynamic instability. The static contributions to the noise coming from the different noisy inputs can in some cases move the overall optimum working point away from that corresponding to maximum dynamic noise suppression. In spite of this, having a rule to maximize the dynamic quantum noise suppression proved very useful for characterizing the squeezing behavior when applied to a specific optical system. In particular, for the case of a singly resonant second order nonlinear device, the squeezing at the fundamental mode is limited by the escape efficiency of the cavity, the best working point being within the figures of merit of conventional nonlinear crystals. In the harmonic mode high squeezing requires new materials, but is limited only by the losses in the non-resonant harmonic mode, opening the possibility of using multiple quantum wells with resonantly enhanced nonlinearities. However, with standard nonlinear crystals a substantial improvement with respect to the reported experiments is still possible by injecting a coherent driving in the harmonic mode. Besides, the output power is highly enhanced. C. C. thanks S. Schiller and especially A. G. White for useful comments and suggestions. Work supported in part by grants No. TIC95-0563-C05-03, No. PB96-00819, CICYT (Spain) and Comunidad de Madrid 06T/039/96 (Spain). E-mail: ccabrilo@foton0.iem.csic.es S. F. Pereira et al., Phys. Rev. A38, 4931 (1988). A. Sizmann et al., Opt. Commun. 80, 138 (1990). P. Kurz et al., Europhys. Lett. 24, 449 (1993). R. Paschotta et al., Phys. Rev. Lett. 72, 3807 (1994). T. C. Ralph et al., Opt. Lett. 20, 1316 (1995). H. Tsuchida, Opt. Lett. 20, 2240 (1995). S. Youn et al., Opt. Lett. 21, 1597 (1996). E. S. Polzik, J. Carri, H. J. Kimble, Appl. Phys. B 55, 279 (1992). G. Breitenbach et al., J. Opt. Soc. Am. B 12, 2304 (1995). K. Schneider et al., Opt. Lett. 21, 1396 (1996). G. Breitenbach, S. Schiller and J. Mlynek, Nature 387, 471 (1997). K. Sundar, Phys. Rev. Lett. 75, 2116 (1995). M. A. M. Marte, Phys. Rev. Lett. 76, 4815 (1995). M. A. M. Marte, J. Opt. Soc. Am. B 12, 2296 (1995). C. Cabrillo, J. L. Rold[á]{}n and P.
Garc[í]{}a-Fern[á]{}ndez, Phys. Rev. A56, 5131 (1997). K. V. Kheruntsyan et al, Phys. Rev. A57, 535 (1998). P. Tombesi and H. P. Yuen in [*Coherence and Quantum Optics V*]{}, edited by L. Mandel and E. Wolf (Plenum 1984). P. Tombesi, in [*Quantum Optics IV*]{}, edited by J. D. Harvey and D. F. Walls, (Springer, 1986). P. Garc[í]{}a-Fern[á]{}ndez et al, Phys. Rev. A43, 4923 (1991). C. Cabrillo et al, Phys. Rev. A45, 3216 (1992). C. Cabrillo and F. J. Bermejo, Phys. Lett. A 170, 300 (1992). C. Cabrillo and F. J. Bermejo, Phys. Rev. A 48, 2433 (1993). G. Yu. Kryuchkyan and K. V. Kherunstsyan, Opt. Commun. 127, 230 (1996). K. V. Kheruntsyan et al, Opt. Commun. 139, 157 (1997). A. G. White, J. Mlynek and S. Schiller, Europhys. Lett. 35, 425 (1996). M. J. Collet and R. B. Levien, Phys. Rev. A43, 5086 (1991). S. Schiller et al, Appl. Phys. B 60, S77 (1995). M. J. Collet and C. W. Gardiner, Phys. Rev. A30, 1386 (1984). C. W. Gardiner, [*Quantum Noise*]{}, chapter 5.3 (Springer-Verlag, 1991). S. Reynaud et al, Phys. Rev. A40, 1440 (1989). C. Fabre et al, Quantum Opt. 2, 159 (1990). K. Kasai, G. Jiangrui and C. Fabre, Europhys. Lett. 40, 25 (1997). A. G. White et al, Phys. Rev. A54, 3400 (1996). C. Sirtori et al, Appl. Phys. Lett. 60, 151 (1992). J. Faist et al, Opt. Lett. 21, 985 (1996). H. Schmidt et al, Appl. Phys. Lett. 70, 3455 (1997). J. Faist et al, Nature 390, 589 (1997). H. Schmidt and A. Imamoglu, Opt. Commun.131, 333 (1996). S. Schiller et al, Appl. Phys. Lett. 68, 3374 (1996). S. Schiller, R. Bruckmeier and A. G. White, Opt. Commun. 138, 158 (1997). A. G. White et al, Phys. Rev. A55, 4511 (1997).
{ "pile_set_name": "ArXiv" }
--- abstract: 'This note is devoted to exploring Hölder’s quasicontinuity for the Riesz-Morrey potentials, and its application to the corresponding nature of some nonnegative weak solutions of the quasilinear Lane-Emden equations for the $p$-Laplacian.' address: - 'Department of Mathematics, University of Kentucky, Lexington, KY 40506-0027' - 'Department of Mathematics and Statistics, Memorial University, St. John’s, NL A1C 5S7, Canada' author: - 'David R. Adams' - Jie Xiao title: | The Hölder Quasicontinuity for\ Riesz-Morrey Potentials and Lane-Emden Equations --- [^1] Introduction {#s1} ============ Let $(\alpha,p,\lambda)\in (0,n)\times [1,\infty)\times (0,n]$ and let $\Omega$ be a bounded domain in the $n$-dimensional Euclidean space $\mathbb R^n$ with $n\ge 2$. The main ideas in [@AX04; @AX11b; @AX11a; @AX11c; @AX12a; @Ser] lead us to deal with two basic concepts in the theory of Morrey spaces and their potentials. The first one is the so-called Riesz-Morrey potential – the $\alpha$-order Riesz singular integral $$I_\alpha f(x)=\int_{\mathbb R^n} f(y)|y-x|^{\alpha-n}\,dy=\int_{\Omega} f(y)|y-x|^{\alpha-n}\,dy$$ of $f$ (whose value on $\mathbb R^n\setminus\Omega$ is defined to be $0$) in the Morrey space $$L^{p,\lambda}(\Omega)=\left\{g\in L^p(\Omega):\ \|g\|_{L^{p,\lambda}(\Omega)}=\sup_{x\in\Omega,\ 0<r<\hbox{diam}(\Omega)} \Big(r^{\lambda-n}\int_{B(x,r)\cap\Omega}|g|^p\Big)^\frac1p<\infty\right\},$$ where $\hbox{diam}(\Omega)$ is the diameter of $\Omega$, $B(x,r)$ is the open ball with center $x$ and radius $r$, and the integral is taken with respect to the $n$-dimensional Lebesgue measure $dy$. The second one is the Riesz-Morrey capacity of a set $E\subseteq\Omega$: $$C_\alpha(E;L^{p,\lambda}(\Omega))=\inf_{E\subseteq\ open\ O\subseteq\Omega} C_\alpha(O;L^{p,\lambda}(\Omega))=\inf_{E\subseteq\ open\ O\subseteq\Omega}\sup_{compact\ K\subseteq\ open\ O}C_\alpha(K;L^{p,\lambda}(\Omega)),$$ where $$C_\alpha(K;L^{p,\lambda}(\Omega))=\inf\Big\{\|h\|_{L^{p,\lambda}(\Omega)}^p:\ 0\le h\in L^{p,\lambda}(\Omega)\ \ \&\ \ I_\alpha h\ge 1_K\Big\}$$ in which $1_K$ stands for the characteristic function of the compact $K\subseteq O$. In this note, using the Riesz-Morrey capacity, we study the quasicontinuous representative and the Hölder quasicontinuity of each Riesz-Morrey potential – see Theorems \[t31\] & \[t33\]. These properties prove useful for investigating the Hölderian quasicontinuity of some nonnegative weak solutions $u$ of the quasilinear Lane-Emden equations for the $p$-Laplacian: $$-\Delta_p u=-\hbox{div}(|\nabla u|^{p-2}\nabla u)=u^{q+1}\quad\hbox{or}\quad e^u \quad\hbox{in}\quad \Omega,$$ where $(p,q)\in (1,n)\times (0,\infty)$ – see Theorems \[p42\] & \[p43\]. [*Notation*]{}. In what follows, $\Omega$ is always assumed to be a bounded domain in $\mathbb R^n$. For $E\subseteq\Omega$ we define $\fint_E$ to be the integral mean over $E$ with respect to the Lebesgue measure $dy$. Also, $\mathsf{X}\lesssim\mathsf{Y}$ stands for $\mathsf{X}\le c\mathsf{Y}$ for a constant $c>0$. Moreover, $\mathsf{X}\approx\mathsf{Y}$ means both $\mathsf{X}\lesssim\mathsf{Y}$ and $\mathsf{Y}\lesssim \mathsf{X}$.
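Readers who wish to experiment numerically with these objects can discretize the Morrey norm directly from its definition. The sketch below (in Python, purely illustrative: the grid, the sample function, and the finite families of centers and radii used to approximate the supremum are our own choices, not part of the paper) evaluates such a discretization for a function sampled on a square planar domain.

```python
import numpy as np

def morrey_norm(g, h, p, lam, n=2, radii=None):
    """Crude discretization of the Morrey norm
        sup_{x, 0<r<diam} ( r^(lam-n) * int_{B(x,r) cap Omega} |g|^p )^(1/p)
    for g sampled on a uniform grid of mesh size h over a square domain.
    The supremum is approximated over grid points and a finite list of radii."""
    N = g.shape[0]
    xs = h * np.arange(N)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    diam = np.sqrt(2.0) * h * (N - 1)
    if radii is None:
        radii = diam * np.array([0.1, 0.25, 0.5, 0.75, 1.0])
    best = 0.0
    for i in range(N):
        for j in range(N):
            dist2 = (X - xs[i]) ** 2 + (Y - xs[j]) ** 2
            for r in radii:
                mass = np.sum(np.abs(g[dist2 <= r * r]) ** p) * h ** n
                best = max(best, (r ** (lam - n) * mass) ** (1.0 / p))
    return best

# Example: g = 1 on the unit square, p = 2, lambda = 1 (illustrative values).
N, h = 21, 1.0 / 20
g = np.ones((N, N))
print(morrey_norm(g, h, p=2, lam=1.0))
```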
Riesz-Morrey potentials {#s3} ======================= Quasicontinuous representation for $I_\alpha L^{p,\lambda}$ {#s31} ----------------------------------------------------------- A function $g$ on $\Omega$ is said to be $C_\alpha(\cdot;L^{p,\lambda})$-quasicontinuous provided that for any $\epsilon>0$ there is a continuous function $\tilde{g}$ on $\Omega$ such that $$C_\alpha\big(\{x\in\Omega: \tilde{g}(x)\not=g(x)\};L^{p,\lambda}(\Omega)\big)<\epsilon.$$ Naturally, $\tilde{g}$ is called a $C_\alpha(\cdot;L^{p,\lambda}(\Omega))$-quasicontinuous representative of $g$. The forthcoming Theorem \[t31\] is an extension of [@AH Theorem 6.2.1] from $L^p$ to $L^{p,\lambda}$. \[l31\] For $1<p<\infty$ and $0<\gamma\le n$, let $L^{p,\gamma}_0(\Omega)$ be the Zorko space (cf. [@Z1986]) of all $f\in L^{p,\gamma}(\Omega)$ that can be approximated by $C^1$ functions with compact support in $\Omega$ under the norm $\|\cdot\|_{L^{p,\gamma}(\Omega)}$. Then $L^{p,\lambda}(\Omega)\subseteq L^{p,\gamma}_0(\Omega)\ \ \forall\ \ \lambda\in (0,\gamma)$. If $\gamma=n$, then $L^{p,\gamma}_0(\Omega)=L^p(\Omega)$, and hence $$\int_{\Omega\cap B(x,r)}|f|^p\le\|f\|_{L^{p,\lambda}(\Omega)}^p \big(\hbox{diam}(\Omega)\big)^{n-\lambda}\quad\forall\quad (\lambda,x,r)\in (0,n)\times\Omega\times \big(0,\hbox{diam}(\Omega)\big),$$ thereby deriving the desired inclusion. If $\gamma<n$, then the desired inclusion follows from [@AX12a Lemma 3.4]. \[t31\] Let $g=I_\alpha f$, $f\in L^{p,\lambda}(\Omega)$, and $1<p< \lambda/\alpha<\mu/\alpha\le n/\alpha$. Then there is a set $\Sigma\subset\Omega$ such that: [(i)]{} $C_\alpha(\Sigma;L^{p,\mu}(\Omega))=0$; [(ii)]{} $ \lim_{r\to 0}\fint_{B(x,r)}g=\tilde{g}(x)\quad\forall\quad x\in \Omega\setminus\Sigma; $ [(iii)]{} $\lim_{r\to 0}\fint_{B(x,r)}\big|g-\tilde{g}(x)\big|=0\quad\forall\quad x\in \Omega\setminus\Sigma.$ Moreover, one has: [(iv)]{} For any $\epsilon>0$ there is an open set $O\subset\Omega$ such that $C_\alpha(O; L^{p,\mu}(\Omega))<\epsilon$ and the convergence in (ii)-(iii) is uniform on $\Omega\setminus O$; [(v)]{} $\tilde{g}$ is a $C_\alpha(\cdot;L^{p,\mu}(\Omega))$-quasicontinuous representative of $g$; [(vi)]{} $ \tilde{g}(x)=g(x)\quad\forall\quad x\in\Omega\setminus O. $ Given $r\in (0,\infty)$, let $$\chi(x)={1_{\mathbb B^n}(x)}{\omega_n}^{-1}\quad\&\quad\chi_r(x)=r^{-n}\chi(x/r),$$ where $\omega_n$ is the volume of the unit ball $\mathbb B^n$ of $\mathbb R^n$. For $f\in L^{p,\lambda}(\Omega)$, $\epsilon>0$ and $\mu\in (\lambda,n]$, we use Lemma \[l31\] to find a Schwartz function $f_0$ on $\mathbb R^n$ such that $f_0=0$ in $\mathbb R^n\setminus\Omega$ and $\|f-f_0\|_{L^{p,\mu}(\Omega)}<\epsilon$. Consequently, $g_0=I_\alpha f_0$ is a Schwartz function and $\chi_r\ast g_0$ converges to $g_0$ on $\Omega$ as $r\to 0$. Note that $$\fint_{B(x,r)}g=\chi_r\ast g(x)\quad\&\quad \fint_{B(x,r)}g_0=\chi_r\ast g_0(x).$$ Thus, for $\delta>0$ letting $$\mathsf{J}_\delta g(x)=\sup_{0<r<\delta}(\chi_r\ast g)(x)-\inf_{0<r<\delta}(\chi_r\ast g)(x),$$ we have $$\mathsf{J}_\delta g(x)\le \mathsf{J}_\delta (g-g_0)(x)+\mathsf{J}_\delta g_0(x).$$ By the previously-stated convergence, for any given $\epsilon>0$ there exists $\delta>0$ so small that $\sup_{x\in\Omega}\mathsf{J}_\delta g_0(x)<\epsilon$. 
If $\mathcal{M}$ stands for the Hardy-Littlewood maximal operator, then $$|\chi_r\ast(g-g_0)(x)|\le \mathcal{M}(g-g_0)(x)\quad\forall\quad x\in\Omega,$$ and hence $$\mathsf{J}_\delta g(x)\le \mathcal{M}(g-g_0)(x)+\epsilon\quad\forall\quad x\in\Omega.$$ Upon choosing $\omega/2>\epsilon>0$, the last estimate gives $$E_\omega:=\{x\in\Omega: \mathsf{J}_\delta g(x)>\omega\}\subseteq\{x\in\Omega: \mathsf{J}_\delta (g-g_0)(x)>\omega/2\}=:F_\omega.$$ In view of the definition of $C_\alpha(\cdot;L^{p,\mu}(\Omega))$ and the boundedness of $\mathcal{M}$ acting on $L^{p,\mu}(\Omega)$, we find $$C_\alpha(E_\omega;L^{p,\mu}(\Omega))\le C_\alpha(F_\omega;L^{p,\mu}(\Omega))\lesssim \omega^{-p}\|f-f_0\|^p_{L^{p,\mu}(\Omega)}\lesssim({\epsilon}{\omega}^{-1})^p.$$ For each natural number $j$ let $\omega=2^{-j}$, $\epsilon=4^{-j}$, and $\delta_j$ be their induced number. If $$G_j=\{x\in\Omega: \mathsf{J}_{\delta_j}g(x)>2^{-j}\},$$ then $$C_\alpha(G_j; L^{p,\mu}(\Omega))\lesssim 2^{-jp}.$$ Furthermore, $$O_k=\cup_{j=k}^\infty G_j\Rightarrow C_\alpha(O_k; L^{p,\mu}(\Omega))\lesssim\sum_{j=k}^\infty 2^{-jp}\to 0\quad\hbox{as}\quad k\to\infty.$$ Consequently, under $$\begin{cases} 1<p<\mu/\alpha;\\ \mu-\alpha p<d\le n;\\ 0<q<dp/(\mu-\alpha p), \end{cases}$$ one has $$C_\alpha(\cap_{k=1}^\infty O_k; L^{p,\mu}(\Omega))=0.$$ Note that $$x\notin O_k\Rightarrow\mathsf{J}_\delta g(x)\le 2^{-j}\quad\forall\quad\delta\le\delta_j\quad\&\quad j\ge k.$$ So, $$\lim_{r\to 0}\chi_r\ast g(x)=\tilde{g}(x)\ \ \hbox{exists\ for\ any}\ \ x\notin\cap_{k=1}^\infty O_k.$$ Clearly, this convergence is uniform on $\Omega\setminus O_k$ with sufficiently small $C_\alpha(O_k; L^{p,\mu}(\Omega))$. This proves the results of Theorem \[t31\] with $\Sigma=\cap_{k=1}^\infty O_k$ except the part (iii). However, the demonstration of the part (iii) follows from a slight modification of the above argument plus defining $$\mathsf{J}_\delta(g-\tilde{g})(x)=\sup_{0<r\le\delta}(\chi_r\ast|g-\tilde{g}(x)|)(x)$$ and so establishing $$\mathsf{J}_\delta(g-\tilde{g})(x)\le \mathcal{M}(g-g_0)(x)+|(\tilde{g}-g_0)(x)|+\epsilon$$ under $$\mathsf{J}_\delta(g-{g}_0(x))(x)<\epsilon.$$ Hölderian quasicontinuity for $I_\alpha L^{p,\lambda}$ {#s32} ------------------------------------------------------ Given $\beta\in (0,1]$. We say that $g\in Lip_\beta(\Omega)$ provided that $g$ satisfies $$\sup\left\{\frac{|g(x)-g(y)|}{|x-y|^\beta}:\quad x,y\in\Omega,\ x\not=y\right\}<\infty.$$ In particular, if $\beta\in (0,1)$ or $\beta=1$ then $g$ is called $\beta$-Hölder continuous or Lipschitz continuous. Moreover, a function $g$ defined on $\Omega$ is called Hölder quasicontinuous if for any $\epsilon>0$ there is a set $E\subset\Omega$ of a given capacity smaller than $\epsilon$ such that $g$ is of the Hölder continuity on $\Omega\setminus E$. The forthcoming Theorem \[t33\] shows that any function in $I_\alpha L^{p,\lambda}$ is of Hölder quasicontinuity. To be more precise, let us recall the Sobolev-Morrey type imbedding (cf. [@ADuke; @ALecture]): $$I_\alpha: L^{p,\lambda}(\Omega)\mapsto\begin{cases} L^{\frac{\lambda p}{\lambda-\alpha p},\lambda}(\Omega)\cap L^{p,\lambda-\alpha p}(\Omega), & 1<p<\lambda/\alpha; \\ BMO(\Omega), & 1<p=\lambda/\alpha, \\ \end{cases}$$ where $$f\in BMO(\Omega)\Longleftrightarrow\sup_{x\in\Omega,\ 0<r<\hbox{diam}(\Omega)}\fint_{B(x,r)\cap\Omega}\left|f-\fint_{B(x,r)\cap\Omega}f\right|<\infty.$$ Interestingly, the above imbedding can be extended from $p\le\lambda/\alpha$ to $p>\lambda/\alpha$. 
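For orientation, the target exponent in the above embedding is easy to evaluate; the short sketch below (illustrative parameter values only, not taken from the paper) computes it and notes that for $\lambda=n$ it reduces to the classical Sobolev exponent $np/(n-\alpha p)$.

```python
# Target exponent of I_alpha : L^{p,lam} -> L^{lam p/(lam - alpha p), lam}
# for sample admissible parameters (illustrative values only).
n = 4.0
for alpha, p, lam in [(1.0, 2.0, 4.0),    # lam = n: reduces to the classical case
                      (1.0, 2.0, 3.0)]:   # a genuine Morrey case, lam < n
    assert 1 < p < lam / alpha
    q_target = lam * p / (lam - alpha * p)
    print(f"alpha={alpha}, p={p}, lambda={lam}: image lies in L^({q_target:g},{lam:g})")
print("classical Sobolev exponent n*p/(n - alpha*p) =", n * 2.0 / (n - 1.0 * 2.0))
```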
\[l32\] Let $g=I_\alpha f$, $f\in L^{p,\lambda}(\Omega)$, and $(\alpha,p,\lambda)\in (0,n)\times(1,\infty)\times(0,n]$. [(i)]{} If $0<\delta=\alpha-\lambda/p<1$, then $g\in Lip_\delta(\Omega)$. [(ii)]{} If $$\begin{cases} 1<p<\lambda/\alpha;\\ 1<q<\min\{p,\lambda/\alpha\};\\ \mu=n-(n-\lambda)q/p;\\ 0<\beta<\min\left\{1,\alpha(1-q/p),{\lambda(1-q/p)}/\big({\lambda+(1-\alpha)q}\big)\right\}, \end{cases}$$ then for any $r\in (0,1)$ there exist $f_r\in L^{p,\lambda}(\Omega)$ and $g_r=I_\alpha f_r$ such that $$\begin{cases} \|f-f_r\|_{L^{q,\mu}(\Omega)}\lesssim r^{\beta};\\ |g_r(x)-g_r(y)|\lesssim |x-y|^\beta\quad\forall\quad y\in B(x,r)\subseteq\Omega. \end{cases}$$ \(i) Since $\alpha=\delta+\lambda/p$, an application of [@ADuke Corollary (iii)] and [@Carl page 91] gives $$I_\alpha L^{p,\lambda}(\Omega)= I_\delta I_{\lambda/p} L^{p,\lambda}(\Omega)\subseteq I_\delta BMO(\Omega)\subseteq Lip_\delta(\Omega),$$ whence implying $g\in Lip_\delta(\Omega)$. \(ii) Without loss of generality, we may assume $\|f\|_{L^{p,\lambda}(\Omega)}\le 1$ and $f|_{\mathbb R^n\setminus\Omega}=0$. Since $\Omega$ is bounded, there is a big ball $B(x,r)$ with center $x\in\Omega$ and radius $r\le\hbox{diam}(\Omega)$ such that $\Omega\subseteq B(x,r)$, and consequently, $$\|f\|_{L^p(\Omega)}^p=\int_\Omega|f|^p\le\big(\hbox{diam}(\Omega)\big)^{n-\lambda}.$$ For $r\in (0,1)$ let $O_r=\{x\in\Omega: |f(x)|>s_r\}$, $s_r=r^{\beta q/(q-p)}$, and $$f_r=\begin{cases} f\quad\hbox{on}\quad \Omega\setminus O_r;\\ 0\quad\hbox{on}\quad O_r. \end{cases}$$ Evidently, $$\int_{O_r}1_{O_r}\le s_r^{-p}\big(\hbox{diam}(\Omega)\big)^{n-\lambda}$$ and $g_r=I_\alpha f_r$ is bounded. Moreover, by Hölder’s inequality and the definition of $\|\cdot\|_{L^{q,\mu}(\Omega)}$, one gets $$\begin{aligned} \|f-f_r\|_{L^{q,\mu}(\Omega)}^q&\le&\|f\|_{L^{p,\lambda}(\Omega)}^{q}\left(\int_{O_r}1_{O_r}\right)^\frac{\mu-\lambda}{n-\lambda}\\ &\le&{(\hbox{diam}(\Omega)\big)^{\mu-\lambda}}{s_r^{\frac{p(\lambda-\mu)}{n-\lambda}}}\\ &\le& (\hbox{diam}(\Omega)\big)^{(n-\lambda)(1-q/p)} r^{q\beta}.\end{aligned}$$ Meanwhile, thanks to $f_r\le s_r$, we can use (i) above to get that if $$p<\bar{p}=\frac{\lambda(p-q)-\beta pq}{\alpha(p-q)-\beta p}\quad\&\quad 0<\bar{\beta}=\alpha-\lambda/\bar{p}<1,$$ then $$|g_r(x)-g_r(y)|=|I_\alpha f_r(x)-I_\alpha f_r(y)|\lesssim\|f_r\|_{L^{\bar{p},\lambda}(\Omega)}|x-y|^{\bar{\beta}}\quad\forall\quad y\in B(x,r).$$ Another application of the Hölder inequality gives $$\|f_r\|^{\bar{p}}_{L^{\bar{p},\lambda}(\Omega)}\le s_r^{\bar{p}-p}\|f\|^p_{L^{p,\lambda}(\Omega)}\le s_{r}^{\bar{p}-p}.$$ Thus, $|g_r(x)-g_r(y)|\lesssim r^\beta$ holds for any $y\in B(x,r)$. Below is the Hölder quasicontinuity for the Riesz-Morrey potentials which actually gives a nontrivial generalization of [@Mal Theorem 7] (see [@HaK] for a further development of [@Mal]). \[t33\] Let $g=I_\alpha f$, $f\in L^{p,\lambda}(\Omega)$, and $1<p<\lambda/\alpha\le n/\alpha$. If $$\begin{cases} 1<q<\min\{p,\lambda/\alpha\}=p;\\ \mu=n-(n-\lambda)q/p;\\ 0<\gamma<\min\left\{1,\alpha(1-q/p),{\lambda(1-q/p)}/\big({\lambda+(1-\alpha)q}\big)\right\}, \end{cases}$$ then for any $\epsilon>0$ there exists an open set $O\subseteq\Omega$ and a $\gamma$-Hölder continuous function $h$ on $\Omega$ such that $$C_\alpha(O;L^{q,\mu}(\Omega))<\epsilon\ \ \&\ \ g=h\quad\hbox{in}\quad\Omega\setminus O.$$ The notations introduced in Lemma \[l32\] and its proof will be used in the sequel. Given $\gamma\in (0,\beta)$ with $\beta$ as in Lemma \[l32\]. 
Now, for each natural number $j$ let $r_j$ be chosen so that $$\label{qo} r_0=1\quad\&\quad \Big({r_{j+1}}/{r_j}\Big)^\gamma\le 1/2.$$ For simplicity, set $h_j=g_{r_j}$ and then $f_j$ be the corresponding $f_{r_j}$ and $$\sum_{j=1}^\infty\|f_{j+1}-f_{j}\|_{L^{p,\lambda}(\Omega)}<\infty.$$ Choosing $$\begin{cases} w_j=\max\big\{-r_j^\gamma,\min\{r_j^\gamma,h_{j+1}-h_j\}\big\};\\ O_j=\{x\in\Omega: |h_{j+1}(x)-h_j(x)|>r_j^\gamma\}, \end{cases}$$ we use the already-established estimate $$\|f-f_r\|_{L^{q,\mu}(\Omega)}\le(\hbox{diam}(\Omega)\big)^{(n-\lambda)(1/q-1/p)} r^{\beta}$$ and the definition of $C_\alpha(\cdot; L^{q,\mu}(\Omega))$ to obtain $$C_\alpha(O_j; L^{q,\mu}(\Omega))\le r_j^{-\gamma q}\|f_{j+1}-f_j\|_{L^{q,\mu}(\Omega)}^q\lesssim r_j^{(\beta-\gamma) q},$$ Consequently, for any $\epsilon>0$ there is a big integer $J$ such that $$\sum_{j=J}^\infty C_\alpha(O_j; L^{q,\mu}(\Omega))\lesssim \sum_{j=J}^\infty r_j^{q(\beta-\gamma)}<\epsilon.$$ Putting $$O=\cup_{j=J}^\infty E_j\ \ \&\ \ h=h_{J}+\sum_{j=J}^\infty w_j,$$ we find that $O$ is an open subset of $\Omega$ and $$C_\alpha(O; L^{q,\mu}(\Omega))<\epsilon\quad\&\quad h=g\quad\hbox{on}\quad \Omega\setminus O.$$ It remains to check that $h$ is $\beta$-Hölder continuous on $\Omega$. Of course, it is enough to verify $$\label{short3} |h(x)-h(y)|\lesssim |x-y|^\beta\quad\forall\quad x,y\in\Omega\quad\hbox{with}\quad |x-y|\le r_{J}.$$ Obviously, $h_J$ is $\beta$-Hölder continuous. To show the similar property for $\sum_{j=J}^\infty w_j$, we may assume $$x,y\in\Omega;\quad 0<|x-y|\le r_J;\quad r_{k+1}<|x-y|\le r_k.$$ From (\[qo\]) it follows that $$\label{short4} k\le\Big(\frac{\gamma}{\ln 2}\Big)\ln\frac1{r_k}\le\Big(\frac{\gamma}{(\beta-\gamma)\ln 2}\Big)r_k^{\gamma-\beta}\le\Big(\frac{\gamma}{(\beta-\gamma)\ln 2}\Big)|x-y|^{\gamma-\beta}$$ When $1\le j\le k$, an application of the last estimate in Lemma \[l32\] gives $$|w_j(x)-w_j(y)|\lesssim|x-y|^{\beta}.$$ When $j>k$, another application of (\[qo\]) yields $$|w_j(x)-w_j(y)|\le 2r_j^\gamma\le 2^{k-j+2}r_{k+1}^\gamma\le 2^{k-j+2}|x-y|^\gamma.$$ This, together with (\[short4\]) and $h=h_{J}+\sum_{j=J}^\infty w_j$, derives $$|h(x)-h(y)|\lesssim |x-y|^\gamma+k|x-y|^{\beta}\lesssim|x-y|^\gamma.$$ Lane-Emden Equations {#s2} ==================== Hölderian quasicontinuity for $-\Delta_p u=u^{q+1}$ --------------------------------------------------- Recall that for $1\le p<\infty$ the Sobolev space $W^{1,p}(\Omega)$ consists of all functions $f$ with $$\|f\|_{W^{1,p}(\Omega)}=\left(\int_{\Omega}|f|^p\right)^\frac1p+\left(\int_{\Omega}|\nabla f|^p\right)^\frac1p<\infty$$ and the Sobolev space $W^{1,p}_0(\Omega)$ is the completeness of $C^1_0(\Omega)$ (all $C^1$ functions $f$ with compact support in $\Omega$) under $\|\cdot\|_{W^{1,p}(\Omega)}$. According to [@GiTr Lemma 7.14], any $f\in W^{1,1}_0(\Omega)$ can be represented via $$\label{e41} f(x)=\frac{\Gamma\big(\frac{n}{2}\big)}{2\pi^\frac{n}{2}}\int_{\Omega}|x-y|^{-n}(x-y)\cdot\nabla f(y)\,dy\ \ \&\ \ |f(x)|\le \left(\frac{\Gamma\big(\frac{n}{2}\big)}{2\pi^\frac{n}{2}}\right)\big(I_1|\nabla f|\big)(x)\quad\hbox{a.e.}\quad x\in\Omega,$$ where $\Gamma(\cdot)$ is the usual gamma function. As a variant of $C_1(K; L^{p,n}(\Omega))$ the variational $p$-capacity of a compact $K\subseteq\Omega$ is defined by $$C(K;W_0^{1,p}(\Omega))=\inf\left\{\int_{\Omega}|\nabla f|^p:\ f\in W^{1,p}_0(\Omega)\ \&\ f\ge 1_K\right\}.$$ Clearly, this definition is extendable to an arbitrary set $E\subseteq\Omega$ through (cf. 
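The exponents and radii entering Lemma \[l32\] and the above proof can be evaluated concretely. The sketch below (with arbitrary admissible parameter values, chosen only for illustration) computes $\mu$, the admissible range of $\beta$, the auxiliary exponent $\bar p$ from the proof of Lemma \[l32\](ii), a sequence of radii satisfying (\[qo\]), and the capacity tail $\sum_{j\ge J} r_j^{q(\beta-\gamma)}$.

```python
# Illustrative evaluation of the exponents in Lemma l32(ii) / Theorem t33 and of
# the radii chosen in (qo). All numerical values are arbitrary admissible choices.
n, alpha, lam = 3.0, 1.0, 2.5
p, q = 2.0, 1.5
assert 1 < p < lam / alpha and 1 < q < min(p, lam / alpha)

mu = n - (n - lam) * q / p
beta_max = min(1.0, alpha * (1 - q / p),
               lam * (1 - q / p) / (lam + (1 - alpha) * q))
beta = 0.5 * beta_max                      # any 0 < beta < beta_max is admissible
gamma = 0.5 * beta                         # Theorem t33 takes 0 < gamma < beta

# Auxiliary exponent used in the proof of Lemma l32(ii):
p_bar = (lam * (p - q) - beta * p * q) / (alpha * (p - q) - beta * p)
beta_bar = alpha - lam / p_bar
assert p < p_bar and 0 < beta_bar < 1      # conditions stated in that proof
print(f"mu={mu:.3f}  beta_max={beta_max:.3f}  p_bar={p_bar:.3f}  beta_bar={beta_bar:.3f}")

# Radii as in (qo): r_0 = 1 and (r_{j+1}/r_j)^gamma <= 1/2, e.g. r_j = 2^(-j/gamma).
r = [2.0 ** (-j / gamma) for j in range(40)]

def tail(J):
    """Tail of the capacity bound: sum_{j >= J} r_j^(q(beta-gamma))."""
    return sum(rj ** (q * (beta - gamma)) for rj in r[J:])

for J in (0, 5, 10, 20):
    print(f"J={J}: tail ~ {tail(J):.3e}")   # decreases geometrically in J
```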
[@HKM p.27]) $$C(E;W_0^{1,p}(\Omega))=\inf_{E\subseteq\ open\ O\subseteq\Omega} C(O;W_0^{1,p}(\Omega))=\inf_{E\subseteq\ open\ O\subseteq\Omega}\sup_{compact\ K\subseteq\ open\ O}C(K;W_0^{1,p}(\Omega)).$$ Importantly, such a capacity can used to establish the following relatively independent Sobolev embedding whose (v) is indeed a sort of motivation to investigate the quasilinear Lane-Emden equations. \[p23\] Given $1<p<\min\{n,q\}$ and $0<r<q(1-p^{-1})$, let $\nu$ be a nonnegative Radon measure on $\Omega$. Then the following properties are mutually equivalent: [(i)]{} $I_1$ is a continuous operator from $L^{p}(\Omega)$ into $L^q(\Omega,\nu)$; [(ii)]{} $W^{1,p}_0(\Omega)$ continuously embeds into $L^q(\Omega,\nu)$. [(iii)]{} Isocapacitary inequality $\nu(K)\lesssim {C}(K; W^{1,p}_0(\Omega))^\frac{q}{p}$ holds for all compact sets $K\subset\Omega$; [(iv)]{} Isocapacitary inequality $\nu(B(x,r))\lesssim r^\frac{q(n-p)}{p}$ holds for all $B(x,r)\subseteq\Omega$; [(v)]{} Faber-Krahn’s inequality $\nu(O)^{\frac{p}{q}-1}\lesssim\lambda_{p,\nu}(O)$ holds for all bounded open sets $O\subseteq\Omega$, where $$\lambda_{p,\nu}(O)=\inf\left\{\frac{\int_O|\nabla f|^p}{\int_O|f|^p\,d\nu}:\ f\in C_0^1(O)\ \&\ f\not\equiv 0\ \hbox{on}\ O\right\}.$$ (ii)$\Leftrightarrow$(iii)$\Leftrightarrow$(iv)$\Leftrightarrow$(i) is essentially known – see, for example, [@Maz0; @Maz0a] and [@AH Theorem 7.2.2]. So, it remains to prove (ii)$\Leftrightarrow$(v). If (ii) is valid, then the Hölder inequality yields that for any open set $O\subseteq\Omega$ and $f\in C_0^1(O)$, $$\int_O|f|^p\,d\nu\le\left(\int_{O}|f|^q\,d\nu\right)^\frac{p}{q}\nu(O)^{1-\frac{p}{q}}\lesssim\Big(\int_O|\nabla f|^p\Big)\nu(O)^{1-\frac{p}{q}}$$ holds, whence giving (v). For the converse, we use the argument methods in [@Cha pp. 159-161] and [@Coh] to proceed. Suppose (v) is true. 
Then for any $f\in W^{1,p}_0(\Omega)$ and any $t>0$, $$\begin{aligned} \int_{\Omega}|f|^p\,d\nu&\le& \int_{\{y\in\Omega:\ |f(y)|>t\}}|f|^p\,d\nu+ t^{p-1}\int_{\{y\in\Omega:\ |f(y)|\le t\}}|f|\,d\nu\\ &\lesssim&\frac{\int_{\{y\in\Omega:\ |f(y)|>t\}}|\nabla f|^p}{\nu(\{y\in\Omega:\ |f(y)|>t\})^{\frac{p}{q}-1}} + t^{p-1}\int_{\{y\in\Omega:\ |f(y)|\le t\}}|f|\,d\nu\\ &\lesssim&\left(t^{-1}{\int_{\Omega}|f|\,d\nu}\right)^{1-\frac{p}{q}}\int_{\Omega}|\nabla f|^p+t^{p-1}\int_{\{y\in\Omega:\ |f(y)|\le t\}}|f|\,d\nu.\end{aligned}$$ Choosing $$t=\left(\frac{\int_{\Omega}|\nabla f|^p}{\big(\int_{\Omega}|f|\,d\nu\big)^\frac{p}{q}}\right)^\frac{q}{p(q-1)},$$ we get a constant $c>0$ such that $$\int_{\Omega}|f|^p\,d\nu\le 2c\left(\int_{\Omega}|\nabla f|^p\right)^\frac{q(p-1)}{p(q-1)}\left(\int_{\Omega}|f|\,d\nu\right)^\frac{q-p}{q-1}.$$ Replacing this $f$ by $$f_k=\min\big\{\max\{f-2^k, 0\},2^k\big\},\ k=0,\pm 1,\pm 2,...,$$ we have $$\begin{aligned} \left(\int_{\Omega}f_k^p\,d\nu\right)^\frac{p(q-1)}{q(p-1)}\le (2c)^\frac{p(q-1)}{q(p-1)}\left(\int_{\Omega}|\nabla f_k|^p\right)\left(\int_{\Omega}f_k\,d\nu\right)^\frac{p(q-p)}{q(p-1)}.\end{aligned}$$ This implies $$\begin{aligned} &&\left(2^{kp}\nu(\{y\in\Omega:\ f(y)\ge 2^{k+1}\})\right)^\frac{p(q-1)}{q(p-1)}\\ &&\le (2c)^\frac{p(q-1)}{q(p-1)}\left(\int_{\{y\in\Omega:\ \ 2^k\le f(y)<2^{k+1}\}}|\nabla f|^p\right)\left(2^{k}\nu(\{y\in\Omega:\ f(y)\ge 2^{k}\})\right)^\frac{p(q-p)}{q(p-1)}.\end{aligned}$$ Setting $$\left\{\begin{array} {r@{\quad}l} a_k & =\ 2^{kq}\nu(\{y\in\Omega:\ f(y)\ge 2^{k}\});\\ b_k &=\ \int_{\{y\in\Omega:\ \ 2^k\le f(y)<2^{k+1}\}}|\nabla f|^p;\\ \theta &=\ \frac{q(p-1)}{p(q-1)}, \end{array} \right.$$ one has $a_{k+1}\le 2^{1+q}c b_k^\theta a_k^{p(1-\theta)}$. This, together with Hölder’s inequality, derives $$\begin{aligned} \sum_{k}a_k&\le&2^{1+q}c\sum_{k}b_k^\theta a_k^{p(1-\theta)}\\ &\le&2^{1+q}c\Big(\sum_{k}b_k\Big)^\theta\Big(\sum_{k} a_k\Big)^{p(1-\theta)}\\ &\le&2^{1+q}c\Big(\int_{\Omega}|\nabla f(y)|^p\,dy\Big)^\theta\Big(\sum_{k} a_k\Big)^{p(1-\theta)}.\end{aligned}$$ A simplification of these estimates yields (ii). \[r22\] The part on Faber-Krahn’s inequality under $(p,q,d\nu)=(2,{2n}/{(n-2)},dy)$ of Proposition \[p23\] appeared in [@Car; @Heb; @Xia1; @Xia2; @Xia3]. In particular, if $$d\nu=\omega dy\ \ \&\ \ 1<p<q<{pn}/{(n-p)},$$ then condition (iv) above says that $0\le\omega$ belongs to the Morrey space $L^{1,n-(n-p)q/p}(\Omega)$ – in other words – the Sobolev imbedding under this circumstance is fully controlled by this Morrey space; see [@MazV] for a similar treatment on the Schrödinger operator $-\Delta +\mathcal{V}$. Furthermore, when the last $\omega$ equals identically $1$, there is a nonnegative function $u\in W^{1,p}_0(\Omega)$ such that the Euler-Lagrange (or Lane-Emden type) equation $$-\Delta_p u=\lambda_{p,\nu}(\Omega)u^{p-1}\quad\hbox{in}\quad \Omega$$ holds in the weak sense: $$\int_{\Omega}|\nabla u|^{p-2}\nabla u\cdot\nabla\phi=\lambda_{p,\nu}(\Omega)\int_{\Omega}u^{p-1}\phi\quad\forall\quad\phi\in W_0^{1,p}(\Omega);$$ see e.g. [@KP] and its related references. In view of Proposition \[p23\], Remark \[r22\], and the research of the Lane-Emden equations in [@Pa; @Pa1; @Pa2; @Pa3; @PoV; @APAMS; @SeZ; @Zou; @GS; @CE], we consider the nonnegative weak solutions of the quasilinear Lane-Emden equation with index $(p,q)\in (1,n)\times (0,\infty)$: $$\label{e47} -\Delta_p u= u^{q+1}\quad\hbox{in}\quad\Omega,$$ and utilize Theorem \[t33\] to get the following result. 
\[p42\] Let $$\begin{cases} (p,q)\in (1,n)\times(0,\infty);\\ \tilde{q}\ge\max\{p,q+2\};\\ n\ge\lambda\ge\max\left\{\frac{n(q+2)}{\tilde{q}},p\big(\frac{n}{\tilde{q}}+1\big)\right\}. \end{cases}$$ If $u\in L^{\tilde{q}}(\Omega)$ is a nonnegative weak solution of (\[e47\]), then for any $\epsilon>0$ there is an open set $O\subseteq\Omega$ such that $C_1(O;L^{\hat{q},\hat{\mu}}(\Omega))<\epsilon$ and $I_1|\nabla u|$ is $\hat{\gamma}$-Hölder continuous in $\Omega\setminus O$ where $$\begin{cases} 1<\hat{q}<p\le\lambda<\hat{\mu}=n-(n-\lambda)\hat{q}/p;\\ 0<\hat{\gamma}<1-\hat{q}/p. \end{cases}$$ Suppose $u\in L^{\tilde{q}}(\Omega)$ is a nonnegative weak solution of (\[e47\]). Then $$\label{e47a} \int_\Omega u^{q+1}\phi=\int_\Omega |\nabla u|^{p-2}\nabla u\cdot\nabla\phi\quad\forall\quad \phi\in W_0^{1,p}(\Omega).$$ Given $x_0\in\Omega$ and $0<r<\hbox{diam}(\Omega)$. Upon taking a test function $\phi=u\eta^2$ such that $$\label{e31f} \begin{cases} \eta(x)=1\quad\hbox{for}\quad x\in B(x_0,r/3);\\ \eta(x)=0\quad\hbox{for}\quad x\in \mathbb R^n\setminus B(x_0,r/2);\\ |\nabla \eta(x)|\lesssim r^{-1}\quad\hbox{for}\quad x\in B(x_0,r), \end{cases}$$ we utilize (\[e47a\]) to get $$\int_\Omega u^{q+2}\eta^2=\int_\Omega |\nabla u|^{p-2}|\nabla u|^2\eta^2+2^{-1}\int_\Omega|\nabla u|^{p-2}\big(\nabla (u^2)\big)\cdot\big(\nabla(\eta^2)\big).$$ Through the properties of $\eta$, Young’s inequality $$\label{CS} ab\le \frac{\epsilon a^{\theta}}{\theta}+\frac{\epsilon^{\frac{1}{1-\theta}}b^{\theta'}} {\theta'}\quad{\forall}\quad a,\ b,\ \epsilon,\ \theta-1>0\ \ \&\ \ \theta'=\frac{\theta}{\theta-1},$$ (applied to the last integral), and Hölder’s inequality, we find $$\begin{aligned} \int_{B(x_0,r/3)\cap\Omega}|\nabla u|^p&\lesssim&\int_{B(x_0,r/3)\cap\Omega}u^{2+q}+r^{-p}\int_{B(x_0,r/3)\cap\Omega}u^p\\ &\lesssim& \left(\int_{B(x_0,r/3)\cap\Omega}u^{\tilde{q}}\right)^{\frac{2+q}{\tilde{q}}}r^{n(1-\frac{2+q}{\tilde{q}})} +\left(\int_{B(x_0,r/3)\cap\Omega}u^{\tilde{q}}\right)^\frac{p}{\tilde{q}}r^{n(1-\frac{p}{\tilde{q}})-p}\\ &\lesssim&\left(\|u\|^{q+2}_{L^{\tilde{q}}(\Omega)}+\|u\|_{L^{\tilde{q}}(\Omega)}^p\right)r^{n-\lambda}(\hbox{diam}(\Omega))^{\tilde{\lambda}},\end{aligned}$$ where the assumption on $p,q,\tilde{q},\lambda$ and the following definition $$\tilde{\lambda}=\lambda-\max\left\{\frac{n(q+2)}{\tilde{q}},\ \frac{p(n+\tilde{q})}{\tilde{q}}\right\}\ge 0$$ have been used. Therefore, $|\nabla u|\in L^{p,\lambda}(\Omega)$ and desired assertion follows from applying Theorem \[t33\] to the Riesz-Morrey potential $I_1|\nabla u|$. Hölderian quasicontinuity for $-\Delta_p u=e^u$ ----------------------------------------------- The recent works [@W1; @W2; @D; @F], along with Theorem \[p42\], have driven us to consider the nonnegative weak solutions to the quasilinear Lane equation for the $1<p<n$ Laplacian: $$\label{e477} -\Delta_p u= e^u\quad\hbox{in}\quad\Omega,$$ thereby discovering the following fact. \[p43\] Let $1<p<n$. If $u$ with $\int_{\Omega}ue^u<\infty$ is a nonnegative weak solution of (\[e477\]), then for any $\epsilon>0$ there is an open set $O\subseteq\Omega$ such that $C_1(O;L^{\hat{q},n}(\Omega))<\epsilon$ and $I_1|\nabla u|$ is $\hat{\gamma}$-Hölder continuous in $\Omega\setminus O$ where $$\begin{cases} 1<\hat{q}<p;\\ 0<\hat{\gamma}<1-\hat{q}/p. \end{cases}$$ Suppose $u\ge 0$ is a weak solution of (\[e477\]) with the integrability $\int_{\Omega}ue^u<\infty$. 
Then $$\label{e47ab} \int_\Omega e^u\phi=\int_\Omega |\nabla u|^{p-2}\nabla u\cdot\nabla\phi\quad\forall\quad \phi\in W_0^{1,p}(\Omega).$$ Given $(x_0,r)\in\Omega\times \big(0,\hbox{diam}(\Omega)\big)$. Choosing $\phi=u\eta^2$ with (\[e31f\]) we obtain via (\[e47ab\]): $$\int_\Omega ue^u \eta^2=\int_\Omega |\nabla u|^{p-2}|\nabla u|^2\eta^2+2^{-1}\int_\Omega|\nabla u|^{p-2}\big(\nabla (u^2)\big)\cdot\big(\nabla(\eta^2)\big).$$ An application of the Young inequality (\[CS\]), the Hölder inequality and the assumption $p\in (1,n)$ yields $$\begin{aligned} \int_{B(x_0,r/3)\cap\Omega}|\nabla u|^p&\lesssim&\int_{B(x_0,r/3)\cap\Omega}u e^u+r^{-p}\int_{B(x_0,r/3)\cap\Omega}u^p\\ &\lesssim& \int_{B(x_0,r/3)\cap\Omega} u e^{u} + r^{-p}\int_{B(x_0,r/3)\cap\Omega} u^{1-p/n}u^{p-1+p/n}\\ &\lesssim&\int_{B(x_0,r/3)\cap\Omega} u e^{u} + r^{-p}\int_{B(x_0,r/3)\cap\Omega} (ue^u)^{1-p/n}\\ &\lesssim&\int_{\Omega} u e^{u} +\left(\int_{\Omega} ue^{u}\right)^{1-p/n}.\end{aligned}$$ Thus, $|\nabla u|\in L^{p,n}(\Omega)$. This, together with Theorem \[t33\] for the Riesz-Morrey potential $I_1|\nabla u|$, derives the desired assertion. [99]{} D. R. Adams, [*A note on Riesz potentials*]{}, [Duke Math. J.]{} 42(1975)765-778. D. R. Adams, [*Lectures on $L^p$-Potential Theory*]{}, Volume 2, Department of Mathematics, University of Ume, 1981. D. R. Adams, [*On F. Pacard’s regularity for $-\Delta u=u^p$*]{}, Electron. J. Differential Equations 2012, No. 125, 6 pp. D. R. Adams and L. I. Hedberg, *Function Spaces and Potential Theory*. [Springer-Verlag]{}, Berlin Heidelberg, 1996. D. R. Adams and J. Xiao, [*Nonlinear analysis on Morrey spaces and their capacities*]{}, [Indiana Univ. Math. J.]{} [53]{}(2004)1629-1663. D. R. Adams and J. Xiao, [*Morrey potentials and harmonic maps*]{}, [Comm. Math. Phys.]{} 308 (2011)439-456. D. R. Adams and J. Xiao, [*Morrey spaces in harmonic analysis*]{}, [Ark. Mat.]{} 50(2012)201-230 D. R. Adams and J. Xiao, [*Regularity of Morrey commutators*]{}, [Trans. Amer. Math. Soc.]{} 364(2012)4801-4818. D. R. Adams and J. Xiao, [*Singularities of nonlinear elliptic systems*]{}, Comm. Partial Diff. Equ. 38(2013)1256-1273. L. Carleson, [*Selected Problems in Exceptional Sets*]{}, Van Nostrand, Princeton, N.J., 1967. G. Carron, *Inégalités isopérimétriques de Faber-Krahn et conséquences*, [Publications de l’Institut Fourier]{}, 220, 1992. I. Chavel, *Isoperimetric Inequalities*. Cambridge University Press, 2001. G. Christopher and P. Enea, *On the asymptotics of solutions of the Lane-Emden problem for the $p$-Laplacian*, Arch Math. (Basel) 91(2008)354-365. T. Coulhon, [*Espaces de Lipschitz et inégalités de Poincáre*]{}, [J. Funct. Anal.]{} [136]{}(1996)81-113. L. Dupaigne, M. Ghergu, O. Goubet and G. Warnault, [*The Gel’fand problem for the biharmonic operator*]{}, Arch. Ration. Mech. Anal. 208(2013)725-752. A. Farina, [*Stable solutions of $-\Delta u$ on $\mathbb R^N$*]{}, C. R. Math. Acad. Sci. Paris 345 (2007)63-66. B. Gidas and J. Spruck, [*Global and local behavior of positive solutions of nonlinear elliptic equations*]{}, Comm. Pure Appl. Math. XXXIV(1981)525-598. D. Gilbarg and N. S. Trudinger, [*Elliptic Partial Differential Equations of Second Order*]{}, Springer-Verlag, Berlin 2001. P. Hajlasz and J. Kinnunen, [*Hölder quasicontinuity of Sobolev functions on metric spaces*]{}, Revista Math. Iberoamericana 14(1998)601-622. E. 
Hebey, *Nonlinear Analysis on Manifolds: Sobolev Spaces and Inequalities*, [Courant Institute of Mathematical Sciences]{}, [5]{}, [American Mathematical Society, Providence, RI,]{} 1999. J. Heinonen, T. Kilpeläinen and O. Martio, [*Nonlinear Potential Theory of Degenerate Elliptic Equations*]{}, Dover Publications, Inc., Mineola, New York, 2006. B. Kawohl and P. Lindqvist, [*Positive eigenfunctions for the $p$-Laplace operator revisited*]{}, Analysis (Munich) 26(2006)545-550. J. Malý, [*Hölder type quasicontinuity*]{}, Potential Anal. 2(1993)249-254. V. G. Maz’ya, [*Sobolev Spaces*]{}, Springer-Verlag, Berlin-Heidelberg, 1985. V. G. Maz’ya, [*Lectures on isoperimetric and isocapacitary inequalities in the theory of Sobolev spaces*]{}, [Contemp. Math.]{} [338]{}(2003)307-340. V. G. Maz’ya and I. E. Verbitsky, [*Infinitesimal form boundedness and Trudinger’s subordination for the Schrödinger operator*]{}, [Invent. Math.]{} 162(2005)81-136. F. Pacard, [*A note on the regularity of weak solutions of $-\Delta u=u^\alpha$, $n\ge 3$*]{}, [Houston J. Math.]{} 18(1992)621-632. F. Pacard, [*A regularity criterion for positive weak solutions of $-\Delta u=u^\alpha$*]{}, [Comment. Math. Helv.]{} 68(1993)73-84. F. Pacard, [*Partial regularity for weak solutions of a nonlinear elliptic equation criterion*]{}, [Manuscripta Math.]{} 79(1993)161-172. F. Pacard, [*Existence and convergence of positive weak solutions of $-\Delta u=u^\frac{n}{n-2}$ in bounded domains of $\mathbb R^n, n\ge 3$*]{}, [Cal. Var.]{} 1(1993)243-265. A. Porretta and L. Véron, [*Separable solutions of quasilinear Lane-Emden equations*]{}, J. Eur. Math. Soc. 15(2013)755-774. J. Serrin, [*A remark on the Morrey potential*]{}, [Contemp. Math.]{} 426(2007)307-315. J. Serrin and H. H. Zou, [*Cauchy-Liouville and universal boundedness theorems for quasilinear elliptic equations and inequalities*]{}, [Acta Math.]{} 189(2002)79-142. K. Wang, [*Partial regularity of stable solutions to the Emden equation*]{}, Calc. Var. Partial Differential Equations 44(2012)601-610. K. Wang, [*Erratum to: Partial regularity of stable solutions to the Emden equation*]{}, Calc. Var. Partial Differential Equations 47(2013)433-435. J. Xiao, [*The $p$-Faber-Krahn inequality noted*]{}, In: [Around the Research of Vladimir Maz’ya I. Function Spaces]{}, pp. 373-390, Springer, 2010. J. Xiao, [*$L^p$-Green potential estimates on noncompact Riemannian manifolds*]{}, [J. Math. Phys.]{} 51, 063515(2010)1-17. J. Xiao, [*Isoperimetry for semilinear torsion problems in Riemannian two-manifolds*]{}, [Adv. Math.]{} 229(2012)2379-2404. C. T. Zorko, [*Morrey spaces*]{}, [Proc. Amer. Math. Soc.]{} 98(1986)586-592. H. H. Zou, [*A priori estimates and existence for quasi-linear elliptic equations*]{}, [Calc. Var.]{} 33(2008)417-437. [^1]: JX is in part supported by NSERC of Canada and URP of Memorial University.
{ "pile_set_name": "ArXiv" }
--- author: - 'Daphné <span style="font-variant:small-caps;">Dieuleveut</span>' bibliography: - '../Biblio/Bibliographie.bib' title: The UIPQ seen from a point at infinity along its geodesic ray --- <span style="font-variant:small-caps;">Abstract:</span> We consider the uniform infinite quadrangulation of the plane (UIPQ). Curien, Ménard and Miermont recently established that in the UIPQ, all infinite geodesic rays originating from the root are essentially similar, in the sense that they have an infinite number of common vertices. In this work, we identify the limit quadrangulation obtained by rerooting the UIPQ at a point at infinity on one of these geodesics. More precisely, calling $v_k$ the $k$-th vertex on the “leftmost” geodesic ray originating from the root, and $Q_{\infty}^{(k)}$ the UIPQ re-rooted at $v_k$, we study the local limit of $Q_{\infty}^{(k)}$. To do this, we split the UIPQ along the geodesic ray $(v_k)_{k\geq 0}$. Using natural extensions of the Schaeffer correspondence with discrete trees, we study the quadrangulations obtained on each “side” of this geodesic ray. We finally show that the local limit of $Q_{\infty}^{(k)}$ is the quadrangulation obtained by gluing the limit quadrangulations back together. Introduction {#S: Setting} ============ Finite and infinite planar maps are a popular model for random geometry. While finite maps have been studied since the sixties, infinite models were only introduced a decade ago, with the works of Angel and Schramm [@AngSchr; @Ang_Peeling]. They were the first to define the uniform infinite planar triangulation, an infinite map which can be seen as the local limit (in distribution) of uniform finite triangulations. Krikun [@Kri] then studied its counterpart, the uniform infinite planar quadrangulation (UIPQ), defined as the limit of uniform rooted finite quadrangulations as the number of faces goes to infinity. In this article, we study what the UIPQ looks like seen from a point “at infinity” on a geodesic ray originating from the root. One of the main advantages of quadrangulations over other classes of planar maps is the existence of the so-called Cori-Vauquelin-Schaeffer bijection. This bijection, introduced in [@CV] and developed thoroughly in [@Sch_Thesis; @ChaSch], gives a correspondence between finite quadrangulations and well-labeled finite trees. It was in particular used by Chassaing and Durhuus [@ChaDu] as a new approach to the UIPQ: they studied the infinite quadrangulation of the plane corresponding to an infinite positive labeled tree, and it was shown later by Ménard [@Men] that this quadrangulation has the same distribution as the one defined by Krikun. Using another extension of the Cori-Vauquelin-Schaeffer bijection, Curien, Ménard and Miermont [@CuMenMi] recently showed that the UIPQ can also be obtained from a “uniform” infinite labeled tree, without the positivity constraint on the labels. This construction allowed them to prove new results on the UIPQ, and in particular to give a fine description of the geodesic arcs from a point to infinity. One of their main results states that all such geodesics are “trapped” between two distinguished geodesics, which have a simple description in terms of the corresponding labeled tree. Moreover, these two geodesics, called the maximal (or leftmost) and minimal (or rightmost) geodesics, are roughly similar, in the sense that they almost surely have an infinite number of common points. 
Our main goal here is to study the local limit of $Q_{\infty}^{(k)}$ as $k \rightarrow \infty$, where $Q_{\infty}^{(k)}$ denotes the UIPQ re-rooted at a point at distance $k$ from the root, on the leftmost geodesic. Our methods are again based on bijective correspondences between trees and quadrangulations. Specifically, we show that $Q_{\infty}^{(k)}$ converges in distribution to a limit quadrangulation ${\overleftrightarrow{Q}}_{\infty}$, which can be obtained by gluing together two quadrangulations of the half-plane with geodesic boundaries; we give explicit expressions for the distribution of the corresponding trees. Note that the laws of the quadrangulations of the half-plane we consider (corresponding to the parts of the UIPQ which are “on the left” and “on the right” of the leftmost geodesic ray) are orthogonal to the law of the uniform infinite quadrangulation of the half-plane (UIHPQ) which was studied in [@Ang_UIHPT] and [@CuMi]. Finally, note that the scaling limit of the uniform infinite quadrangulation, the Brownian plane, which was introduced and studied by Curien and Le Gall [@CuLG], has a similar “uniqueness” property of infinite geodesic rays started from the root. We expect our result to have a natural analog in this context. In the rest of this introduction, we give the necessary definitions to state our main results. In Section \[S: Definitions\], we first recall classical definitions on quadrangulations and labeled trees; we also describe the construction of the UIPQ given in [@CuMenMi] and the “Schaeffer-type” correspondence it relies on. Section \[S: Leftmost geodesic\] gives more details on the UIPQ re-rooted at the $k$-th point on the leftmost infinite geodesic ray starting from the root. In particular, we explain why it is enough to study the local limit of the parts on each side of this geodesic. This leads us to extend the correspondence to a larger class of infinite labeled trees, which encode planar quadrangulations with a geodesic boundary (see Section 1.3). Finally, in Section 1.4, we state our main convergence results for these trees and the associated quadrangulations. Well-labeled trees and associated quadrangulations {#S: Definitions} -------------------------------------------------- ### First definitions on finite and infinite planar maps {#S: Planar maps} A finite planar map is a proper embedding of a finite connected graph, possibly with multiple edges or loops, into the two-dimensional sphere (or more rigorously, the equivalence class of such a graph, modulo orientation-preserving homeomorphisms). We first introduce some notation for such a map $\mathfrak{m}$. Let $V(\mathfrak{m})$, $E(\mathfrak{m})$ and ${\overrightarrow{E}}(\mathfrak{m})$ denote the sets of the vertices, edges and oriented edges of $\mathfrak{m}$, respectively. The faces of $\mathfrak{m}$ are the connected components of the complement of $E(\mathfrak{m})$. We say that a face is incident to $e \in {\overrightarrow{E}}(\mathfrak{m})$ if it is the face on the left of $e$. The degree of a face is the number of edges it is incident to. A corner of $\mathfrak{m}$ is an angular sector between two edges of $\mathfrak{m}$. Note that there is a bijective correspondence between the corners of $\mathfrak{m}$ and its oriented edges; we say that a corner is incident to $e \in {\overrightarrow{E}}(\mathfrak{m})$ if it is the corner on the left of $e$, next to its origin. 
We say that a finite planar map is rooted if it comes with a distinguished oriented edge, called the root edge; the origin vertex of the root is called the root vertex, and the face which is incident to the root is called the root face. A planar map is a quadrangulation if all faces have degree $4$, and a tree if it has only one face. A quadrangulation with a boundary is a planar map with a distinguished face called the external face, such that the boundary of the external face is simple and all other faces have degree $4$. We let $\mathcal{Q}_f$, $\mathcal{Q}_{f,b}$ and $\mathcal{T}_f$ respectively denote the sets of finite quadrangulations, quadrangulations with a boundary and trees. Let us now define the local limit topology on these sets. For any rooted map $\mathfrak{m}$, let $B_{\mathfrak{m}}(r)$ denote the ball of radius $r$ in $\mathfrak{m}$, centered at the root-vertex (i.e. the planar map defined by the edges of $\mathfrak{m}$ whose extremities are both at distance at most $r$ from the root-vertex, for the graph-distance on $\mathfrak{m}$). For all finite planar maps $\mathfrak{m}$, $\mathfrak{m}'$, we let $$\begin{gathered} D(\mathfrak{m},\mathfrak{m}') = (1+\sup \{ r \geq 0: B_{\mathfrak{m}}(r) = B_{\mathfrak{m}'}(r) \})^{-1}.\end{gathered}$$ The local topology is the topology associated to this distance. Let $\mathcal{Q}$, $\mathcal{Q}_b$ and $\mathcal{T}$ denote the completions of $\mathcal{Q}_f$, $\mathcal{Q}_{f,b}$ and $\mathcal{T}_f$ for this topology. The elements of $\mathcal{Q}_{\infty} := \mathcal{Q} \setminus \mathcal{Q}_f$ (resp. $\mathcal{T}_{\infty} := \mathcal{T} \setminus \mathcal{T}_f$) are *infinite* planar quadrangulations (resp. trees). All the notations introduced above for finite planar maps have natural extensions to the above sets. We let $\mathcal{Q_{\infty,\infty}}$ denote the set of the quadrangulations with an infinite boundary, i.e. the elements of $\mathcal{Q}_b$ which are defined as limits of sequences of maps in $\mathcal{Q}_{f,b}$ whose external faces have degrees going to infinity. Any element $Q$ of $\mathcal{Q}_{\infty}$ or $\mathcal{Q}_{\infty,\infty}$ can be seen as a gluing of quadrangles which defines an orientable, connected, separable surface, with a boundary in the second case. See [@CuMenMi Appendix] for details. We are interested in two cases: - If the corresponding surface is homeomorphic to $S={\mathbb{R}}^2$, we say that $Q$ is an infinite quadrangulation of the plane. - If the corresponding surface is homeomorphic to $S={\mathbb{R}}\times {\mathbb{R}}_{+}$, we say that $Q$ is an infinite quadrangulation of the half-plane. In both of these cases, $Q$ can be drawn onto $S$ in such a way that every face is bounded, every compact subset of $S$ intersects only finitely many edges of $Q$, and in the second case, the union of the boundary edges is ${\mathbb{R}}\times \{0\}$. By convention, if the root edge belongs to this boundary and is oriented from left to right, we say that $Q$ is a quadrangulation of the upper half-plane, and if it is oriented from right to left, we say that $Q$ is a quadrangulation of the lower half-plane. We let $\textbf{Q}$ denote the set of the quadrangulations of the plane, and ${{\overleftarrow{\textbf{Q}}}}$ (resp. ${{\overrightarrow{\textbf{Q}}}}$) denote the set of the infinite quadrangulations $Q$ of the upper half-plane (resp. lower half-plane) such that the boundary of $Q$ is a geodesic path in $Q$. 
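In practice, the local distance $D$ introduced above is computed by comparing balls of increasing radius. Here is a minimal sketch, assuming a user-supplied predicate `balls_equal(m1, m2, r)` that decides whether the balls of radius $r$ coincide as rooted planar maps; the encoding of maps and the cutoff radius are illustrative assumptions, not fixed by the paper.

```python
def local_distance(m1, m2, balls_equal, r_max=100):
    """Local distance D(m1, m2) = 1 / (1 + sup{r >= 0 : B_{m1}(r) = B_{m2}(r)}).

    `balls_equal(m1, m2, r)` is assumed to decide whether the balls of radius r
    centered at the root vertices coincide as rooted planar maps; its
    implementation (and hence the encoding of maps) is left unspecified here.
    The search is capped at r_max, so a return value of 1/(1 + r_max) only
    means that the two maps agree at least up to radius r_max.
    """
    r = 0
    while r < r_max and balls_equal(m1, m2, r + 1):
        r += 1
    return 1.0 / (1.0 + r)
```

In particular, a sequence of maps converges for the local topology precisely when, for every fixed $r$, the balls of radius $r$ are eventually constant, which is what the cap `r_max` approximates.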
As explained in [@CuMenMi Appendix], an element $Q$ of $\mathcal{Q}_{\infty}$ is a quadrangulation of the plane if and only if it has exactly one *end* - which means, in terms of maps, that for all $r \in {\mathbb{N}}$, the map $Q \setminus B_{Q}(r)$ has exactly one infinite connected component. For an element $Q$ of $\mathcal{Q}_{\infty,\infty}$, one can check that $Q$ is a quadrangulation of the half-plane if and only if the same condition holds. Indeed, the infinite quadrangulation obtained by gluing a copy of the lattice ${\mathbb{Z}}\times {\mathbb{Z}}_-$ along the boundary also has one end (the number of ends can only decrease when we perform this operation), so it is a quadrangulation of the plane. In what follows, the trees and quadrangulations we consider will be elements of $\textbf{T} := \mathcal{T}$, $\textbf{Q}$, ${{\overleftarrow{\textbf{Q}}}}$ and ${{\overrightarrow{\textbf{Q}}}}$. The uniform infinite quadrangulation (UIPQ) is a random variable in $\textbf{Q}$ whose distribution is the limit of the uniform distribution on planar quadrangulations with $n$ faces, as $n \rightarrow \infty$. ### Well-labeled trees {#S: Labelled trees} We say that $(T,l)$ is a well-labeled (plane, rooted) tree if $T$ is an element of $\textbf{T}$ and $l$ is a mapping from $V(T)$ into ${\mathbb{Z}}$ such that ${\left|l({\mathrm{u}}) - l({\mathrm{v}})\right|} \leq 1$ for every pair of neighbouring vertices ${\mathrm{u}},{\mathrm{v}}$. Let ${\mathbb{T}_{}}$ be the set of such trees. More precisely, for all $x \in {\mathbb{Z}}$ and $n \geq 0$, let ${\mathbb{T}_{n}}(x)$ be the set of well-labeled plane rooted trees with $n$ edges and root-label $x$, and $$\begin{gathered} {\mathbb{T}_{n}} = \bigcup_{x \in {\mathbb{Z}}} {\mathbb{T}_{n}}(x).\end{gathered}$$ Similarly, for all $x \in {\mathbb{Z}}$, let ${\mathbb{T}_{\infty}}(x)$ denote the set of infinite well-labeled plane rooted trees with root-label $x$, and $$\begin{gathered} {\mathbb{T}_{\infty}} = \bigcup_{x \in {\mathbb{Z}}} {\mathbb{T}_{\infty}}(x).\end{gathered}$$ We thus have $$\begin{gathered} {\mathbb{T}_{}} = \bigcup_{n \in {\mathbb{N}}\cup \{0,\infty\}} {\mathbb{T}_{n}} = \bigcup_{n \in {\mathbb{N}}\cup \{0,\infty\}} \bigcup_{x \in {\mathbb{Z}}} {\mathbb{T}_{n}}(x).\end{gathered}$$ For any (infinite) plane rooted tree $T$, we say that $({\mathrm{u}}_i)_{i \geq 0}$ is a spine in $T$ if ${\mathrm{u}}_0$ is the root of $T$ and if for all $i \geq 0$, ${\mathrm{u}}_i$ is the parent of ${\mathrm{u}}_{i+1}$. We let ${\textbf{S}}$ be the set of all plane rooted trees having exactly one spine, and consider the corresponding sets of labeled trees: $$\begin{gathered} {\mathbb{S}}(x) = \{(T,l) \in {\mathbb{T}_{\infty}}(x): T \in {\textbf{S}}\} \qquad \forall x \in {\mathbb{Z}}, \\ {\mathbb{S}}= \{(T,l) \in {\mathbb{T}_{\infty}}: T \in {\textbf{S}}\}.\end{gathered}$$ For every $T \in {\textbf{S}}$, we let $({\mathfrak{s}}_i(T))_{i \geq 0}$ be the spine of $T$. Any vertex ${\mathfrak{s}}_i(T)$ has a subtree “to its left” and a subtree “to its right” in $T$, which we denote by $L_i(T)$ and $R_i(T)$ respectively. To give a formal definition of these subtrees, we consider two orders on $V(T)$: the depth-first order, denoted by $<$, and the partial order $\prec$ induced by the genealogy, defined for all ${\mathrm{u}},{\mathrm{v}} \in V(T)$ by ${\mathrm{u}} \prec {\mathrm{v}}$ if ${\mathrm{u}}$ is an ancestor of ${\mathrm{v}}$ in $T$. 
With this notation: - $L_i(T)$ is the subtree of $T$ containing the vertices ${\mathrm{v}}$ such that ${\mathfrak{s}}_i \leq {\mathrm{v}} < {\mathfrak{s}}_{i+1}$. - $R_i(T)$ is the subtree of $T$ containing ${\mathfrak{s}}_i$ and the vertices ${\mathrm{v}}$ such that ${\mathfrak{s}}_{i+1} < {\mathrm{v}}$ and ${\mathfrak{s}}_{i+1} \nprec {\mathrm{v}}$. We also use the natural extensions of these notations to well-labeled trees. ### The Schaeffer correspondence between infinite trees and quadrangulations {#S: Schaeffer} In this section, we recall the definition of the Schaeffer correspondence used in [@CuMenMi], which matches infinite well-labeled trees with infinite quadrangulations of the plane. For all $x \in {\mathbb{Z}}$, let $$\begin{gathered} {\mathbb{S}}^{\ast}(x) = \{ (T,l) \in {\mathbb{S}}(x): \inf_{i \geq 0} l({\mathfrak{s}}_i(T)) = -\infty \}.\end{gathered}$$ We fix $\theta=(T,l) \in {\mathbb{S}}^{\ast}(0)$. Let ${\mathrm{c}}_n$, $n \in {\mathbb{Z}}$ denote the corners of $T$, taken in the clockwise order, with ${\mathrm{c}}_0$ the root-corner. For all $n$, we say that the label of ${\mathrm{c}}_n$ is the label of the vertex which is incident to ${\mathrm{c}}_n$, and we define the successor $\sigma_{\theta}({\mathrm{c}}_n)$ of ${\mathrm{c}}_n$ as the first corner among ${\mathrm{c}}_{n+1}, {\mathrm{c}}_{n+2}, \ldots$ such that $$\begin{gathered} l(\sigma_{\theta}({\mathrm{c}}_n)) = l({\mathrm{c}}_n)-1.\end{gathered}$$ We now let $\Phi(\theta)$ denote the graph whose set of vertices is $V(T)$, whose edges are the pairs $\{{\mathrm{c}},\sigma_{\theta}({\mathrm{c}})\}$ for all corners ${\mathrm{c}}$ of $T$, and whose root-edge is $({\mathrm{c}}_0,\sigma_{\theta}({\mathrm{c}}_0))$. Figure \[F: Schaeffer Bijection\] gives an example of this construction. Note that $\Phi(\theta)$ can be embedded naturally in the plane, by considering a specific embedding of $T$ and drawing arcs between every corner and its successor in a non-crossing way. Moreover, Proposition 2 of [@CuMenMi] shows that for all $\theta \in {\mathbb{S}}^{\ast}(0)$, $\Phi(\theta)$ is an infinite quadrangulation of the plane. ![The quadrangulation $\Phi(\theta)$ obtained by applying the Schaeffer correspondence to a labeled tree $\theta$. The edges of $\theta$ are represented by dashed lines.[]{data-label="F: Schaeffer Bijection"}](SchaefferBijection-V3.pdf) For a technical reason, we extend this definition to trees $\theta \in {\mathbb{S}}^{\ast}(1)$ by keeping the same vertices and edges, and choosing $(\sigma_{\theta}({\mathrm{c}}_0), \sigma_{\theta} (\sigma_{\theta}({\mathrm{c}}_0)))$ as the root. (Thus the root edge of $\Phi(\theta)$ always goes from vertices with labels $0$ and $-1$ in $\theta$.) For all $\theta \in {\mathbb{S}}^{\ast}(1)$, we still have $\Phi(\theta) \in \textbf{Q}$. ### Uniform infinite labeled tree and quadrangulation {#S: UIPQ} For all $x \in {\mathbb{Z}}$, let ${\rho_{(x)}}$ be the law of a Galton–Watson tree with offspring distribution $\operatorname{Geom}(1/2)$, such that the root has label $x$ and, for any vertex ${\mathrm{v}}$ other than the root, the label of ${\mathrm{v}}$ is uniform in $\{\ell-1,\ell,\ell+1\}$, with $\ell$ the label of its parent. 
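The measure ${\rho_{(x)}}$ is straightforward to sample. The following sketch is purely illustrative: the nested-pair encoding and the depth cutoff (which only guards against the rare very deep realizations of the critical tree) are our own choices, and the offspring law $\operatorname{Geom}(1/2)$ is read as $P(k \text{ children})=2^{-(k+1)}$ for $k\ge 0$.

```python
import random

def rho_tree(root_label, rng, max_depth=50):
    """Sample (a truncation of) a tree of law rho_(root_label): Galton-Watson
    offspring Geom(1/2), read as P(k children) = 2^-(k+1) for k >= 0, and every
    non-root vertex labeled uniformly in {l-1, l, l+1}, l = label of its parent.
    Returned as a nested pair (label, list of children); the depth cutoff only
    keeps this illustration finite in the rare very deep realizations."""
    if max_depth == 0:
        return (root_label, [])
    k = 0
    while rng.random() < 0.5:                 # Geom(1/2) number of children
        k += 1
    children = [rho_tree(root_label + rng.choice((-1, 0, 1)), rng, max_depth - 1)
                for _ in range(k)]
    return (root_label, children)

def tree_size(t):
    """Number of vertices of a sampled tree."""
    return 1 + sum(tree_size(c) for c in t[1])

rng = random.Random(0)
print([tree_size(rho_tree(0, rng)) for _ in range(5)])   # sizes of samples of rho_(0)
```

Such trees appear just below as the subtrees grafted along the spine of the uniform infinite labeled tree.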
The uniform infinite labeled tree is the random variable $\theta_{\infty} = (T_{\infty}, l_{\infty}) \in {\mathbb{S}}(0)$ whose distribution is characterized by the following properties: - the process of the spine-labels $(S_i(\theta_{\infty}))_{i \geq 0} := (l_{\infty}({\mathfrak{s}}_i(T_{\infty})))_{i \geq 0}$ is a random walk with independent uniform steps in $\{-1,0,1\}$, - conditionally on $(S_i(\theta_{\infty}))_{i \geq 0}$, the trees $L_i(\theta_{\infty})$ and $R_i(\theta_{\infty})$ are independent labeled trees distributed according to ${\rho_{(S_i)}}$. For all $n \in {\mathbb{N}}$, we also let $\theta_n = (T_n,l_n)$ be a uniform random element of ${\mathbb{T}_{n}}(0)$. It is known that $\theta_n$ converges to $\theta_{\infty}$ for the local limit topology, as $n \rightarrow \infty$ (as noted in [@CuMenMi], it is a consequence of [@Kes Lemma 1.14]). Note that we have $\theta_{\infty} \in {\mathbb{S}}^{\ast}(0)$ almost surely, and let $Q_{\infty} := \Phi(\theta_{\infty})$. It was shown in [@CuMenMi] that the UIPQ can be seen as the random quadrangulation $\widetilde{Q}_{\infty}$ equal to $Q_{\infty}$ with probability $1/2$, and to the quadrangulation obtained by reversing the root edge of $Q_{\infty}$ with probability $1/2$. Re-rooting the UIPQ at the $k$-th point on the leftmost geodesic ray {#S: Leftmost geodesic} -------------------------------------------------------------------- Let us first clarify what we mean by the leftmost geodesic originating form the root in the UIPQ. It is known from [@CuMenMi] that for all vertices ${\mathrm{u}},{\mathrm{v}}$ of $\widetilde{Q}_{\infty}$, the quantity $$\begin{gathered} \lim_{{\mathrm{w}} \rightarrow \infty} {\left(}d_{\widetilde{Q}_{\infty}}({\mathrm{u}},{\mathrm{w}}) - d_{\widetilde{Q}_{\infty}}({\mathrm{v}},{\mathrm{w}}){\right)}\end{gathered}$$ is well defined (in the sense that the difference of those distances is the same except for a finite number of vertices ${\mathrm{w}}$), and equal to the difference of the labels of ${\mathrm{u}}$ and ${\mathrm{v}}$ in the corresponding tree. As a consequence, letting $e$ denote the root edge of $Q_{\infty}$, with $e^-$ its origin and $e^+$ its other extremity, we have $$\begin{gathered} \lim_{{\mathrm{w}} \rightarrow \infty} {\left(}d_{\widetilde{Q}_{\infty}}(e^-,{\mathrm{w}}) - d_{\widetilde{Q}_{\infty}}(e^+,{\mathrm{w}}){\right)}= 1.\end{gathered}$$ In other words, the extremity of the root edge of $\widetilde{Q}_{\infty}$ which is “closest to infinity” is well defined, and equal to $e^+$. Therefore, it is natural to say that the leftmost geodesic ray started from the root in $\widetilde{Q}_{\infty}$ is the unique path $\gamma_L=(\gamma_L(i))_{i \geq 0}$ such that $\gamma_L(0)=e^-$, $\gamma_L(1)=e^+$ and for all $i \geq 1$, $\gamma_L(i+1)$ is the first neighbour of $\gamma_L(i)$ after $\gamma_L(i-1)$ (in the clockwise order) such that $$\begin{gathered} \lim_{{\mathrm{w}} \rightarrow \infty} {\left(}d_{\widetilde{Q}_{\infty}}(\gamma_L(i),{\mathrm{w}}) - d_{\widetilde{Q}_{\infty}}(\gamma_L(i+1),{\mathrm{w}}){\right)}= 1.\end{gathered}$$ Note that the definition of the leftmost geodesic ray does not depend on whether the root edge of $\widetilde{Q}_{\infty}$ has the same orientation as that of $Q_{\infty}$ or not, so it is sufficient to work with $Q_{\infty}$ in the rest of the article. The leftmost geodesic also has a natural definition in terms of the tree $\theta_{\infty}$. 
For all $k \geq 0$, let ${\mathrm{e}}_k$ be the $k$-th corner on the chain of the iterated successors of ${\mathrm{e}}_0$, where ${\mathrm{e}}_0$ is the root corner of $\theta_{\infty}$. Equivalently, ${\mathrm{e}}_k$ can be seen as the first corner with label $-k$ after the root, in the clockwise order. We use the same notation for the corresponding vertex in $Q_{\infty}$. The path $\gamma_{\max} := ({\mathrm{e}}_k)_{k \geq 0}$ is a geodesic ray in $Q_{\infty}$, called the maximal geodesic in [@CuMenMi], and equal to $\gamma_L$. Curien, Ménard and Miermont proved in [@CuMenMi] that all other geodesic rays from ${\mathrm{e}}_0$ to infinity are essentially similar to $\gamma_{\max}$: almost surely, there exists an infinite sequence of distinct vertices of $Q_{\infty}$ such that every geodesic ray from ${\mathrm{e}}_0$ to infinity passes through all these vertices. Our main goal is to study the local limit of $Q_\infty^{(k)}$ as $k \rightarrow \infty$, where $Q_\infty^{(k)}$ denotes the quadrangulation $Q_{\infty}$ re-rooted at $({\mathrm{e}}_k,{\mathrm{e}}_{k+1})$. More precisely, we will study what the quadrangulation looks like on the left and on the right of the geodesic ray $\gamma_{\max}$. This leads us to introduce the “split” quadrangulation $\operatorname{Sp}(Q_{\infty})$ obtained by “cutting” $Q_{\infty}$ along $\gamma_{\max}$; formally, $\operatorname{Sp}(Q_{\infty})$ is an infinite quadrangulation of the (lower) half-plane whose boundary is formed by the edges $({\mathrm{e}}_k,{\mathrm{e}}_{k+1})$ on the left of ${\mathrm{e}}_0$, and by copies $({\mathrm{e}}'_k,{\mathrm{e}}'_{k+1})$ of these edges on the right of ${\mathrm{e}}_0$. This construction is illustrated in Figure \[F: Split UIPQ\]. For all $k \geq 0$, we let ${\overrightarrow{Q}}_{\infty}^{(k)}$ denote the quadrangulation having the same vertices and edges as $\operatorname{Sp}(Q_{\infty})$, with root $({\mathrm{e}}_k,{\mathrm{e}}_{k+1})$, and ${\overleftarrow{Q}}_{\infty}^{(k)}$ denote the quadrangulation having the same vertices and edges as $\operatorname{Sp}(Q_{\infty})$, with root $({\mathrm{e}}'_k,{\mathrm{e}}'_{k+1})$. Thus, since $({\mathrm{e}}_k)_{k \geq 0}$ and $({\mathrm{e}}'_k)_{k \geq 0}$ are geodesics in ${\overrightarrow{Q}}_{\infty}^{(k)}$ and ${\overleftarrow{Q}}_{\infty}^{(k)}$, we have the following property: For all $r \leq k$, the ball of radius $r$ in $Q_\infty^{(k)}$ is the same as the union of the balls of radius $r$ in ${\overrightarrow{Q}}_{\infty}^{(k)}$ and ${\overleftarrow{Q}}_{\infty}^{(k)}$. The main idea now consists in studying the limit of the trees encoding ${\overrightarrow{Q}}_{\infty}^{(k)}$ and ${\overleftarrow{Q}}_{\infty}^{(k)}$, and then going back to the associated quadrangulations. ![The “split” quadrangulation $\operatorname{Sp}(Q_{\infty})$ obtained from $\theta_{\infty}$. The edges of the underlying tree $\theta_{\infty}$ are represented in dashed lines, and the geodesic ray $\gamma_{\max}$ is represented in red. The labels are omitted to keep the figure readable.[]{data-label="F: Split UIPQ"}](Schaeffer-SplitUIPQ-V9.pdf) To this end, for all $k \in {\mathbb{N}}$, we introduce the tree $\theta_{\infty}^{(k)} = (T_{\infty}^{(k)}, l_{\infty}^{(k)})$, where $T_{\infty}^{(k)}$ is the tree $T_{\infty}$ re-rooted at ${\mathrm{e}}_k$, and $l_{\infty}^{(k)}:= l_{\infty} + k$. Note that the vertices ${\mathrm{e}}'_k$, $k \geq 1$, contrary to the ${\mathrm{e}}_k$, do not correspond to corners of the tree $\theta_{\infty}$. 
Therefore, for all $k \in {\mathbb{N}}$, we let ${\mathrm{e}}_{-k+1}$ denote the last corner of $\theta_{\infty}$ before the root (still in the clockwise order) such that $\sigma_{\theta_{\infty}} ({\mathrm{e}}_{-k+1}) = {\mathrm{e}}_k$. Equivalently, ${\mathrm{e}}_{-k+1}$ can be seen as the last corner with label $-k+1$ before the root (hence the choice of the index). Now, for all $k \in {\mathbb{N}}$, we let $\theta_{\infty}^{(-k+1)} = (T_{\infty}^{(-k+1)}, l_{\infty}^{(-k+1)})$, where $T_{\infty}^{(-k+1)}$ is the tree $T_{\infty}$ re-rooted at ${\mathrm{e}}_{-k+1}$, and $l_{\infty}^{(-k+1)}:= l_{\infty} + k$. With this notation, for all $k \in {\mathbb{N}}$, we have $\theta_{\infty}^{(k)} \in {\mathbb{S}}^{\ast}(0)$, $\theta_{\infty}^{(-k+1)} \in {\mathbb{S}}^{\ast}(1)$, $\Phi(\theta_{\infty}^{(k)}) = {\overrightarrow{Q}}_{\infty}^{(k)}$ and $\Phi(\theta_{\infty}^{(-k+1)})= {\overleftarrow{Q}}_{\infty}^{(k)}$; but more importantly, we will show in Section \[S: Joint CV of the quadrangulations\] that the local limits of ${\overrightarrow{Q}}_{\infty}^{(k)}$ and ${\overleftarrow{Q}}_{\infty}^{(k)}$ can be determined using the local limits of $\theta_{\infty}^{(k)}$ and $\theta_{\infty}^{(-k+1)}$. Intuitively, one can anticipate that the local limit of $\theta_{\infty}^{(k)}$ will be a tree in which the right-hand side only has positive labels, and the local limit of $\theta_{\infty}^{(-k+1)}$ will be a tree in which the left-hand side only has labels greater than $1$. This leads us to extend the domain of $\Phi$ to such trees. Extending the Schaeffer correspondence {#S: Extended Schaeffer} -------------------------------------- Consider the following subsets of ${\mathbb{S}}$: $$\begin{gathered} {\overrightarrow{{\mathbb{S}}}} = \{ (T,l) \in {\mathbb{S}}(0): \min_{n \leq -1} l({\mathrm{c}}_n(T)) = 1, \lim_{n \rightarrow -\infty} l({\mathrm{c}}_n(T)) = +\infty \mbox{ and } \inf_{n \geq 0} l({\mathrm{c}}_n(T)) = -\infty \} \\ {\overleftarrow{{\mathbb{S}}}} = \{ (T,l) \in {\mathbb{S}}(1): \min_{n \geq 1} l({\mathrm{c}}_n(T)) = 2, \lim_{n \rightarrow +\infty} l({\mathrm{c}}_n(T)) = +\infty \mbox{ and } \inf_{n \leq 0} l({\mathrm{c}}_n(T)) = -\infty \}.\end{gathered}$$ Here, we show that “Schaeffer-type” constructions yield natural associations between the trees in these sets and quadrangulations of the lower and upper half-planes. Examples of quadrangulations obtained this way are given on Figure \[F: extended Schaeffer\]. In the case where $\theta \in {\overrightarrow{{\mathbb{S}}}}$, the construction is exactly the same as for $\theta \in {\mathbb{S}}^{\ast}(0)$: for all $n$, we define the successor $\sigma_{\theta}({\mathrm{c}}_n)$ of ${\mathrm{c}}_n$ as the first corner among ${\mathrm{c}}_{n+1}, {\mathrm{c}}_{n+2}, \ldots$ such that $$\begin{gathered} l(\sigma_{\theta}({\mathrm{c}}_n)) = l({\mathrm{c}}_n)-1,\end{gathered}$$ and we let $\Phi(\theta)$ denote the graph whose set of vertices is $V(T)$, whose edges are the pairs $\{{\mathrm{c}},\sigma_{\theta}({\mathrm{c}})\}$ for all corners ${\mathrm{c}}$ of $T$, and whose root-edge is $({\mathrm{c}}_0,\sigma_{\theta}({\mathrm{c}}_0))$. Now, consider the case where $\theta \in {\overleftarrow{{\mathbb{S}}}}$. If we use the above construction, then for example, for all $i$, the last corner with label $i$ has no successor. We therefore add a “shuttle” $\Lambda$, i.e. a line of new points $\lambda_i$, $i \in {\mathbb{Z}}$ on which the corners with no successor will be attached. 
More precisely, for all $n$, the successor of ${\mathrm{c}}_n$ is defined as $$\begin{gathered} \sigma_{\theta}({\mathrm{c}}_n) = \left\lbrace \begin{array}{ll} {\mathrm{c}}_{n'} & \mbox{for the smallest } n'\geq n \mbox{ such that } l({\mathrm{c}}_{n'}) = l({\mathrm{c}}_n)-1, \mbox{ if it exists,} \\ \lambda_{l({\mathrm{c}}_n)-1} & \mbox{otherwise,} \end{array}\right.\end{gathered}$$ and we extend this notation to the points of $\Lambda$ by letting $\sigma_{\theta} (\lambda_i) = \lambda_{i-1}$ for all $i \in {\mathbb{Z}}$. We let $\Phi(\theta)$ be the graph whose set of vertices is $V(T) \sqcup \Lambda$, whose edges are the pairs $\{{\mathrm{c}},\sigma_{\theta}({\mathrm{c}})\}$ for all corners ${\mathrm{c}}$ of $T$, and the pairs $\{\lambda_i,\lambda_{i-1}\}$ for all $i \in {\mathbb{Z}}$, and whose root-edge is $(\lambda_0,\lambda_{-1})$. (Note that the rooting convention is consistent with the one we used to define $\Phi$ on ${\mathbb{S}}^{\ast}(1)$.) ![Examples of quadrangulations $\Phi(\theta)$ obtained for $\theta \in \protect{\overleftarrow{{\mathbb{S}}}}$ (on the left-hand side) and $\theta \in \protect{\overrightarrow{{\mathbb{S}}}}$ (on the right-hand side).[]{data-label="F: extended Schaeffer"}](Extended-Schaeffer.pdf) We have the following properties: - If $\theta \in {\overrightarrow{{\mathbb{S}}}}$, then $\Phi(\theta) \in {{\overrightarrow{\textbf{Q}}}}$. - If $\theta \in {\overleftarrow{{\mathbb{S}}}}$, then $\Phi(\theta) \in {{\overleftarrow{\textbf{Q}}}}$. In both cases, it is clear that the graph $\Phi(\theta)$ has a natural embedding into the plane, and the conditions on $\liminf_{n \rightarrow -\infty} l({\mathrm{c}}_n)$ ensure that every corner is the successor of a finite number of other corners. Thus every vertex of $\Phi(\theta)$ has finite degree: $\Phi(\theta)$ is an infinite planar map. As in Schaeffer’s usual construction, a simple case study shows that for every corner ${\mathrm{c}}$ of $\theta$: - The face which is on the right of $({\mathrm{c}}, \sigma_{\theta}({\mathrm{c}}))$ is a quadrangle. - If there exists a corner ${\mathrm{c}}' < {\mathrm{c}}$ such that $l({\mathrm{c}}')=l({\mathrm{c}})$, then the face which is on the left of $({\mathrm{c}}, \sigma_{\theta}({\mathrm{c}}))$ is a quadrangle. If $\theta \in {\overleftarrow{{\mathbb{S}}}}$, this is always true. If $\theta \in {\overrightarrow{{\mathbb{S}}}}$, then the only corners for which it is not true are the ${\mathrm{c}}_{n_i}$, $i \in {\mathbb{Z}}$, with $n_i = \min \{ n \in {\mathbb{Z}}: l({\mathrm{c}}_n)=i \}$. For all $i$, we have $\sigma({\mathrm{c}}_{n_i})={\mathrm{c}}_{n_{i-1}}$, and the face which is on the left of $({\mathrm{c}}_{n_i},{\mathrm{c}}_{n_{i-1}})$ is the root face of $\Phi(\theta)$. For $\theta \in {\overleftarrow{{\mathbb{S}}}}$, we also have to study the faces which are on the left and on the right of the edges $(\lambda_i, \lambda_{i-1})$: we easily see that the first one is always a quadrangle, and that the second one is the same for all $i$. Thus: - For all $\theta \in {\overrightarrow{{\mathbb{S}}}}$, we have $\Phi(\theta) \in \mathcal{Q}_{\infty,\infty}$. - For all $\theta \in {\overleftarrow{{\mathbb{S}}}}$, letting ${\overline{\Phi(\theta)}}$ denote the map obtained by reversing the root edge of $\Phi(\theta)$, we have ${\overline{\Phi(\theta)}} \in \mathcal{Q}_{\infty,\infty}$. 
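To make the successor rule above concrete, here is a minimal computational sketch (ours, purely for illustration, and not part of the construction): it takes a finite window of corner labels, read in the clockwise order, and applies the rule of the case $\theta \in {\overleftarrow{{\mathbb{S}}}}$, sending each corner either to the first later corner of the window whose label is one less, or to the shuttle point $\lambda_{l-1}$ when no such corner exists. In the infinite setting the search of course runs over the whole corner sequence, and the case $\theta \in {\overrightarrow{{\mathbb{S}}}}$ corresponds to the same rule without the shuttle branch.

```python
def successor_with_shuttle(labels):
    """labels[n] is the label of the corner c_n in a finite window of the
    clockwise corner sequence.  For each corner, return ('corner', n') for
    the first n' > n with labels[n'] == labels[n] - 1, and otherwise
    ('shuttle', labels[n] - 1), standing for the shuttle point lambda_{l-1}."""
    successors = []
    for n, l in enumerate(labels):
        n_prime = next((m for m in range(n + 1, len(labels))
                        if labels[m] == l - 1), None)
        successors.append(('corner', n_prime) if n_prime is not None
                          else ('shuttle', l - 1))
    return successors

# Toy window of labels (illustrative only): the last corner carrying a given
# label has no later corner with label one less, so it is sent to the shuttle.
print(successor_with_shuttle([2, 3, 2, 3, 4, 3, 2]))
```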
Note that the construction ensures that the classical bound $$\begin{gathered} \label{E: Lower bound on d_Phi(.)} d_{\Phi(\theta)}({\mathrm{u}},{\mathrm{v}}) \geq {\left|l({\mathrm{u}},{\mathrm{v}})\right|}\end{gathered}$$ still holds. As a consequence, in both cases, the boundary is a geodesic path. Moreover, the fact that $\theta$ has exactly one spine implies, by construction, that $\Phi(\theta)$ is one-ended. Note that for $\theta \in {\overleftarrow{{\mathbb{S}}}}$, for all $i<i'$, the path $(\lambda_j)_{i \leq j \leq i'}$ is the unique geodesic between $\lambda_i$ and $\lambda_{i'}$. Indeed, all neighbours of $\lambda_{i'}$ different from $\lambda_{i'-1}$ have labels equal to $i'+1$, so they are at distance (at least) $i'-i+1$ from $\lambda_i$. In other words, the boundary is the *unique* geodesic path between vertices of $\Lambda$. Main results {#S: Results} ------------ The first part of our work is the identification of the limit of the joint distribution of $(\theta_{\infty}^{(k)}, \theta_{\infty}^{(-k+1)})$ as $k \rightarrow \infty$. We begin by using the convergence of $\theta_n$ towards $\theta_{\infty}$ to give an explicit description of this joint distribution. To give a more precise idea of these results, we adapt the notation of Section \[S: Leftmost geodesic\] to possibly finite trees. For all $\theta = (T,l) \in {\mathbb{T}_{}}(0)$ and $k \geq 0$ such that $\min_{V(T)} l \leq -k$, let ${\mathrm{e}}_k (\theta)$ be the first corner having label $-k$ after the root, in clockwise order, ${\mathrm{e}}_{-k} (\theta)$ be the last corner having label $-k$ before the root, and ${\mathrm{v}}_k(\theta)$ be the most recent common ancestor of ${\mathrm{e}}_k(\theta)$ and ${\mathrm{e}}_{-k+1}(\theta)$. Note that for $k=0$, this is well defined since ${\mathrm{e}}_0 (\theta) = {\mathrm{e}}_{-0} (\theta)$. Finally, we define the finite analogs of $\theta_{\infty}^{(k)}$ and $\theta_{\infty}^{(-k+1)}$: conditionally on $\min_{V(T_n)} l_n \leq - {\left|k\right|}$, for all $n,k \in {\mathbb{N}}$, we let - $\theta_n^{(k)} = (T_n^{(k)}, l_n^{(k)})$, where $T_n^{(k)}$ is the tree $T_n$ re-rooted at ${\mathrm{e}}_k (\theta_n)$, and $l_n^{(k)}= l_n + k$, - $\theta_n^{(-k+1)} = (T_n^{(-k+1)}, l_n^{(-k+1)})$, where $T_n^{(-k+1)}$ is the tree $T_n$ re-rooted at ${\mathrm{e}}_{-k+1} (\theta_n)$, and $l_n^{(-k+1)}= l_n + k$. It is easy to see that: We have the joint convergence in distribution $$\begin{gathered} \label{E: Convergence of theta_n,k in n} (\theta_n^{(k)},\theta_n^{(-k+1)}) {\xrightarrow[n \rightarrow \infty]{}} (\theta_{\infty}^{(k)},\theta_{\infty}^{(-k+1)})\end{gathered}$$ for the local limit topology. Indeed, the operations which consist in re-rooting a tree $\theta \in {\mathbb{T}_{}}$ at ${\mathrm{e}}_k(\theta)$ and ${\mathrm{e}}_{-k+1}(\theta)$ are both continuous for the local limit topology on ${\mathbb{S}}^{\ast}(0)$. Since $\theta_{\infty}$ belongs to this set, this yields the conclusion. This lemma will allow us to give an explicit description of the joint distribution of $\theta_{\infty}^{(k)}$ and $\theta_{\infty}^{(-k+1)}$ (see Proposition \[T: Distribution of theta(infty,k)\] for the distribution of $\theta_{\infty}^{(k)}$ alone, and Corollary \[T: Distribution of theta(infty,k) with two marked points\] for the joint distribution). We use these results to prove the convergence theorem below. Recall that ${\rho_{(x)}}$ denotes the distribution of a Galton–Watson tree with $\operatorname{Geom}(1/2)$ offspring distribution and “uniform” labels, with root label $x$.
If $x$ is positive, we let ${\rho_{(x)}}^+$ denote the same distribution, conditioned to have only positive labels. We also introduce a Markov chain $\tilde{X}$ taking values in ${\mathbb{N}}$, with transition probabilities $$\begin{gathered} p_x := {\mathbb{P}\if\relax\relax(\tilde{X}_1 = x+1 | \tilde{X}_0=x)\else\left(\tilde{X}_1 = x+1 | \tilde{X}_0=x\right)\fi} = \frac{(x+4)(2x+5)}{3(x+2)(2x+3)} \\ r_x := {\mathbb{P}\if\relax\relax(\tilde{X}_1 = x | \tilde{X}_0=x)\else\left(\tilde{X}_1 = x | \tilde{X}_0=x\right)\fi} = \frac{x(x+3)}{3(x+1)(x+2)} \\ q_x := {\mathbb{P}\if\relax\relax(\tilde{X}_1 = x-1 | \tilde{X}_0=x)\else\left(\tilde{X}_1 = x-1 | \tilde{X}_0=x\right)\fi} = \frac{(x-1)(2x+1)}{3(x+1)(2x+3)}.\end{gathered}$$ Note that $\tilde{X}$ can be seen as a discrete version of a seven-dimensional Bessel process. Indeed, a theorem of Lamperti [@Lamp] shows that, under some easily checked conditions, the rescaled process $((1/\sqrt{n}) \cdot \tilde{X}_{{\lfloor nt \rfloor}})_{t \geq 0}$ converges in distribution to a diffusion process with generator $$\begin{gathered} L = \frac{\alpha}{x}\frac{d}{dx} + \frac{\beta}{2} \frac{d^2}{dx^2},\end{gathered}$$ where $$\begin{gathered} \alpha = \lim_{x \rightarrow \infty} x {\mathbb{E}\if\relax\relax[\tilde{X}_1-\tilde{X}_0 | \tilde{X}_0=x]\else\left[\tilde{X}_1-\tilde{X}_0 | \tilde{X}_0=x\right]\fi} = 2\end{gathered}$$ and $$\begin{gathered} \beta = \lim_{x \rightarrow \infty} {\mathbb{E}\if\relax\relax[(\tilde{X}_1-\tilde{X}_0)^2 | \tilde{X}_0=x]\else\left[(\tilde{X}_1-\tilde{X}_0)^2 | \tilde{X}_0=x\right]\fi} = \frac{2}{3},\end{gathered}$$ hence in our case $$\begin{gathered} L = \frac{2}{3} {\left(}\frac{3}{x}\frac{d}{dx} + \frac{1}{2} \frac{d^2}{dx^2}{\right)}.\end{gathered}$$ Thus, $((1/\sqrt{n}) \cdot \tilde{X}_{{\lfloor nt \rfloor}})_{t \geq 0}$ converges to $(Z_{2t/3})_{t \geq 0}$, where $Z$ denotes a Bessel(7) process started from 0. \[T: Joint cv of the rerooted trees\] We have the joint convergence in distribution $$\begin{gathered} (\theta_{\infty}^{(k)},\theta_{\infty}^{(-k+1)}) {\xrightarrow[k \rightarrow \infty]{}} ({\overrightarrow{\theta_{\infty}}},{\overleftarrow{\theta_{\infty}}})\end{gathered}$$ for the local topology, where ${\overrightarrow{\theta_{\infty}}} = ({\overrightarrow{T_{\infty}}}, {\overrightarrow{l_{\infty}}})$ and ${\overleftarrow{\theta_{\infty}}} = ({\overleftarrow{T_{\infty}}}, {\overleftarrow{l_{\infty}}})$ are independent random variables in ${\mathbb{S}}(0)$ and ${\mathbb{S}}(1)$, whose distributions are characterized by the following properties: - The process $(S_i({\overrightarrow{\theta_{\infty}}}))_{i \geq 1}$ has the same law as the Markov chain $\tilde{X}$ started from $1$. - Conditionally on $(S_i({\overrightarrow{\theta_{\infty}}}))_{i \geq 0}$, the subtrees $L_i({\overrightarrow{\theta_{\infty}}})$, $i \geq 0$ and $R_i({\overrightarrow{\theta_{\infty}}})$, $i \geq 1$ are independent random variables, with respective distributions ${\rho_{(S_i({\overrightarrow{\theta_{\infty}}}))}}$ and ${\rho_{(S_i({\overrightarrow{\theta_{\infty}}}))}}^+$. 
- We have the joint distributional identities: $$\begin{gathered} (S_i({\overleftarrow{\theta_{\infty}}})-1)_{i \geq 0} = (S_i({\overrightarrow{\theta_{\infty}}}))_{i \geq 0} \\ (L_i({\overleftarrow{T_{\infty}}}), {\overleftarrow{l_{\infty}}}-1)_{i \geq 0} = (R_i({\overrightarrow{\theta_{\infty}}}))_{i \geq 0} \\ (R_i({\overleftarrow{T_{\infty}}}), {\overleftarrow{l_{\infty}}}-1)_{i \geq 0} = (L_i({\overrightarrow{\theta_{\infty}}}))_{i \geq 0}.\end{gathered}$$ We finally extend this convergence to the associated quadrangulations: \[T: Joint CV of the quadrangulations\] Let ${\overrightarrow{Q}}_{\infty} = \Phi({\overrightarrow{\theta_{\infty}}})$ and ${\overleftarrow{Q}}_{\infty} = \Phi({\overleftarrow{\theta_{\infty}}})$. We have the joint convergence in distribution $$\begin{gathered} ({\overrightarrow{Q}}_{\infty}^{(k)}, {\overleftarrow{Q}}_{\infty}^{(k)}) {\xrightarrow[k \rightarrow \infty]{}} ({\overrightarrow{Q}}_{\infty}, {\overleftarrow{Q}}_{\infty})\end{gathered}$$ for the local topology. As a consequence, $Q_{\infty}^{(k)}$ converges in distribution towards the quadrangulation of the plane ${\overleftrightarrow{Q}}_{\infty}$ obtained by gluing together the boundaries of ${\overrightarrow{Q}}_{\infty}$ and ${\overleftarrow{Q}}_{\infty}$ in such a way that their root edges are identified. Note that $\Phi$ is not continuous at the points ${\overrightarrow{\theta_{\infty}}}$ and ${\overleftarrow{\theta_{\infty}}}$, so this result is not a straightforward consequence of the previous theorem. In the same spirit as Ménard in [@Men], we have to show that the balls of radius $r$ in ${\overrightarrow{Q}}_{\infty}^{(k)}$ and ${\overleftarrow{Q}}_{\infty}^{(k)}$ are included in balls of radius $h(r)$ in the corresponding trees with high probability, uniformly in $k$. This is done in Proposition \[T: Ball inclusions\]. The distribution of ${\overleftrightarrow{Q}}_{\infty}$ could be the subject of further study, in particular concerning its symmetries. Informally, it would be interesting to see if it is invariant under the two following transformations: - Re-rooting ${\overleftrightarrow{Q}}_{\infty}$ at the “lowest” edge $e$ belonging to an infinite geodesic $(\gamma(i))_{i \in {\mathbb{Z}}}$, such that $l(e^-)=0$ and $l(e^+)=1$; then taking the quadrangulation obtained by reflection with respect to the root edge. - Re-rooting ${\overleftrightarrow{Q}}_{\infty}$ at the “lowest” edge $e$ belonging to an infinite geodesic $(\gamma(i))_{i \in {\mathbb{Z}}}$, such that $l(e^-)=0$ and $l(e^+)=1$; then reversing the root edge. In the first case, the invariance should be easy to derive from symmetries of the UIPQ. The second question appears more difficult and is work in progress. The paper is organized in the following way. In Sections \[S: First convergence\] and \[S: Joint CV of the trees\], we focus on the convergence of the trees $\theta_{\infty}^{(k)}$ and $\theta_{\infty}^{(-k+1)}$. We first give the proof of the convergence of $\theta_{\infty}^{(k)}$ alone, and then show how the same methods can be applied to derive the joint convergence. Note that the convergence results of Section \[S: First convergence\] are not necessary in the proof of the joint convergence, but should make the structure of the proof easier to understand. Finally, Section \[S: Joint CV of the quadrangulations\] is devoted to the proof of Theorem \[T: Joint CV of the quadrangulations\]. #### Acknowledgements: This work is part of a larger project in collaboration with Grégory Miermont and Erich Baur.
The author would like to thank them for the inspiring discussions and their careful proofreadings, as well as Nicolas Curien, whose idea it first was to study this transformation of the UIPQ. Convergence of $\theta_{\infty}^{(k)}$ {#S: First convergence} ====================================== Explicit expressions for the distribution of $\theta_{\infty}^{(k)}$ {#S: First dist eqns} -------------------------------------------------------------------- In this section, we work with a fixed value of $k \in {\mathbb{N}}$. Let us introduce some notation for particular vertices and subtrees of $\theta_n^{(k)}$, for $n \in {\mathbb{N}}\cup \{\infty\}$. All the variables we consider also depend on $k$, and should therefore be denoted with an exponent $^{(k)}$, but we omit it as long as $k$ is fixed, to keep the notation readable. First, let $m_n$ be the graph-distance between ${\mathrm{e}}_0(\theta_n)$ and ${\mathrm{e}}_k(\theta_n)$, and ${\mathrm{x}}_{n,0}, \ldots, {\mathrm{x}}_{n,m_n}$ denote the sequence of the vertices which appear on the path from ${\mathrm{e}}_k(\theta_n)$ to ${\mathrm{e}}_0(\theta_n)$. For all $i \in \{0,\ldots, m_n\}$, let $X_{n,i} = l_n^{(k)} ({\mathrm{x}}_{n,i})$. We also consider the subtrees which appear on each “side” of the path $({\mathrm{x}}_{n,0}, \ldots, {\mathrm{x}}_{n,m_n})$: - For all $i \in \{1, \ldots, m_n\}$, let $\tau_{n,i}$ be the subtree of $\theta_n^{(k)}$ containing the vertices ${\mathrm{v}}$ such that in $\theta_n$, we had ${\mathrm{x}}_{n,i} \leq {\mathrm{v}} < {\mathrm{x}}_{n,i-1}$. - For all $i \in \{0,\ldots, m_n\}$, let $\tau'_{n,i}$ be the subtree of $\theta_n^{(k)}$ containing the vertices ${\mathrm{v}}$ such that in $\theta_n$, we had ${\mathrm{v}}={\mathrm{x}}_{n,i}$, or ${\mathrm{x}}_{n,i} \prec {\mathrm{v}}$, ${\mathrm{x}}_{n,i-1} < {\mathrm{v}}$ and ${\mathrm{x}}_{n,i-1} \nprec {\mathrm{v}}$. We emphasize that these subtrees inherit the labels $l_n^{(k)}$ instead of $l_n$, even if we have to use the orders $<$ and $\prec$ on $T_n$ (instead of $T_n^{(k)}$) to define them. The fact that we have to use these orders may seem a bit clumsy since the subtrees are numbered starting from the root ${\mathrm{x}}_{n,0}$ of $\theta_n^{(k)}$, but it is necessary to get the distinction between $\tau_{n,m_n}$ and $\tau'_{n,m_n}$. Figure \[F: Notation on theta\_n,k\] sums up the above notation. ![Notation for the vertices and subtrees of $\theta_n^{(k)}$.[]{data-label="F: Notation on theta_n,k"}](ReRootedTreeTheta_n,k.pdf) Our first step is to characterize the joint distribution of $m_{\infty}$, $(X_{\infty,i})_{0 \leq i \leq m_{\infty}}$, $(\tau_{\infty,i})_{1 \leq i \leq m_{\infty}}$ and $(\tau'_{\infty,i})_{0 \leq i \leq m_{\infty}}$. We introduce some more notation for the sets in which these random variables take their values. For all $m,x,x' \in {\mathbb{N}}$, let $\mathcal{M}^+_{m, x \rightarrow x'}$ denote the set of the walks $(x_1, \ldots, x_m) \in {\mathbb{N}}^m$ such that $x_1=x$, $x_m = x'$ and for all $i \leq m-1$, ${\left|x_{i+1} - x_i\right|} \leq 1$. 
Also let $$\begin{gathered} {\mathbb{T}_{n}}^+(x) = \{(T,l) \in {\mathbb{T}_{n}}(x): l>0 \} \qquad \forall x \in {\mathbb{N}}, \\ {\mathbb{T}_{n}}^+ = \bigcup_{x \in {\mathbb{N}}} {\mathbb{T}_{n}}^+(x) \qquad \mbox{and} \qquad {\mathbb{T}_{}}^+ = \bigcup_{n \geq 0} {\mathbb{T}_{n}}^+.\end{gathered}$$ We also use the following facts on the distributions ${\rho_{(x)}}$ and ${\rho_{(x)}}^+$: for all $n \geq 0$, it is known that $$\begin{gathered} {\rho_{(x)}} (\{\theta\}) = \frac{1}{2 \cdot 12^n} \qquad \forall x \in {\mathbb{Z}},\ \theta \in {\mathbb{T}_{n}}(x)\end{gathered}$$ and Proposition 2.4 of [@ChaDu] shows that $$\begin{gathered} \label{E: w [Cha-Dur]} \sum_{n \geq 0} \frac{1}{2 \cdot 12^n}{\# {\mathbb{T}_{n}}^+(x)} = w(x) := \frac{x(x+3)}{(x+1)(x+2)} \qquad \forall x \in {\mathbb{N}}.\end{gathered}$$ In particular, for all $n \geq 0$ and $x \in {\mathbb{N}}$, we have ${\rho_{(x)}} ({\mathbb{T}_{}}^+) = w(x)$ and $$\begin{gathered} {\rho_{(x)}}^+ (\{\theta\}) = \frac{1}{2 w(x) 12^n} \qquad \forall \theta \in {\mathbb{T}_{n}}^+(x).\end{gathered}$$ Finally, for all $m \in {\mathbb{N}}$ and $(x_0,\ldots,x_m) \in {\mathbb{Z}}^{m+1}$, we let $\mu_{(x_0,\ldots,x_m)}$ denote the distribution of the forest $(\tilde{\tau}_i)_{0 \leq i \leq m}$ defined as follows. Let $I$ be a uniform random variable in $\{0,\ldots,m\}$. Let $\tilde{\tau}_I$ be a random tree distributed as $(T_{\infty},l_{\infty}+x_I)$, and $\tilde{\tau}_i$, $i \in \{0,\ldots,m\} \setminus \lbrace I \rbrace$ be independent random trees distributed according to ${\rho_{(x_i)}}$, independent of $\tilde{\tau}_I$. We can now state the proposition: \[T: Distribution of theta(infty,k)\] We have $X_{\infty,0} = 0\ {\mbox{a.s.}}$, and for all $m \in {\mathbb{N}}$, ${\underline{x}} \in \mathcal{M}^+_{m, 1 \rightarrow k}$, $$\begin{gathered} {\mathbb{P}\if\relax\relax(m_{\infty}=m, (X_{\infty,1}, \ldots, X_{\infty,m}) = {\underline{x}})\else\left(m_{\infty}=m, (X_{\infty,1}, \ldots, X_{\infty,m}) = {\underline{x}}\right)\fi} = \frac{m+1}{3^m} \prod_{i=1}^m w(x_i).\end{gathered}$$ Moreover, conditionally on $m_{\infty}=m$ and $(X_{\infty,1}, \ldots, X_{\infty,m}) = {\underline{x}}$: - The forests $(\tau_{\infty,i})_{1 \leq i \leq m}$ and $(\tau'_{\infty,i})_{0 \leq i \leq m}$ are independent. - The trees $\tau_{\infty,i}$, $1 \leq i \leq m$ are independent random variables distributed according to ${\rho_{(x_i)}}^+$. - The forest $(\tau'_{\infty,i})_{0 \leq i \leq m}$ is distributed according to $\mu_{(0,x_1,\ldots,x_m)}$. The proof of this proposition relies on counting the well-labeled trees with $n$ edges such that the corresponding $m_n$, $(\tau_{n,1},\ldots,\tau_{n,m})$ take a certain value, and using the convergence . We say that a well-labeled forest with $m$ trees is a $m$-tuple of well-labeled plane rooted trees $(t_1,\ldots,t_m)$, such that for all $i \in \{1,\ldots,m-1\}$, the labels of the roots of $t_i$ and $t_{i+1}$ differ by at most $1$. The number of edges of such a forest is the sum of the numbers of edges of the trees $t_1\ldots,t_m$. Let ${\mathbb{F}_{m,n}}$ be the set of well-labeled plane forests with $m$ trees and $n$ edges. Fix $m,N \geq 0$, ${\underline{t}} = (t_1,\ldots,t_m) \in {\mathbb{F}_{m,N}}$ such that the root of $t_1$ has label $1$, and all the labels in ${\underline{t}}$ are positive. 
For all $n \in {\mathbb{N}}\cup \{\infty\}$, let $$\begin{gathered} P_n^{(k)} (m,{\underline{t}}) = {\mathbb{P}\if\relax\relax(m_n=m,(\tau_{n,1}, \ldots, \tau_{n,m})={\underline{t}} | \min l_n \leq -k)\else\left(m_n=m,(\tau_{n,1}, \ldots, \tau_{n,m})={\underline{t}} | \min l_n \leq -k\right)\fi}.\end{gathered}$$ We are interested in the behaviour of $P_n^{(k)} (m,{\underline{t}})$ as $n \rightarrow \infty$, for fixed $k$. Since $\theta_n$ is uniform in ${\mathbb{T}_{n}}(0)$, we have $$\begin{gathered} P_n^{(k)} (m,{\underline{t}}) = \frac{\mathcal{F}_{m+1,n-(m+N)}}{{\# \{(T,l) \in {\mathbb{T}_{n}}(0): \min l \leq -k\}}},\end{gathered}$$ where for all $n' \geq 0$, $$\begin{aligned} \mathcal{F}_{m+1,n'} & = {\# \{ (t'_0,\ldots,t'_m) \in {\mathbb{F}_{m+1,n'}}: \begin{array}{l} \mbox{the root of } t'_0 \mbox{ has label } 0 \mbox{ and for all } i \geq 1,\\ \mbox{the root of } t'_0 \mbox{ has the same label as the root of } t_i \end{array} \}} \\ & = {\# \{ (t'_0,\ldots,t'_m) \in {\mathbb{F}_{m+1,n'}}: \mbox{for all } i \geq 1, \mbox{the root of } t'_0 \mbox{ has label } 0 \}}.\end{aligned}$$ First note that $$\begin{gathered} {\# {\left(}\{(T,l) \in {\mathbb{T}_{n}}(0): \min l \leq -k\}{\right)}} \sim_{n \rightarrow \infty} {\# {\mathbb{T}_{n}}(0)}.\end{gathered}$$ Moreover, it can be seen from the well-known cyclic lemma (see [@Pit_CSP]) that $$\begin{gathered} \label{E: Number of ltrees} {\# {\mathbb{T}_{n}}(0)} = \frac{3^n}{2n+1} \binom{2n+1}{n}\end{gathered}$$ and $$\begin{gathered} \label{E: Number of lforests} \mathcal{F}_{m,n} = \frac{3^n m}{2n+m} \binom{2n+m}{n}.\end{gathered}$$ Applying these formulas to our case gives $$\begin{gathered} \mathcal{F}_{m+1,n-(m+N)} = \frac{3^{n-(m+N)} (m+1)}{2n+1-(m+2N)} \binom{2n+1-(m+2N)}{n-(m+N)},\end{gathered}$$ and therefore $$\begin{gathered} P_n^{(k)} (m,{\underline{t}}) \sim_{n \rightarrow \infty} \frac{m+1}{3^{m+N}} \binom{2n+1}{n}^{-1} \binom{2n+1-(m+2N)}{n-(m+N)}.\end{gathered}$$ We now use Stirling’s formula to get an estimate of the binomial coefficients involved: $$\begin{gathered} \binom{2n+1}{n} \sim_{n \rightarrow \infty} \frac{2 \cdot 4^n}{\sqrt{\pi n}},\end{gathered}$$ and $$\begin{gathered} \binom{2n+1-(m+2N)}{n-(m+N)} \sim_{n \rightarrow \infty} \frac{4^n}{2^{m+2N-1} \sqrt{\pi n}}.\end{gathered}$$ Putting these together, we obtain $$\begin{gathered} P_n^{(k)} (m,{\underline{t}}) \sim_{n \rightarrow \infty} \frac{m+1}{3^{m+N}2^{m+2N}} = \frac{m+1}{6^m 12^N},\end{gathered}$$ so the local convergence implies that $$\begin{gathered} P_{\infty}^{(k)} (m,{\underline{t}}) = \frac{m+1}{6^m 12^N}.\end{gathered}$$ As a consequence, for all $m \in {\mathbb{N}}$, ${\underline{x}} \in \mathcal{M}^+_{m, 1 \rightarrow k}$, we have $$\begin{gathered} {\mathbb{P}\if\relax\relax(m_{\infty}=m, (X_{\infty,1}, \ldots, X_{\infty,m}) = {\underline{x}})\else\left(m_{\infty}=m, (X_{\infty,1}, \ldots, X_{\infty,m}) = {\underline{x}}\right)\fi} = \frac{m+1}{6^m} \prod_{i=1}^m {\left(}\sum_{n_i \geq 0} \frac{1}{12^{n_i}} {\# {\mathbb{T}_{n_i}}^+(x_i)} {\right)}.\end{gathered}$$ Recalling equation , we get $$\begin{gathered} {\mathbb{P}\if\relax\relax(m_{\infty}=m, (X_{\infty,1}, \ldots, X_{\infty,m}) = {\underline{x}})\else\left(m_{\infty}=m, (X_{\infty,1}, \ldots, X_{\infty,m}) = {\underline{x}}\right)\fi} = \frac{m+1}{6^m} \prod_{i=1}^m 2 w(x_i) = \frac{m+1}{3^m} \prod_{i=1}^m w(x_i).\end{gathered}$$ Furthermore, for all ${\underline{t}} = (t_1,\ldots,t_m) \in {\mathbb{F}_{m,N}}$ such that all the labels in ${\underline{t}}$ are positive, the conditional 
probability $$\begin{gathered} {\mathbb{P}\if\relax\relax((\tau_{\infty,1}, \ldots, \tau_{\infty,m}) = {\underline{t}}\ |\ m_{\infty}=m, (X_{\infty,1}, \ldots, X_{\infty,m}) = {\underline{x}})\else\left((\tau_{\infty,1}, \ldots, \tau_{\infty,m}) = {\underline{t}}\ |\ m_{\infty}=m, (X_{\infty,1}, \ldots, X_{\infty,m}) = {\underline{x}}\right)\fi}\end{gathered}$$ is equal to $$\begin{gathered} \frac{m+1}{6^m 12^N} \cdot {\left(}\frac{m+1}{3^m} \prod_{i=1}^m w(x_i){\right)}^{-1} = \prod_{i=1}^m \frac{1}{2 w(x_i) 12^{|t_i|}} = \prod_{i=1}^m {\rho_{(x_i)}}^+ (t_i),\end{gathered}$$ hence the conditional distribution of $(\tau_{\infty,1}, \ldots, \tau_{\infty,m_{\infty}})$. Finally, conditionally on $m_{\infty}=m$, $(X_{\infty,1}, \ldots, X_{\infty,m}) = {\underline{x}}$ and $(\tau_{\infty,1}, \ldots, \tau_{\infty,m}) = {\underline{t}} \in {\mathbb{F}_{m,N}}$, the trees $(\tau'_{n,0}, \ldots, \tau'_{n,m})$ form a uniform labeled forest with $m+1$ trees and $n-m-N$ edges, hence the distribution of the limit given in the statement. To get the limit of $\theta_{\infty}^{(k)}$, the main step will consist in showing that for any $r \in {\mathbb{N}}$, the labels $X_{\infty,1}^{(k)},\ldots,X_{\infty,r}^{(k)}$ converge in distribution to the $r$ first steps of the Markov chain $\tilde{X}$ started at $1$, as $k \rightarrow \infty$. For the moment, we show how to make $\tilde{X}$ appear in the above expression; the fact that it is indeed the limit is the purpose of Proposition \[T: First labels convergence\]. We first introduce the random walk $(\hat{X}_i)_{i \geq 0}$ with uniform random steps in $\{-1,0,1\}$. From now on, we also adopt the usual notation ${\mathbb{E}_{x}\if\relax\relax[\ \cdot\ ]\else\left[\ \cdot\ \right]\fi}$ for the conditional expectation ${\mathbb{E}\if\relax\relax[\ \cdot\ | \hat{X}_0 = x]\else\left[\ \cdot\ | \hat{X}_0 = x\right]\fi}$, for all $x$. The expression of the lemma implies that $$\begin{gathered} {\mathbb{P}\if\relax\relax(m_{\infty}=m)\else\left(m_{\infty}=m\right)\fi} = \frac{m+1}{3} {\mathbb{E}_{1}\if\relaxsz\relax[\prod_{i=0}^{m-1} w(\hat{X}_i) {\mathbf{1}_{\lbrace \hat{X}_{m-1}=k \rbrace}}]\else\left[\prod_{i=0}^{m-1} w(\hat{X}_i) {\mathbf{1}_{\lbrace \hat{X}_{m-1}=k \rbrace}}\right]\fi}.\end{gathered}$$ (Note that the term in the expectation is zero if we do not have $\hat{X}_i \geq 1$ for all $i \leq m-1$.) Let $f(x) = x(x+3)(2x+3)$ for all $x \in {\mathbb{R}}$, and $$\begin{gathered} M_j = \frac{f(\hat{X}_j)}{f(\hat{X}_0)} \prod_{i=0}^{j-1} w(\hat{X}_i) \qquad \forall j \geq 0.\end{gathered}$$ Under the assumption $\hat{X}_0 =1$, the process $(M_i)_{i \geq 0}$ is a martingale. Using this new process, we get $$\begin{aligned} {\mathbb{P}\if\relax\relax(m_{\infty}=m)\else\left(m_{\infty}=m\right)\fi} & = \frac{m+1}{3} {\mathbb{E}_{1}\if\relaxsz\relax[ \frac{f(1)w(k)}{f(k)} M_{m-1} {\mathbf{1}_{\lbrace \hat{X}_{m-1}=k \rbrace}}]\else\left[ \frac{f(1)w(k)}{f(k)} M_{m-1} {\mathbf{1}_{\lbrace \hat{X}_{m-1}=k \rbrace}}\right]\fi} \\ & = \frac{f(1)w(k)}{3f(k)} (m+1) {\mathbb{P}_{1}\if\relax\relax(\tilde{X}_{m-1}=k)\else\left(\tilde{X}_{m-1}=k\right)\fi},\end{aligned}$$ where $\tilde{X}$ is defined as the image of $\hat{X}$ under the measure-change given by the martingale $M$, i.e. the Markov process such that ${\mathbb{E}\if\relax\relax[\phi(\tilde{X}_i)]\else\left[\phi(\tilde{X}_i)\right]\fi} = {\mathbb{E}\if\relax\relax[M_i \phi(\hat{X}_i)]\else\left[M_i \phi(\hat{X}_i)\right]\fi}$ for every continuous bounded function $\phi$. 
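For completeness, the martingale property of $(M_i)_{i \geq 0}$, and hence the fact that this measure-change is well defined, reduces to an elementary polynomial identity; we record the computation here only for the reader's convenience, as it is not spelled out above. From the definitions of $f$ and $w$, $$\begin{gathered} f(x+1)+f(x)+f(x-1) = 6x^3+27x^2+39x+18 = 3(x+1)(x+2)(2x+3), \\ \text{so that} \quad \frac{w(x)\big(f(x+1)+f(x)+f(x-1)\big)}{3f(x)} = \frac{x(x+3) \cdot 3(x+1)(x+2)(2x+3)}{(x+1)(x+2) \cdot 3\,x(x+3)(2x+3)} = 1 \qquad \text{for every } x \geq 1.\end{gathered}$$ Since ${\mathbb{E}}\big[M_{j+1} \,\big|\, \hat{X}_0,\ldots,\hat{X}_j\big] = M_j \, w(\hat{X}_j)\big(f(\hat{X}_j+1)+f(\hat{X}_j)+f(\hat{X}_j-1)\big)\big/\big(3f(\hat{X}_j)\big)$ on the event $\{\hat{X}_j \geq 1\}$, and since both $M_j$ and $M_{j+1}$ vanish as soon as $\hat{X}$ hits $0$ (because $f(0)=w(0)=0$), this identity gives the martingale property under $\hat{X}_0=1$; it also shows that the transition probabilities of $\tilde{X}$ computed below indeed sum to $1$.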
Computing the transition probabilities of $\tilde{X}$ gives: $$\begin{gathered} p_x = {\mathbb{P}_{x}\if\relax\relax(\tilde{X_1} = x+1)\else\left(\tilde{X_1} = x+1\right)\fi} = \frac{f(x+1)w(x)}{3f(x)} \\ r_x = {\mathbb{P}_{x}\if\relax\relax(\tilde{X_1} = x)\else\left(\tilde{X_1} = x\right)\fi} = \frac{w(x)}{3} \\ q_x = {\mathbb{P}_{x}\if\relax\relax(\tilde{X_1} = x-1)\else\left(\tilde{X_1} = x-1\right)\fi} = \frac{f(x-1)w(x)}{3f(x)},\end{gathered}$$ hence the expressions given in the Introduction. Two useful quantities --------------------- To prove of the convergence of $\theta_{\infty}^{(k)}$, we will need estimates for the quantities $$\begin{gathered} {H_{x}(k)} = \sum_{m \geq 1} {\mathbb{P}_{x}\if\relax\relax(\tilde{X}_{m-1}=k)\else\left(\tilde{X}_{m-1}=k\right)\fi} \\ {H_{x}^{\ast}(k)} = \sum_{m \geq 1} (m+1) {\mathbb{P}_{x}\if\relax\relax(\tilde{X}_{m-1}=k)\else\left(\tilde{X}_{m-1}=k\right)\fi},\end{gathered}$$ depending on the values of $k, x \in {\mathbb{N}}$. In practice, these estimates are best obtained through explicit computation; the expressions we get are given in the two following lemmas. We use the notations $\tilde{T}_y = \inf \lbrace t \geq 1: \tilde{X}_t=y \rbrace$ and $h(y)= y(y+1)(y+2)(y+3)(2y+3)$, for all $y \in {\mathbb{N}}$. (In this section, we mainly work on Markov processes, and use the letters $t$ and $T$ for associated times instead of trees.) \[T: Values of SumFW\] Fix $k \geq 2$, $x \in {\mathbb{N}}$. We have the following equalities: - if $x \leq k$, $$\begin{gathered} {H_{x}(k)} = \frac{3}{10} (2k+3),\end{gathered}$$ - if $x>k$, $$\begin{gathered} {H_{x}(k)} = \frac{3 h(k)}{10 h(x)}(2k+3).\end{gathered}$$ Fix $x,k \in {\mathbb{N}}$. First note that we can write ${H_{x}(k)}$ as $$\begin{gathered} {H_{x}(k)} = \sum_{m \geq 0} {\mathbb{P}_{x}\if\relax\relax(\tilde{X}_m = k)\else\left(\tilde{X}_m = k\right)\fi} = {\mathbb{E}_{x}\if\relaxsz\relax[\sum_{m \geq 0} {\mathbf{1}_{\lbrace \tilde{X}_m=k \rbrace}}]\else\left[\sum_{m \geq 0} {\mathbf{1}_{\lbrace \tilde{X}_m=k \rbrace}}\right]\fi} = {\mathbf{1}_{\lbrace x=k \rbrace}} + {\mathbb{E}_{x}\if\relaxsz\relax[{\mathbf{1}_{\lbrace \tilde{T}_k < \infty \rbrace}} \sum_{m \geq 0} {\mathbf{1}_{\lbrace \tilde{X}_{\tilde{T}_k+m}=k \rbrace}}]\else\left[{\mathbf{1}_{\lbrace \tilde{T}_k < \infty \rbrace}} \sum_{m \geq 0} {\mathbf{1}_{\lbrace \tilde{X}_{\tilde{T}_k+m}=k \rbrace}}\right]\fi}.\end{gathered}$$ Now, applying the Markov property at the stopping time $\tilde{T}_k$ yields $$\begin{gathered} {H_{x}(k)} = {\mathbf{1}_{\lbrace x=k \rbrace}} + {\mathbb{P}_{x}\if\relax\relax(\tilde{T}_k < \infty)\else\left(\tilde{T}_k < \infty\right)\fi} {\mathbb{E}_{k}\if\relaxsz\relax[ \sum_{m \geq 0} {\mathbf{1}_{\lbrace \tilde{X}_m=k \rbrace}}]\else\left[ \sum_{m \geq 0} {\mathbf{1}_{\lbrace \tilde{X}_m=k \rbrace}}\right]\fi}.\end{gathered}$$ For all $y \geq 0$, let $$\begin{gathered} K_{y+1,j} = \frac{q_{y+1} \ldots q_{y+j}}{p_{y+1} \ldots p_{y+j}} = \prod_{z=y+1}^{y+j} \frac{f(z-1)}{f(z+1)} = \frac{f(y)f(y+1)}{f(y+j)f(y+j+1)}.\end{gathered}$$ Since $K_{1,j}$ is the general term of a converging series, the Markov chain $\tilde{X}$ is transient, and as a consequence, we have $$\begin{gathered} \label{E: SumFW_x(k) using T_k} {H_{x}(k)} = {\mathbf{1}_{\lbrace x=k \rbrace}} + \frac{{\mathbb{P}_{x}\if\relax\relax(\tilde{T}_k < \infty)\else\left(\tilde{T}_k < \infty\right)\fi}}{{\mathbb{P}_{k}\if\relax\relax(\tilde{T}_k = \infty)\else\left(\tilde{T}_k = \infty\right)\fi}}.\end{gathered}$$ To compute these quantities, it is enough 
to know the expression of ${\mathbb{P}_{y+1}\if\relax\relax(\tilde{T}_y = \infty)\else\left(\tilde{T}_y = \infty\right)\fi}$ for all $y \geq k$, which is a well-known property of birth-and-death processes: $$\begin{gathered} {\mathbb{P}_{y+1}\if\relax\relax(\tilde{T}_y = \infty)\else\left(\tilde{T}_y = \infty\right)\fi} = \frac{1}{\sum_{j \geq 0} K_{y+1,j}}\end{gathered}$$ Computing the sum $\sum_{j \geq 0} K_{y+1,j}$ yields $$\begin{gathered} \label{E: P(no return to y from y+1)} {\mathbb{P}_{y+1}\if\relax\relax(\tilde{T}_y = \infty)\else\left(\tilde{T}_y = \infty\right)\fi} = \frac{10 (y+2)}{(y+4)(2y+5)}.\end{gathered}$$ As a consequence, we get the following results: - If $x<k$, then $$\begin{gathered} {\mathbb{P}_{x}\if\relax\relax(\tilde{T}_k < \infty)\else\left(\tilde{T}_k < \infty\right)\fi} = 1.\end{gathered}$$ - If $x=k$, then $$\begin{gathered} {\mathbb{P}_{x}\if\relax\relax(\tilde{T}_k < \infty)\else\left(\tilde{T}_k < \infty\right)\fi} = 1-p_k {\mathbb{P}_{k+1}\if\relax\relax(\tilde{T}_k = \infty)\else\left(\tilde{T}_k = \infty\right)\fi} = \frac{6k-1}{3(2k+3)}.\end{gathered}$$ - If $x>k$, then $$\begin{gathered} {\mathbb{P}_{x}\if\relax\relax(\tilde{T}_k < \infty)\else\left(\tilde{T}_k < \infty\right)\fi} = \prod_{y=k}^{x-1} {\mathbb{P}_{y+1}\if\relax\relax(\tilde{T}_y < \infty)\else\left(\tilde{T}_y < \infty\right)\fi} = \frac{h(k)}{h(x)}.\end{gathered}$$ Together with , this completes the proof of the lemma. Note that the values we obtain can also be computed using the recurrence relations $$\begin{gathered} \label{E: Rec SumFW_1(k)} \left\lbrace \begin{array}{l} {H_{1}(1)} = 1 + r_1 {H_{1}(1)} + q_2 {H_{1}(2)} \\ {H_{1}(k)} = p_{k-1} {H_{1}(k-1)} + r_k {H_{1}(k)} + q_{k+1} {H_{1}(k+1)} \qquad \forall k \geq 2 \end{array} \right.\end{gathered}$$ and, for all $k \in {\mathbb{N}}$, $$\begin{gathered} \label{E: Rec SumFW_x(k) in x} \left\lbrace \begin{array}{l} {H_{1}(k)} = {\mathbf{1}_{\lbrace k=1 \rbrace}} + r_1 {H_{1}(k)} + p_1 {H_{2}(k)} \\ {H_{x}(k)} = {\mathbf{1}_{\lbrace k=x \rbrace}} + p_x {H_{x+1}(k)} + r_x {H_{x}(k)} + q_x {H_{x-1}(k)} \qquad \forall x \geq 2, \end{array} \right.\end{gathered}$$ which stem from the Markov property of $\tilde{X}$. Nevertheless, we would still have to go through part of the previous calculations to get the value of ${H_{1}(1)}$. In the proof of the following lemma, we will find it easier to use this approach. \[T: Values of SumFWStar\] Fix $k \geq 2$, $x \in {\mathbb{N}}$, and let $C_x = \frac{3}{14}((x+1)(x+2)-6)$. We have the following equalities: - if $x<k$, $$\begin{gathered} {H_{x}^{\ast}(k)} = \frac{3f(k)}{f(1)w(k)} - \frac{3 C_x}{10} (2k+3),\end{gathered}$$ - if $x \geq k$, $$\begin{gathered} {H_{x}^{\ast}(k)} = \frac{3f(k)}{f(1)w(k)} - \frac{3}{10} (2k+3){\left(}C_x + 1-\frac{h(k)}{h(x)}{\right)}.\end{gathered}$$ The first step of the proof consists in computing ${H_{1}^{\ast}(1)}$. We will then obtain ${H_{x}^{\ast}(k)}$ as the unique solution of recursive systems having this initial value. 
Note that since ${\mathbb{P}_{1}\if\relax\relax(\tilde{X}_0=1)\else\left(\tilde{X}_0=1\right)\fi}=1$, we have $$\begin{gathered} {H_{1}^{\ast}(1)} = 2 + \sum_{m \geq 1} (m+2) {\mathbb{P}_{1}\if\relax\relax(\tilde{X}_m = 1)\else\left(\tilde{X}_m = 1\right)\fi}.\end{gathered}$$ Let us rewrite the second term using the first return time in 1, as in the proof of the previous lemma: $$\begin{aligned} {H_{1}^{\ast}(1)} &= 2 + \sum_{t \geq 1} {\mathbb{P}_{1}\if\relax\relax(\tilde{T}_1 = t)\else\left(\tilde{T}_1 = t\right)\fi} \sum_{m \geq t} (m+2) {\mathbb{P}_{1}\if\relax\relax(\tilde{X}_m=1 \mid \tilde{T}_1 = t)\else\left(\tilde{X}_m=1 \mid \tilde{T}_1 = t\right)\fi} \\ &= 2 + \sum_{t \geq 1} {\mathbb{P}_{1}\if\relax\relax(\tilde{T}_1 = t)\else\left(\tilde{T}_1 = t\right)\fi} \sum_{m \geq 0} (m+t+2) {\mathbb{P}_{1}\if\relax\relax(\tilde{X}_m=1)\else\left(\tilde{X}_m=1\right)\fi} \\ &= 2 + {H_{1}^{\ast}(1)} \sum_{t \geq 1} {\mathbb{P}_{1}\if\relax\relax(\tilde{T}_1 = t)\else\left(\tilde{T}_1 = t\right)\fi} + {H_{1}(1)} \sum_{t \geq 1} t {\mathbb{P}_{1}\if\relax\relax(\tilde{T}_1 = t)\else\left(\tilde{T}_1 = t\right)\fi} \\ &= 2 + {H_{1}^{\ast}(1)} {\mathbb{P}_{1}\if\relax\relax(\tilde{T}_1 < \infty)\else\left(\tilde{T}_1 < \infty\right)\fi} + {H_{1}(1)} {\mathbb{E}_{1}\if\relaxsz\relax[\tilde{T}_1 {\mathbf{1}_{\lbrace \tilde{T}_1 < \infty \rbrace}}]\else\left[\tilde{T}_1 {\mathbf{1}_{\lbrace \tilde{T}_1 < \infty \rbrace}}\right]\fi}.\end{aligned}$$ Thus, we have $$\begin{gathered} {H_{1}^{\ast}(1)} = \frac{1}{{\mathbb{P}_{1}\if\relax\relax(\tilde{T_1} = \infty)\else\left(\tilde{T_1} = \infty\right)\fi}} {\left(}2 + \frac{3}{2} {\mathbb{E}_{1}\if\relax\relax[\tilde{T}_1 \mid \tilde{T}_1 < \infty]\else\left[\tilde{T}_1 \mid \tilde{T}_1 < \infty\right]\fi} {\mathbb{P}_{1}\if\relax\relax(\tilde{T}_1 < \infty)\else\left(\tilde{T}_1 < \infty\right)\fi} {\right)}.\end{gathered}$$ Using the value of ${\mathbb{P}_{1}\if\relax\relax(\tilde{T}_1 < \infty)\else\left(\tilde{T}_1 < \infty\right)\fi}$ obtained in the previous proof, we get $$\begin{gathered} {H_{1}^{\ast}(1)} = \frac{3}{2} {\left(}2 + \frac{1}{2} {\mathbb{E}_{1}\if\relax\relax[\tilde{T}_1 \mid \tilde{T}_1 < \infty]\else\left[\tilde{T}_1 \mid \tilde{T}_1 < \infty\right]\fi}{\right)}\label{E: SumFWStar_1(1)}.\end{gathered}$$ To work out the value of the above expectation, we study the process $\tilde{X}^{\ast}$ having the law of $\tilde{X}$ conditioned on returning to $1$ infinitely often. This process is a recurrent Markov chain whose transition probabilities can be computed explicitly. 
Indeed, letting $$\begin{gathered} p^{\ast}_x := {\mathbb{P}_{x}\if\relax\relax(\tilde{X}_1 = x+1 \mid \tilde{T}_1 < \infty)\else\left(\tilde{X}_1 = x+1 \mid \tilde{T}_1 < \infty\right)\fi} \\ r^{\ast}_x := {\mathbb{P}_{x}\if\relax\relax(\tilde{X}_1 = x \mid \tilde{T}_1 < \infty)\else\left(\tilde{X}_1 = x \mid \tilde{T}_1 < \infty\right)\fi} \\ q^{\ast}_x := {\mathbb{P}_{x}\if\relax\relax(\tilde{X}_1 = x-1 \mid \tilde{T}_1 < \infty)\else\left(\tilde{X}_1 = x-1 \mid \tilde{T}_1 < \infty\right)\fi},\end{gathered}$$ Bayes’ law yields $$\begin{aligned} p^{\ast}_x &= \frac{{\mathbb{P}_{x}\if\relax\relax(\tilde{X}_1 = x+1)\else\left(\tilde{X}_1 = x+1\right)\fi} {\mathbb{P}_{x}\if\relax\relax(\tilde{T_1} < \infty \mid \tilde{X}_1 = x+1)\else\left(\tilde{T_1} < \infty \mid \tilde{X}_1 = x+1\right)\fi}}{{\mathbb{P}_{x}\if\relax\relax(\tilde{T}_1 < \infty)\else\left(\tilde{T}_1 < \infty\right)\fi}} = \frac{p_x {\mathbb{P}_{x+1}\if\relax\relax(\tilde{T}_1 < \infty)\else\left(\tilde{T}_1 < \infty\right)\fi}}{{\mathbb{P}_{x}\if\relax\relax(\tilde{T}_1 < \infty)\else\left(\tilde{T}_1 < \infty\right)\fi}}, \\ r^{\ast}_x &= \frac{{\mathbb{P}_{x}\if\relax\relax(\tilde{X}_1 = x)\else\left(\tilde{X}_1 = x\right)\fi} {\mathbb{P}_{x}\if\relax\relax(\tilde{T_1} < \infty \mid \tilde{X}_1 = x)\else\left(\tilde{T_1} < \infty \mid \tilde{X}_1 = x\right)\fi}}{{\mathbb{P}_{x}\if\relax\relax(\tilde{T}_1 < \infty)\else\left(\tilde{T}_1 < \infty\right)\fi}} = \left\lbrace \begin{array}{l} r_x \mbox{ if } x \neq 1 \\ \frac{r_1}{{\mathbb{P}_{1}\if\relax\relax(\tilde{T}_1 < \infty)\else\left(\tilde{T}_1 < \infty\right)\fi}} \mbox{ if } x=1, \end{array} \right. \\ q^{\ast}_x &= \frac{{\mathbb{P}_{x}\if\relax\relax(\tilde{X}_1 = x-1)\else\left(\tilde{X}_1 = x-1\right)\fi} {\mathbb{P}_{x}\if\relax\relax(\tilde{T_1} < \infty \mid \tilde{X}_1 = x-1)\else\left(\tilde{T_1} < \infty \mid \tilde{X}_1 = x-1\right)\fi}}{{\mathbb{P}_{x}\if\relax\relax(\tilde{T}_1 < \infty)\else\left(\tilde{T}_1 < \infty\right)\fi}} = \left\lbrace \begin{array}{l} \frac{q_x {\mathbb{P}_{x-1}\if\relax\relax(\tilde{T}_1 < \infty)\else\left(\tilde{T}_1 < \infty\right)\fi}}{{\mathbb{P}_{x}\if\relax\relax(\tilde{T}_1 < \infty)\else\left(\tilde{T}_1 < \infty\right)\fi}} \mbox{ if } x \neq 2 \\ \frac{q_2}{{\mathbb{P}_{2}\if\relax\relax(\tilde{T}_1 < \infty)\else\left(\tilde{T}_1 < \infty\right)\fi}} \mbox{ if } x=1. \end{array} \right.\end{aligned}$$ Note that, for all $x \geq 2$, $$\begin{gathered} {\mathbb{P}_{x+1}\if\relax\relax(\tilde{T}_1 < \infty)\else\left(\tilde{T}_1 < \infty\right)\fi} = {\mathbb{P}_{x+1}\if\relax\relax(\tilde{T}_x < \infty)\else\left(\tilde{T}_x < \infty\right)\fi} {\mathbb{P}_{x}\if\relax\relax(\tilde{T}_1 < \infty)\else\left(\tilde{T}_1 < \infty\right)\fi},\end{gathered}$$ so we can again use equation . Finally, we get $p^{\ast}_1 = \frac{1}{3}$, $r^{\ast}_1 = \frac{2}{3}$, $q^{\ast}_1 = 0$, and for all $x \geq 2$ $$\begin{gathered} p^{\ast}_x = \frac{x}{3(x+2)} \\ r^{\ast}_x = \frac{x(x+3)}{3(x+1)(x+2)} \\ q^{\ast}_x = \frac{x+3}{3(x+1)}.\end{gathered}$$ To get the value of ${\mathbb{E}_{1}\if\relax\relax[\tilde{T}_1 \mid \tilde{T}_1 < \infty]\else\left[\tilde{T}_1 \mid \tilde{T}_1 < \infty\right]\fi}$, it is now enough to compute the invariant measure $\Pi$ of $\tilde{X}^{\ast}$. 
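Two brief remarks, added here only as a sanity check and not needed for the argument. First, these are indeed transition probabilities, since for every $x \geq 2$ $$\begin{gathered} p^{\ast}_x + r^{\ast}_x + q^{\ast}_x = \frac{x(x+1)+x(x+3)+(x+2)(x+3)}{3(x+1)(x+2)} = \frac{3(x+1)(x+2)}{3(x+1)(x+2)} = 1,\end{gathered}$$ and $p^{\ast}_1 + r^{\ast}_1 + q^{\ast}_1 = \frac{1}{3}+\frac{2}{3}+0 = 1$ as well. Second, the invariant measure $\Pi$ is relevant because, once it is normalized into a probability measure, the expected return time of the recurrent chain $\tilde{X}^{\ast}$ to the state $1$ equals $1/\Pi(1)$ (Kac's formula); this is how the value of ${\mathbb{E}}_1[\tilde{T}_1 \mid \tilde{T}_1 < \infty]$ is obtained below.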
We do so by using reversibility: the detailed balanced equation $\Pi(x) p^{\ast}_x = \Pi(x+1) q^{\ast}_{x+1}$ implies $$\begin{gathered} \frac{\Pi(x+1)}{\Pi(x)} = \left\lbrace \begin{array}{l} \frac{x}{x+4} \mbox{ if } x \geq 2 \\ \frac{3}{5} \mbox{ if } x = 1. \end{array}\right.\end{gathered}$$ As a consequence, $$\begin{gathered} \sum_{x \geq 1} \Pi(x) = \Pi(1) {\left(}1 + \frac{3}{5} \sum_{x \geq 2} \frac{2 \times 3 \times 4 \times 5}{x(x+1)(x+2)(x+3)} {\right)}= \Pi(1) {\left(}1 + \frac{3}{5} \times 120 \times \frac{1}{72} {\right)},\end{gathered}$$ so $\Pi$ is a probability measure if and only if $\Pi(1) = \frac{1}{2}$. This implies $$\begin{gathered} {\mathbb{E}_{1}\if\relax\relax[\tilde{T}_1 \mid \tilde{T}_1 < \infty]\else\left[\tilde{T}_1 \mid \tilde{T}_1 < \infty\right]\fi} = \frac{1}{\Pi(1)} = 2.\end{gathered}$$ Injecting this value into gives ${H_{1}^{\ast}(1)} = \frac{9}{2}$. For the second step, we keep $x=1$, and compute the values of ${H_{1}^{\ast}(k)}$ for $k \in {\mathbb{N}}$. As above, we first shift indices and set the first term aside: $$\begin{gathered} {H_{1}^{\ast}(k)} = 2 {\mathbf{1}_{\lbrace k=1 \rbrace}} + \sum_{m \geq 0} (m+3) {\mathbb{P}_{1}\if\relax\relax(\tilde{X}_{m+1}=k)\else\left(\tilde{X}_{m+1}=k\right)\fi}.\end{gathered}$$ Applying the Markov property at time $m$ in each of the terms gives the following recurrence relations: - For $k=1$, $$\begin{aligned} {H_{1}^{\ast}(1)} &= 2 + \sum_{m \geq 0} (m+3) {\left(}r_1 {\mathbb{P}_{1}\if\relax\relax(\tilde{X}_m=1)\else\left(\tilde{X}_m=1\right)\fi} + q_2 {\mathbb{P}_{1}\if\relax\relax(\tilde{X}_m=2)\else\left(\tilde{X}_m=2\right)\fi}{\right)}\\ &= 2 + r_1 {H_{1}^{\ast}(1)} + q_2 {H_{1}^{\ast}(2)} + {H_{1}(1)}-1.\end{aligned}$$ - For all $k \geq 2$, $$\begin{aligned} {H_{1}^{\ast}(k)} &= \sum_{m \geq 0} (m+3) {\left(}p_{k-1}{\mathbb{P}_{1}\if\relax\relax(\tilde{X}_m=k-1)\else\left(\tilde{X}_m=k-1\right)\fi} + r_k{\mathbb{P}_{1}\if\relax\relax(\tilde{X}_m=k)\else\left(\tilde{X}_m=k\right)\fi} + q_{k+1}{\mathbb{P}_{1}\if\relax\relax(\tilde{X}_m=k+1)\else\left(\tilde{X}_m=k+1\right)\fi}{\right)}\\ &= p_{k-1} ({H_{1}^{\ast}(k-1)} + {H_{1}(k-1)}) + r_k ({H_{1}^{\ast}(k)} + {H_{1}(k)}) + q_{k+1} ({H_{1}^{\ast}(k+1)} + {H_{1}(k+1)}) \\ &= p_{k-1} {H_{1}^{\ast}(k-1)} + r_k {H_{1}^{\ast}(k)} + q_{k+1} {H_{1}^{\ast}(k+1)} + {H_{1}(k)}.\end{aligned}$$ (Note that we have used implicitly the fact that ${H_{1}(k)}$ verifies the similar system ). Using the values obtained in Lemma \[T: Values of SumFW\], we get the recursive system $$\begin{gathered} \left\lbrace \begin{array}{l} {H_{1}^{\ast}(1)} = \frac{9}{2} \\ {H_{1}^{\ast}(2)} = \frac{1}{q_2} {\left(}(1-r_1){H_{1}^{\ast}(1)} - \frac{5}{2} {\right)}\\ {H_{1}^{\ast}(k+1)} = \frac{1}{q_{k+1}} {\left(}(1-r_k){H_{1}^{\ast}(k)} - p_{k-1} {H_{1}^{\ast}(k-1)} - \frac{3}{10} (2k+3) {\right)}. \end{array} \right.\end{gathered}$$ It is now easy to check that $\frac{3 f(k)}{f(1)w(k)}$ is also a solution of this system, and therefore is equal to ${H_{1}^{\ast}(k)}$. In the third and last step, we fix the value of $k$, and write recurrence relations for ${H_{x}^{\ast}(k)}$, $x \in {\mathbb{N}}$. 
To this end, we again use the Markov property, but at time 1 (with the convention that ${H_{0}^{\ast}(k)}=0$, to keep the setting general): $$\begin{aligned} {H_{x}^{\ast}(k)} &= 2 {\mathbf{1}_{\lbrace x=k \rbrace}} + \sum_{m \geq 0} (m+3) {\mathbb{P}_{x}\if\relax\relax(\tilde{X}_{m+1}=k)\else\left(\tilde{X}_{m+1}=k\right)\fi} \\ &= 2 {\mathbf{1}_{\lbrace x=k \rbrace}} + \sum_{m \geq 0} (m+3) (p_x {\mathbb{P}_{x+1}\if\relax\relax(\tilde{X}_m=k)\else\left(\tilde{X}_m=k\right)\fi} + r_x {\mathbb{P}_{x}\if\relax\relax(\tilde{X}_{m+1}=k)\else\left(\tilde{X}_{m+1}=k\right)\fi} + q_x {\mathbb{P}_{x-1}\if\relax\relax(\tilde{X}_m=k)\else\left(\tilde{X}_m=k\right)\fi}) \\ &= 2 {\mathbf{1}_{\lbrace x=k \rbrace}} + p_x {H_{x+1}^{\ast}(k)} + r_x {H_{x}^{\ast}(k)} + q_x {H_{x-1}^{\ast}(k)} + p_x {H_{x+1}(k)} + r_x {H_{x}(k)} + q_x {H_{x-1}(k)} \\ &= {\mathbf{1}_{\lbrace x=k \rbrace}} + p_x {H_{x+1}^{\ast}(k)} + r_x {H_{x}^{\ast}(k)} + q_x {H_{x-1}^{\ast}(k)} + {H_{x}(k)}.\end{aligned}$$ This gives the system $$\begin{gathered} \left\lbrace \begin{array}{l} {H_{1}^{\ast}(k)} = \frac{3 f(k)}{f(1)w(k)} \\ {H_{x+1}^{\ast}(k)} = \frac{1}{p_x} {\left(}(1-r_x){H_{x}^{\ast}(k)} - q_x {H_{x-1}^{\ast}(k)} - {H_{x}(k)} - {\mathbf{1}_{\lbrace x=k \rbrace}}{\right)}. \end{array} \right.\end{gathered}$$ We first solve these equations for $x < k$, so that the last term is zero. The solution is of the form given in the lemma if and only if $C_x$ is such that $$\begin{gathered} \left\lbrace \begin{array}{l} C_0 = C_1 = 0 \\ C_{x+1} = \frac{1}{p_x} ((1-r_x) C_x - q_x C_{x-1} + 1). \end{array} \right.\end{gathered}$$ This is indeed the case for $C_x = \frac{3}{14}((x+1)(x+2)-6)$. Now, for $x \geq k$, we seek a solution of the form $$\begin{gathered} {H_{x}^{\ast}(k)} = \frac{3f(k)}{f(1)w(k)} - \frac{3 C_x}{10} (2k+3) - C'_{k,x}.\end{gathered}$$ The recursive system can be translated into $C'_{k,k} = 0$, $C'_{k,k+1} = \frac{1}{p_x}$ and $$\begin{gathered} C'_{k,x+1} = \frac{1}{p_x} ((1-r_x)C'_{k,x} - q_x C'_{k,x-1}),\end{gathered}$$ or equivalently $$\begin{gathered} p_x(C'_{k,x+1}-C'_{k,x}) = q_x (C'_{k,x}-C'_{k,x-1}).\end{gathered}$$ Thus, for $x\geq k+1$, we get $$\begin{aligned} C'_{k,x} &= \sum_{y=k}^{x-1} \frac{q_{k+1} \ldots q_y}{p_{k+1} \ldots p_y} \frac{1}{p_k} \\ &= \sum_{y=k}^{x-1} \frac{f(k)f(k+1)}{f(y)f(y+1)} \frac{1}{p_k} \\ &= \frac{f(k)f(k+1)}{p_k} \frac{h(x)-h(k)}{10 h(k)h(x)}.\end{aligned}$$ Using the expressions of $f$, $h$ and $p_k$, we conclude that $$\begin{aligned} &= \frac{(k+4)(2k+5)}{10 (k+2) p_k} {\left(}1-\frac{h(k)}{h(x)} {\right)}\\ &= \frac{3(2k+3)}{10} {\left(}1-\frac{h(k)}{h(x)} {\right)}.\end{aligned}$$ This ends the proof. Proof of the convergence ------------------------ We are now ready to give the proof of the convergence of $\theta_{\infty}^{(k)}$. We begin with the convergence of the labels $(X_{\infty,i}^{(k)})$ towards the Markov chain $\tilde{X}$. \[T: First labels convergence\] Fix $r \in {\mathbb{N}}$. For any continuous bounded function $F$ from ${\mathbb{R}}^r$ into ${\mathbb{R}}$, we have $$\begin{gathered} {\mathbb{E}\if\relax\relax[F(X_{\infty,1}^{(k)},\ldots,X_{\infty,r}^{(k)})]\else\left[F(X_{\infty,1}^{(k)},\ldots,X_{\infty,r}^{(k)})\right]\fi} {\xrightarrow[k \rightarrow \infty]{}} {\mathbb{E}_{1}\if\relax\relax[F(\tilde{X}_0,\ldots,\tilde{X}_{r-1})]\else\left[F(\tilde{X}_0,\ldots,\tilde{X}_{r-1})\right]\fi}.\end{gathered}$$ Let $k \geq r$. 
The computations of Section \[S: First dist eqns\] show that $$\begin{gathered} {\mathbb{E}\if\relax\relax[F(X_{\infty,1}^{(k)},\ldots,X_{\infty,r}^{(k)})]\else\left[F(X_{\infty,1}^{(k)},\ldots,X_{\infty,r}^{(k)})\right]\fi} = \sum_{m \geq 1} \frac{m+1}{3} {\mathbb{E}_{1}\if\relaxsz\relax[F(\hat{X}_0,\ldots,\hat{X}_{r-1}) {\mathbf{1}_{\lbrace \hat{X}_{m-1}=k \rbrace}} M_{m-1} \frac{f(1)w(k)}{f(k)}]\else\left[F(\hat{X}_0,\ldots,\hat{X}_{r-1}) {\mathbf{1}_{\lbrace \hat{X}_{m-1}=k \rbrace}} M_{m-1} \frac{f(1)w(k)}{f(k)}\right]\fi}.\end{gathered}$$ Since $k \geq r$, the term ${\mathbf{1}_{\lbrace \hat{X}_{m-1}=k \rbrace}}$ is zero for $m < r$. Applying the Markov property allows us to write ${\mathbb{E}\if\relax\relax[F(X_{\infty,1}^{(k)},\ldots,X_{\infty,r}^{(k)})]\else\left[F(X_{\infty,1}^{(k)},\ldots,X_{\infty,r}^{(k)})\right]\fi}$ as $$\begin{gathered} \sum_{m \geq r} \frac{m+1}{3} {\mathbb{E}_{1}\if\relaxsz\relax[F(\hat{X}_0,\ldots,\hat{X}_{r-1}) \frac{f(1)}{f(\hat{X}_{r-1})} M_{r-1} {\mathbb{E}_{\hat{X}_{r-1}}\if\relaxsz\relax[{\mathbf{1}_{\lbrace \hat{X}'_{m-r}=k \rbrace}} M'_{m-r} \frac{f(\hat{X}_{r-1})w(k)}{f(k)}]\else\left[{\mathbf{1}_{\lbrace \hat{X}'_{m-r}=k \rbrace}} M'_{m-r} \frac{f(\hat{X}_{r-1})w(k)}{f(k)}\right]\fi}]\else\left[F(\hat{X}_0,\ldots,\hat{X}_{r-1}) \frac{f(1)}{f(\hat{X}_{r-1})} M_{r-1} {\mathbb{E}_{\hat{X}_{r-1}}\if\relaxsz\relax[{\mathbf{1}_{\lbrace \hat{X}'_{m-r}=k \rbrace}} M'_{m-r} \frac{f(\hat{X}_{r-1})w(k)}{f(k)}]\else\left[{\mathbf{1}_{\lbrace \hat{X}'_{m-r}=k \rbrace}} M'_{m-r} \frac{f(\hat{X}_{r-1})w(k)}{f(k)}\right]\fi}\right]\fi},\end{gathered}$$ where $\hat{X}'$ is an independent copy of the process $\hat{X}$, and for all $j \in {\mathbb{N}}$ $$\begin{gathered} M'_j = \frac{f(\hat{X}'_j)}{f(\hat{X}'_0)} \prod_{i=0}^{j-1} w(\hat{X}'_i).\end{gathered}$$ Therefore, ${\mathbb{E}\if\relax\relax[F(X_{\infty,1}^{(k)},\ldots,X_{\infty,r}^{(k)})]\else\left[F(X_{\infty,1}^{(k)},\ldots,X_{\infty,r}^{(k)})\right]\fi}$ is equal to $$\begin{gathered} {\mathbb{E}_{1}\if\relaxsz\relax[M_{r-1} F(\hat{X}_0,\ldots,\hat{X}_{r-1}) \frac{f(1) w(k)}{3 f(k)} \sum_{m \geq 0} (m+r+1) {\mathbb{E}_{\hat{X}_{r-1}}\if\relaxsz\relax[{\mathbf{1}_{\lbrace \hat{X}'_m=k \rbrace}} M'_m]\else\left[{\mathbf{1}_{\lbrace \hat{X}'_m=k \rbrace}} M'_m\right]\fi}]\else\left[M_{r-1} F(\hat{X}_0,\ldots,\hat{X}_{r-1}) \frac{f(1) w(k)}{3 f(k)} \sum_{m \geq 0} (m+r+1) {\mathbb{E}_{\hat{X}_{r-1}}\if\relaxsz\relax[{\mathbf{1}_{\lbrace \hat{X}'_m=k \rbrace}} M'_m]\else\left[{\mathbf{1}_{\lbrace \hat{X}'_m=k \rbrace}} M'_m\right]\fi}\right]\fi}.\end{gathered}$$ Since $\hat{X}_{r-1} \leq r$ ${\mbox{a.s.}}$, we now have to estimate the sum $\sum_{m \geq 0} (m+r+1) {\mathbb{E}_{x}\if\relaxsz\relax[{\mathbf{1}_{\lbrace \hat{X}'_m=k \rbrace}} M'_m]\else\left[{\mathbf{1}_{\lbrace \hat{X}'_m=k \rbrace}} M'_m\right]\fi}$, for all $x \leq r$. 
We first express this quantity using ${H_{x}^{\ast}(k)}$ and ${H_{x}(k)}$: $$\begin{aligned} \sum_{m \geq 0} (m+r+1) {\mathbb{E}_{x}\if\relaxsz\relax[{\mathbf{1}_{\lbrace \hat{X}'_m=k \rbrace}} M'_m]\else\left[{\mathbf{1}_{\lbrace \hat{X}'_m=k \rbrace}} M'_m\right]\fi} &= \sum_{m \geq 0} (m+r+1) {\mathbb{P}_{x}\if\relax\relax(\tilde{X}'_m=k)\else\left(\tilde{X}'_m=k\right)\fi} \\ &= {H_{x}^{\ast}(k)} + (r-1) {H_{x}(k)}.\end{aligned}$$ Now, the results of Lemmas \[T: Values of SumFW\] and \[T: Values of SumFWStar\] yield $$\begin{gathered} \sum_{m \geq 0} (m+r+1) {\mathbb{E}_{x}\if\relaxsz\relax[{\mathbf{1}_{\lbrace \hat{X}'_m=k \rbrace}} M'_m]\else\left[{\mathbf{1}_{\lbrace \hat{X}'_m=k \rbrace}} M'_m\right]\fi} = \frac{3f(k)}{f(1)w(k)} + (r-1-C_x) \frac{3}{10} (2k+3).\end{gathered}$$ As a consequence, we have $$\begin{gathered} \frac{f(1) w(k)}{3 f(k)} \sum_{m \geq 0} (m+r+1) {\mathbb{E}_{x}\if\relaxsz\relax[{\mathbf{1}_{\lbrace \hat{X}'_m=k \rbrace}} M'_m]\else\left[{\mathbf{1}_{\lbrace \hat{X}'_m=k \rbrace}} M'_m\right]\fi} {\xrightarrow[k \rightarrow \infty]{}} 1\end{gathered}$$ uniformly in $x \leq r$, hence the result. Note that we have only used part of the results of Lemmas \[T: Values of SumFW\] and \[T: Values of SumFWStar\] (namely, the case where $x \leq k$). The remaining expressions will play a role in the proof of the joint convergence. The convergence of $\theta_{\infty}^{(k)}$ towards ${\overrightarrow{\theta_{\infty}}}$ can now be obtained by putting together the results of Proposition \[T: Distribution of theta(infty,k)\] and Proposition \[T: First labels convergence\]. Indeed, letting $I^{(k)}$ denote the unique index $i$ such that $\tau'_{\infty,i}$ is infinite, conditionally on $I^{(k)} \geq r$, we have that: - The points ${\mathfrak{s}}_i(\theta_{\infty}^{(k)})$ and ${\mathrm{x}}_{\infty,i}^{(k)}$ are the same for all $i \leq r$, hence the equalities $R_i(\theta_{\infty}^{(k)}) = \tau_{\infty,i}^{(k)}$ and $L_i(\theta_{\infty}^{(k)}) = (\tau'_{\infty,i})^{(k)}$ for all $i < r$. - As a consequence, $(S_i(\theta_{\infty}^{(k)}))_{1 \leq i \leq r}$ converges in distribution to $(\tilde{X}_i)_{0 \leq i \leq r-1}$ for $\tilde{X}_0=1$. - Conditionally on $(S_i(\theta_{\infty}^{(k)}))_{0 \leq i < r}$, the subtrees $L_i(\theta_{\infty}^{(k)})$, $0 \leq i < r$ and $R_i(\theta_{\infty}^{(k)})$, $1 \leq i < r$ are independent random variables, with respective distributions ${\rho_{(S_i(\theta_{\infty}^{(k)}))}}$ and ${\rho_{(S_i(\theta_{\infty}^{(k)}))}}^+$. Since $$\begin{gathered} {\mathbb{P}\if\relax\relax(I^{(k)} < r)\else\left(I^{(k)} < r\right)\fi} = {\mathbb{E}\if\relaxsz\relax[\frac{r}{m_{\infty}^{(k)}+1}]\else\left[\frac{r}{m_{\infty}^{(k)}+1}\right]\fi} \leq \frac{r}{k+1} {\xrightarrow[k \rightarrow \infty]{}} 0,\end{gathered}$$ this gives the desired convergence. Joint convergence of $(\theta_{\infty}^{(k)},\theta_{\infty}^{(-k+1)})$ {#S: Joint CV of the trees} ======================================================================= Explicit expressions for the joint distribution {#S: Bi-marked theta(n)} ----------------------------------------------- As in the previous section, we first fix $k$, and use the convergence of $(\theta_n^{(k)},\theta_n^{(-k+1)})$ to study $(\theta_{\infty}^{(k)},\theta_{\infty}^{(-k+1)})$. Let $n \in {\mathbb{N}}\cup \{\infty\}$. We introduce some new notation, summed-up in Figure \[F: Notation on theta\_n bi-rerooted\]. 
To simplify what follows, we write ${\mathrm{e}}_0$, ${\mathrm{e}}_k$, ${\mathrm{e}}_{-k+1}$ and ${\mathrm{v}}_k$ instead of ${\mathrm{e}}_0(\theta_n)$, ${\mathrm{e}}_k(\theta_n)$, ${\mathrm{e}}_{-k+1}(\theta_n)$ and ${\mathrm{v}}_k(\theta_n)$. We first deal with the branches between ${\mathrm{e}}_0$, ${\mathrm{e}}_k$ and ${\mathrm{e}}_{-k+1}$. Let $a_n = d_n ({\mathrm{v}}_k, {\mathrm{e}}_k)$, $b_n = d_n ({\mathrm{v}}_k, {\mathrm{e}}_{-k+1})$ and $c_n = d_n ({\mathrm{e}}_0, {\mathrm{v}}_k)$, where $d_n$ denotes the graph-distance on $\theta_n$. Let ${\mathrm{x}}_{n,0}, \ldots, {\mathrm{x}}_{n,a_n}$ be the vertices on the path from ${\mathrm{e}}_k$ to ${\mathrm{v}}_k$, ${\mathrm{y}}_{n,0}, \ldots, {\mathrm{y}}_{n,b_n}$ the ones on the path from ${\mathrm{e}}_{-k+1}$ to ${\mathrm{v}}_k$, and ${\mathrm{z}}_{n,0}, \ldots, {\mathrm{z}}_{n,c_n}$ the ones on the path from ${\mathrm{v}}_k$ to ${\mathrm{e}}_0$. For the corresponding labels, we use capital letters: $X_{n,i} = l_n^{(k)} ({\mathrm{x}}_{n,i})$, $Y_{n,i} = l_n^{(k)} ({\mathrm{y}}_{n,i})$ and $Z_{n,i} = l_n^{(k)} ({\mathrm{z}}_{n,i})$ for all $i$. We now add notation for the subtrees which are grafted on these branches. Again, we use the orders $\prec$ and $<$ on the vertices of $\theta_n$ in these definitions, even if we think of these trees as subtrees of $\theta_n^{(k)}$ (in particular, they inherit the labels $l_n^{(k)}$). - For all $i \in \{1, \ldots, a_n+c_n\}$, let $\tau_{n,i}$ be the subtree containing the vertices ${\mathrm{v}}$ such that : - if $i \leq a_n$, then ${\mathrm{x}}_{n,i} \leq {\mathrm{v}} < {\mathrm{x}}_{n,i-1}$, - if $a_n+1 \leq i \leq a_n+c_n$, then ${\mathrm{z}}_{n,i-a_n} \leq {\mathrm{v}} < {\mathrm{z}}_{n,i-a_n-1}$. - For all $i \in \{1,\ldots, b_n+c_n\}$, let $\tau'_{n,i}$ be the subtree containing the vertices ${\mathrm{v}}$ such that : - if $i \leq b_n$, either ${\mathrm{v}}={\mathrm{y}}_{n,i}$, or ${\mathrm{y}}_{n,i} \prec {\mathrm{v}}$, ${\mathrm{y}}_{n,i-1} < {\mathrm{v}}$ and ${\mathrm{y}}_{n,i-1} \nprec {\mathrm{v}}$, - if $b_n+1 \leq i \leq b_n + c_n$, either ${\mathrm{v}}={\mathrm{z}}_{n,i-b_n}$, or ${\mathrm{z}}_{n,i-b_n} \prec {\mathrm{v}}$, ${\mathrm{z}}_{n,i-b_n-1} < {\mathrm{v}}$ and ${\mathrm{z}}_{n,i-b_n-1} \nprec {\mathrm{v}}$. - For all $i \in \{0,\ldots,a_n\}$, let ${\overline{\tau}}_{n,i}$ be the subtree containing the vertices ${\mathrm{v}}$ such that : - if $i=0$, then ${\mathrm{x}}_0 \preceq {\mathrm{v}}$, - otherwise, either ${\mathrm{v}}={\mathrm{x}}_{n,i}$, or ${\mathrm{x}}_{n,i} \prec {\mathrm{v}}$, ${\mathrm{x}}_{n,i-1} < {\mathrm{v}}$ and ${\mathrm{x}}_{n,i-1} \nprec {\mathrm{v}}$. - For all $i \in \{0,\ldots,b_n-1\}$, let ${\overline{\tau}}'_{n,i}$ be the subtree containing the vertices ${\mathrm{v}}$ such that : - if $i = 0$, then ${\mathrm{y}}_0 \preceq {\mathrm{v}}$. - otherwise, ${\mathrm{y}}_{n,i} \leq {\mathrm{v}} < {\mathrm{y}}_{n,i-1}$, As in section \[S: First dist eqns\], for all these variables, there should be an exponent $^{(k)}$ in the notation, but we omit this precision as long as $k$ remains constant. 
![Notation for the vertices and subtrees of $\theta_n$, with distinguished points ${\mathrm{e}}_k$ and ${\mathrm{e}}_{-k+1}$.[]{data-label="F: Notation on theta_n bi-rerooted"}](BiReRootedTreeTheta_n,k.pdf) Fix $a,b,c,N,N' \geq 0$, ${\underline{t}} = (t_1,\ldots,t_{a+c}) \in {\mathbb{F}_{a+c,N}}$ and ${\underline{t'}} = (t'_1,\ldots,t'_{b+c}) \in {\mathbb{F}_{b+c,N'}}$ such that: - the root of $t_1$ has label $1$, and all labels in ${\underline{t}}$ are positive, - the root of $t'_1$ has label $2$, and all labels in ${\underline{t'}}$ are greater than $1$, - for all $i \leq c$, the labels of the roots of $t_{a+c-i}$ and $t'_{b+c-i}$ are the same. Let $$\begin{gathered} P_n^{(k)} (a,b,c,{\underline{t}},{\underline{t'}}) = {\mathbb{P}\if\relax\relax(a_n=a,b_n=b,c_n=c,(\tau_1, \ldots, \tau_{a+c})={\underline{t}}, (\tau'_1, \ldots, \tau'_{b+c})={\underline{t'}})\else\left(a_n=a,b_n=b,c_n=c,(\tau_1, \ldots, \tau_{a+c})={\underline{t}}, (\tau'_1, \ldots, \tau'_{b+c})={\underline{t'}}\right)\fi}.\end{gathered}$$ We are once again interested in the behaviour of $P_n^{(k)} (a,b,c,{\underline{t}},{\underline{t'}})$ as $n \rightarrow \infty$, for fixed $k$. Using the above notation, we have $$\begin{gathered} P_n^{(k)} (a,b,c,{\underline{t}},{\underline{t'}}) \sim_{n \rightarrow \infty} \frac{a+b+1}{6^{a+b} 12^{c+N+N'}}.\end{gathered}$$ Recall that $\mathcal{F}_{m,n'}$ denotes the number of well-labeled forests with $m$ trees, $n'$ edges and prescribed root labels. Since $\theta_n$ is uniform in ${\mathbb{T}_{n}}(0)$, we have $$\begin{gathered} P_n^{(k)} (a,b,c,{\underline{t}},{\underline{t'}}) = \frac{\mathcal{F}_{a+b+1,n-(a+b+c+N+N')}}{{\# \{(T,l) \in {\mathbb{T}_{n}}(0): \min_{V(T)} l \leq -k\}}}.\end{gathered}$$ Using equations and yields $$\begin{aligned} \mathcal{F}_{a+b+1,n-(a+b+c+N+N')} = &\frac{3^{n-(a+b+c+N+N')} (a+b+1)}{2n+1-(a+b+2c+2N+2N')} \\ & \qquad \binom{2n+1-(a+b+2c+2N+2N')}{n-(a+b+c+N+N')},\end{aligned}$$ and $$\begin{gathered} P_n^{(k)} (a,b,c,{\underline{t}},{\underline{t'}}) \sim_{n \rightarrow \infty} \frac{a+b+1}{3^{a+b+c+N+N'}} \binom{2n+1}{n}^{-1} \binom{2n+1-(a+b+2c+2N+2N')}{n-(a+b+c+N+N')}.\end{gathered}$$ Stirling’s formula now gives $$\begin{gathered} P_n^{(k)} (a,b,c,{\underline{t}},{\underline{t'}}) \sim_{n \rightarrow \infty} \frac{a+b+1}{3^{a+b+c+N+N'}2^{a+b+2c+2N+2N'}},\end{gathered}$$ hence the lemma. Recall that for all $m,x,x' \in {\mathbb{N}}$, ${\mathbb{T}_{m}}^+(x)$ is the set of the labeled trees $(T,l) \in {\mathbb{T}_{m}}$ such that $l>0$ and the root of $T$ has label $x$, and $\mathcal{M}^+_{m, x \rightarrow x'}$ is the set of the walks $(x_0, \ldots, x_m) \in {\mathbb{N}}^{m+1}$ such that $x_0=x$, $x_m = x'$ and for all $i \leq m-1$, ${\left|x_{i+1} - x_i\right|} \leq 1$. Similarly, we let ${\mathbb{T}_{m}}^{>1}(x)$ be the set of the labeled trees $(T,l) \in {\mathbb{T}_{m}}^+(x)$ such that $l>1$, and $\mathcal{M}^{>1}_{m, x \rightarrow x'}$ be the set of the walks $(x_0, \ldots, x_m) \in \mathcal{M}^+_{m, x \rightarrow x'}$ such that $x_0, \ldots, x_m >1$. Also recall that $\mu_{(x_0,\ldots,x_m)}$ denotes the distribution of a “uniform infinite” forest with root labels $x_0,\ldots,x_m$.
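For the reader’s convenience, the asymptotic step invoked in the proof above (“Stirling’s formula now gives”) reduces to the following elementary estimate, valid for fixed exponents, where we write $i = a+b+c+N+N'$ and $j = a+b+2c+2N+2N'$ as shorthand: $$\begin{gathered} \frac{\binom{2n+1-j}{n-i}}{\binom{2n+1}{n}} = \frac{(2n+1-j)!}{(2n+1)!} \cdot \frac{n!}{(n-i)!} \cdot \frac{(n+1)!}{(n+1+i-j)!} \sim_{n \rightarrow \infty} (2n)^{-j}\, n^{i}\, n^{j-i} = 2^{-j},\end{gathered}$$ together with the identity $3^{a+b+c+N+N'}\, 2^{a+b+2c+2N+2N'} = 6^{a+b}\, 12^{c+N+N'}$.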
For all $a,b,c,k' \geq 1$, ${\underline{x}} \in \mathcal{M}_{a, 1\rightarrow k'}^+$, ${\underline{y}} \in \mathcal{M}_{b, 2\rightarrow k'}^{>1}$, ${\underline{z}} \in \mathcal{M}_{c+1, k'\rightarrow k}^{>1}$, let $A_{\infty}^{(k)}(a,b,c,{\underline{x}},{\underline{y}},{\underline{z}})$ denote the event: $$\begin{gathered} a_{\infty}^{(k)}=a, b_{\infty}^{(k)}=b, c_{\infty}^{(k)}=c, \quad (X_{\infty,1}^{(k)},\ldots,X_{\infty,a}^{(k)})={\underline{x}}, \\ (Y_{\infty,1}^{(k)},\ldots,Y_{\infty,b}^{(k)})={\underline{y}}, \quad \quad (Z_{\infty,0}^{(k)},\ldots,Z_{\infty,c}^{(k)})={\underline{z}}.\end{gathered}$$ \[T: Distribution of theta(infty,k) with two marked points\] For all $a,b,c,k' \geq 1$, ${\underline{x}} \in \mathcal{M}_{a, 1\rightarrow k'}^+$, ${\underline{y}} \in \mathcal{M}_{b, 2\rightarrow k'}^{>1}$, ${\underline{z}} \in \mathcal{M}_{c+1, k'\rightarrow k}^{>1}$, we have $$\begin{gathered} {\mathbb{P}\if\relaxsz\relax(A_{\infty}^{(k)} (a,b,c,{\underline{x}},{\underline{y}},{\underline{z}}))\else\left(A_{\infty}^{(k)} (a,b,c,{\underline{x}},{\underline{y}},{\underline{z}})\right)\fi} = \frac{a+b+1}{3^{a+b+c}} {\left(}\prod_{i=1}^a w(x_i){\right)}{\left(}\prod_{i=1}^b w(y_i-1){\right)}{\left(}\prod_{i=1}^{c+1} w(z_i)w(z_i-1){\right)}.\end{gathered}$$ Moreover, conditionally on $A_{\infty}^{(k)} (a,b,c,{\underline{x}},{\underline{y}},{\underline{z}})$, with the conventions $x_0=y_0=0$: - The forests $(\tau_{\infty,i})_{1 \leq i \leq a+c}$, $(\tau'_{\infty,i})_{1 \leq i \leq b+c}$ and $({\overline{\tau}}_{\infty,0},\ldots,{\overline{\tau}}_{\infty,a},{\overline{\tau}}'_{\infty,0},\ldots,{\overline{\tau}}'_{\infty,b-1})$ are independent. - The trees $\tau_{\infty,i}$, $1 \leq i \leq a+c$ are independent random variables, respectively distributed according to ${\rho_{(x_i)}}^+$, $1 \leq i \leq a$ and ${\rho_{(z_{a+i})}}^+$, $a+1 \leq i \leq a+c$. - The trees $\tau'_{\infty,i}$, $1 \leq i \leq b+c$ are independent random variables, obtained by adding $1$ to the labels of trees distributed according to ${\rho_{(y_i-1)}}^+$, $1 \leq i \leq b$ and ${\rho_{(z_{b+i})}}^+$, $b+1 \leq i \leq a+c$, respectively. - The forest $({\overline{\tau}}_{\infty,0},\ldots,{\overline{\tau}}_{\infty,a},{\overline{\tau}}'_{\infty,0},\ldots,{\overline{\tau}}'_{\infty,b-1})$ follows the distribution $\mu_{(0,x_1,\ldots,x_a,0,y_1,\ldots,y_{b-1})}$. 
We have $$\begin{aligned} {\mathbb{P}\if\relaxsz\relax(A_{\infty}^{(k)} (a,b,c,{\underline{x}},{\underline{y}},{\underline{z}}))\else\left(A_{\infty}^{(k)} (a,b,c,{\underline{x}},{\underline{y}},{\underline{z}})\right)\fi} = \frac{a+b+1}{6^{a+b}12^c} & {\left(}\prod_{i=1}^a \sum_{n_i \geq 0} \frac{1}{12^{n_i}} {\# {\mathbb{T}_{n_i}}^+(x_i)} {\right)}{\left(}\prod_{i=1}^b \sum_{n_i \geq 0} \frac{1}{12^{n_i}} {\# {\mathbb{T}_{n_i}}^{>1}(y_i)} {\right)}\\ & {\left(}\prod_{i=1}^{c+1} \sum_{n_i \geq 0} \frac{1}{12^{n_i}} {\# {\mathbb{T}_{n_i}}^+(z_i)} {\right)}{\left(}\prod_{i=1}^{c+1} \sum_{n_i \geq 0} \frac{1}{12^{n_i}} {\# {\mathbb{T}_{n_i}}^{>1}(z_i)} {\right)}.\end{aligned}$$ Using equation and the fact that ${\# {\mathbb{T}_{m}}^{>1}(x)} = {\# {\mathbb{T}_{m}}^+(x-1)}$ for all $m,x \in {\mathbb{N}}$, this gives $$\begin{aligned} {\mathbb{P}\if\relaxsz\relax(A_{\infty}^{(k)} (a,b,c,{\underline{x}},{\underline{y}},{\underline{z}}))\else\left(A_{\infty}^{(k)} (a,b,c,{\underline{x}},{\underline{y}},{\underline{z}})\right)\fi} &= \frac{a+b+1}{6^{a+b}12^c} {\left(}\prod_{i=1}^a 2 w(x_i){\right)}{\left(}\prod_{i=1}^b 2 w(y_i-1){\right)}{\left(}\prod_{i=1}^{c+1} 4 w(z_i)w(z_i-1){\right)}\\ &= \frac{a+b+1}{3^{a+b+c}} {\left(}\prod_{i=1}^a w(x_i){\right)}{\left(}\prod_{i=1}^b w(y_i-1){\right)}{\left(}\prod_{i=1}^{c+1} w(z_i)w(z_i-1){\right)},\end{aligned}$$ hence the first part of the Lemma. The conditional distributions of the trees $\tau_{\infty,i}$, $\tau'_{\infty,i}$, ${\overline{\tau}}_{\infty,i}$ and ${\overline{\tau}}'_{\infty,i}$ are then obtained exactly as in the proof of Proposition \[T: Distribution of theta(infty,k)\]. Proof of the joint convergence ------------------------------ As in Section \[S: First convergence\], the main step of the proof of the convergence is to show the convergence of the labels on the branches ${\mathrm{x}}_{\infty,i}^{(k)}$, $i \geq 1$ and ${\mathrm{y}}_{\infty,i}^{(k)}$, $i \geq 1$. Fix $r \in {\mathbb{N}}$. 
For all $k \in {\mathbb{N}}$, and for all continuous bounded functions $F$, $G$ from ${\mathbb{R}}^r$ into ${\mathbb{R}}$, we let $$\begin{gathered} \mathcal{E}_k(F,G) := {\mathbb{E}\if\relaxsz\relax[F(X_{\infty,1}^{(k)}, \ldots, X_{\infty,r}^{(k)}) G(Y_{\infty,1}^{(k)}, \ldots, Y_{\infty,r}^{(k)}) {\mathbf{1}_{\lbrace a_{\infty}^{(k)}, b_{\infty}^{(k)} \geq r \rbrace}}]\else\left[F(X_{\infty,1}^{(k)}, \ldots, X_{\infty,r}^{(k)}) G(Y_{\infty,1}^{(k)}, \ldots, Y_{\infty,r}^{(k)}) {\mathbf{1}_{\lbrace a_{\infty}^{(k)}, b_{\infty}^{(k)} \geq r \rbrace}}\right]\fi}.\end{gathered}$$ \[T: Joint cv labels\] We have the convergence $$\begin{gathered} \mathcal{E}_k(F,G) {\xrightarrow[k \rightarrow \infty]{}} {\mathbb{E}_{1}\if\relax\relax[F(\tilde{X}_0, \ldots, \tilde{X}_{r-1})]\else\left[F(\tilde{X}_0, \ldots, \tilde{X}_{r-1})\right]\fi} {\mathbb{E}_{1}\if\relax\relax[G(\tilde{X}_0+1, \ldots, \tilde{X}_{r-1}+1)]\else\left[G(\tilde{X}_0+1, \ldots, \tilde{X}_{r-1}+1)\right]\fi}.\end{gathered}$$ As in the previous section, we introduce independent random walks $\hat{X}$, $\hat{Y}$ and $\hat{Z}$ with uniform steps in $\{-1,0,1\}$, and consider associated martingales ${M\!X}$, ${M\!Y}$ and ${M\!Z}$ such that for all $j \geq 0$, $$\begin{gathered} {M\!X}_j = \frac{f(\hat{X}_j)}{f(\hat{X}_0)} \prod_{i=0}^{j-1} w(\hat{X}_i) \qquad {M\!Y}_j = \frac{f(\hat{Y}_j)}{f(\hat{Y}_0)} \prod_{i=0}^{j-1} w(\hat{Y}_i) \qquad {M\!Z}_j = \frac{g(\hat{Z}_j)}{g(\hat{Z}_0)} \prod_{i=0}^{j-1} v(\hat{Z}_i),\end{gathered}$$ where $v(x) = w(x) w(x+1) = x(x+4)/(x+2)^2$ and $g(x) = x(x+4)(5x^2 + 20 x + 17)$ for all $x \in {\mathbb{N}}$. From now on, we work under the assumption $1 \leq r \leq k$. With the above notation, we can write $$\begin{aligned} \mathcal{E}_k(F,G) = \sum_{a,b \geq r-1} \sum_{c \geq 0} \frac{a+b+1}{9} \sum_{k' \geq 1} \mathbb{E} & \left[ {\mathbb{E}_{1}\if\relaxsz\relax[F(\hat{X}_0, \ldots, \hat{X}_{r-1}) \frac{f(1)w(k'+1)}{f(k'+1)} {M\!X}_{a-1} {\mathbf{1}_{\lbrace \hat{X}_{a-1}=k'+1 \rbrace}}]\else\left[F(\hat{X}_0, \ldots, \hat{X}_{r-1}) \frac{f(1)w(k'+1)}{f(k'+1)} {M\!X}_{a-1} {\mathbf{1}_{\lbrace \hat{X}_{a-1}=k'+1 \rbrace}}\right]\fi} \right. \\ & \left. {\mathbb{E}_{1}\if\relaxsz\relax[G(\hat{Y}_0+1, \ldots, \hat{Y}_{r-1}+1) \frac{f(1)w(k')}{f(k')} {M\!Y}_{b-1} {\mathbf{1}_{\lbrace \hat{Y}_{b-1}=k' \rbrace}}]\else\left[G(\hat{Y}_0+1, \ldots, \hat{Y}_{r-1}+1) \frac{f(1)w(k')}{f(k')} {M\!Y}_{b-1} {\mathbf{1}_{\lbrace \hat{Y}_{b-1}=k' \rbrace}}\right]\fi} \right. \\ & \left. {\mathbb{E}_{k'}\if\relaxsz\relax[\frac{g(k')v(k-1)}{g(k-1)} {M\!Z}_c {\mathbf{1}_{\lbrace \hat{Z}_c=k-1 \rbrace}}]\else\left[\frac{g(k')v(k-1)}{g(k-1)} {M\!Z}_c {\mathbf{1}_{\lbrace \hat{Z}_c=k-1 \rbrace}}\right]\fi} \right].\end{aligned}$$ Using the Markov property and re-arranging the terms yields $$\begin{aligned} \mathcal{E}_k(F,G) = \sum_{k' \geq 1} \mathbb{E} & \left[{M\!X}_{r-1} F(\hat{X}_0, \ldots, \hat{X}_{r-1}) {M\!Y}_{r-1} G(\hat{Y}_0+1, \ldots, \hat{Y}_{r-1}+1) \frac{f(1)w(k'+1)}{3 f(k'+1)} \vphantom{\sum_{c\geq 0}}\right. \\ & \left. \frac{f(1)w(k')}{3 f(k')} \sum_{a,b \geq 0} (a+b+2r-1) {\mathbb{E}_{\hat{X}_{r-1}}\if\relaxsz\relax[{M\!X}'_a {\mathbf{1}_{\lbrace \hat{X}'_a=k'+1 \rbrace}}]\else\left[{M\!X}'_a {\mathbf{1}_{\lbrace \hat{X}'_a=k'+1 \rbrace}}\right]\fi} {\mathbb{E}_{\hat{Y}_{r-1}}\if\relaxsz\relax[{M\!Y}'_b {\mathbf{1}_{\lbrace \hat{Y}'_b=k' \rbrace}}]\else\left[{M\!Y}'_b {\mathbf{1}_{\lbrace \hat{Y}'_b=k' \rbrace}}\right]\fi} \right. \\ & \left. 
\frac{g(k')v(k-1)}{g(k-1)} \sum_{c \geq 0} {\mathbb{E}_{k'}\if\relaxsz\relax[ {M\!Z}_c {\mathbf{1}_{\lbrace \hat{Z}_c=k-1 \rbrace}}]\else\left[ {M\!Z}_c {\mathbf{1}_{\lbrace \hat{Z}_c=k-1 \rbrace}}\right]\fi} \right],\end{aligned}$$ where $\hat{X}',\hat{Y}', {M\!X}', {M\!Y}'$ are independent copies of $\hat{X},\hat{Y}, {M\!X}, {M\!Y}$. We already have the necessary ingredients in Section \[S: First convergence\] to study the first factors; the only additional quantity we need to compute is $$\begin{gathered} {H'_{k'}(k)} = \sum_{c \geq 0} {\mathbb{E}_{k'}\if\relaxsz\relax[{M\!Z}_c {\mathbf{1}_{\lbrace \hat{Z}_c=k \rbrace}}]\else\left[{M\!Z}_c {\mathbf{1}_{\lbrace \hat{Z}_c=k \rbrace}}\right]\fi} = \sum_{c \geq 0} {\mathbb{P}_{k'}\if\relaxsz\relax({\tilde{\raisebox{0cm}[0.9\height]{$\tilde{Z}$}}}_c=k)\else\left({\tilde{\raisebox{0cm}[0.9\height]{$\tilde{Z}$}}}_c=k\right)\fi},\end{gathered}$$ where ${\tilde{\raisebox{0cm}[0.9\height]{$\tilde{Z}$}}}$ is the image of $\hat{Z}$ under the measure-change given by the martingale ${M\!Z}$, i.e. the Markov process such that ${\mathbb{E}\if\relax\relax[\phi({\tilde{\raisebox{0cm}[0.9\height]{$\tilde{Z}$}}}_i)]\else\left[\phi({\tilde{\raisebox{0cm}[0.9\height]{$\tilde{Z}$}}}_i)\right]\fi} = {\mathbb{E}\if\relax\relax[{M\!Z}_i \phi(\hat{Z}_i)]\else\left[{M\!Z}_i \phi(\hat{Z}_i)\right]\fi}$ for every continuous bounded function $\phi$. \[T: Values of SumGV\] Fix $k,k' \geq 2$. We have the following equalities: - if $k'\leq k$, $$\begin{gathered} \frac{g(k')v(k)}{g(k)} {H'_{k'}(k)} = \frac{3 g(k')}{35 (k+1)(k+2)(k+3)}\end{gathered}$$ - if $k'>k$, $$\begin{gathered} \frac{g(k')v(k)}{g(k)} {H'_{k'}(k)} = \frac{3 g(k)}{35 (k'+1)(k'+2)(k'+3)}.\end{gathered}$$ We omit the technical detail of the proof of this result; the ideas are exactly the same as in the proof of Lemma \[T: Values of SumFW\]. Now $$\begin{aligned} \mathcal{E}_k(F,G) = \sum_{1 \leq x,y \leq r} {\left(}\sum_{k' \geq 1} \mathcal{H}_{x,y,k'}(k){\right)}& {\mathbb{E}_{1}\if\relax\relax[{M\!X}_{r-1} F(\hat{X}_0, \ldots, \hat{X}_{r-1}) {\mathbf{1}_{\lbrace \hat{X}_{r-1}=x \rbrace}}]\else\left[{M\!X}_{r-1} F(\hat{X}_0, \ldots, \hat{X}_{r-1}) {\mathbf{1}_{\lbrace \hat{X}_{r-1}=x \rbrace}}\right]\fi} \\ & {\mathbb{E}_{1}\if\relax\relax[{M\!Y}_{r-1} G(\hat{Y}_0+1, \ldots, \hat{Y}_{r-1}+1) {\mathbf{1}_{\lbrace \hat{Y}_{r-1}=y \rbrace}}]\else\left[{M\!Y}_{r-1} G(\hat{Y}_0+1, \ldots, \hat{Y}_{r-1}+1) {\mathbf{1}_{\lbrace \hat{Y}_{r-1}=y \rbrace}}\right]\fi},\end{aligned}$$ where $$\begin{aligned} \mathcal{H}_{x,y,k'}(k) = & \frac{f(1)w(k'+1)}{3 f(k'+1)} \frac{f(1)w(k')}{3 f(k')} \frac{g(k')v(k-1)}{g(k-1)} {H'_{k'}(k-1)} \\ & \qquad \times ({H_{x}^{\ast}(k'+1)} {H_{y}(k')} + {H_{x}(k'+1)} {H_{y}^{\ast}(k')} + (2r-5) {H_{x}(k'+1)} {H_{y}(k')}).\end{aligned}$$ Therefore, it is enough to show that $\sum_{k' \geq 1} \mathcal{H}_{x,y,k'}(k)$ converges to $1$ as $k \rightarrow \infty$, uniformly in $x,y \leq r$. Let us first treat the terms for which $k' \geq k$. We have $$\begin{gathered} \frac{g(k')v(k-1)}{g(k-1)} {H'_{k'}(k-1)} = \frac{3 g(k-1)}{35 (k'+1)(k'+2)(k'+3)}.\end{gathered}$$ Moreover, the results of Lemmas \[T: Values of SumFW\] and \[T: Values of SumFWStar\] show that, uniformly in $y \leq r$, $$\begin{gathered} \frac{f(1)w(k')}{3 f(k')} {H_{y}^{\ast}(k')} {\xrightarrow[k' \rightarrow \infty]{}} 1 \\ \frac{f(1)w(k')}{3 f(k')} {H_{y}(k')} \sim_{k' \rightarrow \infty} \frac{2}{(k')^2},\end{gathered}$$ and that the same holds with $k'+1$ instead of $k'$ in the left-hand term. 
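As an aside, the one-step martingale property of ${M\!X}$, ${M\!Y}$ and ${M\!Z}$, which underlies the measure changes used in this computation, can be checked symbolically. The sketch below assumes the explicit weight $w(x) = x(x+3)/((x+1)(x+2))$; this explicit form is not restated in this section, but it is consistent with the relation $v(x) = w(x)w(x+1) = x(x+4)/(x+2)^2$ above and with the expansion of $w$ used in Section \[S: Left-hand condition\]. The check is only an illustration and plays no role in the proof.

```python
# Sketch: symbolic check that MX, MY and MZ are one-step martingales for a
# walk with uniform steps in {-1, 0, 1}.  Assumption (not restated in this
# section): w(x) = x(x+3)/((x+1)(x+2)), consistent with
# v(x) = w(x) w(x+1) = x(x+4)/(x+2)^2.
import sympy as sp

x = sp.symbols('x', positive=True)

w = x*(x + 3) / ((x + 1)*(x + 2))
f = x*(x + 3)*(2*x + 3)
v = x*(x + 4) / (x + 2)**2
g = x*(x + 4)*(5*x**2 + 20*x + 17)

def one_step_martingale(weight, h):
    # E[M_{j+1} | X_j = x] = M_j amounts to weight(x)*(h(x-1)+h(x)+h(x+1))/3 = h(x)
    lhs = weight * (h.subs(x, x - 1) + h + h.subs(x, x + 1)) / 3
    return sp.simplify(lhs - h) == 0

print(sp.simplify(v - w * w.subs(x, x + 1)) == 0)  # True: v = w(x) w(x+1)
print(one_step_martingale(w, f))                   # True: MX and MY are martingales
print(one_step_martingale(v, g))                   # True: MZ is a martingale
```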
As a consequence, we have $$\begin{aligned} \sum_{k' \geq k} \mathcal{H}_{x,y,k'}(k) & \sim_{k \rightarrow \infty} \frac{3 g(k-1)}{35} \sum_{k' \geq k} \frac{4}{(k')^5} \nonumber \\ & \sim_{k \rightarrow \infty} \frac{3 k^4}{7} \frac{1}{k^4} = \frac{3}{7}, \label{E: JCV, first sum}\end{aligned}$$ uniformly in $x,y \leq r$. Next, we consider the terms for which we have $x \vee y < k' \leq k-1$. Lemmas \[T: Values of SumFW\] and \[T: Values of SumFWStar\] yield the following estimates, uniformly in $y \leq r$: $$\begin{gathered} \frac{f(1)w(k')}{3 f(k')} {H_{y}^{\ast}(k')} = 1 - \frac{3}{10(k'+1)(k'+2)} {\left(}C_y + 1 - {\left(}1 \wedge \frac{h(k')}{h(y)}{\right)}{\right)}\end{gathered}$$ and $$\begin{gathered} \frac{f(1)w(k')}{3 f(k')} {H_{y}(k')} \sim_{k' \rightarrow \infty} \frac{2}{(k')^2}.\end{gathered}$$ Putting this together with the result of Lemma \[T: Values of SumGV\], we get $$\begin{aligned} \sum_{k'=x \vee y +1}^{k-2} \mathcal{H}_{x,y,k'}(k) & \sim_{k \rightarrow \infty} \frac{3}{35 k^3} \sum_{k'=x \vee y +1}^{k-1} 2 \times \frac{2}{(k')^2} \times 5(k')^4 \nonumber \\ & \sim_{k \rightarrow \infty} \frac{12}{7 k^3} \frac{k^3}{3} = \frac{4}{7}. \label{E: JCV, second sum}\end{aligned}$$ The remaining term is $$\begin{gathered} \sum_{k'=1}^{x \vee y} \mathcal{H}_{x,y,k'}(k) = O{\left(}\frac{1}{k^3}{\right)}.\end{gathered}$$ Putting this together with (\[E: JCV, first sum\]) and (\[E: JCV, second sum\]), we obtain $$\begin{gathered} \sum_{k' \geq 1} \mathcal{H}_{x,y,k'}(k) {\xrightarrow[k \rightarrow \infty]{}} \frac{3}{7}+\frac{4}{7} = 1,\end{gathered}$$ uniformly in $x,y \leq r$, hence the conclusion. To complete the proof of Theorem \[T: Joint cv of the rerooted trees\], we finally come back to the trees attached on the branches ${\mathrm{x}}_{\infty,i}^{(k)}$, $i \geq 1$ and ${\mathrm{y}}_{\infty,i}^{(k)}$, $i \geq 1$, putting together the above result and Corollary \[T: Distribution of theta(infty,k) with two marked points\]. Let $E^{(k)}(r)$ be the event that $a_{\infty}^{(k)}, b_{\infty}^{(k)} \geq r$, and the trees $({\overline{\tau}}_{\infty,i})^{(k)}$ and $({\overline{\tau}}'_{\infty,i})^{(k)}$ are finite for all $i \leq r$. Conditionally on $E^{(k)}(r)$, we have the following properties on the spines of $\theta_{\infty}^{(k)}$ and $\theta_{\infty}^{(-k+1)}$: - The points ${\mathfrak{s}}_i(\theta_{\infty}^{(k)})$ and ${\mathrm{x}}_{\infty,i}^{(k)}$ are the same for all $i \leq r$, hence $R_i(\theta_{\infty}^{(k)}) = \tau_{\infty,i}^{(k)}$ and $L_i(\theta_{\infty}^{(k)}) = {\overline{\tau}}_{\infty,i}^{(k)}$ for all $i < r$. - The points ${\mathfrak{s}}_i(\theta_{\infty}^{(-k+1)})$ and ${\mathrm{y}}_{\infty,i}^{(k)}$ are the same for all $i \leq r$, hence $R_i(\theta_{\infty}^{(-k+1)}) = ({\overline{\tau}}'_{\infty,i})^{(k)}$ and $L_i(\theta_{\infty}^{(-k+1)}) = (\tau'_{\infty,i})^{(k)}$ for all $i < r$. - As a consequence, the spine labels $(S_i(\theta_{\infty}^{(k)}),S_i(\theta_{\infty}^{(-k+1)})-1)_{1 \leq i \leq r}$ converge in distribution to $(\tilde{X}_i,\tilde{Y}_i)_{0 \leq i \leq r-1}$, with $\tilde{X}_0=\tilde{Y}_0=1$. Further conditioning on $(S_i(\theta_{\infty}^{(k)}),S_i(\theta_{\infty}^{(-k+1)}))_{0 \leq i < r}$, we get that: - The subtrees $L_i(\theta_{\infty}^{(k)})$, $0 \leq i < r$ and $R_i(\theta_{\infty}^{(k)})$, $1 \leq i < r$ are independent random variables, with respective distributions ${\rho_{(S_i(\theta_{\infty}^{(k)}))}}$ and ${\rho_{(S_i(\theta_{\infty}^{(k)}))}}^+$.
- The subtrees $L_i(\theta_{\infty}^{(-k+1)})$, $0 \leq i < r$ and $R_i(\theta_{\infty}^{(-k+1)})$, $1 \leq i < r$ are independent random variables, respectively obtained by adding 1 to the labels of trees distributed according to ${\rho_{(S_i(\theta_{\infty}^{(k)})-1)}}^+$ and ${\rho_{(S_i(\theta_{\infty}^{(k)})-1)}}$. - The random forests $(L_i(\theta_{\infty}^{(k)}),R_i(\theta_{\infty}^{(k)}))_{0\leq i < r}$ and $(L_i(\theta_{\infty}^{(-k+1)}),R_i(\theta_{\infty}^{(-k+1)}))_{0\leq i < r}$ are independent. Therefore, it is enough to show that ${\mathbb{P}\if\relax\relax({\overline{E^{(k)}(r)}})\else\left({\overline{E^{(k)}(r)}}\right)\fi}$ converges to 0 as $k \rightarrow \infty$. Fix ${\varepsilon}> 0$. We have $$\begin{gathered} {\mathbb{P}\if\relax\relax({\overline{E^{(k)}(r)}})\else\left({\overline{E^{(k)}(r)}}\right)\fi} \leq {\mathbb{P}\if\relax\relax(a_{\infty}^{(k)} < r \mbox{ or } b_{\infty}^{(k)} < r)\else\left(a_{\infty}^{(k)} < r \mbox{ or } b_{\infty}^{(k)} < r\right)\fi} + {\mathbb{E}\if\relaxsz\relax[1 \wedge \frac{2r+2}{a_{\infty}^{(k)}+b_{\infty}^{(k)}+1}]\else\left[1 \wedge \frac{2r+2}{a_{\infty}^{(k)}+b_{\infty}^{(k)}+1}\right]\fi}.\end{gathered}$$ We know from Lemma \[T: Joint cv labels\] that the first term converges to 0. More precisely, for all $r' \in {\mathbb{N}}$, we have $$\begin{gathered} {\mathbb{P}\if\relax\relax(a_{\infty}^{(k)} < r' \mbox{ or } b_{\infty}^{(k)} < r')\else\left(a_{\infty}^{(k)} < r' \mbox{ or } b_{\infty}^{(k)} < r'\right)\fi} \leq {\varepsilon}\end{gathered}$$ for all $k$ large enough, hence $$\begin{gathered} {\mathbb{E}\if\relaxsz\relax[1 \wedge \frac{2r+2}{a_{\infty}^{(k)}+b_{\infty}^{(k)}+1}]\else\left[1 \wedge \frac{2r+2}{a_{\infty}^{(k)}+b_{\infty}^{(k)}+1}\right]\fi} \leq {\varepsilon}+ \frac{2r+2}{2r'+1}\end{gathered}$$ for $k$ large enough. Thus we can choose $r'$ in such a way that for all $k$ large enough, we have $$\begin{gathered} {\mathbb{P}\if\relax\relax({\overline{E^{(k)}(r)}})\else\left({\overline{E^{(k)}(r)}}\right)\fi} \leq 3 {\varepsilon}.\end{gathered}$$ This concludes the proof. Convergence of the associated quadrangulations {#S: Joint CV of the quadrangulations} ============================================== As indicated in the Introduction, the main step of the proof of Theorem \[T: Joint CV of the quadrangulations\] consists in showing the following result. We use the conventions $$\begin{gathered} \theta_{\infty}^{(\infty)} = {\overrightarrow{\theta_{\infty}}}, \qquad \theta_{\infty}^{(-\infty)} = {\overleftarrow{\theta_{\infty}}}, \\ {\overrightarrow{Q}}_{\infty}^{(\infty)} = {\overrightarrow{Q}}_{\infty}, \qquad {\overleftarrow{Q}}_{\infty}^{(\infty)} = {\overleftarrow{Q}}_{\infty}.\end{gathered}$$ \[T: Ball inclusions\] For all $r \in {\mathbb{N}}$ and ${\varepsilon}> 0$, there exists $h \in {\mathbb{N}}$ such that for all $k$ large enough, possibly infinite, we have $$\begin{gathered} \label{E: R+ ball inclusion} V{\left(}B_{{\overrightarrow{Q}}_{\infty}^{(k)}}(r){\right)}\subset V{\left(}B_{\theta_{\infty}^{(k)}}(h){\right)}\end{gathered}$$ and $$\begin{gathered} \label{E: L+ ball inclusion} V{\left(}B_{{\overleftarrow{Q}}_{\infty}^{(k)}}(r){\right)}\subset V{\left(}B_{\theta_{\infty}^{(-k+1)}}(h){\right)}\cup \{\lambda_i: {\left|i\right|}\leq r \}\end{gathered}$$ with probability at least $1-{\varepsilon}$. Let us first see how this result allows us to prove the theorem.
Using the Skorokhod representation theorem, we assume that the convergence $$\begin{gathered} (\theta_{\infty}^{(k)},\theta_{\infty}^{(-k+1)}) {\xrightarrow[k \rightarrow \infty]{}} ({\overrightarrow{\theta_{\infty}}},{\overleftarrow{\theta_{\infty}}}),\end{gathered}$$ obtained in Theorem \[T: Joint cv of the rerooted trees\], holds almost surely. In particular, it also holds in probability: for all $h \in {\mathbb{N}}$ and ${\varepsilon}> 0$, we have $$\begin{gathered} {\mathbb{P}\if\relaxsz\relax(D(\theta_{\infty}^{(k)},{\overrightarrow{\theta_{\infty}}}) \leq \frac{1}{1+h} \mbox{ and } D(\theta_{\infty}^{(-k+1)},{\overleftarrow{\theta_{\infty}}}) \leq \frac{1}{1+h})\else\left(D(\theta_{\infty}^{(k)},{\overrightarrow{\theta_{\infty}}}) \leq \frac{1}{1+h} \mbox{ and } D(\theta_{\infty}^{(-k+1)},{\overleftarrow{\theta_{\infty}}}) \leq \frac{1}{1+h}\right)\fi} \geq 1- {\varepsilon}\end{gathered}$$ for all $k$ large enough, which means that $$\begin{gathered} \label{E: Applying the joint tree CV} B_{\theta_{\infty}^{(k)}}(h) = B_{{\overrightarrow{\theta_{\infty}}}}(h) \qquad \mbox{and} \qquad B_{\theta_{\infty}^{(-k)}}(h) = B_{{\overleftarrow{\theta_{\infty}}}}(h)\end{gathered}$$ with probability at least $1-{\varepsilon}$, for all $k$ large enough. For all $r \in {\mathbb{N}}$ and ${\varepsilon}> 0$, the above proposition shows that there exists $h_{{\varepsilon}}$ such that the inclusions and hold with probability at least $1-{\varepsilon}$, for all $k$ large enough. Putting this together with for $h=h_{{\varepsilon}}$, we get that $$\begin{gathered} B_{{\overrightarrow{Q}}_{\infty}^{(k)}}(r) = B_{\Phi({\overrightarrow{\theta_{\infty}}})} (r) \qquad \mbox{and} \qquad B_{{\overleftarrow{Q}}_{\infty}^{(k)}}(r) = B_{\Phi({\overleftarrow{\theta_{\infty}}})} (r)\end{gathered}$$ with probability at least $1-{\varepsilon}$, for all $k$ large enough (possibly infinite). Therefore, we have the convergence $$\begin{gathered} ({\overrightarrow{Q}}_{\infty}^{(k)}, {\overleftarrow{Q}}_{\infty}^{(k)}) {\xrightarrow[k \rightarrow \infty]{}} ({\overrightarrow{Q}}_{\infty}, {\overleftarrow{Q}}_{\infty})\end{gathered}$$ in probability, hence the joint distributional convergence. The rest of the section is devoted to the proof of Proposition \[T: Ball inclusions\]. We first introduce conditions on the “left-hand side” and “right-hand side” of the trees $\theta_{\infty}^{(k)}$, $\theta_{\infty}^{(-k+1)}$, which are sufficient to get the ball inclusions and . This is done in Section \[S: LR conditions\] (see in particular Lemma \[T: LR conditions for k&lt;infty\]). In Sections \[S: Spine labels\], \[S: Left-hand condition\] and \[S: Right-hand condition\], we then show that an “elementary block” of these conditions holds with arbitrarily high probability, for all $s$ and $k$ large enough. The corresponding results are stated in Lemmas \[T: Left-hand condition\] and \[T: Right-hand condition\]. Finally, Section \[S: Conclusion\] concludes the proof of the proposition. Conditions on the right-hand and left-hand part of a labeled tree {#S: LR conditions} ----------------------------------------------------------------- We first introduce some more detailed notation for the balls in a rooted tree $T$. 
For all $s \geq 0$, we let $\partial B_T (s)$ denote the “boundary” of the ball of radius $s$, defined as $$\begin{gathered} \partial B_T(s) = \{ {\mathrm{v}} \in T: {\mathrm{v}} \mbox{ has height } s \}.\end{gathered}$$ In what follows, the letter $L$ will correspond to the “left-hand part” of a tree, and $R$ will be used for the “right-hand part”. All the following notations are given for the left-hand part, and are also valid for the right-hand part (replacing $L$ by $R$). Assume that $T \in \textbf{S}$, and recall that $L_i(T)$ denotes the subtree of the descendants of ${\mathfrak{s}}_i(T)$ that are on the left of the spine. We let $$\begin{gathered} L(T) = \bigcup_{i \geq 0} L_i(T),\end{gathered}$$ and for all $s \geq 0$, $$\begin{gathered} {LB}_T (s) = B_T(s) \cap L(T) = \bigcup_{i=0}^s B_{L_i(T)}(s-i),\end{gathered}$$ and $$\begin{gathered} \partial {LB}_T(s) = \partial B_T(s) \cap L(T).\end{gathered}$$ We also use the natural extensions of this notation to labeled trees. We are interested in the following subsets of ${\mathbb{S}}$, for all $r,s,s',h \in {\mathbb{N}}$: $$\begin{gathered} \mathbb{A}_L (r,s,s',h) = \{ (T,l) \in {\mathbb{S}}: \bigcup_{i=0}^{s'} L_i(T) \subset B_T(h), \mbox{ and } \exists {\mathrm{v}} \in \bigcup_{i=s+1}^{s'} L_i(T) \mbox{ s.t. } l({\mathrm{v}}) = -r \}\end{gathered}$$ and $$\begin{gathered} \mathbb{A}_{L +} (r,s) = \{ (T,l) \in {\mathbb{S}}: \forall {\mathrm{v}} \in L(T) \setminus {LB}_T(s),\ l({\mathrm{v}}) > r \}.\end{gathered}$$ Figure \[F: LRConditions\] illustrates these definitions. We give a sufficient condition for an inclusion between the balls in $\theta$ and in $\Phi(\theta)$, in terms of these sets $\mathbb{A}_L (r,s,s',h)$, $\mathbb{A}_{L+} (r,s)$, $\mathbb{A}_R (r,s,s',h)$ and $\mathbb{A}_{R+} (r,s)$: ![An illustration of the conditions $\theta \in \mathbb{A}_L (r,s,s',h)$ (on the left) and $\theta \in \mathbb{A}_{R+} (r,s)$ (on the right).[]{data-label="F: LRConditions"}](LRConditions.pdf) \[T: LR conditions for ball inclusions\] Let $r \in {\mathbb{N}}$. 1. For all $\theta \in {\overrightarrow{{\mathbb{S}}}}$, if there exists sequences $(s(r'))_{0 \leq r' \leq r}$ and $(h(r'))_{1 \leq r' \leq r}$ such that $$\begin{gathered} \theta \in \mathbb{A}_L (r',s(r'-1),s(r'),h(r')) \cap \mathbb{A}_{R+} (r',s(r')) \qquad \forall r' \in \{1,\ldots,r\},\end{gathered}$$ then we have $V(B_{\Phi(\theta)}(r)) \subset V(B_{\theta}(h(r)))$. 2. For all $\theta \in {\overleftarrow{{\mathbb{S}}}}$, if there exists sequences $(s(r'))_{0 \leq r' \leq r}$ and $(h(r'))_{1 \leq r' \leq r}$ such that $$\begin{gathered} \theta \in \mathbb{A}_R (r',s(r'-1),s(r'),h(r')) \cap \mathbb{A}_{L+} (r',s(r')) \qquad \forall r' \in \{1,\ldots,r\},\end{gathered}$$ then we have $V(B_{\Phi(\theta)}(r)) \subset V(B_{\theta}(h(r))) \cup \{\lambda_i: {\left|i\right|}\leq r \}$. Let $\theta \in {\overrightarrow{{\mathbb{S}}}}$. We show by induction that for all $r \geq 0$, if there exists sequences $(s(r'))_{0 \leq r' \leq r}$ and $(h(r'))_{1 \leq r' \leq r}$ such that $$\begin{gathered} \theta \in \mathbb{A}_L (r,s(r'-1),s(r'),h(r')) \cap \mathbb{A}_{R+} (r',s(r')) \qquad \forall r' \in \{1,\ldots,r\},\end{gathered}$$ then we have $$\begin{gathered} V{\left(}B_{\Phi(\theta)}(r){\right)}\subset V{\left(}{RB}_{\theta}(s(r)) \cup \bigcup_{i=0}^{s(r)} L_i(\theta){\right)}.\end{gathered}$$ This is enough to prove the first part of the Lemma. 
Indeed, since $\theta$ belongs to $\mathbb{A}_L (r,s(r-1),s(r),h(r))$, we have $\bigcup_{i=0}^{s(r)} L_i(\theta) \subset B_{\theta} (h(r))$ and $s(r) \leq h(r)$, so $$\begin{gathered} V{\left(}{RB}_{\theta}(s(r)) \cup \bigcup_{i=0}^{s(r)} L_i(\theta){\right)}\subset V(B_{\theta}(h(r))).\end{gathered}$$ The result is obviously true for $r=0$. Assume that it holds for a given $r \geq 0$. We order the corners of $\theta$ by writing ${\mathrm{c}}_n(\theta) \leq {\mathrm{c}}_{n'} (\theta)$ for all $n \leq n'$. For all $r' \leq r+1$, let $\xi_{r'}$ denote the largest corner incident to the vertex ${\mathfrak{s}}_{s(r')}$. Note that for all $r' \leq r$, for every corner ${\mathrm{c}}$ of $\theta$, we have ${\mathrm{c}} \leq \xi_{r'}$ if and only if every corner $\tilde{{\mathrm{c}}}$ incident to the same vertex as ${\mathrm{c}}$ verifies $\tilde{{\mathrm{c}}} \leq \xi_{r'}$. The induction hypothesis ensures that for every corner ${\mathrm{c}}$ of $\theta$ which is incident to a vertex of $B_{\Phi(\theta)}(r)$, we have ${\mathrm{c}} \leq \xi_r$. (This is the case even if the corresponding vertex is in the right-hand part of $\theta$.) Let ${\mathrm{v}} \in V(\theta)$. The vertex ${\mathrm{v}}$ belongs to $B_{\Phi(\theta)}(r+1)$ if and only if one of the following conditions holds: 1. ${\mathrm{v}}$ belongs to $B_{\Phi(\theta)}(r)$. 2. There exist a vertex ${\mathrm{v}}'$ of $B_{\Phi(\theta)}(r)$, and two corners ${\mathrm{c}}$ and ${\mathrm{c}}'$, respectively incident to ${\mathrm{v}}$ and ${\mathrm{v}}'$, such that $\sigma_{\theta}({\mathrm{c}}) = {\mathrm{c}}'$. 3. There exist a vertex ${\mathrm{v}}'$ of $B_{\Phi(\theta)}(r)$, and two corners ${\mathrm{c}}$ and ${\mathrm{c}}'$, respectively incident to ${\mathrm{v}}$ and ${\mathrm{v}}'$, such that $\sigma_{\theta}({\mathrm{c}}') = {\mathrm{c}}$. Respectively, in these three cases, it holds that: 1. Every corner $\tilde{{\mathrm{c}}}$ incident to ${\mathrm{v}}$ is such that $\tilde{{\mathrm{c}}} \leq \xi_r \leq \xi_{r+1}$. 2. We have ${\mathrm{c}} \leq {\mathrm{c}}' \leq \xi_r$, so every corner $\tilde{{\mathrm{c}}}$ incident to ${\mathrm{v}}$ is such that $\tilde{{\mathrm{c}}} \leq \xi_r \leq \xi_{r+1}$. 3. The corner ${\mathrm{c}}$ is the first corner with label $l({\mathrm{v}}')-1$ after ${\mathrm{c}}'$. Since ${\mathrm{v}}'$ belongs to $B_{\Phi(\theta)}(r)$, the bound ensures that $$\begin{gathered} d_{\Phi(\theta)}({\mathrm{v}}_0,{\mathrm{v}}') \geq {\left|l({\mathrm{v}}_0) - l({\mathrm{v}}')\right|} = l({\mathrm{v}}')\end{gathered}$$ (where ${\mathrm{v}}_0$ denotes the root of $\theta$), so $l({\mathrm{v}}')-1 \geq -r-1$. Moreover, we have ${\mathrm{c}}' \leq \xi_r$, and since $\theta$ belongs to $\mathbb{A}_L(r+1,s(r),s(r+1),h(r+1))$, there exists a corner with label $-r-1$ between $\xi_r$ and $\xi_{r+1}$. As a consequence, we have ${\mathrm{c}} \leq \xi_{r+1}$, and therefore every corner $\tilde{{\mathrm{c}}}$ incident to ${\mathrm{v}}$ is such that $\tilde{{\mathrm{c}}} \leq \xi_{r+1}$. Thus, we get the inclusion $$\begin{gathered} V{\left(}B_{\Phi(\theta)}(r+1){\right)}\subset V{\left(}R(\theta) \cup \bigcup_{i=0}^{s(r+1)} L_i(\theta){\right)}.\end{gathered}$$ Finally, for every vertex ${\mathrm{v}} \in R(\theta) \setminus {RB}_{\theta}(s(r+1))$, since $\theta$ belongs to $\mathbb{A}_{R+}(r+1,s(r+1))$, we have $l({\mathrm{v}}) > r+1$, so ${\mathrm{v}}$ is at distance at least $r+2$ of the root in $\Phi(\theta)$. 
This yields $$\begin{gathered} V{\left(}B_{\Phi(\theta)}(r+1){\right)}\subset V{\left(}{RB}_{\theta}(s(r+1)) \cup \bigcup_{i=0}^{s(r+1)} L_i(\theta){\right)}.\end{gathered}$$ We now consider the case where $\theta \in {\overleftarrow{{\mathbb{S}}}}$. Similarly, it is enough to show by induction that for all $r \geq 0$, if there exists sequences $(s(r'))_{0 \leq r' \leq r}$ and $(h(r'))_{1 \leq r' \leq r}$ verifying the hypotheses, then we have $$\begin{gathered} V{\left(}B_{\Phi(\theta)}(r) \setminus \Lambda{\right)}\subset V{\left(}{LB}_{\theta}(s(r)) \cup \bigcup_{i=0}^{s(r)} R_i(\theta){\right)}.\end{gathered}$$ (Indeed, equation shows that $V(B_{\Phi(\theta)}(r) \cap \Lambda) \subset \{\lambda_i: {\left|i\right|}\leq r \}$.) Assume that the result holds for a given $r \geq 0$. For all $r' \leq r+1$, let $\xi'_{r'}$ denote the smallest corner incident to the vertex ${\mathfrak{s}}_{s(r')}$. For every corner ${\mathrm{c}}$ of $\theta$ which is incident to a vertex of $B_{\Phi(\theta)}(r)$, we have ${\mathrm{c}} \geq \xi_r$. We fix ${\mathrm{v}} \in V(\theta)$, and study the same three cases as above. Respectively, we obtain that: 1. Every corner $\tilde{{\mathrm{c}}}$ incident to ${\mathrm{v}}$ is such that $\tilde{{\mathrm{c}}} \geq \xi'_r \geq \xi'_{r+1}$. 2. The corner ${\mathrm{c}}'$ is the first corner with label $l({\mathrm{v}})-1$ after ${\mathrm{c}}$ (or a point of $\Lambda$, if such a corner does not exist), and equation gives that $l({\mathrm{v}})-1=l({\mathrm{v}}') \geq -r$. Since $\theta$ belongs to $\mathbb{A}_R(r+1,s(r),s(r+1),h(r+1))$, there exists a corner with label $-r-1$ which is (strictly) between $\xi'_{r+1}$ and $\xi'_r$. So, if we had ${\mathrm{c}} < \xi'_{r+1}$, this would imply ${\mathrm{c}}' < \xi'_r$, which is impossible since ${\mathrm{v}}'$ is in $B_{\Phi(\theta)}(r)$. Thus, we have ${\mathrm{c}} \geq \xi'_{r+1}$, and every corner $\tilde{{\mathrm{c}}}$ incident to ${\mathrm{v}}$ is such that $\tilde{{\mathrm{c}}} \geq \xi'_{r+1}$. 3. Note that since ${\mathrm{v}}$ is a vertex of $\theta$, we cannot have ${\mathrm{v}}' \in \Lambda$. Thus, we have ${\mathrm{c}} \geq {\mathrm{c}}' \geq \xi'_r$, so every corner $\tilde{{\mathrm{c}}}$ incident to ${\mathrm{v}}$ is such that $\tilde{{\mathrm{c}}} \geq \xi'_r \geq \xi'_{r+1}$. This yields the inclusion $$\begin{gathered} V{\left(}B_{\Phi(\theta)}(r+1)\setminus \Lambda{\right)}\subset V{\left(}L(\theta) \cup \bigcup_{i=0}^{s(r+1)} R_i(\theta){\right)},\end{gathered}$$ and the same argument as above concludes the proof. Our goal is now to obtain similar conditions on the trees $\theta_{\infty}^{(k)}$ and $\theta_{\infty}^{(-k+1)}$, sufficient to get the ball inclusions and . Note that we cannot apply the above result directly, since $\theta_{\infty}^{(k)}$ and $\theta_{\infty}^{(-k+1)}$ are elements of ${\mathbb{S}}^{\ast}(0)$ and ${\mathbb{S}}^{\ast}(1)$ instead of ${\overrightarrow{{\mathbb{S}}}}$ and ${\overleftarrow{{\mathbb{S}}}}$. Moreover, for example in $\theta_{\infty}^{(k)}$, we are not interested in *all* the vertices which are on the right of the spine, but only in those which are on the right of the segment ${[ \! [ {\mathrm{e}}_k(\theta_{\infty}) , {\mathrm{e}}_0(\theta_{\infty}) ] \! ]}$. Informally, the others are “cut-off” from the root when we split the quadrangulation $Q_{\infty}$ along the maximal geodesic, so they do not belong to the neighbourhood of ${\mathrm{e}}_k(\theta_{\infty})$ in ${\overrightarrow{Q}}_{\infty}^{(k)}$. 
Therefore, for all $k \in {\mathbb{N}}$, we further decompose the trees $\theta_{\infty}^{(k)}$ and $\theta_{\infty}^{(-k+1)}$. Recall the notation introduced in Section \[S: Bi-marked theta(n)\]. We let $$\begin{gathered} R_{\infty}^{(k)} = \bigcup_{i=1}^{a_{\infty}^{(k)}+c_{\infty}^{(k)}} \tau_{\infty,i}^{(k)} \qquad \mbox{and} \qquad R_{\infty}^{(k)} (s) = R_{\infty}^{(k)} \cap B_{\theta_{\infty}^{(k)}} (s) \quad \forall s \geq 0,\end{gathered}$$ and similarly, $$\begin{gathered} L_{\infty}^{(-k+1)} = \bigcup_{i=1}^{b_{\infty}^{(k)}+c_{\infty}^{(k)}} (\tau'_{\infty,i})^{(k)} \qquad \mbox{and} \qquad L_{\infty}^{(-k+1)} (s) = L_{\infty}^{(-k+1)} \cap L_{\theta_{\infty}^{(-k+1)}} (s) \quad \forall s \geq 0.\end{gathered}$$ Note that we have, for example, $R_{\infty}^{(k)} \subset R(\theta_{\infty}^{(k)})$ and $R_{\infty}^{(k)} (s) \subset {RB}_{\theta_{\infty}^{(k)}}(s)$. We consider the following events: - $\mathcal{A}_{R+}^{(k)} (r,s)$: “every vertex ${\mathrm{v}} \in R_{\infty}^{(k)} \setminus (R_{\infty}^{(k)} (s))$ has label greater than $r$ in $\theta_{\infty}^{(k)}$”, - $\mathcal{A}_{L+}^{(-k+1)} (r,s)$: “every vertex ${\mathrm{v}} \in L_{\infty}^{(-k+1)} \setminus (L_{\infty}^{(-k+1)} (s))$ has label greater than $r$ in $\theta_{\infty}^{(-k+1)}$”. For $k=\infty$, we complement this notation by setting $$\begin{gathered} \mathcal{A}_{R+}^{(\infty)} (r,s) = \{ {\overrightarrow{\theta_{\infty}}} \in \mathbb{A}_{R+} (r,s) \} \quad \mbox{and} \quad \mathcal{A}_{L+}^{(-\infty)} (r,s) = \{ {\overleftarrow{\theta_{\infty}}} \in \mathbb{A}_{L+} (r,s) \}.\end{gathered}$$ We can now adapt Lemma \[T: LR conditions for ball inclusions\] to $\theta_{\infty}^{(k)}$ in the following way: \[T: LR conditions for k&lt;infty\] Let $r \in {\mathbb{N}}$, and consider two sequences of positive integers $(s(r'))_{0 \leq r' \leq r}$ and $(h(r'))_{1 \leq r' \leq r}$. For all $k \in {\mathbb{N}}\cup \{\infty\}$, we have that: 1. Conditionally on ${\mathfrak{s}}_{s(r)+1}(\theta_{\infty}^{(k)}) \prec {\mathrm{e}}_0(\theta_{\infty})$ in $\theta_{\infty}^{(k)}$ and on the event $$\begin{gathered} \label{E: Cond for ball inclusion, k positive} \bigcap_{r'=1}^r {\left(}\theta_{\infty}^{(k)} \in \mathbb{A}_L (r',s(r'-1),s(r'),h(r')){\right)}\cap \mathcal{A}_{R+}^{(k)} (r',s(r')),\end{gathered}$$ we have $$\begin{gathered} V{\left(}B_{{\overrightarrow{Q}}_{\infty}^{(k)}}(r){\right)}\subset V{\left(}B_{\theta_{\infty}^{(k)}}(h(r)){\right)}\end{gathered}$$ almost surely. 2. Conditionally on ${\mathfrak{s}}_{s(r)+1}(\theta_{\infty}^{(-k+1)}) \prec {\mathrm{e}}_0(\theta_{\infty})$ in $\theta_{\infty}^{(-k+1)}$ and on the event $$\begin{gathered} \label{E: Cond for ball inclusion, k negative} \bigcap_{r'=1}^r {\left(}\theta_{\infty}^{(-k+1)} \in \mathbb{A}_R (r',s(r'-1),s(r'),h(r')){\right)}\cap \mathcal{A}_{L+}^{(-k+1)} (r',s(r')),\end{gathered}$$ we have $$\begin{gathered} V{\left(}B_{{\overleftarrow{Q}}_{\infty}^{(k)}}(r){\right)}\subset V{\left(}B_{\theta_{\infty}^{(-k+1)}}(h(r)){\right)}\cup \{\lambda_i: {\left|i\right|}\leq r \}\end{gathered}$$ almost surely. Figure \[F: Conditions finite k\] illustrates the “new” conditions which appear, compared to the conditions of Lemma \[T: LR conditions for ball inclusions\] (both are shown for the first case). Note that the condition on the left-hand side of $\theta_{\infty}^{(k)}$ is exactly the same as in Lemma \[T: LR conditions for ball inclusions\], already illustrated in Figure \[F: LRConditions\]. 
![Illustration of the event $\mathcal{A}_{R+}(s(r))$ (on the left), and of the additional condition ${\mathfrak{s}}_{s(r)+1}(\theta_{\infty}^{(k)}) \prec {\mathrm{e}}_0(\theta_{\infty})$ in $\theta_{\infty}^{(k)}$ (on the right). The second figure emphasises the fact that under the condition ${\mathfrak{s}}_{s(r)+1}(\theta_{\infty}^{(k)}) \prec {\mathrm{e}}_0(\theta_{\infty})$, the spine $\mathcal{S}_{\infty}$ does not intersect the set $R_{\infty}^{(k)} (s(r)) \cup \bigcup_{i=0}^{s(r)} L_i(\theta_{\infty}^{(k)})$ (and in particular, it does not contain $e_k(\theta_{\infty})$). This will be used in the proof of Lemma \[T: LR conditions for k&lt;infty\].[]{data-label="F: Conditions finite k"}](Conditions_finite_k.pdf) The case where $k=\infty$ is a direct application of Lemma \[T: LR conditions for ball inclusions\]. From now on, we fix $k \in {\mathbb{N}}$. Let $\mathcal{S}_{\infty} = \{{\mathfrak{s}}_i(\theta_{\infty}): i\geq 0 \}$ be the spine of $\theta_{\infty}$, and $\Gamma'_{\infty} = \{{\mathrm{e}}'_{k'}: k' \geq 1 \}$ be the “copy” of the infinite geodesic ray we introduced in the definition of the split quadrangulation $\operatorname{Sp}(Q_{\infty})$ (see for example Figure \[F: Split UIPQ\]). The construction of $\operatorname{Sp}(Q_{\infty})$ ensures that there are no edges between the vertices of $(\Gamma'_{\infty} \cup R(\theta_{\infty})) \setminus \mathcal{S}_{\infty}$ and the vertices of $L(\theta_{\infty}) \setminus \mathcal{S}_{\infty}$. As a consequence, any geodesic from a point of $(\Gamma'_{\infty} \cup R(\theta_{\infty})) \setminus \mathcal{S}_{\infty}$ to a point of $L(\theta_{\infty}) \setminus \mathcal{S}_{\infty}$ contains a vertex of $\mathcal{S}_{\infty}$. Note that we have the following equalities: $$\begin{gathered} R(\theta_{\infty}) = R(\theta_{\infty}^{(k)}) \setminus R_{\infty}^{(k)} = R(\theta_{\infty}^{(-k+1)}) \cup L_{\infty}^{(-k+1)} \\ L(\theta_{\infty}) = L(\theta_{\infty}^{(k)}) \cup R_{\infty}^{(k)} = L(\theta_{\infty}^{(-k+1)}) \setminus L_{\infty}^{(-k+1)}.\end{gathered}$$ In the first case, the same induction as in the proof Lemma \[T: LR conditions for ball inclusions\] shows that conditionally on , we have $$\begin{gathered} \label{E: Partial ball inclusion, finite k} V{\left(}B_{{\overrightarrow{Q}}_{\infty}^{(k)}}(r){\right)}\cap V{\left(}L(\theta_{\infty}){\right)}\subset V{\left(}R_{\infty}^{(k)} (s(r)) \cup \bigcup_{i=0}^{s(r)} L_i(\theta_{\infty}^{(k)}){\right)}.\end{gathered}$$ Indeed, the first step of the induction shows that there are no vertices belonging to the ball $B_{{\overrightarrow{Q}}_{\infty}^{(k)}}(r)$ after ${\mathfrak{s}}_{s(r)}(\theta_{\infty}^{(k)})$ in the clockwise order, or equivalently $$\begin{gathered} V{\left(}B_{{\overrightarrow{Q}}_{\infty}^{(k)}}(r){\right)}\cap V{\left(}L(\theta_{\infty}^{(k)}){\right)}\subset V{\left(}\bigcup_{i=0}^{s(r)} L_i(\theta_{\infty}^{(k)}){\right)},\end{gathered}$$ and since the vertices in $R_{\infty}^{(k)} \setminus R_{\infty}^{(k)}(s(r))$ all have labels greater than $r$, we also have $$\begin{gathered} V{\left(}B_{{\overrightarrow{Q}}_{\infty}^{(k)}}(r){\right)}\cap V{\left(}R_{\infty}^{(k)}{\right)}\subset V{\left(}R_{\infty}^{(k)} (s(r)){\right)}.\end{gathered}$$ Noting that $L(\theta_{\infty}^{(k)}) \cup R_{\infty}^{(k)} = L(\theta_{\infty})$ yields inclusion . 
To conclude the proof of the first point, we only have to show that the vertices of $R(\theta_{\infty}) \setminus \mathcal{S}_{\infty}$ are at distance at least $r+1$ from ${\mathrm{e}}_k(\theta_{\infty})$ in ${\overrightarrow{Q}}_{\infty}^{(k)}$. Let ${\mathrm{v}} \in V(R(\theta_{\infty}) \setminus \mathcal{S}_{\infty})$, and let $\gamma$ be a geodesic path from ${\mathrm{v}}$ to ${\mathrm{e}}_k(\theta_{\infty})$ in ${\overrightarrow{Q}}_{\infty}^{(k)}$. The condition ${\mathfrak{s}}_{s(r)+1}(\theta_{\infty}^{(k)}) \prec {\mathrm{e}}_0(\theta_{\infty})$ now has two consequences, as noted in the caption of Figure \[F: Conditions finite k\]: - First, ${\mathrm{e}}_k(\theta_{\infty})$ belongs to $L(\theta_{\infty}) \setminus \mathcal{S}_{\infty}$. Thus the geodesic $\gamma$ goes from a point of $R(\theta_{\infty}) \setminus \mathcal{S}_{\infty}$ to a point of $L(R_{\infty}) \setminus \mathcal{S}_{\infty}$, so there exists a vertex ${\mathrm{v}}'$ of $\gamma$ which belongs to the spine $\mathcal{S}_{\infty}$ (see the remark we made at the beginning of the proof). - Second, the set $$\begin{gathered} R_{\infty}^{(k)} (s(r)) \cup \bigcup_{i=0}^{s(r)} L_i(\theta_{\infty}^{(k)})\end{gathered}$$ does not intersect $\mathcal{S}_{\infty}$, so inclusion implies that $$\begin{gathered} \label{E: Spine and ball do not intersect} V{\left(}B_{{\overrightarrow{Q}}_{\infty}^{(k)}}(r){\right)}\cap V{\left(}\mathcal{S}_{\infty}{\right)}= \emptyset.\end{gathered}$$ Putting these two facts together, we get that $$\begin{gathered} d_{{\overrightarrow{Q}}_{\infty}^{(k)}} ({\mathrm{v}},{\mathrm{e}}_k(\theta_{\infty})) \geq d_{{\overrightarrow{Q}}_{\infty}^{(k)}} ({\mathrm{v}}',{\mathrm{e}}_k(\theta_{\infty})) \geq r+1.\end{gathered}$$ Similarly, in the second case, conditionally on , we have $$\begin{gathered} V{\left(}B_{{\overleftarrow{Q}}_{\infty}^{(k)}}(r){\right)}\cap V{\left(}R(\theta_{\infty}){\right)}\subset V{\left(}L_{\infty}^{(k)} (s(r)) \cup \bigcup_{i=0}^{s(r)} R_i(\theta_{\infty}^{(k)}){\right)},\end{gathered}$$ and conditionally on ${\mathfrak{s}}_{s(r)+1}(\theta_{\infty}^{(-k+1)}) \prec {\mathrm{e}}_0(\theta_{\infty})$, the latter set does not intersect $\mathcal{S}_{\infty}$, so equation still holds. Thus we only have to show that the vertices of $L(\theta_{\infty}) \setminus \mathcal{S}_{\infty}$ are at distance at least $r+1$ from ${\mathrm{e}}'_k$ in ${\overleftarrow{Q}}_{\infty}^{(k)}$. As above, for every such vertex ${\mathrm{v}}$, any geodesic path from ${\mathrm{v}}$ to ${\mathrm{e}}'_k$ in ${\overleftarrow{Q}}_{\infty}^{(k)}$ intersects $\mathcal{S}_{\infty}$, hence $$\begin{gathered} d_{{\overleftarrow{Q}}_{\infty}^{(k)}} ({\mathrm{v}},{\mathrm{e}}_k(\theta_{\infty})) \geq r+1.\end{gathered}$$ From now on, we fix $r \in {\mathbb{N}}$. The goal of the next sections is to show that the above conditions hold with arbitrarily high probability, for $k$ large enough. For condition , the main ingredients are the following lemmas: \[T: Left-hand condition\] Let $s \in {\mathbb{N}}$ and ${\varepsilon}> 0$. 
There exists $s_L = s_L(r,s,{\varepsilon})$ such that for all $s' \geq s_L$, there exists $h_L(s',{\varepsilon})$ such that for all $k$ large enough, possibly infinite, we have $$\begin{gathered} {\mathbb{P}\if\relaxsz\relax(\theta_{\infty}^{(k)} \notin \mathbb{A}_L (r,s,s',h_L(s',{\varepsilon})))\else\left(\theta_{\infty}^{(k)} \notin \mathbb{A}_L (r,s,s',h_L(s',{\varepsilon}))\right)\fi} \leq {\varepsilon}.\end{gathered}$$ \[T: Right-hand condition\] For all ${\varepsilon}> 0$, there exists $s_R = s_R(r,{\varepsilon})$ such that for all $k$ large enough, possibly infinite, we have $$\begin{gathered} {\mathbb{P}\if\relaxsz\relax({\overline{\mathcal{A}_{R+}^{(k)} (r,s_R)}})\else\left({\overline{\mathcal{A}_{R+}^{(k)} (r,s_R)}}\right)\fi} \leq {\varepsilon},\end{gathered}$$ where ${\overline{\mathcal{A}_{R+}^{(k)} (r,s_R)}}$ denotes the complement of the event $\mathcal{A}_{R+}^{(k)} (r,s_R)$. The proofs of these results are given in Sections \[S: Left-hand condition\] and \[S: Right-hand condition\], respectively. A first step consists in studying the spine labels of ${\overrightarrow{\theta_{\infty}}}$: this is what we do in Section \[S: Spine labels\]. In Section \[S: Conclusion\], we finally put all these ingredients together to complete the proof of Proposition \[T: Ball inclusions\]. Two properties of the spine labels {#S: Spine labels} ---------------------------------- In this section, we show two lemmas on the spine labels $S_i({\overrightarrow{\theta_{\infty}}})$. The first one gives an upper bound which holds almost surely, for all $i$ large enough. The second one gives a lower bound which holds with high probability. \[T: Upper bound on the spine labels\] There exists a constant $K$ such that almost surely, for all $i$ large enough, we have $$\begin{gathered} S_i({\overrightarrow{\theta_{\infty}}}) \leq K \sqrt{i \ln(i)}.\end{gathered}$$ Recall that the distribution of $(S_i({\overrightarrow{\theta_{\infty}}}))_{i \geq 0}$ is given in Theorem \[T: Joint cv of the rerooted trees\]. Let $K > 0$ and $i \geq 1$. Recall that $$\begin{gathered} {\mathbb{P}\if\relaxsz\relax(S_i({\overrightarrow{\theta_{\infty}}}) > K \sqrt{i \ln(i)})\else\left(S_i({\overrightarrow{\theta_{\infty}}}) > K \sqrt{i \ln(i)}\right)\fi} = {\mathbb{E}\if\relaxsz\relax[M_i {\mathbf{1}_{\lbrace \hat{X}_i > K \sqrt{i\ln(i)} \rbrace}}]\else\left[M_i {\mathbf{1}_{\lbrace \hat{X}_i > K \sqrt{i\ln(i)} \rbrace}}\right]\fi},\end{gathered}$$ where $\hat{X}$ is a random walk with uniform steps in $\{-1,0,1\}$ and $M$ is the martingale defined by $$\begin{gathered} M_i = \frac{f(\hat{X}_i)}{f(\hat{X}_0)} \prod_{j=0}^{i-1} w(\hat{X}_j).\end{gathered}$$ Note that $M_i \leq f(\hat{X}_i)$ almost surely. Thus, for all $\lambda > 0$, we have $$\begin{aligned} \label{E: Bound on P(S_i > K sqrt(i ln(i)))} {\mathbb{P}\if\relaxsz\relax(S_i({\overrightarrow{\theta_{\infty}}}) > K \sqrt{i \ln(i)})\else\left(S_i({\overrightarrow{\theta_{\infty}}}) > K \sqrt{i \ln(i)}\right)\fi} & \leq {\mathbb{E}\if\relaxsz\relax[M_i e^{-\lambda K \sqrt{i\ln(i)}} e^{\lambda \hat{X}_i}]\else\left[M_i e^{-\lambda K \sqrt{i\ln(i)}} e^{\lambda \hat{X}_i}\right]\fi} \nonumber \\ & \leq C e^{-\lambda K \sqrt{i\ln(i)}} {\mathbb{E}\if\relaxsz\relax[\hat{X}_i^4 e^{\lambda \hat{X}_i}]\else\left[\hat{X}_i^4 e^{\lambda \hat{X}_i}\right]\fi},\end{aligned}$$ where $C$ denotes a constant such that $f(x) = x(x+3)(2x+3) \leq C x^4$ for every integer $x \geq 0$.
Now ${\mathbb{E}\if\relaxsz\relax[\hat{X}_i^4 e^{\lambda \hat{X}_i}]\else\left[\hat{X}_i^4 e^{\lambda \hat{X}_i}\right]\fi}$ is the fourth derivative with respect to $\lambda$ of ${\mathbb{E}\if\relaxsz\relax[e^{\lambda \hat{X}_i}]\else\left[e^{\lambda \hat{X}_i}\right]\fi}$, and we have $$\begin{gathered} {\mathbb{E}\if\relaxsz\relax[e^{\lambda \hat{X}_i}]\else\left[e^{\lambda \hat{X}_i}\right]\fi} = e^{i \psi(\lambda)},\end{gathered}$$ where $\psi(\lambda)$ denotes the logarithm of the Laplace transform of a uniform random variable in $\{-1,0,1\}$. Now we have $$\begin{gathered} \psi(\lambda) = \ln{\left(}\frac{1+2\cosh(\lambda)}{3} {\right)}\leq c \lambda^2\end{gathered}$$ for a suitable constant $c>0$, and the first four derivatives of $\psi$ are bounded. Therefore, there exists a positive constant $C'$ such that $$\begin{gathered} C {\mathbb{E}\if\relaxsz\relax[\hat{X}_i^4 e^{\lambda \hat{X}_i}]\else\left[\hat{X}_i^4 e^{\lambda \hat{X}_i}\right]\fi} \leq C' i^4 e^{i \psi(\lambda)} \leq C' i^4 e^{ci \lambda^2} \qquad \forall \lambda >0.\end{gathered}$$ Putting this together with , we get $$\begin{gathered} {\mathbb{P}\if\relaxsz\relax(S_i({\overrightarrow{\theta_{\infty}}}) > K \sqrt{i \ln(i)})\else\left(S_i({\overrightarrow{\theta_{\infty}}}) > K \sqrt{i \ln(i)}\right)\fi} \leq C' i^4 e^{ci\lambda^2-\lambda K \sqrt{i\ln(i)}} \qquad \forall \lambda > 0.\end{gathered}$$ Choosing the optimal value $\lambda = K \sqrt{\ln(i)}/ (2c\sqrt{i})$ gives $$\begin{gathered} {\mathbb{P}\if\relaxsz\relax(S_i({\overrightarrow{\theta_{\infty}}}) > K \sqrt{i \ln(i)})\else\left(S_i({\overrightarrow{\theta_{\infty}}}) > K \sqrt{i \ln(i)}\right)\fi} \leq C' i^4 e^{-(K^2/4c) \ln(i)} = C' i^{4-K^2/4c}.\end{gathered}$$ As a consequence, for all $K$ large enough (such that $4-K^2/4c <-1$), the sum of the probabilities ${\mathbb{P}\if\relaxsz\relax(S_i({\overrightarrow{\theta_{\infty}}}) > K \sqrt{i \ln(i)})\else\left(S_i({\overrightarrow{\theta_{\infty}}}) > K \sqrt{i \ln(i)}\right)\fi}$ is finite. Applying the Borel–Cantelli lemma concludes the proof. \[T: Lower bound on the spine labels\] For all $\eta > 0$, there exists $\delta > 0$ such that for all $s$ large enough, we have $$\begin{gathered} {\mathbb{P}\if\relaxsz\relax(\exists i \geq {\lfloor \eta s \rfloor}: S_i({\overrightarrow{\theta_{\infty}}}) < {\lfloor \delta \sqrt{s} \rfloor})\else\left(\exists i \geq {\lfloor \eta s \rfloor}: S_i({\overrightarrow{\theta_{\infty}}}) < {\lfloor \delta \sqrt{s} \rfloor}\right)\fi} \leq \eta.\end{gathered}$$ Since $(S_i({\overrightarrow{\theta_{\infty}}}))_{i \geq 0}$ has the same distribution as $(\tilde{X}_i)_{i \geq 0}$ with $\tilde{X}_0=1$, it is enough to show that $$\begin{gathered} {\mathbb{P}_{1}\if\relaxsz\relax(\exists i \geq {\lfloor \eta s \rfloor}: \tilde{X}_i < {\lfloor \delta \sqrt{s} \rfloor})\else\left(\exists i \geq {\lfloor \eta s \rfloor}: \tilde{X}_i < {\lfloor \delta \sqrt{s} \rfloor}\right)\fi} \leq \eta.\end{gathered}$$ Recall that, as stated in the Introduction, we have the convergence $$\begin{gathered} {\left(}\frac{1}{\sqrt{n}} \tilde{X}_{{\lfloor nt \rfloor}}{\right)}_{t \geq 0} {\xrightarrow[n \rightarrow \infty]{(d)}} (Z_{2t/3})_{t \geq 0},\end{gathered}$$ where $Z$ denotes a seven-dimensional Bessel process.
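The diffusive behaviour just quoted can also be illustrated numerically. Under the same assumption on the explicit weight as in the earlier sketch, $w(x)=x(x+3)/((x+1)(x+2))$, the change of measure by $M$ gives $\tilde{X}$ the one-step kernel $p(x,x+d) = w(x) f(x+d)/(3 f(x))$, $d \in \{-1,0,1\}$ (the natural reading of the measure change described above), and for a seven-dimensional Bessel process one has ${\mathbb{E}}[Z_{2t/3}^2] = Z_0^2 + \tfrac{14}{3} t$, so ${\mathbb{E}}[\tilde{X}_n^2]/n$ should approach $14/3$. The following simulation sketch is only a sanity check and is not part of the argument.

```python
# Sketch: simulate the transformed walk X~ and compare E[X~_n^2]/n with the
# Bessel(7) prediction 14/3.  Assumption: w(x) = x(x+3)/((x+1)(x+2)) and the
# one-step kernel p(x, x+d) = w(x) f(x+d) / (3 f(x)), d in {-1, 0, 1}.
import random

def f(x):
    return x * (x + 3) * (2 * x + 3)

def w(x):
    return x * (x + 3) / ((x + 1) * (x + 2))

def step(x):
    # transition probabilities of the transformed chain; they sum to 1 by the
    # one-step identity w(x) * (f(x-1) + f(x) + f(x+1)) = 3 f(x)
    u, acc = random.random(), 0.0
    for d in (-1, 0, 1):
        acc += w(x) * f(x + d) / (3 * f(x))
        if u <= acc:
            return x + d
    return x + 1  # guard against floating-point rounding

def endpoint(n, start=1):
    x = start
    for _ in range(n):
        x = step(x)
    return x

n, trials = 2000, 1000
mean_sq = sum(endpoint(n) ** 2 for _ in range(trials)) / trials
print(mean_sq / n, 14 / 3)  # the two values should agree up to a few percent
```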
As a consequence, there exists constants $\delta_1 > 0$ and $s_1 \in {\mathbb{N}}$ such that, for all $s \geq s_1$, we have $$\begin{gathered} {\mathbb{P}_{1}\if\relaxsz\relax(\tilde{X}_{{\lfloor \eta s \rfloor}} \leq \sqrt{\eta s} \cdot \delta_1)\else\left(\tilde{X}_{{\lfloor \eta s \rfloor}} \leq \sqrt{\eta s} \cdot \delta_1\right)\fi} \leq \frac{\eta}{2}.\end{gathered}$$ Fix $s \geq s_1$. Using the Markov property at time ${\lfloor \eta s \rfloor}$, for any $\delta > 0$, we can now write $$\begin{aligned} {\mathbb{P}_{1}\if\relaxsz\relax(\exists i \geq {\lfloor \eta s \rfloor}: \tilde{X}_i < {\lfloor \delta \sqrt{s} \rfloor})\else\left(\exists i \geq {\lfloor \eta s \rfloor}: \tilde{X}_i < {\lfloor \delta \sqrt{s} \rfloor}\right)\fi} & \leq \frac{\eta}{2} + \sum_{x \geq \sqrt{\eta s} \cdot \delta_1} {\mathbb{P}_{1}\if\relax\relax(\tilde{X}_{{\lfloor \eta s \rfloor}} = x)\else\left(\tilde{X}_{{\lfloor \eta s \rfloor}} = x\right)\fi} {\mathbb{P}_{x}\if\relaxsz\relax(\exists i \geq 0: \tilde{X}_i < {\lfloor \delta \sqrt{s} \rfloor})\else\left(\exists i \geq 0: \tilde{X}_i < {\lfloor \delta \sqrt{s} \rfloor}\right)\fi} \nonumber \\ & \leq \frac{\eta}{2} + \sum_{x \geq \sqrt{\eta s} \cdot \delta_1} {\mathbb{P}_{1}\if\relax\relax(\tilde{X}_{{\lfloor \eta s \rfloor}} = x)\else\left(\tilde{X}_{{\lfloor \eta s \rfloor}} = x\right)\fi} {\mathbb{P}_{x}\if\relaxsz\relax(\exists i \geq 0: \tilde{X}_i < \frac{\delta}{\delta_1 \sqrt{\eta}} x)\else\left(\exists i \geq 0: \tilde{X}_i < \frac{\delta}{\delta_1 \sqrt{\eta}} x\right)\fi} \nonumber \\ & = \frac{\eta}{2} + \sum_{x \geq \sqrt{\eta s} \cdot \delta_1} {\mathbb{P}_{1}\if\relax\relax(\tilde{X}_{{\lfloor \eta s \rfloor}} = x)\else\left(\tilde{X}_{{\lfloor \eta s \rfloor}} = x\right)\fi} {\mathbb{P}_{x}\if\relaxsz\relax(\tilde{T}_{{\lfloor \delta x/\delta_1 \sqrt{\eta} \rfloor}} < \infty)\else\left(\tilde{T}_{{\lfloor \delta x/\delta_1 \sqrt{\eta} \rfloor}} < \infty\right)\fi}, \label{E: Partial lower bound on the spine labels}\end{aligned}$$ where $\tilde{T}_{x'}$ denotes the first hitting time of $x'$ for $\tilde{X}$. It was shown in the proof of Lemma \[T: Values of SumFW\] that for all $x \geq x'$, we have $$\begin{gathered} {\mathbb{P}_{x}\if\relax\relax(\tilde{T}_{x'} < \infty)\else\left(\tilde{T}_{x'} < \infty\right)\fi} = \frac{h(x')}{h(x)},\end{gathered}$$ for a given non-constant polynomial $h$. Thus there exists constants $\delta_2$ and $x_2$ such that for all $x \geq x_2$, we have $$\begin{gathered} {\mathbb{P}_{x}\if\relax\relax(\tilde{T}_{{\lfloor \delta_2 x \rfloor}} < \infty)\else\left(\tilde{T}_{{\lfloor \delta_2 x \rfloor}} < \infty\right)\fi} \leq \frac{\eta}{2}.\end{gathered}$$ Putting this together with , for $\delta = \delta_2 \sqrt{\delta_1 \eta}$ and $s \geq s_1 \wedge (x_2^2/(\eta \delta_1^2))$, we get $$\begin{aligned} {\mathbb{P}_{1}\if\relaxsz\relax(\exists i \geq {\lfloor \eta s \rfloor}: \tilde{X}_i < {\lfloor \delta \sqrt{s} \rfloor})\else\left(\exists i \geq {\lfloor \eta s \rfloor}: \tilde{X}_i < {\lfloor \delta \sqrt{s} \rfloor}\right)\fi} & \leq \frac{\eta}{2} {\left(}1 + \sum_{x \geq \sqrt{\eta s} \cdot \delta_1} {\mathbb{P}_{1}\if\relax\relax(\tilde{X}_{{\lfloor \eta s \rfloor}} = x)\else\left(\tilde{X}_{{\lfloor \eta s \rfloor}} = x\right)\fi} {\right)}\leq \eta.\end{aligned}$$ Proof of the left-hand condition {#S: Left-hand condition} -------------------------------- In this section, we give the proof of Lemma \[T: Left-hand condition\]. 
This result mainly uses the upper bound on the spine labels of ${\overrightarrow{\theta_{\infty}}}$, and the explicit expressions of the distribution of $L_i({\overrightarrow{\theta_{\infty}}})$, for $i \geq 0$. Since for all $s,s',h$, ${\mathbb{S}}\setminus \mathbb{A}_L (r,s,s',h)$ is a closed set, we have $$\begin{gathered} \limsup_{k \rightarrow \infty} {\mathbb{P}\if\relaxsz\relax(\theta_{\infty}^{(k)} \notin \mathbb{A}_L (r,s,s',h))\else\left(\theta_{\infty}^{(k)} \notin \mathbb{A}_L (r,s,s',h)\right)\fi} \leq {\mathbb{P}\if\relaxsz\relax({\overrightarrow{\theta_{\infty}}} \notin \mathbb{A}_L (r,s,s',h))\else\left({\overrightarrow{\theta_{\infty}}} \notin \mathbb{A}_L (r,s,s',h)\right)\fi},\end{gathered}$$ so it is enough to show that the Lemma holds with ${\overrightarrow{\theta_{\infty}}}$ instead of $\theta_{\infty}^{(k)}$. For all $s,s',h \in {\mathbb{N}}$, we have $$\begin{gathered} {\mathbb{P}\if\relaxsz\relax({\overrightarrow{\theta_{\infty}}} \notin \mathbb{A}_L (r,s,s',h))\else\left({\overrightarrow{\theta_{\infty}}} \notin \mathbb{A}_L (r,s,s',h)\right)\fi} \leq p_{r,s,s'} + {\mathbb{P}\if\relaxsz\relax(\exists i \leq s': L_i(T) \mbox{ has height greater than } h-s')\else\left(\exists i \leq s': L_i(T) \mbox{ has height greater than } h-s'\right)\fi},\end{gathered}$$ where $$\begin{gathered} p_{r,s,s'} := {\mathbb{P}\if\relaxsz\relax(\forall i \in \{s+1,\ldots,s'\}, \min_{{\mathrm{x}} \in L_i ({\overrightarrow{\theta_{\infty}}})} {\overrightarrow{l_{\infty}}} ({\mathrm{x}}) > -r )\else\left(\forall i \in \{s+1,\ldots,s'\}, \min_{{\mathrm{x}} \in L_i ({\overrightarrow{\theta_{\infty}}})} {\overrightarrow{l_{\infty}}} ({\mathrm{x}}) > -r \right)\fi}.\end{gathered}$$ Since for all $s'$, there exists $h$ such that the maximum of the heights of the Galton–Watson trees $L_0({\overrightarrow{\theta_{\infty}}}), \ldots L_{s'}({\overrightarrow{\theta_{\infty}}})$ is less than $h-s'$ with probability greater than $1-{\varepsilon}/2$, it is enough to prove that the probabilities $p_{r,s,s'}$ converge to $0$ as $s' \rightarrow \infty$. We first rewrite $p_{r,s,s'}$ using the spine-labels $S_i({\overrightarrow{\theta_{\infty}}})$: $$\begin{aligned} p_{r,s,s'} & = {\mathbb{E}\if\relaxsz\relax[\prod_{i=s+1}^{s'} {\rho_{(S_i({\overrightarrow{\theta_{\infty}}}))}} \{ (T,l) \in {\mathbb{T}_{}}: l > -r \} ]\else\left[\prod_{i=s+1}^{s'} {\rho_{(S_i({\overrightarrow{\theta_{\infty}}}))}} \{ (T,l) \in {\mathbb{T}_{}}: l > -r \} \right]\fi} \\ & = {\mathbb{E}\if\relaxsz\relax[\prod_{i=s+1}^{s'} {\rho_{(r+S_i({\overrightarrow{\theta_{\infty}}}))}} {\left(}{\mathbb{T}_{}}^+{\right)}]\else\left[\prod_{i=s+1}^{s'} {\rho_{(r+S_i({\overrightarrow{\theta_{\infty}}}))}} {\left(}{\mathbb{T}_{}}^+{\right)}\right]\fi} = {\mathbb{E}\if\relaxsz\relax[\prod_{i=s+1}^{s'} w(r+S_i({\overrightarrow{\theta_{\infty}}}))]\else\left[\prod_{i=s+1}^{s'} w(r+S_i({\overrightarrow{\theta_{\infty}}}))\right]\fi}.\end{aligned}$$ The above product is almost surely decreasing as $s' \rightarrow \infty$. Therefore, we only have to show that $$\begin{gathered} \prod_{i=s+1}^{s'} w(r+S_i({\overrightarrow{\theta_{\infty}}})) {\xrightarrow[s' \rightarrow \infty]{}} 0,\end{gathered}$$ or equivalently that $$\begin{gathered} \label{E: Divergent series} \sum_{i=s+1}^{s'} -\ln{\left(}w(r+S_i({\overrightarrow{\theta_{\infty}}})){\right)}{\xrightarrow[s' \rightarrow \infty]{}} + \infty\end{gathered}$$ almost surely. 
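Two elementary facts are used to conclude: the estimate of $w$ invoked just below, which is immediate if one takes the explicit weight $w(x) = x(x+3)/((x+1)(x+2))$ (an assumption consistent with the relation $v(x)=w(x)w(x+1)=x(x+4)/(x+2)^2$ used earlier), and the divergence of the comparison series, which follows from the integral test: $$\begin{gathered} w(x) = 1 - \frac{2}{(x+1)(x+2)} = 1 - \frac{2}{x^2} + o{\left(}\frac{1}{x^2}{\right)}, \qquad \sum_{i \geq 2} \frac{1}{i \ln(i)} = +\infty.\end{gathered}$$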
Since $S_i(\overrightarrow{\theta_{\infty}}) \rightarrow + \infty$ almost surely, we can use the estimate $$\begin{gathered} w(x) = 1 -\frac{2}{x^2} + o\left(\frac{1}{x^2}\right).\end{gathered}$$ This yields $$\begin{gathered} -\ln\left(w(r+S_i(\overrightarrow{\theta_{\infty}}))\right)\sim_{i \rightarrow \infty} \frac{2}{\left(r+S_i(\overrightarrow{\theta_{\infty}})\right)^2}.\end{gathered}$$ Lemma \[T: Upper bound on the spine labels\] now ensures that the right-hand term is $\mbox{a.s.}$ larger than $2/(K^2 i \ln(i))$ for all $i$ large enough, hence the $\mbox{a.s.}$ divergence \[E: Divergent series\], since the series $\sum_i 1/(i \ln(i))$ diverges.

Proof of the right-hand condition {#S: Right-hand condition}
---------------------------------

This section is devoted to the proof of Lemma \[T: Right-hand condition\]. Note that the structure of the proof is close to that of Ménard [@Men]. More precisely, the lower bound we already proved in Lemma \[T: Lower bound on the spine labels\] corresponds to a result Ménard obtains by putting together Lemma 2 and Proposition 5 of [@Men], and Lemma \[T: Bound on P(theta not in A-tilde)\] corresponds to Lemma 5 of [@Men]. We begin by computing the probability $\mathbb{P}(R_{\infty}^{(k)}(s) = \theta^{\ast})$, and some conditional probabilities on this event, for all suitable trees $\theta^{\ast}$. More precisely, let $\mathbb{T}_{[s]}^R$ denote the set of the labeled trees $(T,l) \in \mathbb{T}(0)$ such that:

- The root of $T$ has exactly one offspring.
- All labels in $T$ are positive, except the root-label.
- The height of $T$ is $s$.
- There are no vertices on the left of the path from the root to $\mathrm{x}_s$, where $\mathrm{x}_s$ denotes the leftmost vertex having height $s$. In other words, if $\mathrm{x}_0, \ldots, \mathrm{x}_s$ are the vertices on the path from the root to $\mathrm{x}_s$, then for all $\mathrm{x} \in T^{\ast} \setminus \{\mathrm{x}_0, \ldots, \mathrm{x}_s\}$, we have $\mathrm{x} > \mathrm{x}_s$ (where $<$ denotes the depth-first order).

Fix $\theta^{\ast} \in \mathbb{T}_{[s]}^R$. We let $\mathrm{x}_s = \mathrm{y}_1 < \ldots < \mathrm{y}_{n^{\ast}}$ denote the vertices of $\theta^{\ast}$ which have height $s$. For all $i \in \{0,\ldots,s-1\}$, we let $\tau_i^{\ast}$ denote the subtree formed by the vertices $\mathrm{x} \in T^{\ast}$ such that $\mathrm{x}_i \preceq \mathrm{x}$ and $\mathrm{x}_{i+1} \npreceq \mathrm{x}$ (note that $\tau_0^{\ast} = \{\mathrm{x}_0\}$). Finally, for all suitable $i$, we let $x_i = l^{\ast} (\mathrm{x}_i)$ and $y_i = l^{\ast}(\mathrm{y}_i)$. We have the following results:

Let $k > s+r$.
With the above notation, we have $$\begin{gathered} \label{E: Distribution of RB(theta(infty,k)) (2)} \mathbb{P}\left(R_{\infty}^{(k)}(s) = \theta^{\ast}\right) = W_s(y_1,k) \frac{2^{n^{\ast}-1}}{6^{s-1} 12^{|T^{\ast}|-s}}\prod_{j=2}^{n^{\ast}} w(y_j),\end{gathered}$$ where $$\begin{gathered} W_s(x,k) = \frac{f(x)}{f(1)} \left(1- \frac{C_x-s+1}{10(k+1)(k+2)}\right)\qquad \forall x \leq s < k.\end{gathered}$$ Moreover, this yields the conditional probabilities $$\begin{gathered} \label{E: P(no small label above y_1 on the right)} \mathbb{P}\left(\min_{\bigcup_{i \geq s} \tau_{\infty,i}^{(k)}} l_{\infty}^{(k)} > r \bigg| R_{\infty}^{(k)}(s) = \theta^{\ast} \right) = \frac{W_s(y_1-r,k-r)}{W_s(y_1,k)}\end{gathered}$$ and $$\begin{gathered} \label{E: P(no small label above y_j, j>2)} \mathbb{P}\left(\min_{\mathrm{y}_j \preceq \mathrm{v}} l_{\infty}^{(k)}(\mathrm{v}) > r \bigg| R_{\infty}^{(k)}(s) = \theta^{\ast} \right) = \frac{w(y_j-r)}{w(y_j)} \qquad \forall j \in \{2,\ldots,n^{\ast}\}.\end{gathered}$$ Note that it is easy to see that these equations also hold for $k=\infty$, with $RB_{\overrightarrow{\theta_{\infty}}}(s)$ instead of $R_{\infty}^{(k)}(s)$ and $W_s(x,\infty) = f(x)/f(1)$ for all $x \leq s$. Note that we have $x_s=y_1$; in the first two steps of the proof, it is more natural to use the notation $x_s$. The characterization of the distribution of $\theta_{\infty}^{(k)}$ given in Proposition \[T: Distribution of theta(infty,k)\] yields $$\begin{gathered} \label{E: Distribution of RB(theta(infty,k)) (1)} \mathbb{P}\left(R_{\infty}^{(k)}(s) = \theta^{\ast}\right) = \mathbb{P}\left((X_{\infty,1}^{(k)}, \ldots, X_{\infty,s}^{(k)}) = (x_1,\ldots,x_s) \right) \prod_{i=1}^{s-1} \rho_{(x_i)}^+ \left(\theta: B_{\theta}(s-i) = \tau_i^{\ast} \right).\end{gathered}$$ Furthermore, the computations of Section \[S: First dist eqns\] show that $$\begin{aligned} \mathbb{P}\left(X_{\infty,i}^{(k)} = x_i,\ \forall i \leq s \right) & = \sum_{m \geq s} \frac{m+1}{3^s} \prod_{i=1}^{s-1} w(x_i)\ \mathbb{E}_{x_s}\left[\prod_{i=0}^{m-s} w(\hat{X}_i) \mathbf{1}_{\{ \hat{X}_{m-s}=k \}}\right] \\ & = 3^{-s} \left(\prod_{i=1}^{s-1} w(x_i)\right)\frac{w(k)f(x_s)}{f(k)} \sum_{m \geq 0} (m+s+1) \mathbb{P}_{x_s}\left(\tilde{X}_m=k\right) \\ & = 3^{-s} \left(\prod_{i=1}^{s-1} w(x_i)\right)\frac{w(k)f(x_s)}{f(k)} \left(H_{x_s}^{\ast}(k)+ (s-1)H_{x_s}(k) \right).\end{aligned}$$ Using the expressions obtained in Lemma \[T: Values of SumFW\] and Lemma \[T: Values of SumFWStar\], and the hypothesis $s < k$, this gives $$\begin{gathered} \mathbb{P}\left(X_{\infty,i}^{(k)} = x_i,\ \forall i \leq s \right) = 3^{-s+1} \left(\prod_{i=1}^{s-1} w(x_i)\right)\frac{f(x_s)}{f(1)} \left(1-\frac{C_{x_s}-s+1}{10(k+1)(k+2)} \right).\end{gathered}$$ Besides, for all $i \leq s-1$, we have $$\begin{gathered} \rho_{(x_i)}^+ \left(\theta: B_{\theta}(s-i) = \tau_i^{\ast} \right)= \frac{1}{2 w(x_i) 12^{|\tau_i^{\ast}|}} \prod_{j: \mathrm{v}_j \in \tau_i^{\ast}} 2 w(y_j).\end{gathered}$$ Equation \[E: Distribution of RB(theta(infty,k)) (1)\] can now be rewritten as $$\begin{aligned} \mathbb{P}\left(R_{\infty}^{(k)}(s) = \theta^{\ast}\right) & = \frac{f(x_s)}{f(1)} \left(1-\frac{C_{x_s}-s+1}{10(k+1)(k+2)} \right)\prod_{i=1}^{s-1} \left(\frac{w(x_i)}{6 w(x_i) 12^{|\tau_i^{\ast}|}} \prod_{j: \mathrm{v}_j \in \tau_i^{\ast}} 2 w(y_j) \right)\nonumber \\ & = \frac{f(x_s)}{f(1)} \left(1-\frac{C_{x_s}-s+1}{10(k+1)(k+2)} \right)\frac{2^{n^{\ast}-1}}{6^{s-1} 12^{|T^{\ast}|-s}}\prod_{j=2}^{n^{\ast}} w(y_j),\end{aligned}$$ hence the first result of the lemma. To get the conditional probability \[E: P(no small label above y_1 on the right)\], we have to compute $$\begin{gathered} \mathbb{P}\left(\left(R_{\infty}^{(k)}(s) = \theta^{\ast}\right)\cap \left(\min_{\bigcup_{i \geq s} \tau_{\infty,i}^{(k)}} l_{\infty}^{(k)} > r \right)\right).\end{gathered}$$ Using the same decomposition as above, this probability can be written as $$\begin{aligned} & \sum_{m \geq s} \frac{m+1}{3^m} \left(\prod_{i=1}^{s-1} w(x_i)\right)\sum_{\underline{x}' \in \mathcal{M}^+_{m-s,x_s \rightarrow k}} \left(\prod_{i=0}^{m-s} \rho_{(x'_i)}^+ \left((T,l): \min_T l > r\right)\right)\\ & \qquad \qquad \qquad \qquad \qquad \qquad \quad \prod_{i=1}^{s-1} \rho_{(x_i)}^+ \left(\theta: B_{\theta}(s-i) = \tau_i^{\ast} \right),\end{aligned}$$ or equivalently, $$\begin{gathered} \left(\sum_{m \geq 0} \frac{m+s+1}{3^m} \sum_{\underline{x}' \in \mathcal{M}^+_{m,x_s-r \rightarrow k-r}} \prod_{i=0}^m w(x'_i) \right)\frac{2^{n^{\ast}-1}}{6^{s-1} 12^{|T^{\ast}|-s}}\prod_{j=2}^{n^{\ast}} w(y_j).\end{gathered}$$ Thus, we get $$\begin{aligned} & \mathbb{P}\left(\left(R_{\infty}^{(k)}(s) = \theta^{\ast}\right)\cap \left(\min_{\bigcup_{i \geq s} \tau_{\infty,i}^{(k)}} l_{\infty}^{(k)} > r \right)\right) \\ & \qquad \qquad = \frac{w(k-r)f(x_s-r)}{3f(k-r)} \left(H_{x_s-r}^{\ast}(k-r) + (s-1)H_{x_s-r}(k-r) \right)\frac{2^{n^{\ast}-1}}{6^{s-1} 12^{|T^{\ast}|-s}}\prod_{j=2}^{n^{\ast}} w(y_j) \\ & \qquad \qquad = \frac{f(x_s-r)}{f(1)} \left(1- \frac{C_{x_s-r}-s+1}{10(k-r+1)(k-r+2)} \right)\frac{2^{n^{\ast}-1}}{6^{s-1} 12^{|T^{\ast}|-s}}\prod_{j=2}^{n^{\ast}} w(y_j).\end{aligned}$$ This completes the proof of equation \[E: P(no small label above y_1 on the right)\].
Finally, for all $j^{\ast} \in \{2,\ldots,n^{\ast}\}$, we have $$\begin{aligned} & \mathbb{P}\left(\left(R_{\infty}^{(k)}(s) = \theta^{\ast}\right)\cap \left(\min_{\mathrm{y}_{j^{\ast}} \preceq \mathrm{v}} l_{\infty}^{(k)}(\mathrm{v}) > r \right)\right) \\ & \qquad \qquad = 3^{-s+1} \left(\prod_{i=1}^{s-1} w(x_i)\right)W_s(y_1,k) \prod_{i=1}^{s-1} \frac{1}{2 w(x_i) 12^{|\tau_i^{\ast}|}} \left(\prod_{\substack{j: \mathrm{v}_j \in \tau_i^{\ast} \\ j \neq j^{\ast}}} 2 w(y_j)\right)\times 2 w(y_{j^{\ast}}-r) \\ & \qquad \qquad = W_s(y_1,k) \frac{2^{n^{\ast}-1}}{6^{s-1} 12^{|T^{\ast}|-s}} w(y_{j^{\ast}}-r) \prod_{\substack{2 \leq j \leq n^{\ast} \\ j \neq j^{\ast}}} w(y_j),\end{aligned}$$ hence equation \[E: P(no small label above y_j, j>2)\]. The second step consists in studying the vertices of $R_{\infty}^{(k)}$ which are exactly at height $s$: we give an upper bound on the expectation of the number of such vertices, and show that with high probability, for $k$ large enough, these vertices have labels greater than $s^{\alpha}$, for $\alpha \in (0,1/2)$. Precise statements are given in Lemmas \[T: Bound on E\[nb of vertices at height s\]\] and \[T: Bound on P(theta not in A-tilde)\] below. Note that for all $k$, we have $$\begin{gathered} \partial R_{\infty}^{(k)}(s):= \lbrace \mathrm{v} \in R_{\infty}^{(k)}: \mathrm{v} \mbox{ has height } s \rbrace \subset \partial {RB}_{\theta_{\infty}^{(k)}} (s).\end{gathered}$$ \[T: Bound on E\[nb of vertices at height s\]\] For all $s \geq 1$ and $k \in {\mathbb{N}}$, we have $$\begin{gathered} \mathbb{E}\left[\# \partial R_{\infty}^{(k)}(s)\right] \leq s.\end{gathered}$$ For all $s \geq 1$ and $k \in {\mathbb{N}}$, we have $$\begin{aligned} \mathbb{E}\left[\# \partial R_{\infty}^{(k)}(s)\right] & = \sum_{i=1}^s \mathbb{E}\left[\# \partial B_{\tau_{\infty,i}^{(k)}}(s-i)\right] \\ & = \sum_{i=1}^s \mathbb{E}\left[ \frac{1}{w(X_i)} \mathbb{E}\left[\# \partial B_{\tau}(s-i)\right]\right],\end{aligned}$$ where $\tau$ denotes a Galton–Watson tree with offspring distribution $\operatorname{Geom}(1/2)$. For all $h \geq 0$, we have $\mathbb{E}\left[\# \partial B_{\tau}(h)\right]=1$. As a consequence, the above equality gives $$\begin{gathered} \mathbb{E}\left[\# \partial R_{\infty}^{(k)}(s)\right] = \sum_{i=1}^s \mathbb{E}\left[ \frac{1}{w(X_i)}\right] \leq \sum_{i=1}^s 1 = s.\end{gathered}$$ We now consider the set $$\begin{gathered} \tilde{\mathbb{A}}_{R+} (r,s,\alpha) = \{ (T,l) \in {\mathbb{S}}: \forall \mathrm{v} \in \partial {RB}_T(s),\ l(\mathrm{v}) > \lfloor s^{\alpha} \rfloor \}.\end{gathered}$$ \[T: Bound on P(theta not in A-tilde)\] Fix $\alpha < 1/2$.
For all $s$ large enough, there exists $k_1(s)$ such that for all $k \geq k_1(s)$, possibly infinite, we have $$\begin{gathered} \mathbb{P}\left(\theta_{\infty}^{(k)} \notin \tilde{\mathbb{A}}_{R+} (r,s,\alpha)\right) \leq \varepsilon.\end{gathered}$$ First note that since ${\mathbb{S}}\setminus \tilde{\mathbb{A}}_{R+} (r,s,\alpha)$ is a closed set, we have $$\begin{gathered} \limsup_{k \rightarrow \infty} \mathbb{P}\left(\theta_{\infty}^{(k)} \notin \tilde{\mathbb{A}}_{R+} (r,s,\alpha)\right) \leq \mathbb{P}\left(\overrightarrow{\theta_{\infty}} \notin \tilde{\mathbb{A}}_{R+} (r,s,\alpha)\right),\end{gathered}$$ so it is enough to show that the property holds for $k = \infty$. Moreover, the same arguments as in the proof of [@Men Lemma 5] show that for all $\eta \in (0,1/2)$, for all $s$ large enough, we have $$\begin{gathered} \mathbb{P}\left(\exists i \leq \lfloor \eta s \rfloor-1: R_i(\overrightarrow{\theta_{\infty}}) \cap \partial B_{\overrightarrow{\theta_{\infty}}}(s) \neq \emptyset \right) \leq 4 \eta.\end{gathered}$$ Thus, letting $I_{\eta}(s) = \{\lfloor \eta s \rfloor, \ldots, s\}$, we have $$\begin{gathered} \mathbb{P}\left(\overrightarrow{\theta_{\infty}} \notin \tilde{\mathbb{A}}_{R+} (r,s,\alpha)\right) \leq 4 \eta + \mathbb{P}\left(\exists i \in I_{\eta}(s): \min_{R_i(\overrightarrow{\theta_{\infty}})} \overrightarrow{l_{\infty}} \leq \lfloor s^{\alpha} \rfloor\right).\end{gathered}$$ Lemma \[T: Lower bound on the spine labels\] now ensures that for some $\delta > 0$ and all $s$ large enough, this probability is less than $$\begin{gathered} \label{E: Partial bound on P(theta not in A-tilde)} 5 \eta + \mathbb{P}\left(\left(\exists i \in I_{\eta}(s): \min_{R_i(\overrightarrow{\theta_{\infty}})} \overrightarrow{l_{\infty}} \leq \lfloor s^{\alpha} \rfloor\right)\cap \left(\forall i \in I_{\eta}(s), S_i(\overrightarrow{\theta_{\infty}}) \geq \lfloor \delta \sqrt{s} \rfloor\right)\right).\end{gathered}$$ For all $(x_i)_{\lfloor \eta s \rfloor \leq i \leq s}$, we have $$\begin{aligned} & \mathbb{P}\left( \exists i \in I_{\eta}(s): \min_{R_i(\overrightarrow{\theta_{\infty}})} \overrightarrow{l_{\infty}} \leq \lfloor s^{\alpha} \rfloor \bigg| S_i(\overrightarrow{\theta_{\infty}}) = x_i\ \forall i \in I_{\eta}(s)\right) \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \leq \sum_{i=\lfloor \eta s \rfloor}^s \mathbb{P}\left(\min_{R_i(\overrightarrow{\theta_{\infty}})} \overrightarrow{l_{\infty}} \leq \lfloor s^{\alpha} \rfloor \bigg| S_i(\overrightarrow{\theta_{\infty}}) = x_i\right) \\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \leq \sum_{i=\lfloor \eta s \rfloor}^s \rho_{(x_i)}^+ \left((T,l) \in \mathbb{T}^+: \min_T l \leq \lfloor s^{\alpha} \rfloor \right)\\ & \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \leq \sum_{i=\lfloor \eta s \rfloor}^s \frac{w(x_i)-w(x_i-\lfloor s^{\alpha} \rfloor)}{w(x_i)}.\end{aligned}$$ Furthermore, if we choose the integers $x_i$ in such a way that $x_i \geq \lfloor \delta \sqrt{s} \rfloor$ for all $i$, we have $$\begin{gathered} \frac{w(x_i)-w(x_i-\lfloor s^{\alpha} \rfloor)}{w(x_i)} = \frac{4 \lfloor s^{\alpha} \rfloor}{x_i^3} + o\left(\frac{s^{\alpha}}{x_i^3}\right)\leq \frac{4 s^{\alpha-3/2}}{\delta^3} + o\left(s^{\alpha-3/2}\right),\end{gathered}$$ so $$\begin{gathered} \mathbb{P}\left( \exists i \in I_{\eta}(s): \min_{R_i(\overrightarrow{\theta_{\infty}})} \overrightarrow{l_{\infty}} \leq \lfloor s^{\alpha} \rfloor \bigg| S_i(\overrightarrow{\theta_{\infty}}) = x_i\ \forall i \in I_{\eta}(s)\right) \leq \frac{4 s^{\alpha-1/2}}{\delta^3} + o\left(s^{\alpha-1/2}\right).\end{gathered}$$ Putting this together with \[E: Partial bound on P(theta not in A-tilde)\], we finally get $$\begin{gathered} \mathbb{P}\left(\overrightarrow{\theta_{\infty}} \notin \tilde{\mathbb{A}}_{R+} (r,s,\alpha)\right) \leq 5 \eta + \frac{4 s^{\alpha-1/2}}{\delta^3} + o\left(s^{\alpha-1/2}\right)\leq 6 \eta\end{gathered}$$ for all $s$ large enough. We are now ready to give the proof of Lemma \[T: Right-hand condition\]. Fix $\alpha \in (1/3,1/2)$.
Lemma \[T: Bound on P(theta not in A-tilde)\] shows that for all $s$ large enough and $k \geq k_1(s)$ (possibly infinite), we have $$\begin{gathered} \mathbb{P}\left(\overline{\mathcal{A}_{R+}^{(k)} (r,s)}\right) \leq 2 \varepsilon + \mathbb{P}\left(\overline{\mathcal{A}_{R+}^{(k)} (r,s)} \cap \left(\theta_{\infty}^{(k)} \in \tilde{\mathbb{A}}_{R+} (r,s,\alpha)\right)\right).\end{gathered}$$ Letting $$\begin{gathered} \Theta(s,\alpha) = \{ (T,l) \in \mathbb{T}_{[s]}^R: \min_{\partial B_T(s)} l > \lfloor s^{\alpha} \rfloor \},\end{gathered}$$ we get that $$\begin{gathered} \label{E: Bound on P(theta(k) not in A-R(k)) (1)} \mathbb{P}\left(\overline{\mathcal{A}_{R+}^{(k)} (r,s)}\right) \leq 2 \varepsilon + \sum_{\theta^{\ast} \in \Theta(s,\alpha)} \mathbb{P}\left(\left(R_{\infty}^{(k)}(s) = \theta^{\ast}\right)\cap \overline{\mathcal{A}_{R+}^{(k)} (r,s)}\right).\end{gathered}$$ Fix $\theta^{\ast} \in \Theta(s,\alpha)$, and let $y_1,\ldots,y_{n^{\ast}}$ denote the labels of the vertices of height $s$ (from left to right) in $\theta^{\ast}$. Note that the condition $\theta^{\ast} \in \Theta(s,\alpha)$ means that we have $\lfloor s^{\alpha} \rfloor < y_i \leq s$ for all $i \in \{1,\ldots,n^{\ast}\}$. Moreover, equations \[E: P(no small label above y_1 on the right)\] and \[E: P(no small label above y_j, j>2)\] show that $$\begin{aligned} \mathbb{P}\left(\overline{\mathcal{A}_{R+}^{(k)} (r,s)}\ \bigg|\ R_{\infty}^{(k)}(s) = \theta^{\ast}\right) \leq 1-\frac{W_s(y_1-r,k-r)}{W_s(y_1,k)} + \sum_{j=2}^{n^{\ast}} \left(1-\frac{w(y_j-r)}{w(y_j)} \right).\end{aligned}$$ For all $y \leq s$, we have $$\begin{gathered} \frac{W_s(y-r,k-r)}{W_s(y,k)} = \frac{f(y-r)}{f(y)} \left(1+ \frac{\frac{C_{y-r}-s+1}{10(k-r+1)(k-r+2)} - \frac{C_y-s+1}{10(k+1)(k+2)}}{1 - \frac{C_y-s+1}{10(k+1)(k+2)}}\right)\geq \frac{f(y-r)}{f(y)},\end{gathered}$$ so $$\begin{gathered} 0 \leq 1-\frac{W_s(y-r,k-r)}{W_s(y,k)} \leq 1-\frac{f(y-r)}{f(y)} \leq \varepsilon\end{gathered}$$ for all $s$ large enough and $y \in \{\lfloor s^{\alpha} \rfloor,\ldots,s\}$.
Besides, uniformly in $y > \lfloor s^{\alpha} \rfloor$, we have $$\begin{gathered} 1-\frac{w(y-r)}{w(y)} \leq \frac{4r}{s^{3\alpha}} + o\left(\frac{r}{s^{3\alpha}}\right).\end{gathered}$$ This yields $$\begin{gathered} \mathbb{P}\left(\overline{\mathcal{A}_{R+}^{(k)} (r,s)}\ \bigg|\ R_{\infty}^{(k)}(s) = \theta^{\ast}\right) \leq \varepsilon + n^{\ast} \left(\frac{4r}{s^{3\alpha}} + o\left(\frac{r}{s^{3\alpha}}\right)\right).\end{gathered}$$ Putting this into \[E: Bound on P(theta(k) not in A-R(k)) (1)\], we obtain $$\begin{aligned} \mathbb{P}\left(\overline{\mathcal{A}_{R+}^{(k)} (r,s)}\right) & \leq 3 \varepsilon + \left(\frac{4r}{s^{3\alpha}} + o\left(\frac{r}{s^{3\alpha}}\right)\right)\sum_{\theta^{\ast} \in \Theta(s,\alpha)} \# \partial B_{\theta^{\ast}}(s)\ \mathbb{P}\left(R_{\infty}^{(k)}(s) = \theta^{\ast}\right) \\ & \leq 3 \varepsilon + \left(\frac{4r}{s^{3\alpha}} + o\left(\frac{r}{s^{3\alpha}}\right)\right)\mathbb{E}\left[\# \partial R_{\infty}^{(k)}(s)\right].\end{aligned}$$ Lemma \[T: Bound on E\[nb of vertices at height s\]\] now implies that $$\begin{gathered} \mathbb{P}\left(\overline{\mathcal{A}_{R+}^{(k)} (r,s)}\right) \leq 3 \varepsilon + \frac{4r}{s^{3\alpha-1}} + o\left(\frac{r}{s^{3\alpha-1}}\right).\end{gathered}$$ Since we took $\alpha > 1/3$, this concludes the proof.

Proof of Proposition \[T: Ball inclusions\] {#S: Conclusion}
-------------------------------------------

We can now prove Proposition \[T: Ball inclusions\] by putting together the results of Lemmas \[T: Left-hand condition\], \[T: Right-hand condition\] and \[T: LR conditions for k<infty\], and using the symmetry between the definitions of $\theta_{\infty}^{(k)}$ and $\theta_{\infty}^{(-k)}$. Let $r \in {\mathbb{N}}$. For all $\varepsilon > 0$, we consider the sequences $(s_{\varepsilon}(r'))_{r' \geq 0}$ and $(h_{\varepsilon}(r'))_{r' \geq 1}$ defined by $s_{\varepsilon}(0)=0$, and for all $r' \geq 1$: $$\begin{gathered} s_{\varepsilon}(r')=s_R(r',2^{-r'-1} \varepsilon) \vee s_L(r',s_{\varepsilon}(r'-1), 2^{-r'-1}\varepsilon) \\ h_{\varepsilon}(r')=h_L(s_{\varepsilon}(r'),2^{-r'-1}\varepsilon),\end{gathered}$$ where $s_L$, $s_R$ and $h_L$ are the quantities introduced in Lemmas \[T: Left-hand condition\] and \[T: Right-hand condition\]. Note that for all $r'$, we have $\mathcal{A}_{R+}(r',s_{\varepsilon}(r')) \subset \mathcal{A}_{R+}(r',s_R(r',2^{-r'-1} \varepsilon))$.
Thus, Lemmas \[T: Left-hand condition\] and \[T: Right-hand condition\] show that for all $r' \in {\mathbb{N}}$, for all $k$ large enough, we have $$\begin{gathered} \mathbb{P}\left(\left(\theta_{\infty}^{(k)} \notin \mathbb{A}_L(r',s_{\varepsilon}(r'-1),s_{\varepsilon}(r'),h_{\varepsilon}(r')) \right)\cup \overline{\mathcal{A}_{R+}^{(k)}(r',s_{\varepsilon}(r'))} \right) \leq 2^{-r'} \varepsilon,\end{gathered}$$ and as a consequence, $$\begin{gathered} \mathbb{P}\left(\bigcup_{r'=1}^r \left(\theta_{\infty}^{(k)} \notin \mathbb{A}_L(r',s_{\varepsilon}(r'-1),s_{\varepsilon}(r'),h_{\varepsilon}(r')) \right)\cup \overline{\mathcal{A}_{R+}^{(k)}(r',s_{\varepsilon}(r'))} \right) \leq \varepsilon\end{gathered}$$ for all $k$ large enough. Moreover, recalling the notation of Proposition \[T: Distribution of theta(infty,k)\], we have $$\begin{gathered} {\mathfrak{s}}_{s_{\varepsilon}(r)+1}(\theta_{\infty}^{(k)}) \nprec {\mathrm{e}}_0(\theta_{\infty})\end{gathered}$$ if and only if $I_{\infty}^{(k)} \leq s_{\varepsilon}(r)$, which happens with probability at most $s_{\varepsilon}(r)/(k+1)$. Therefore, for all $k$ large enough, the conditions stated in the first part of Lemma \[T: LR conditions for k<infty\] hold with probability at least $1-2\varepsilon$.
Finally, we can see from the symmetry between the definitions of $\theta_{\infty}^{(k)}$ and $\theta_{\infty}^{(-k)}$ that for all $r',s,s',h \in {\mathbb{N}}$, we have $$\begin{gathered} \mathbb{P}\left(\theta_{\infty}^{(-k+1)} \notin \mathbb{A}_R(r',s,s',h)\right) = \mathbb{P}\left(\theta_{\infty}^{(k-1)} \notin \mathbb{A}_L(r'+1,s,s',h)\right)\end{gathered}$$ and $$\begin{gathered} \mathbb{P}\left(\overline{\mathcal{A}_{L+}^{(-k+1)}(r',s)}\right) = \mathbb{P}\left(\overline{\mathcal{A}_{R+}^{(k-1)}(r'-1,s)}\right).\end{gathered}$$ Thus, letting $\tilde{s}_{\varepsilon}(0)=0$ and, for all $r' \geq 1$, $$\begin{gathered} \tilde{s}_{\varepsilon}(r')=s_R(r'-1,2^{-r'-1} \varepsilon) \vee s_L(r'+1,\tilde{s}_{\varepsilon}(r'-1), 2^{-r'-1}\varepsilon) \\ \tilde{h}_{\varepsilon}(r')=h_L(\tilde{s}_{\varepsilon}(r'),2^{-r'-1}\varepsilon),\end{gathered}$$ the probability $$\begin{gathered} \mathbb{P}\left(\bigcup_{r'=1}^r \left(\theta_{\infty}^{(-k+1)} \notin \mathbb{A}_R(r',\tilde{s}_{\varepsilon}(r'-1),\tilde{s}_{\varepsilon}(r'),\tilde{h}_{\varepsilon}(r')) \right)\cup \overline{\mathcal{A}_{L+}^{(-k+1)}(r',\tilde{s}_{\varepsilon}(r'))} \right)\end{gathered}$$ is equal to $$\begin{gathered} \mathbb{P}\left(\bigcup_{r'=1}^r \left(\theta_{\infty}^{(k-1)} \notin \mathbb{A}_L(r'+1,\tilde{s}_{\varepsilon}(r'-1),\tilde{s}_{\varepsilon}(r'),\tilde{h}_{\varepsilon}(r')) \right)\cup \overline{\mathcal{A}_{R+}^{(k-1)}(r'-1,\tilde{s}_{\varepsilon}(r'))} \right) \leq \varepsilon.\end{gathered}$$ Similarly to the above, this implies that the conditions stated in the second part of Lemma \[T: LR conditions for k<infty\] hold with probability at least $1-2\varepsilon$. Therefore, Lemma \[T: LR conditions for k<infty\] shows that for $h = h_{\varepsilon}(r) \vee \tilde{h}_{\varepsilon}(r)$, the inclusions of Proposition \[T: Ball inclusions\] hold with probability at least $1-4\varepsilon$, for all $k$ large enough.
---
abstract: 'We study the quantitative relationship between the cones of nonnegative polynomials, cones of sums of squares and cones of sums of powers of linear forms. We derive bounds on the volumes (raised to the power reciprocal to the ambient dimension) of compact sections of the three cones. We show that the bounds are asymptotically exact if the degree is fixed and the number of variables tends to infinity. When the degree is larger than two it follows that there are significantly more non-negative polynomials than sums of squares and there are significantly more sums of squares than sums of powers of linear forms. Moreover, we quantify the exact discrepancy between the cones; from our bounds it follows that the discrepancy grows as the number of variables increases.'
author:
- Grigoriy Blekherman
title: 'Volumes of Nonnegative Polynomials, Sums of Squares and Powers of Linear Forms'
---

Introduction ============ Let $P_{n,2k}$ be the vector space of real homogeneous polynomials in $n$ variables of degree $2k$. There are three interesting convex cones in $P_{n,2k}$: The cone of nonnegative polynomials, $C=C_{n,2k}$ $$C=\bigl{\{}f \in P_{n,2k} \mid f(x) \geq 0 \quad \text{for all} \quad x \in \mathbb{R}^n \bigr{\}}.$$ The cone of sums of squares, $Sq=Sq_{n,2k}$ $$Sq=\biggl{\{} f \in P_{n,2k} \mathrel{\bigg{\arrowvert}} f=\sum_i f_i^2 \quad \text{for some} \quad f_i \in P_{n,k} \biggr{\}}.$$ The cone of sums of $2k$-th powers of linear forms, ${L\!f}={L\!f}_{n,2k}$ $${L\!f}=\biggl{\{} f \in P_{n,2k} \mathrel{\bigg{\arrowvert}} f=\sum_i l_i^{2k} \quad \text{for some linear forms} \quad l_i \in P_{n,1}\bigg{\}}.$$ A different notation, $P_{n,2k}$, $\Sigma_{n,2k}$ and $Q_{n,2k}$ respectively, was employed by Reznick in the study of these cones [@rez]. The cones are clearly nested: $${L\!f}_{n,2k} \subseteq Sq_{n,2k} \subseteq C_{n,2k}.$$ It is known that for quadratic forms these cones coincide. Moreover, it is not hard to show that in all other cases there are sums of squares that are not $2k$-th powers of linear forms. Hilbert proved that in the cases $n=2$, $k=1$, and $n=3$ with $k=2$, a nonnegative polynomial is necessarily a sum of squares; in all other cases there exist nonnegative polynomials that are not sums of squares [@hilbert]. The situation with respect to containment has therefore been completely known for a long time.\ There remains, however, the question of the quantitative relationship between these cones. There are several known families of polynomials that are not sums of squares [@choi], [@rez4]; however all of these examples lie close to the boundary of the cone of nonnegative polynomials. To the author's knowledge, little except for the equality in the case of quadratic forms is known. In this paper we show that the picture is quite different for a fixed degree greater than 2.\ For a convex set $K$ a good measure of size of $K$ that takes into account the effect of large dimensions is the volume of $K$ raised to the power reciprocal to the ambient dimension: $$(\text{Vol} \, K)^{1/\text{dim} \, K}.$$ For example, homothetically expanding $K$ by a constant factor leads to an increase by the same factor in this normed volume.\ We derive bounds on volumes, raised to the power reciprocal to the ambient dimension, of sections of the three cones with the hyperplane of all forms of integral 1 on the unit sphere $S^{n-1}$ in $\mathbb{R}^n$. We show that the bounds are asymptotically tight if the degree is fixed and the number of variables tends to infinity.
If the degree is greater than 2 then the order of dependence on the number of variables $n$ is quite different for the three cones. We remark that this indeed shows that asymptotically the cones differ drastically in size. These bounds provide us with the complete picture of metric dependence of the size of all three cones on the number of variables, when the degree is fixed.\ We would also like to mention that the bounds that separate the cone of nonnegative polynomials from the cone of sums of squares are interesting from the point of view of computational complexity [@parrilo]. Namely, they show that it is not feasible in general to replace testing for positivity with testing whether a polynomial is a sum of squares, since for degree greater than two the sizes of the cones are drastically different. Some of the bounds given in this paper have already been proved by the author in [@greg]; we reproduce their proofs for the sake of completeness. Main Theorems ============= We begin by introducing some notation. In order to compare the cones we take compact bases. Let $M=M_{n,2k}$ be the hyperplane of all forms in $P_{n,2k}$ with integral 0 on the unit sphere ${S^{n-1}}$: $$M_{n,2k}=\biggr{\{} f \in P_{n,2k} \mathrel{\bigg{\arrowvert}} \int_{{S^{n-1}}}f \, d\sigma=0 \biggl{\}}.$$ Let $r^{2k}$ in $P_{n,2k}$ be the polynomial constant on the unit sphere ${S^{n-1}}$: $$r^{2k}=(x_1^2+ \ldots +x_n^2)^k.$$ Let $M'$ be the affine hyperplane of all forms of integral 1 on the unit sphere ${S^{n-1}}$. We define compact convex bodies $\widetilde{C}$, $\widetilde{Sq}$ and $\widetilde{{L\!f}}$ by intersecting the respective cones with $M'$ and then translating the compact intersection into $M$ by subtracting $r^{2k}$. Formally we can define $\widetilde{C}$, $\widetilde{Sq}$ and $\widetilde{{L\!f}}$ as the sets of all forms $f$ in $M_{n,2k}$ such that $f+r^{2k}$ lies in the respective cone: $$\begin{aligned} \widetilde{C}=\{f \in M_{n,2k} \quad \mid \quad f+r^{2k} \in C \}, \\ \widetilde{Sq}=\{f \in M_{n,2k} \quad \mid \quad f+r^{2k} \in Sq \}, \\ \widetilde{{L\!f}}=\{f \in M_{n,2k} \quad \mid \quad f+r^{2k} \in {L\!f}\}.\end{aligned}$$ We note that these sections are the natural ones to take since $M_{n,2k}$ is the only linear hyperplane in $P_{n,2k}$ that is preserved by an orthogonal change of coordinates in $\mathbb{R}^n$.\ We work with the following Euclidean metric on $P_{n,2k}$, which we call the integral or $L^2$ metric, $${\langle f \, ,g \rangle}=\int_{{S^{n-1}}} fg \, d\sigma,$$ where $\sigma$ is the rotation invariant probability measure on ${S^{n-1}}$. We use $D_M$ to denote the dimension of $M_{n,2k}$, $S_M$ to denote the unit sphere in $M_{n,2k}$ and $B_M$ to denote the unit ball in $M_{n,2k}$. 
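It may be helpful to record the ambient dimension explicitly; this is just the number of monomials of degree $2k$ in $n$ variables, minus one for the single linear condition defining $M_{n,2k}$, and the same count is used again in Section 6: $$\dim P_{n,2k}=\binom{n+2k-1}{2k}, \qquad D_M=\binom{n+2k-1}{2k}-1.$$ In particular, for fixed $k$ the dimension $D_M$ grows like $n^{2k}/(2k)!$ as $n \rightarrow \infty$.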
The main results of this paper are the following three theorems: \[posmain\] There exist constants $\alpha_1$ and $\beta_1 > 0$ dependent only on $k$ such that $$\beta_1 n^{-1/2} \leq \bigg{(}\frac{\text{Vol} \, \widetilde{C}}{\text{Vol} \, B_M}\bigg{)}^{1/D_M} \leq \alpha_1n^{-1/2}.$$ \[squarenormbound\] There exist constants $\alpha_2$ and $\beta_2 > 0$ dependent only on $k$ such that $$\beta_2n^{-k/2} \leq \bigg{(}\frac{\text{Vol}\, \widetilde{Sq}}{\text{Vol}\, B_M}\bigg{)}^{1/D_M} \leq \alpha_2 n^{-k/2}.$$ \[powersnormbound\] There exist constants $\alpha_3$ and $\beta_3 > 0$ dependent only on $k$ such that for all $\epsilon > 0$ and $n$ large enough $$\beta_3n^{-k+1/2} \leq \bigg{(}\frac{\text{Vol} \, \widetilde{Lf}}{\text{Vol} \, B_M}\bigg{)}^{1/D_M} \leq \alpha_3n^{-k+1/2+\epsilon}.$$ We observe that if the degree $2k$ is equal to two, then all of the above bounds agree asymptotically. However, if the degree is greater than two, then we see that the bases $\widetilde{C}$, $\widetilde{Sq}$ and $\widetilde{{L\!f}}$ asymptotically have quite different volumes.\ The rest of the paper is structured as follows. In Section 3 we collect preliminary material necessary for the proofs. Since many of the estimates used are technical in nature, in Section 4 we give an outline of the proofs, postponing the technical details for the later sections. In Section 5 we prove the bounds for the cone of nonnegative polynomials. In Section 6 we introduce a different metric on $P_{n,2k}$ and prove duality results used later on. In Section 7 we prove the bounds for the cone of sums of squares and in Section 8 we prove the bounds for the cone of sums of powers of linear forms. Preliminaries ============= The Action of the Orthogonal Group on $P_{n,2k}$ ------------------------------------------------ There is the following action of $SO(n)$ on $P_{n,2k}$, $$A \in SO(n) \quad \text{sends} \quad f \in P_{n,2k} \quad \text{to} \quad Af=f(A^{-1}x).$$ We observe that the cones $C$, $Sq$ and $Lf$ are invariant under this action and so is $M_{n,2k}$, the hyperplane of polynomials of integral $0$. Therefore the sections $\widetilde{C}$, $\widetilde{Sq}$ and $\widetilde{{L\!f}}$ are fixed by $SO(n)$ as well.\ Let $\Delta$ be the Laplace differential operator: $$\Delta=\frac{\partial^2}{\partial x_1^2}+\ldots +\frac{\partial^2}{\partial x_n^2}.$$ A form $f$ such that $$\Delta(f)=0,$$ is called *harmonic*. We will need the fact that the irreducible components of this representation are subspaces $H_{n,2l}$ for $0\leq l \leq k$, which have the following form: $$H_{n,2l}=\bigl{\{} f \in P_{n,2k} \mid f=r^{2k-2l}h \quad \text{where} \quad h \in P_{n,2l} \quad \text{is harmonic} \bigr{\}}.$$ For $v \in \mathbb{R}^n$ the functional $$\lambda_v:M_{n,2k} \longrightarrow \mathbb{R}, \qquad \lambda_v(f)=f(v),$$ is linear and therefore there exists a form $q_v \in M$ such that $$\lambda_v(f)={\langle q_v \, ,f \rangle}.$$ There are explicit descriptions of the polynomials $q_v$; under a suitable normalization they are the so-called Gegenbauer or ultraspherical polynomials. We will only need the property that for $v \in {S^{n-1}}$ $$||\,q_v||_{2}=\sqrt{D_M}.$$ For more details on this representation of $SO(n)$ see [@vilenkin]. The Blaschke-Santaló Inequality ------------------------------- \ Let $K$ be a full-dimensional convex body in $\mathbb{R}^n$ with origin in its interior and let ${\langle \, \, ,\, \rangle}$ be an inner product.
We will use $K^{\circ}$ to denote the polar of $K$, $$K^{\circ}=\bigl{\{} x \in \mathbb{R}^n \, \mid \, {\langle x \, ,y \rangle} \leq 1 {\quad \text{for all} \quad}y \in K \bigr{\}}.$$ Now suppose that a point $z$ is in the interior of $K$ and let $K^z$ be the polar of $K$ when $z$ is translated to the origin: $$K^z=\bigl{\{} x \in \mathbb{R}^n \, \mid \, {\langle x-z \, ,y-z \rangle} \leq 1 {\quad \text{for all} \quad}y \in K \bigr{\}}.$$ The point $z$ at which the volume of $K^z$ is minimal is unique and it is called the Santaló point of $K$. Moreover the following inequality on volumes of $K$ and $K^z$ holds: $$\frac{\text{Vol} \, K \, \text{Vol} \, K^z}{(\text{Vol} \, B)^2} \leq 1,$$ where $B$ is the unit ball of ${\langle \, \, ,\, \rangle}$ and $z$ is the Santaló point of $K$. This is known as the Blaschke-Santaló inequality [@santalo]. Outline of Proofs ================= Since many of the following proofs are technical we would like to first give an informal outline.\ We begin with the description of the proofs for the cone of nonnegative polynomials. We observe that $\widetilde{C}$ is the convex body of forms of integral $0$ on ${S^{n-1}}$, such that the minimum of the forms on ${S^{n-1}}$ is at least $-1$, $$\widetilde{C}=\big{\{}f\in M_{n,2k} \quad \mid \quad f(x) \geq -1 {\quad \text{for all} \quad}x \in{S^{n-1}}\big{\}}.$$ Let $B_{\infty}$ be the unit ball of $L^{\infty}$ norm in $M_{n,2k}$, $$B_{\infty}=\big{\{}f\in M_{n,2k} \quad \mid \quad |f(x)| \leq 1 {\quad \text{for all} \quad}x \in {S^{n-1}}\big{\}}.$$ It follows that $$B_{\infty}=\widetilde{C} \cap -\widetilde{C} \qquad \text{and therefore} \qquad B_{\infty} \subset \widetilde{C}.$$ However, using the Blaschke-Santaló inequality and a theorem of\ Rogers and Shephard [@pach] we can show that conversely $$\bigg{(}\frac{\text{Vol} \, B_{\infty}}{\text{Vol} \, \widetilde{C}}\bigg{)}^{1/D_M} \geq 1/4.$$ Therefore it suffices to derive upper and lower bounds for the volume of $B_{\infty}$.\ For the lower bound we reduce the proof to bounding the average $L^{\infty}$ norm of a form in $M_{n,2k}$, $$\int_{S_M} ||f||_{\infty}\, d\mu,$$ where $S_M$ is the unit sphere in $M_{n,2k}$ and $\mu$ is the rotation invariant probability measure on $S_M$. The key idea is to estimate $||f||_{\infty}$ using $L^{2p}$ norms for some large $p$. An inequality of Barvinok [@barv] is used to see that taking $p=n$ suffices for $||f||_{2p}$ to be within a constant factor of $||f||_{\infty}$. The proof is completed with some estimates.\ The techniques used for the proof of the upper bound are quite different. 
Let $\nabla f$ be the gradient of $f \in P_{n,2k}$, $$\nabla f = \bigg{(} \frac{\partial f}{\partial x_1}\, ,\ldots , \, \frac{\partial f}{\partial x_n} \bigg{)},$$ and let ${\langle \nabla{f} \, ,\nabla{f} \rangle}$ be the following polynomial giving the squared length of the gradient of $f$, $${\langle \nabla f \, ,\nabla f \rangle}= \bigg{(}\frac{\partial f}{\partial x_1}\bigg{)}^2 + \ldots + \bigg{(}\frac{\partial f}{\partial x_n} \bigg{)}^2.$$ The key to the proof is the following theorem of Kellogg [@kellogg] which tells us that for homogeneous polynomials the maximum length of the gradient on the unit sphere ${S^{n-1}}$ is equal to the maximum absolute value of the polynomial on ${S^{n-1}}$ multiplied by the degree of the polynomial: $$||{\langle \nabla f \, ,\nabla f \rangle}||_{\infty}=4k^2||f||^2_{\infty}.$$ Now we define a different inner product on $P_{n,2k}$ which we call the gradient inner product, $${\langle f \, ,g \rangle_{G}}=\frac{1}{4k^2}\int_{{S^{n-1}}} {\langle \nabla f \, ,\nabla g \rangle} \, d\sigma.$$ We denote the norm of $f$ in the gradient metric by $||f||_G$ and the unit ball of the gradient metric in $M_{n,2k}$ by $B_G$. We observe that $$||f||_G^2=\frac{1}{4k^2}\int_{{S^{n-1}}} {\langle \nabla f \, ,\nabla f \rangle} \, d\sigma,$$ and hence it follows that $$||f||_G \leq ||f||_{\infty} \qquad \text{and therefore} \qquad B_{\infty} \subset B_G.$$ The relationship between the gradient metric and the integral metric can be calculated precisely by using the fact that both metrics are $SO(n)$-invariant. Therefore these metrics are constant multiples of each other in the irreducible subspaces of the $SO(n)$ representation and the constants can be calculated directly using Stokes’ formula. Hence we obtain an upper bound for the volume of $B_{\infty}$ in terms of the volume of $B_M$, the unit ball of the $L^2$ metric in $M_{n,2k}$.\ The intuitive idea of the proof is as follows. In the $L^2$ metric we have, $$||f||_2 \leq ||f||_{\infty} \qquad \text{and therefore} \qquad B_{\infty} \subset B_M.$$ However we give up too much in this estimate. On the other hand, it is not hard to show that $$f^2(x) \leq \frac{1}{4k^2} {\langle \nabla f \, ,\nabla f \rangle}(x) {\quad \text{for all} \quad}x \in {S^{n-1}}.$$ Direct computations show that using the gradient metric gives us a better estimate and that this estimate is fine enough for our purposes.\ The proof of the upper bound for the cone of sums of squares is quite similar to the proof of the lower bound for the cone of nonnegative polynomials. We define the following norm on $P_{n,2k}$, $$||f||_{sq}=\max_{g \in S_{P_{n,k}}} |{\langle f \, ,g^2 \rangle}|,$$ where $S_{P_{n,k}}$ is the unit sphere in $P_{n,k}$. Using inequalities from convexity we can reduce the proof to bounding the average $||f||_{sq}$.\ To every form $f \in P_{n,2k}$ we can associate a quadratic form $H_f$ on $P_{n,k}$ by letting $$H_f(g)={\langle f \, ,g^2 \rangle} \qquad \text{for} \qquad g \in P_{n,k}.$$ It follows that $$||f||_{sq}=||H_f||_{\infty}.$$ Now we can estimate $||H_f||_{\infty}$ by high $L^{2p}$ norms of $H_f$ and the proof is finished using similar ideas to the proof for the case of nonnegative polynomials.\ For the remainder of the proofs we will need to consider yet another metric on $P_{n,2k}$.
To a form $f \in P_{n,2k}$, $$f=\sum_{\alpha=(i_1, \ldots ,i_n)}c_{\alpha}x_1^{i_1}\ldots x_n^{i_n}.$$ we formally associate the differential operator $D_f$: $$D_f=\sum_{\alpha=(i_1, \ldots ,i_n)}c_{\alpha}\frac{\partial^{i_1}}{\partial x_1^{i_1}}\cdots \frac{\partial^{i_n}}{\partial x_n^{i_n}}.$$ We define the following metric on $P_{n,2k}$, which we call the differential metric: $${\langle f \, ,g \rangle_{D}}=D_f(g).$$ It is not hard to check that this indeed defines a symmetric positive definite bilinear form, which is invariant under the action of $SO(n)$. The relationship between the differential metric and the integral metric can be calculated precisely.\ For the proof of the lower bound for the cone of sums of squares we show that the dual cone $Sq^*_d$ of $Sq$ with respect to the differential metric is contained in $Sq$. Therefore we can derive a lower bound on the volume of $\widetilde{Sq}$ by using the Blaschke-Santaló inequality.\ It can be shown that the cone of sums of $2k$-th powers of linear forms $Lf$ is dual to $C$ in the differential metric. The proofs of the bounds follow from the bounds derived for $\widetilde{C}$ and the Blaschke-Santaló inequality. Nonnegative Polynomials ======================= In this section we prove Theorem \[posmain\]. Here is the precise statement of the bounds: \[posmainfull\] There are the following bounds on the volume of $\widetilde{C}$: $$\frac{1}{2\sqrt{4k+2}} \, n^{-1/2} \leq \bigg{(}\frac{\text{Vol} \, \widetilde{C}}{\text{Vol} \, B_M}\bigg{)}^{1/D_M} \leq 4\bigg{(}\frac{2k^2}{4k^2+n-2}\bigg{)}^{1/2}.$$ Proof of the Lower Bound ------------------------ \ For a real Euclidean vector space $V$ with the unit sphere $S_V$ and a function $f:V \rightarrow \mathbb{R}$ we use $||f||_{p}$ to denote the $L^p$ norm of $f$: $$||f||_p=\bigg{(}\int_{S_V} |f|^p \, d\mu \bigg{)}^{1/p} \qquad \text{and} \qquad ||f||_{\infty}=\max_{x \in S_V} |f(x)|.$$ We begin by observing that $\widetilde{C}$ is a convex body in $M_{n,2k}$ with origin in its interior and the boundary of $\widetilde{C}$ consists of polynomials with minimum $-1$ on ${S^{n-1}}$. Therefore the gauge $G_C$ of $\widetilde{C}$ is given by: $$G_{C}(f)=|\min_{v \, \in {S^{n-1}}} f(v)\,|.$$ By using integration in polar coordinates in $M$ we obtain the following expression for the volume of $\widetilde{C}$, $$\label{integralvolume} \biggl{(}\frac{\text{Vol} \, \widetilde{C}}{\text{Vol } B_M}\biggr{)}^{\frac{1}{D_M}}=\biggl{(}\int_{S_M} G_{C}^{-D_M} \, d\mu \biggr{)}^{\frac{1}{D_M}},$$ where $\mu$ is the rotation invariant probability measure on $S_M$. The relationship holds for any convex body with origin in its interior [@pisier p. 
91].\ We interpret the right-hand side of \[integralvolume\] as $||G_C^{-1}||_{D_M}$, and by Hölder’s inequality $$||G_C^{-1}||_{D_M} \geq ||G_C^{-1}||_1.$$ Thus, $$\biggl{(}\frac{\text{Vol} \, \widetilde{C}}{\text{Vol} \, B_M}\biggr{)}^{\frac{1}{D_M}} \geq \int_{S_M} G_C^{-1} \, d\mu.$$ By applying Jensen’s inequality [@hardy p.150], with the convex function $y=1/x$, it follows that $$\int_{S_M} G_C^{-1} \ d\mu \geq \bigg{(}\int_{S_M} G_C \, d\mu \bigg{)}^{-1}.$$ Hence we see that $$\bigg{(} \frac{\text{Vol} \, \widetilde{C}}{\text{Vol} \, B_M} \bigg{)}^{\frac{1}{D_M}} \geq \bigg{(}\int_{S_M} |\min f| \, d \mu \bigg{)}^{-1}.$$ Clearly, for all $f \in P_{n,2k}$ $$||f||_{\infty} \geq |\min f|.$$ Therefore, $$\bigg{(} \frac{\text{Vol} \, \widetilde{C}}{\text{Vol} \, B_M} \bigg{)}^{\frac{1}{D_M}} \geq \bigg{(}\int_{S_M} ||f||_{\infty} \, d \mu \bigg{)}^{-1}.$$ The proof of the lower bound of Theorem \[posmainfull\] is now completed by the following estimate. \[infinitynorm\] Let $S_M$ be the unit sphere in $M_{n,2k}$ and let $\mu$ be the rotation invariant probability measure on $S_M$. Then the following inequality for the average $L^{\infty}$ norm over $S_M$ holds: $$\int_{S_M} ||f||_{\infty} \, d\mu \leq 2\sqrt{2n(2k+1)}.$$ It was shown by Barvinok in [@barv] that for all $f \in P_{n,2k}$, $$||f||_{\infty} \leq \binom{2kn+n-1}{2kn}^{\frac{1}{2n}}||f||_{2n}.$$ By applying Stirling’s formula we can easily obtain the bound $$\binom{2kn+n-1}{2kn}^{\frac{1}{2n}} \leq 2\sqrt{2k+1}.$$ Therefore it suffices to estimate the average $L^{2n}$ norm, which we denote by $A$: $$A=\int_{S_M} ||f||_{2n} \, d\mu.$$ Applying Hölder’s inequality we observe that $$A=\int_{S_M} \bigg{(}\int_{{S^{n-1}}} f^{2n}(x) \, d\sigma \bigg{)}^{\frac{1}{2n}} d\mu \leq \bigg{(}\int_{S_M}\int_{{S^{n-1}}} f^{2n}(x)\, d\sigma \, d\mu \bigg{)}^{\frac{1}{2n}}.$$ By interchanging the order of integration we obtain $$\label{first} A \leq \bigg{(} \int_{{S^{n-1}}} \int_{S_M} f^{2n}(x) \, d\mu \, d\sigma \bigg{)}^{\frac{1}{2n}}.$$ We now note that by symmetry of $M$ $$\int_{S_M} f^{2n}(x) \, d\mu$$ is the same for all $x \in {S^{n-1}}$. Therefore we see that in \[first\] the outer integral is redundant and thus $$\label{second} A \leq \bigg{(} \int_{S_M} f^{2n}(v) \, d\mu \bigg{)}^{\frac{1}{2n}}, \qquad \text{where} \ v \ \text{is any vector in} \ {S^{n-1}}.$$ We recall from Section 3 that for $v \in {S^{n-1}}$ there exists a form $q_v$ in $M$ such that $${\langle f \, ,q_v \rangle}=f(v) {\quad \text{for all} \quad}f \in M \qquad \text{and} \qquad ||q_v||_2=\sqrt{D_M}.$$ Rewriting \[second\] we see that $$\label{third} A \leq \bigg{(} \int_{S_M} {\langle f \, ,q_v \rangle}^{2n} \, d\mu \bigg{)}^{\frac{1}{2n}}.$$ We observe that $$\int_{S_M} {\langle f \, ,q_v \rangle}^{2n} \, d\mu=(D_M)^n \ \frac{\Gamma(n+\frac{1}{2}\,) \, \Gamma(\frac{1}{2}D_M)}{\sqrt{\pi} \, \Gamma(\frac{1}{2}D_M+n)}.$$ We substitute this into \[third\] to obtain $$A \leq \bigg{(}(D_M)^n \ \frac{\Gamma(n+\frac{1}{2}\,) \, \Gamma(\frac{1}{2}D_M)}{\sqrt{\pi} \, \Gamma(\frac{1}{2}D_M+n)} \bigg{)}^{\frac{1}{2n}}.$$ Since $$\bigg{(}\frac{\Gamma(\frac{1}{2}D_M)}{\Gamma(\,\frac{ 1}{2}D_M +n)}\bigg{)}^{\frac{1}{2n}} \leq \sqrt{\frac{2}{D_M}} \qquad \text{and} \qquad \bigg{(} \frac{\Gamma(n+1/2\,)}{\sqrt{\pi}}\bigg{)}^{\frac{1}{2n}} \leq n^{1/2},$$ we see that $$A \leq (2n)^{1/2}.$$ The theorem now follows. Proof of the Upper Bound ------------------------ \ We begin by noting that the origin is the only point in $M$ fixed by $SO(n)$.
Let $\widetilde{C}^{\circ}$ be the polar of $\widetilde{C}$ in $M_{n,2k}$, $$\widetilde{C}^{\circ}=\{f \in M_{n,2k} \mid {\langle f \, ,g \rangle} \leq 1 {\quad \text{for all} \quad}g \in \widetilde{C}\}.$$ Since $\widetilde{C}$ is fixed by the action of $SO(n)$ and the Santaló point of a convex body is unique, it follows that the origin is the Santaló point of $\widetilde{C}$. We now use the Blaschke-Santaló inequality, which applied to $\widetilde{C}$ gives us: $$(\text{Vol} \, \widetilde{C}) \, (\text{Vol}\, \widetilde{C}^{\circ})\leq (\text{Vol}\, B_M)^2.$$ Therefore it would suffice to show that $$\label{ratio} \bigg{(}\frac{\text{Vol} \, \widetilde{C}^{\circ}}{\text{Vol} \, B_M}\bigg{)}^{1/D_M} \geq \frac{1}{4}\bigg{(}\frac{4k^2+n-2}{2k^2}\bigg{)}^{1/2}.$$ Let $B_{\infty}$ be the unit ball of the $L^{\infty}$ metric in $M_{n,2k}$, $$B_{\infty}=\{f \in M \ \mid \ ||f||_{\infty} \leq 1 \}.$$ We observe that $B_{\infty}$ is clearly the intersection of $\widetilde{C}$ with $-\widetilde{C}$: $$B_{\infty}=\widetilde{C} \cap -\widetilde{C}.$$ By taking polars it follows that $$B_{\infty}^{\circ} = \text{ConvexHull}\{\widetilde{C}^{\circ},-\,\widetilde{C}^{\circ}\}\subset \, \widetilde{C}^{\circ} \oplus(-\, \widetilde{C}^{\circ}),$$ where $\, \oplus \, $ denotes Minkowski addition. By a theorem of Rogers and Shephard, [@pach] p. 78, it follows that $$\text{Vol} \, B_{\infty}^{\circ} \leq \binom{2D_M}{D_M} \text{Vol} \, \widetilde{C}^{\circ}.$$ Since $$\binom{2D_M}{D_M} \leq 4^{D_M},$$ we obtain $$\bigg{(}\frac{\text{Vol} \, \widetilde{C}^{\circ}}{\text{Vol} \, B_{\infty}^{\circ}}\bigg{)}^{1/D_M} \geq \frac{1}{4}.$$ Combining with \[ratio\] we see that we have reduced the upper bound of Theorem \[posmainfull\] to showing that $$\label{reduce} \bigg{(}\frac{\text{Vol} \, B_{\infty}^{\circ}}{\text{Vol} \, B_M}\bigg{)}^{1/D_M} \geq \bigg{(}\frac{4k^2+n-2}{2k^2}\bigg{)}^{1/2}.$$ For a form $f$ we use $\nabla f$ to denote the gradient of $f$: $$\nabla f =\bigg{(} \frac{\partial f}{\partial x_1}\, ,\ldots \, , \frac{\partial f}{\partial x_n} \bigg{)}.$$ We also define a different Euclidean metric on $P_{n,2k}$ which we call the gradient metric: $${\langle f \, ,g \rangle_{G}}=\frac{1}{4k^2}\int_{{S^{n-1}}} {\langle \nabla f \, ,\nabla g \rangle} \, d\sigma.$$ We denote the unit ball in this metric by $B_G$ and the norm of $f$ by $||f||_G$. For $f \in P_{n,2k}$ let ${\langle \nabla f \, ,\nabla f \rangle}$ be the following polynomial: $${\langle \nabla f \, ,\nabla f \rangle}= \bigg{(}\frac{\partial f}{\partial x_1}\bigg{)}^2 + \ldots + \bigg{(}\frac{\partial f}{\partial x_n} \bigg{)}^2.$$ It was shown by Kellogg in [@kellogg] that $$||{\langle \nabla f \, ,\nabla f \rangle}||_{\infty}=4k^2||f||^2_{\infty}.$$ It clearly follows that $$||f||_{\infty} \geq ||f||_G,$$ and therefore $$B_{\infty} \subseteq B_G.$$ Polarity reverses inclusion and thus we see that $$B_{G}^{\circ} \subseteq B_{\infty}^{\circ} \quad \text{and} \quad \text{Vol} \, B_G^{\circ}=\frac{(\text{Vol} \, B_M)^2}{\text{Vol} \, B_G},$$ since $B_G$ is an ellipsoid. Thus \[reduce\], and consequently the upper bound of Theorem \[posmainfull\], will follow from the following lemma.
$$\bigg{(} \frac{\text{Vol} \, B_{M}}{\text{Vol} \, B_{\,G}} \bigg{)}^{1/D_M} \geq \bigg{(}\frac{4k^2+n-2}{2k^2}\bigg{)}^{1/2}.$$ It will suffice to show that for all $f \in M$ $$\label{product} {\langle f \, ,f \rangle_{G}} \geq \frac{4k^2+n-2}{2k^2}{\langle f \, ,f \rangle}.$$ By the invariance of both inner products under the action of $SO(n)$, it is enough to prove in the irreducible components of the representation.\ First let $f$ be a harmonic form of degree $2d$ in $n$ variables. Then we claim that $${\langle f \, ,f \rangle}=\frac{2d}{4d+n-2}\, {\langle f \, ,f \rangle_{G}}.$$ Indeed consider the vector field $F=f(v)\, \! \nabla \! f$ on ${S^{n-1}}$. By the Divergence Theorem: $$\int_{{S^{n-1}}} {\langle F \, ,v \rangle} \, dx(v) = \int_{||x|| \leq 1} \text{div}\, F \, dx,$$ where $dx$ is the Lebesgue measure and $\text{div} \, F$ is the divergence of $F$: $$\text{div} \, F = \frac{\partial{F_1}}{\partial{x_1}} + \ldots + \frac{\partial{F_n}}{\partial{x_n}}.$$ Since $f$ is homogeneous of degree $2d$, it follows that $${\langle \nabla f \, ,v \rangle}=2d\,f(v).$$ Therefore $$\int_{{S^{n-1}}} {\langle F \, ,v \rangle} \, dx=2\omega_n d\int_{{S^{n-1}}} f^2 \, d\sigma=2\omega_n d {\langle f \, ,f \rangle},$$ where $\omega_n$ is the surface area of ${S^{n-1}}$. Since $f$ is harmonic it follows that $$\text{div} \, F=\bigg{(}\frac{\partial{f}}{\partial{x_1}}\bigg{)}^2 + \ldots + \bigg{(}\frac{\partial{f}}{\partial{x_n}}\bigg{)}^2={\langle \nabla f \, ,\nabla f \rangle}.$$ We observe that ${\langle \nabla f \, ,\nabla f \rangle}$ is a homogeneous polynomial of degree $4d-2$ and therefore $$\int_{||x|| \leq 1} {\langle \nabla f \, ,\nabla f \rangle} \, dx=\frac{\omega_n}{4d+n-2}\int_{{S^{n-1}}} {\langle \nabla f \, ,\nabla f \rangle}\, d\sigma.$$ The claim now follows.\ Now suppose that $f=hr^{2k-2d}$ where $h$ is a harmonic form of degree $2d \leq 2k$. It is easy to check that $${\langle f \, ,f \rangle_{G}}=\frac{d^2}{k^2}{\langle h \, ,h \rangle_{G}}+\frac{k^2-d^2}{k^2}{\langle h \, ,h \rangle}.$$ We know that $${\langle h \, ,h \rangle_{G}}=\frac{4d+n-2}{2d}{\langle h \, ,h \rangle} \quad \text{and} \quad {\langle f \, ,f \rangle}={\langle h \, ,h \rangle}.$$ Thus $${\langle f \, ,f \rangle_{G}}=\frac{2k^2+d(n-2)+2d^2}{2k^2}{\langle f \, ,f \rangle}.$$ Since $f \in M_{n,2k}$ we know that $1\leq d \leq k$. The minimum clearly occurs when $d=1$ and we see that $${\langle f \, ,f \rangle_{G}} \leq \frac{4k^2+n-2}{2k^2}{\langle f \, ,f \rangle}.$$ The lemma now follows. The Differential Metric ======================= Before we proceed with the proofs of Theorems \[squarenormbound\] and \[powersnormbound\] we will need some preparatory results that involve switching to a different Euclidean metric on $P_{n,2k}$.\ To a form $f \in P_{n,2k}$, $$f=\sum_{\alpha=(i_1, \ldots ,i_n)}c_{\alpha}x_1^{i_1}\ldots x_n^{i_n}.$$ we formally associate the differential operator $D_f$: $$D_f=\sum_{\alpha=(i_1, \ldots ,i_n)}c_{\alpha}\frac{\partial^{i_1}}{\partial x_1^{i_1}}\cdots \frac{\partial^{i_n}}{\partial x_n^{i_n}}.$$ We define the following metric on $P_{n,2k}$, which we call the differential metric: $${\langle f \, ,g \rangle_{D}}=D_f(g).$$ It is not hard to check that this indeed defines a symmetric positive definite bilinear form, which is invariant under the action of $SO(n)$. 
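One quick way to check the claimed positive definiteness is to evaluate the differential metric on monomials: writing $x^{\alpha}=x_1^{\alpha_1}\cdots x_n^{\alpha_n}$ with $\alpha_1+\ldots+\alpha_n=2k$, we have $${\langle x^{\alpha} \, ,x^{\beta} \rangle_{D}}=D_{x^{\alpha}}(x^{\beta})=\begin{cases} \alpha_1! \cdots \alpha_n! & \text{if} \ \alpha=\beta,\\ 0 & \text{otherwise},\end{cases}$$ so the monomials of degree $2k$ form an orthogonal basis with positive squared norms.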
For a point $v \in {S^{n-1}}$ we will use $v^{2k}$ to denote the polynomial $$v^{2k}=(v_1x_1+ \ldots +v_nx_n)^{2k}.$$ We also define an important linear operator $T: P_{n,2k} \to P_{n,2k}$, which to a form $f \in P_{n,2k}$ associates the weighted average of forms $v^{2k}$ with the weight $f(v)$: $$T(f)=\int_{{S^{n-1}}} f(v)v^{2k} \, d\sigma(v).$$ The operator $T$ was first introduced in a very different form by Reznick in [@rez2]; we take our definition from [@greg2]. The operator $T$ acts as a switch between our standard integral metric and the differential metric in the following sense: \[scalarprodchange\] The following identity relating the operator $T$ and the two metrics holds, $${\langle Tf \, ,g \rangle_{D}}=(2k)!{\langle f \, ,g \rangle}.$$ We observe that $${\langle Tf \, ,g \rangle_{D}}={\langle \int_{{S^{n-1}}} f(v)v^{2k} \, d\sigma(v) \, ,g \rangle_{D}}=\int_{{S^{n-1}}} {\langle f(v)v^{2k} \, ,g \rangle_{D}} \, d \sigma(v).$$ Since $${\langle v^{2k} \, ,g \rangle_{D}}=(2k)!g(v),$$ it follows that $${\langle Tf \, ,g \rangle_{D}}=(2k)!\int_{{S^{n-1}}} f(v)g(v) \, d \sigma(v)=(2k)!{\langle f \, ,g \rangle}.$$ Let $L$ be a full-dimensional cone in $P_{n,2k}$ such that $r^{2k}$ is in the interior of $L$ and $\int_{{S^{n-1}}} f \, d\sigma > 0$ for all non-zero $f$ in $L$. We define $\widetilde{L}$ as the set of all forms $f$ in $M$ such that $f+r^{2k}$ lies in $L$, $$\widetilde{L}=\{f \in M \mid f+r^{2k} \in L \}.$$ We let $L^*_i$ be the dual cone of $L$ in the integral metric and $L^*_d$ be the dual cone of $L$ in the differential metric. $$\begin{aligned} L^*_i=\{f \in P_{n,2k} \mid {\langle f \, ,g \rangle} \geq 0 {\quad \text{for all} \quad}g \in L\}, \\ L^*_d=\{f \in P_{n,2k} \mid {\langle f \, ,g \rangle_{D}} \geq 0 {\quad \text{for all} \quad}g \in L\}.\end{aligned}$$ We observe that $r^{2k}$ is in the interior of both $L^*_i$ and $L^*_d$ and also $\int_{{S^{n-1}}} f \, d\sigma > 0$ for all non-zero $f$ in both of the dual cones. Therefore we can similarly define $\widetilde{L^*_i}$ and $\widetilde{L^*_d}$ as sets of all forms $f$ in $M$ such that $f+r^{2k}$ lies in the respective cone. \[volumeswitch\] Let $L$ be a full-dimensional cone in $P_{n,2k}$ such that $r^{2k}$ is in the interior of $L$ and $\int_{{S^{n-1}}} f \, d\sigma > 0$ for all non-zero $f$ in $L$. Then there is the following relationship between the volumes of $\widetilde{L^*_i}$ and $\widetilde{L^*_d}$ $$\frac{k!}{(n/2+2k)^{k}} \leq \bigg{(}\frac{\text{Vol} \, \widetilde{L^*_d}}{\text{Vol}\, \widetilde{L^*_i}} \bigg{)}^{1/D_M} \leq \bigg{(}\frac{k!}{(n/2+k)^{k}}\bigg{)}^{\alpha},$$ where $$\alpha=1-\bigg{(}\frac{2k-1}{2k+n-2}\bigg{)}^2.$$ From Lemma \[scalarprodchange\] we see that $${\langle f \, ,g \rangle} \geq 0 \quad \text{if and only if} \quad {\langle Tf \, ,g \rangle_{D}} \geq 0 \quad \text{for all} \quad f,g \in P_{n,2k}.$$ Therefore it follows that $T$ maps $L_i^*$ to $L_d^*$, $$T(L_i^*)=L_d^*.$$ It is not hard to show that $$T(r^{2k})=cr^{2k} \quad \text{where} \quad c=\int_{{S^{n-1}}} x_1^{2k} \, d\sigma=\frac{\Gamma(\frac{2k+1}{2})\Gamma(\frac{n}{2})}{\sqrt{\pi}\Gamma(\frac{n+2k}{2})}.$$ Therefore $\frac{1}{c}T$ fixes the hyperplane of all forms of integral 1 on the sphere and therefore $\frac{1}{c}T$ maps the section $\widetilde{L_i^*}$ to $\widetilde{L_d^*}$.\ It is possible to describe precisely the action of $\frac{1}{c}T$ on $M_{n,2k}$, see [@greg2]. It can be shown that $\frac{1}{c}T$ is a contraction operator and the exact coefficients of contraction can be computed.
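For completeness, here is a short way to verify the identity $T(r^{2k})=cr^{2k}$ used above. Since $r^{2k}(v)=1$ for $v \in {S^{n-1}}$, we have $$T(r^{2k})=\int_{{S^{n-1}}} v^{2k} \, d\sigma(v),$$ which is invariant under the action of $SO(n)$ and is therefore a multiple of $r^{2k}$; evaluating at the point $(1,0,\ldots,0)$ identifies the multiple as $c=\int_{{S^{n-1}}} x_1^{2k} \, d\sigma$.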
We only need the following estimate, which follows from [@greg2] Lemma 7.4 by estimating the change in volume to be at most the largest contraction coefficient: $$\bigg{(}\frac{\text{Vol}\, \widetilde{L_d^*}}{\text{Vol}\, \widetilde{L_i^*}}\bigg{)}^{1/D_M} \geq \frac{k!\Gamma(k+n/2)}{\Gamma(2k+n/2)}.$$ We observe that $$\frac{k!\Gamma(k+n/2)}{\Gamma(2k+n/2)} \geq \frac{k!}{(n/2+2k)^{k}},$$ and therefore, $$\bigg{(}\frac{\text{Vol}\, \widetilde{L_d^*}}{\text{Vol}\, \widetilde{L_i^*}}\bigg{)}^{1/D_M} \geq \frac{k!}{(n/2+2k)^{k}}.$$ Also from Lemma 7.4 of [@greg2] it follows that contraction by the largest coefficient occurs in the space of all harmonic polynomials of degree $2k$ which has dimension $$D_H=\binom{n+2k-1}{2k}-\binom{n+2k-3}{2k-2}.$$ Since the dimension of the ambient space $M$ is $$D_M=\binom{n+2k-1}{2k}-1,$$ we can estimate that $$\frac{D_H}{D_M} \geq 1-\bigg{(}\frac{2k-1}{n+2k-2}\bigg{)}^2.$$ Since we can also estimate the largest contraction coefficient from above, $$\frac{k!\Gamma(k+n/2)}{\Gamma(2k+n/2)} \leq \frac{k!}{(n/2+k)^{k}},$$ the theorem now follows. We also show the following theorem, which allows us to compare the cone of sums of squares to its dual. \[dualinclusion\] The dual cone to the cone of sums of squares in the differential metric $Sq_d^*$ is contained in the cone of sums of squares $Sq$, $$Sq_d^* \subseteq Sq.$$ In this proof we will work exclusively with the differential metric on $P_{n,k}$ and $P_{n,2k}$. Let $W$ be the space of quadratic forms on $P_{n,k}$. For $A,\,B$ in $W$, with corresponding symmetric matrices $M_A, \, M_B$ the inner product of $A$ and $B$ is given by, $${\langle A \, ,B \rangle}=\text{tr} \, M_AM_B.$$ For $q \in P_{n,k}$ let $A_q$ be the rank one quadratic form giving the square of the inner product with $q$: $$A_q(p)={\langle p \, ,q \rangle_{D}}^2.$$ Then for any $B \in W$ $${\langle A_q \, ,B \rangle}=B(q).$$ Now suppose $f \in Sq_d^*$. Let $H_f$ be the following quadratic form on $P_{n,k}$: $$H_f(p)={\langle p \, ,f^2 \rangle_{D}}.$$ Since $f \in Sq_d^*$, the quadratic form $H_f$ is clearly positive semidefinite. Therefore $H_f$ can be written as a nonnegative linear combination of forms of rank 1: $$\label{decompose} H_f=\sum A_q \qquad \text{for some} \qquad q \in P_{n,k}.$$ Let $V$ be the subspace of $W$ given by the linear span of the forms $H_f$ for all $f \in P_{n,2k}$. Let $\mathbb{P}$ be the operator of orthogonal projection onto $V$. We claim that $$\mathbb{P}(A_q)=\binom{2k}{k}^{-1}H_{q^2}.$$ It suffices to show that $A_q-\binom{2k}{k}^{-1}H_{q^2}$ is orthogonal to the forms $H_{v^{2k}}$ since these forms span $V$. We observe that $$H_{v^{2k}}(p)=(2k)!p(v)^{2k}=\frac{(2k)!A_{v^k}(p)}{(k!)^2}=\binom{2k}{k}A_{v^k}(p).$$ Therefore we see that $${\langle A_q-\binom{2k}{k}^{-1}\! H_{q^2} \, ,H_{v^{2k}} \rangle}= \! H_{v^{2k}}(q)- {\langle H_{q^2} \, ,A_{v^k} \rangle}=H_{v^{2k}}(q)-H_{q^2}(v^k)\!=\!0.$$ Now we apply $\mathbb{P}$ to both sides of . It follows that $$H_f=\mathbb{P}\bigg(\sum A_q \bigg)=\sum \binom{2k}{k}^{-1}H_{q^2}=\binom{2k}{k}^{-1}H_{\sum q^2}.$$ Therefore $f$ is a sum of squares. Sums of Squares =============== In this section we prove Theorem \[squarenormbound\]. 
The full statement of the bounds is the following, \[squarenormboundfull\] There are the following bounds for the volume of $\widetilde{Sq}$: $$\frac{(k!)^2}{4^{2k}(2k)!\sqrt{24}}\frac{n^{k/2}}{(n/2+2k)^{k}} \leq \bigg{(}\frac{\text{Vol}\, \widetilde{Sq}}{\text{Vol}\, B_M}\bigg{)}^{1/D_M} \leq \frac{4^{2k}(2k)!\sqrt{24}}{k!}\, n^{-k/2}.$$ Proof of the Upper Bound ------------------------ \ Let us begin by considering the support function of $\widetilde{Sq}$, which we call $L_{\widetilde{Sq}}$: $$L_{\widetilde{Sq}}(f)=\max_{g \, \in \, \widetilde{Sq}} \, {\langle f \, ,g \rangle}.$$ The average width $W_{\widetilde{Sq}}$ of $\widetilde{Sq}$ is given by $$W_{\widetilde{Sq}}=2\int_{S_M} L_{\widetilde{Sq}} \, d\mu.$$ We now recall Urysohn’s Inequality [@schneider p.318] which applied to $\widetilde{Sq}$ gives $$\label{urysohn} \bigg{(}\frac{\text{Vol} \, \widetilde{Sq}}{\text{Vol} \, B_M}\bigg{)}^{\frac{1}{D_M}} \, \leq \frac{W_{\widetilde{Sq}}}{2}.$$ Therefore it suffices to obtain an upper bound for $W_{\widetilde{Sq}}$.\ Let $S_{P_{n,k}}$ denote the unit sphere in $P_{n,k}$. We observe that extreme points of $\widetilde{Sq}$ have the form $$g^2-r^{2k} \qquad \text{where} \qquad g \in P_{n,k} \qquad \text{and} \qquad \int_{{S^{n-1}}}g^2 \,d\sigma=1.$$ For $f \in M$, $${\langle f \, ,r^{2k} \rangle}=\int_{{S^{n-1}}}f \,d\sigma=0,$$ and therefore, $$L_{\widetilde{Sq}}(f)=\max_{g \, \in S_{P_{n,k}}} {\langle f \, ,g^2 \rangle}.$$ We now introduce a norm on $P_{n,2k}$, which we denote $|| \ ||_{sq}$: $$||f||_{sq}=\max_{g \, \in \, S_{P_{n,k}}} |{\langle f \, ,g^2 \rangle}|.$$ It is clear that $$L_{Sq}(f) \leq ||f||_{Sq}.$$ Therefore by it follows that $$\bigg{(}\frac{\text{Vol} \, \widetilde{Sq}}{\text{Vol} \, B_M}\bigg{)}^{\frac{1}{D_M}} \, \leq \int_{S_M} ||f||_{sq} \, d \mu.$$ The proof of the upper bound of Theorem \[squarenormboundfull\] is reduced to the estimate below. \[squareballest\] There is the following bound for the average $|| \ ||_{sq}$ over $S_M$: $$\int_{S_M} ||f||_{sq} \, d\mu \, \leq \, \frac{4^{2k}(2k)!\sqrt{24}}{k!}\, n^{-k/2}.$$ For $f \in P_{n,2k}$ we introduce a quadratic form $H_f$ on $P_{n,k}$: $$H_f(g)={\langle f \, ,g^2 \rangle} \qquad \text{for} \qquad g \in P_{n,k}.$$ We note that $$||f||_{sq}=\max_{g \, \in \, S_{P_{n,k}}}|{\langle f \, ,g \rangle}|=||H_f||_{\infty}.$$ We bound $||H_f||_{\infty}$ by a high $L^{2p}$ norm of $H_f$. Since $H_f$ is a form of degree 2 on the vector space $P_{n,k}$ of dimension $D_{n,k}$ it follows by the inequality of Barvinok in [@barv] applied in the same way as in the proof of Theorem \[posmain\] that $$||H_f||_{\infty} \leq 2\sqrt{3} \, ||H_f||_{2D_{n,k}}.$$ Therefore it suffices to estimate: $$A= \int_{S_M} ||H_f||_{2D_{n,k}} \, d\mu = \int_{S_M} \bigg{(}\int_{S_{P_{n,k}}} {\langle f \, ,g^2 \rangle}^{\, 2D_{n,k}} \, d \sigma(g) \, d\mu(f) \bigg{)}^{\frac{1}{2D_{n,k}}}.$$ We apply Hölder’s inequality to see that $$A \leq \bigg{(}\int_{S_M} \int_{S_{P_{n,k}}} {\langle f \, ,g^2 \rangle}^{\, 2D_{n,k}} \, d \sigma(g) \, d\mu(f) \bigg{)}^{\frac{1}{2D_{n,k}}}.$$ By interchanging the order of integration we obtain $$\label{weird1} A \leq \bigg{(}\int_{S_{P_{n,k}}} \int_{S_M} {\langle f \, ,g^2 \rangle}^{\, 2D_{n,k}} \, d \mu(f) \, d\sigma(g) \bigg{)}^{\frac{1}{2D_{n,k}}}.$$ Now we observe that the inner integral $$\int_{S_M} {\langle f \, ,g^2 \rangle}^{\, 2D_{n,k}} \, d \mu(f),$$ clearly depends only on the length of the projection of $g^2$ into $M$. 
Therefore we have $$\int_{S_M} {\langle f \, ,g^2 \rangle}^{\, 2D_{n,k}} \, d \mu(f) \leq \, ||g^2||_2^{2D_{n,k}}\int_{S_M}{\langle f \, ,p \rangle}^{2D_{n,k}} \, d\mu(f),$$ $$\text{for any} \quad p \in S_M.$$ We observe that $$||g^2||_2=(||g||_4)^2 \qquad \text{and} \qquad ||g||_2=1.$$ By a result of Duoandikoetxea [@duo] Corollary 3 it follows that $$||g^2||_2 \leq 4^{2k}.$$ Hence we obtain $$\int_{S_M} {\langle f \, ,g^2 \rangle}^{\, 2D_{n,k}} \, d \mu(f) \leq 4^{4kD_{n,k}} \int_{S_V}{\langle f \, ,p \rangle}^{2D_{n,k}} \, d\mu(f).$$ We note that this bound is independent of $g$ and substituting into we get $$A \leq 4^{2k}\bigg{(}\int_{S_V}{\langle f \, ,p \rangle}^{2D_{n,k}} \, d\mu(f)\bigg{)}^{\frac{1}{2D_{n,k}}}.$$ Since $p \in S_M$ we have $$\int_{S_M}{\langle f \, ,p \rangle}^{2D_{n,k}} \, d\mu(f) = \frac{\Gamma(D_{n,k}+\frac{1}{2})\Gamma(\, \frac{1}{2}D_M)}{\sqrt{\pi} \, \Gamma(D_{n,k}+\frac{1}{2}D_M)}.$$ We use the following easy inequalities: $$\bigg{(}\frac{\Gamma(\, \frac{1}{2}D_M)}{\Gamma(D_{n,k}+\frac{1}{2}D_M)}\bigg{)}^{\frac{1}{2D_{n,k}}} \leq \sqrt{\frac{2}{D_M}}$$ and $$\bigg{(}\frac{\Gamma(D_{n,k}+\frac{1}{2})}{\sqrt{\pi}}\bigg{)}^{\frac{1}{2D_{n,k}}} \leq \sqrt{D_{n,k}},$$ to see that $$A \leq 4^{2k}\sqrt{\frac{2D_{n,k}}{D_M}}.$$ We now recall that $$D_{n,k}=\binom{n+k-1}{k} \qquad \text{and} \qquad D_M=\binom{n+2k-1}{2k}-1.$$ Therefore $$\sqrt{\frac{D_{n,k}}{D_M}} \, \leq \, \frac{(2k)!}{k!} \, n^{-k/2}.$$ Thus $$A \leq \frac{4^{2k}(2k)!\sqrt{2}}{k!} \, n^{-k/2}.$$ The theorem now follows. Proof of the Lower Bound ------------------------ \ We begin with a corollary of Theorem \[squareballest\]. Let $B_{sq}$ be the unit ball of the norm $|| \ ||_{sq}$, $$B_{sq}=\{ f \in M \mid ||f||_{sq} \leq 1 \}.$$ From Theorem \[squareballest\] we know that $$\int_{S_M} ||f||_{sq} \, d\mu \, \leq \, \frac{4^{2k}(2k)!\sqrt{24}}{k!}\, n^{-k/2}.$$ It follows in the same way as in Section 3.1 that $$\bigg{(}\frac{\text{Vol}\, B_{sq}}{\text{Vol}\, B_M}\bigg{)}^{1/D_M} \geq \frac{k!}{4^{2k}(2k)!\sqrt{24}}\, n^{k/2}.$$ Now let $\widetilde{Sq}^{\circ}$ be the polar of $\widetilde{Sq}$ in $M$. It follows easily that $B_{sq}$ is the intersection of $\widetilde{Sq}^{\circ}$ and $-\widetilde{Sq}^{\circ}$. $$B_{sq}=\widetilde{Sq}^{\circ} \cap -\widetilde{Sq}^{\circ}.$$ Let ${Sq^*_i}$ be the dual cone of $Sq$ in the integral metric and let $\widetilde{Sq^*_i}$ be defined in the same way as for the previous cones. It is not hard to check that $\widetilde{Sq}^{\circ}$ is the negative of $\widetilde{Sq^*_i}$, $$\widetilde{Sq}^{\circ}=-\widetilde{Sq^*_i}.$$ Therefore we see that $$\label{short1} \bigg{(}\frac{\text{Vol}\, \widetilde{Sq^*_i}}{\text{Vol}\, B_M}\bigg{)}^{1/D_M} \geq \frac{k!}{4^{2k}(2k)!\sqrt{24}}\, n^{k/2}.$$ Now we observe that $r^{2k}$ is in the interior of $Sq$ and also for all non-zero $f$ in $Sq$ we have $\int_{{S^{n-1}}} f \, d\sigma > 0$. Therefore we can apply Lemma \[volumeswitch\] to $Sq$ and it follows that $$\bigg{(}\frac{\text{Vol}\, \widetilde{Sq^*_d}}{\text{Vol}\, \widetilde{Sq^*_i}}\bigg{)}^{1/D_M} \geq \frac{k!}{(n/2+2k)^{k}}.$$ Combining with we see that $$\bigg{(}\frac{\text{Vol}\, \widetilde{Sq^*_d}}{\text{Vol}\, B_M}\bigg{)}^{1/D_M} \geq \frac{(k!)^2}{4^{2k}(2k)!\sqrt{24}}\frac{n^{k/2}}{(n/2+2k)^{k}}.$$ By Lemma \[dualinclusion\] we know that $Sq_d^*$ is contained in $Sq$ and therefore $$\widetilde{Sq_d^*} \subseteq \widetilde{Sq}.$$ The lower bound now follows.
Sums of 2k-th Powers of Linear Forms ==================================== In this section we prove Theorem \[powersnormbound\]. Here is the precise statement of the bounds, \[powersnormboundfull\] There are the following bounds for the volume of $\widetilde{Lf}$: $$\frac{k!\sqrt{4k^2+n-2}}{4k\sqrt{2}(n/2+2k)^k} \leq \! \bigg{(}\frac{\text{Vol} \, \widetilde{Lf}}{\text{Vol} \, B_M}\bigg{)}^{1/D_M} \! \! \leq 2\sqrt{n(4k+2)}\bigg{(}\frac{k!}{(n/2+k)^{k}}\bigg{)}^{\alpha},$$ where $$\alpha=1-\bigg{(}\frac{2k-1}{n+2k-2}\bigg{)}^2.$$ Proof of the Lower Bound ------------------------ \ We observe that the cone of sums of $2k$-th powers of linear forms is dual to the cone of nonnegative polynomials in the differential metric, $$Lf=C^*_d,$$ since in the differential metric, $${\langle f \, ,v^{2k} \rangle_{D}}=(2k)!f(v) {\quad \text{for all} \quad}f \in P_{n,2k}.$$ Therefore it follows that $$\widetilde{Lf}=\widetilde{C^*_d}.$$ We first consider the dual cone $C^*_i$ of $C$ in the integral metric. Similarly to the situation with the cone of sums of squares it is not hard to check that the dual $\widetilde{C}^{\circ}$ of $\widetilde{C}$ in $M$ with respect to the integral metric is $-\widetilde{C^*_i}$, $$\widetilde{C}^{\circ}=-\widetilde{C^*_i}.$$ We recall that in Section 3.2 we have shown : $$\bigg{(}\frac{\text{Vol} \, \widetilde{C}^{\circ}}{\text{Vol} \, B_M}\bigg{)}^{1/D_M} \geq \frac{1}{4}\bigg{(}\frac{4k^2+n-2}{2k^2}\bigg{)}^{1/2}.$$ Since $C$ has $r^{2k}$ in its interior and $\int_{{S^{n-1}}} f \, d\sigma > 0$ for all non-zero $f$ in $C$, we can apply Lemma \[volumeswitch\] to $C$ and we obtain, $$\bigg{(}\frac{\text{Vol} \, \widetilde{C^*_d}}{\text{Vol}\, \widetilde{C^*_i}} \bigg{)}^{1/D_M} \geq \frac{k!}{(n/2+2k)^{k}}.$$ Since $\widetilde{Lf}=\widetilde{C^*_d}$ and $\widetilde{C}^{\circ}=-\widetilde{C^*_i}$ we can combine with and we get: $$\bigg{(}\frac{\text{Vol} \, \widetilde{Lf}}{\text{Vol} \, B_M}\bigg{)}^{1/D_M} \geq \frac{k!}{4k\sqrt{2}}\frac{(4k^2+n-2)^{1/2}}{(n/2+2k)^k}.$$ Proof of the Upper Bound ------------------------ \ We begin by applying the Blaschke-Santaló inequality to $\widetilde{C}$ as in Section 3.2 to obtain $$\frac{\text{Vol}\, \widetilde{C} \, \text{Vol}\, \widetilde{C}^{\circ}}{(\text{Vol}\, B_M)^2} \leq 1.$$ Since $\widetilde{C}^{\circ}=-\widetilde{C^*_i}$ we can rewrite this to get $$\bigg{(}\frac{\text{Vol}\, \widetilde{C^*_i}}{\text{Vol}\, B_M}\bigg{)}^{1/D_M} \leq \bigg{(}\frac{\text{Vol}\, B_M}{\text{Vol}\, \widetilde{C}}\bigg{)}^{1/D_M}.$$ We observe that by the lower bound of Theorem \[posmainfull\] it follows that $$\label{short2} \bigg{(}\frac{\text{Vol}\, \widetilde{C^*_i}}{\text{Vol}\, B_M}\bigg{)}^{1/D_M} \leq 2\sqrt{n(4k+2)}.$$ Now we apply the upper bound of Lemma \[volumeswitch\] to $C$ and we get $$\bigg{(}\frac{\text{Vol} \, \widetilde{C^*_d}}{\text{Vol}\, \widetilde{C^*_i}} \bigg{)}^{1/D_M} \leq \bigg{(}\frac{k!}{(n/2+k)^{k}}\bigg{)}^{\alpha},$$ where $$\alpha=1-\bigg{(}\frac{2k-1}{n+2k-2}\bigg{)}^2.$$ The upper bound now follows by combining with . [dmst]{} A.I. Barvinok, *Estimating $L^{\infty}$ norms by $L^{2k}$ norms for functions on orbits*. Foundations of Computational Mathematics, 2 (2002), no. 4, 393-412. G. Blekherman *Convexity properties of the cone of nonnegative polynomials*, arXiv preprint math.CO/0211176 (2002), Discrete and Computational Geometry to appear. G. Blekherman *There are significantly more nonnegative polynomials than sums of squares*, arXiv preprint math.AG/0309130 (2003). M. D. Choi, T. Y. Lam, B. 
Reznick, *Even symmetric sextics.* Math. Z. 195 (1987), no. 4, 559-580. J. Duoandikoetxea, *Reverse Hölder inequalities for spherical harmonics.* Proc. Amer. Math. Soc. 101 (1987), no. 3, 487-491. G. H. Hardy, J. E. Littlewood, G. Pólya, *Inequalities.* Reprint of the 1952 edition. Cambridge Mathematical Library. Cambridge University Press, Cambridge, 1988. D. Hilbert, *Über die Darstellung definiter Formen als Summe von Formenquadraten.* Math. Ann. 32, 342-350 (1888). Ges Abh. vol. 2, 415-436. Chelsea Publishing Co., New York, (1965). O. Kellogg, *On bounded polynomials in several variables.* Math. Z. 27, 1928, 55-64. M. Meyer, A. Pajor. *On the Blaschke-Santaló inequality.* Arch. Math. (Basel) 55 (1990), no. 1, 82-93. J. Pach, P. Agarwal. *Combinatorial Geometry.* Wiley-Interscience Series in Discrete Mathematics and Optimization. John Wiley & Sons, Inc., New York, 1995. G. Pisier, *The Volume of Convex Bodies and Banach space Geometry.* Cambridge Tracts in Mathematics, 94. Cambridge University Press, Cambridge, 1989. B. Reznick, *Sums of even powers of real linear forms*, Mem. Amer. Math. Soc. 96 (1992), no. 463. B. Reznick, *Uniform denominators in Hilbert’s seventeenth problem.* Math. Zeitschrift. 220 (1995), no. 1, 75–97. B. Reznick, *Some concrete aspects of Hilbert’s 17th Problem.* Contemp. Math., 253 (2000), 251-272. R. Schneider, *Convex bodies: the Brunn-Minkowski theory.* Encyclopedia of Mathematics and its Applications, 44. Cambridge University Press, Cambridge, 1993. P. A. Parrilo, B. Sturmfels. *Minimizing polynomials functions.* Submitted to the DIMACS volume of the Workshop on Algorithmic and Quantitative Aspects of Real Algebraic Geometry in Mathematics and Computer Science. N. Ja. Vilenkin, *Special Functions and the Theory of Group Representations.* Translations of Mathematical Monographs, Vol. 22, American Mathematical Society (1968). <span style="font-variant:small-caps;">Department of Mathematics, University of Michigan,\ Ann Arbor, MI 48109-1109, USA</span>\ *Email address:* gblekher@umich.edu
{ "pile_set_name": "ArXiv" }
--- abstract: 'Given a permutation $\pi$ chosen uniformly from $S_n$, we explore the joint distribution of $\pi(1)$ and the number of descents in $\pi$. We obtain a formula for the number of permutations with $\des(\pi)=d$ and $\pi(1)=k$, and use it to show that if $\des(\pi)$ is fixed at $d$, then the expected value of $\pi(1)$ is $d+1$. We go on to derive generating functions for the joint distribution, show that it is unimodal if viewed correctly, and show that when $d$ is small the distribution of $\pi(1)$ among the permutations with $d$ descents is approximately geometric. Applications to Stein’s method and the Neggers-Stanley problem are presented.' author: - 'Mark Conger[^1]' bibliography: - 'ref.bib' date: 'July, 2005' nocite: - '[@ConcreteMath]' - '[@Wilf]' title: 'A Refinement of the Eulerian numbers, and the Joint Distribution of $\pi(1)$ and $\des(\pi)$ in $S_n$' --- Introduction {#sec:intro} ============ Consider $S_n$ to be the set of all bijections from $\set{1,2,\ldots,n}$ to itself. We will often identify a permutation $\pi$ with the sequence $\pi(1),\pi(2),\ldots,\pi(n)$. So for instance if $\pi(1)=k$ and $\pi(n)=\ell$, we say that $\pi$ “begins with” $k$ and “ends with” $\ell$. A permutation $\pi$ is said to have a descent at $i$ if $\pi(i)>\pi(i+1)$. That is to say, if we graph the points $(i,\pi(i))$ and connect them left to right, descents are the positions at which the connecting segments have negative slope. Let $\des(\pi)$ be the number of descents in $\pi$, and define $$\label{eulerdef} \euler{n}{d} \assign \sizeof{\set{\pi\in S_n:\des(\pi)=d}}.$$ These are known as the Eulerian numbers, and have been widely studied; see, for example, [@ConcreteMath p. 267] and [@Carlitz]. Bayer and Diaconis [@BayerDiaconis] showed that the probability that a particular permutation of a deck of cards occurs after any number of riffle shuffles is determined by the number of descents the permutation has. In [@Shuffle1], Viswanath and the author began working to generalize that result to decks containing repeated cards. At one point we had occasion to consider the number of permutations of $n$ letters which have $d$ descents and begin with $k$. That is, $$\label{eq:andkdef} \euler{n}{d}_k \assign \sizeof{\set{\pi \in S_n : \mbox{$\des(\pi) = d$ and $\pi(1) = k$}}}.$$ The current work is an investigation of the numbers defined in . We derive a formula in terms of binomial coefficients: [andkformula]{} If $1 \le k \le n$, $$\euler{n}{d}_k = \sum_{j\ge 0} (-1)^{d-j} \binom{n}{d-j} j^{k-1}(j+1)^{n-k}$$ where $0^0$ is interpreted as 1, which is similar to a well-known formula for the Eulerian numbers. We use the formula to understand how the two statistics $\des(\pi)$ and $\pi(1)$ interact. If we are constructing a permutation with $d$ descents from left to right, and $d$ is small, a conservative strategy would seem to be to start with a low number, since starting with a high number means we will use up one of our descents near the beginning of the permutation. So in other words, we expect that if $d$ is small then there are more permutations with $d$ descents starting with low numbers than starting with high numbers. Similarly, if $d$ is close to $n$, our intuition is that starting with a high number leaves us more possibilities later on. This intuition turns into a surprisingly simple result: [edpi1]{} If $\pi$ is chosen uniformly from among those permutations of $n$ that have $d$ descents, the expected value of $\pi(1)$ is $d+1$ and the expected value of $\pi(n)$ is $n-d$.
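As a quick illustration (an aside rather than part of the paper's argument), both the formula and this expectation are easy to confirm by brute force for small $n$; the Python sketch below enumerates $S_n$ directly, and the helper names are ours.

```python
# Brute-force check, for small n, of the refined Eulerian formula and of
# E[pi(1) | des(pi) = d] = d + 1; an illustrative sketch, helper names are ours.
from itertools import permutations
from math import comb

def descents(pi):
    return sum(pi[i] > pi[i + 1] for i in range(len(pi) - 1))

def refined_eulerian(n, d, k):
    """sum_j (-1)^(d-j) C(n, d-j) j^(k-1) (j+1)^(n-k), with 0^0 read as 1."""
    return sum((-1) ** (d - j) * comb(n, d - j) * j ** (k - 1) * (j + 1) ** (n - k)
               for j in range(d + 1))          # terms with j > d vanish

n = 6
counts = {}                                    # counts[(d, k)] = #{pi : des=d, pi(1)=k}
for pi in permutations(range(1, n + 1)):
    key = (descents(pi), pi[0])
    counts[key] = counts.get(key, 0) + 1

for d in range(n):
    for k in range(1, n + 1):
        assert counts.get((d, k), 0) == refined_eulerian(n, d, k)
    total = sum(counts.get((d, k), 0) for k in range(1, n + 1))
    mean_first = sum(k * counts.get((d, k), 0) for k in range(1, n + 1)) / total
    assert abs(mean_first - (d + 1)) < 1e-9    # expected value of pi(1) given d descents
```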
And in we find, as expected, that the sequence $$\euler{n}{d}_1,\euler{n}{d}_2,\ldots,\euler{n}{d}_n$$ is weakly decreasing when $d$ is small and weakly increasing when $d$ is large. Consequently that sequence is an interpolation between its endpoints, which are two Eulerian numbers: $\euler{n-1}{d}$ and $\euler{n-1}{d-1}$. Experimental evidence (see ) suggests that it is a good interpolation, at least when $d$ is close to $(n-1)/2$, in the sense that a normal approximation to the Eulerian numbers also seems to provide a good approximation to the refined Eulerian numbers. However, the normal approximation is good for neither set when $d$ is small or $d$ is close to $n$. shows that in those cases the distribution of $\pi(1)$ is approximately geometric. The application which led directly to the current work is presented in . Fulman shows in [@Fulman1] that certain statistics on permutations, one of which is descents, are approximately normally distributed. The main tool he uses is Stein’s method, due to Charles Stein in [@Stein]. The thrust behind the method is to introduce a little extra randomness to a given random variable to get a new one. If certain symmetries are present, the result is an “exchangeable pair” of random variables, meaning, essentially, that the Markov process which takes one to the other is reversible. Then Stein’s theorems (and more recent refinements of them) can be applied to bound the distance between the original variable’s distribution and the standard normal distribution. Fulman uses a “random to end” operation to add randomness to permutations. That is, he starts with a uniformly distributed permutation $\pi$ and sets $$\pi' = (I,I+1,\ldots,n)\pi$$ where $I$ is selected uniformly from $\set{1,2,\ldots,n}$. While $(\pi,\pi')$ is not an exchangeable pair, it turns out that $(\des(\pi),\des(\pi'))$ is, and this leads to a central limit theorem for descents, and for a whole class of statistics. We tried a different method of adding randomness to $\pi$, namely, following $\pi$ by a uniformly selected transposition. That calculation (which is presented in ) led directly to . The Neggers-Stanley Conjecture, now proved false in general ([@Branden1; @Stembridge1]), was that the generating function for descents among the linear extensions of any poset has only real zeroes. Since a function with positive coefficients can have no positive zeroes, any combinatorial generating function with all real zeroes can be written in the form $$a(x+c_1)(x+c_2)\cdots(x+c_n)$$ for non-negative constants $a,c_1,c_2,\ldots,c_n$. The implication, then, is that if $D$ is the number of descents in a uniformly selected linear extension of a poset for which the Neggers-Stanley conjecture is true, then $D$ can be written as the sum of independent Bernoulli variables. In we present several generating functions for the refined Eulerian numbers. The set of permutations of $n$ which begin with $k$ is the same as the set of linear extensions of the poset defined on $\set{1,2,\ldots,n}$ by $k<a$ for all $a$ other than $k$. So we can find the Neggers-Stanley generating function for this poset explicitly, and we show that it does indeed have only real zeroes. We go on to show that several similar posets also satisfy the conjecture. (All of the posets considered were known to satisfy the conjecture by theorems of Simion [@Simion] and Wagner [@Wagner].) 
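As one further computational aside before turning to the body of the paper: the real-rootedness statement for the poset just described can be probed numerically for small $n$. The sketch below uses `numpy.roots` purely as an illustration; a numerical root check is of course no substitute for the proof given later.

```python
# Numerical illustration only: the descent polynomial of the permutations of n that
# begin with k appears to have purely real roots for small n (names are ours).
from itertools import permutations
import numpy as np

def descent_poly_first(n, k):
    """coeffs[d] = #{pi in S_n : pi(1) = k and des(pi) = d}."""
    coeffs = [0] * n
    for rest in permutations([i for i in range(1, n + 1) if i != k]):
        pi = (k,) + rest
        coeffs[sum(pi[i] > pi[i + 1] for i in range(n - 1))] += 1
    return coeffs

for n in range(3, 8):
    for k in range(1, n + 1):
        coeffs = np.trim_zeros(descent_poly_first(n, k), 'b')   # drop zero top terms
        roots = np.roots(list(reversed(coeffs)))                 # highest degree first
        assert np.all(np.abs(roots.imag) < 1e-6), (n, k)
```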
Basic Properties {#sec:basic} ================ If $\pi(1) = 1$, then $\pi(1)$ is certainly less than $\pi(2)$, so all descents are among the final $n-1$ numbers. And if $\pi(1) = n$, there is certain to be a descent between $\pi(1)$ and $\pi(2)$. So we know some boundary values: $$\euler{n}{d}_1 = \euler{n-1}{d} \andd \euler{n}{d}_n = \euler{n-1}{d-1} \label{eq:boundries}$$ for $n > 1$. Also, it is immediate that $$\begin{aligned} %\sum_d \euler{n}{d}& = n! \\ \sum_d \euler{n}{d}_k& = (n-1)! \\ \sum_k \euler{n}{d}_k& = \euler{n}{d}.\end{aligned}$$ Let $\rho \in S_n$ be the reversal permutation: $\rho(i) = n+1-i$. Then $\rho\pi$ is the same as $\pi$ but with $i$ replaced by $n+1-i$ everywhere. As a result, $\rho\pi$ has a descent wherever $\pi$ has an ascent, and an ascent wherever $\pi$ has a descent. So $\des(\rho\pi) = n-1-\des(\pi)$. Since $\pi \mapsto \rho\pi$ is a bijection from $S_n$ to itself, it follows that $$\euler{n}{d} = \euler{n}{n-1-d}. \label{eq:eulersym}$$ Note we could have obtained the same result from the map $\pi \mapsto \pi\rho$, since reversing $\pi$ changes ascents to descents and also reflects their positions about the center. Let $$\euler{n}{d}^k \assign \sizeof{\set{\pi \in S_n : \mbox{$\des(\pi) = d$ and $\pi(n) = k$}}}. \label{eq:supdef}$$ Both transformations yield symmetric identities for the refined Eulerian numbers. If $$\pi(1)=k \andd \des(\pi)=d$$ then $$\begin{aligned} \rho\pi(1)=n+1-k &\andd \des(\rho\pi)=n-1-d \\ \pi\rho(n)=k &\andd \des(\pi\rho)=n-1-d \\ \rho\pi\rho(n)=n+1-k &\andd \des(\rho\pi\rho)=d\end{aligned}$$ from which it follows that $$\label{eq:symmetry} \euler{n}{d}_k = \euler{n}{n-1-d}_{n+1-k} = \euler{n}{n-1-d}^k = \euler{n}{d}^{n+1-k}.$$ Recurrences {#sec:recurrences} =========== Assume $n>1$. Let $$\begin{aligned} T_k &\assign \set{\pi \in S_n: \mbox{$\pi(1)=k$ and $\des(\pi)=d$}} \\ T_{k,\ell} &\assign \set{\pi \in S_n: \mbox{$\pi(1)=k$, $\pi(2)=\ell$, and $\des(\pi)=d$}}\end{aligned}$$ and let $\pi \in T_{k,\ell}$. If $\ell<k$, then there is a descent between $\pi(1)$ and $\pi(2)$, so there must be $d-1$ descents in the “tail”, $\pi(2),\pi(3),\ldots,\pi(n)$. The tail begins with $\ell$, which is the $\ell$th largest value in the tail, so we must have $$\sizeof{T_{k,\ell}} = \euler{n-1}{d-1}_\ell$$ when $\ell<k$. Likewise, if $\ell>k$, there is no descent between $\pi(1)$ and $\pi(2)$, so there must be $d$ descents in the tail. This time $\ell$ is the $(\ell-1)$st largest value in the tail, so $$\sizeof{T_{k,\ell}} = \euler{n-1}{d}_{\ell-1}$$ when $\ell>k$. Of course $T_k$ is the disjoint union of the $T_{k,l}$, so $$\euler{n}{d}_k = \sizeof{T_k} = \sum_\ell \sizeof{T_{k,\ell}} = \sum_{\ell<k} \euler{n-1}{d-1}_\ell + \sum_{\ell>k} \euler{n-1}{d}_{\ell-1}$$ or, more succinctly, $$\label{eq:rec1} \euler{n}{d}_k = \sum_{\ell=1}^{n-1} \euler{n-1}{d-[\ell<k]}_\ell$$ where the bracket notation follows [@ConcreteMath]: $[A]$ is 1 if $A$ is true and 0 if $A$ is false. (Knuth refers to this as Iverson notation in [@KnuthNotation], and traces its origin to [@APL].) Note fails when $k<1$ or $k>n$, in which case ${\textstyle \euler{n}{d}_k} = 0$. Now suppose $1 \le k \le n-1$ and $\pi\in S_n$ begins with $k$. Swapping $k$ with $k+1$ in the sequence $\pi(1),\pi(2),\ldots,\pi(n)$ preserves descents for most $\pi$; the only exception is when $\pi(2)=k+1$, in which case a new descent is created. If we eliminate that case, the swap map is a bijection from $T_k \setminus T_{k,k+1}$ to $T_{k+1} \setminus T_{k+1,k}$, as those sets are defined above. 
Substituting sizes for sets, we have $$\label{eq:rec2} \euler{n}{d}_k - \euler{n-1}{d}_k = \euler{n}{d}_{k+1} - \euler{n-1}{d-1}_k.$$ is valid as long as $k \ne 0$ and $k \ne n$. (If $k < 0$ or $k > n$, all terms are 0.) A well-know recurrence for $\euler{n}{d}$ comes from considering what happens when you insert $n$ into an element of $S_{n-1}$: $$\label{eq:eulerrec} \euler{n}{d} = (n-d)\euler{n-1}{d-1} + (d+1)\euler{n-1}{d}$$ We can get a similar recurrence for the refined Eulerian numbers by considering what happens when you insert $n$ into an element of $S_{n-1}$ which begins with $k$: $$\label{eq:rec3} \euler{n}{d}_k = (n-d-1)\euler{n-1}{d-1}_k + (d+1)\euler{n-1}{d}_k.$$ In other words, one way to get an element of $S_n$ which begins with $k$ and has $d$ descents is to take an element of $S_{n-1}$ which begins with $k$ and has $d$ descents, and insert $n$ at a descent or at the end ($d+1$ choices). The other way is to start with an element of $S_{n-1}$ which begins with $k$ and has $d-1$ descents, and insert $n$ at an ascent ($n-d-1$ choices). fails when $k=n$, since a permutation of $S_{n-1}$ cannot begin with $n$. It is valid for all other values of $k$. Formulas and Moments {#sec:formulas} ==================== There is an explicit formula for the Eulerian numbers in terms of binomial coefficients: $$\label{eq:eulerformula} \euler{n}{d} = \sum_{j\ge 0} (-1)^{d-j} \binom{n+1}{d-j}(j+1)^n.$$ See for example, [@ConcreteMath p. 269]. \[Aside: follows from , which means that it is valid for all values of $d$, even if $d<0$ or $d\ge n$\]. So we have $$\begin{aligned} \euler{n}{d}_1 &= \euler{n-1}{d} = \sum_{j\ge 0} (-1)^{d-j} \binom{n}{d-j}(j+1)^{n-1} \label{eq:formulaone} \\ \euler{n}{d}_n &= \euler{n-1}{d-1} = \sum_{j\ge 0} (-1)^{d-1-j} \binom{n}{d-1-j}(j+1)^{n-1} \label{eq:formulan} = \sum_{j\ge 0} (-1)^{d-j} \binom{n}{d-j} j^{n-1}.\end{aligned}$$ These suggest a formula for $\euler{n}{d}_k$: \[thm:andkformula\] If $1 \le k \le n$, $$\label{eq:andkformula} \euler{n}{d}_k = \sum_{j\ge 0} (-1)^{d-j} \binom{n}{d-j} j^{k-1}(j+1)^{n-k}$$ where $0^0$ is interpreted as 1. Fix $k\ge 1$. The theorem is true for $n=k$ by . Suppose it is true for some $n\ge k$. Then $$\begin{aligned} \euler{n+1}{d}_k &= (n-d)\euler{n}{d-1}_k + (d+1)\euler{n}{d}_k \\ &= \sum_{j\ge 0} (-1)^{d-j}\bracket{-(n-d)\binom{n}{d-1-j} + (d+1)\binom{n}{d-j}} j^{k-1}(j+1)^{n-k}.\end{aligned}$$ The quantity in brackets reduces to $(j+1)\binom{n+1}{d-j}$, so the theorem is true for all $n\ge k$ by induction. Note that we assumed nothing about $d$; is valid even if $d<0$ or $d\ge n$. From we can deduce a formula for the $m$th “rising moment” of $\pi(1)$ when $\des(\pi)$ is fixed. Assume $\pi$ is chosen uniformly from $S_n$, and let $$\mu_m \assign \ex^{\des(\pi)=d} \pi(1)^{\overline{m}} \label{eq:mudef}$$ where $x^{\overline{m}} = x(x+1)(x+2)\cdots(x+m-1)$. $$\euler{n}{d}\mu_m = m! \sum_{j \ge 0} (-1)^{d-j} \binom{n}{d-j} \sum_{\ell=0}^{n-1} \binom{m+n}{\ell} j^{\ell}. \label{eq:momentformula}$$ From , $$\begin{aligned} \euler{n}{d}\mu_m &= \sum_{k=1}^n k^{\overline{m}} \euler{n}{d}_k = \sum_{k=1}^n \frac{(k+m-1)!}{(k-1)!} \sum_{j\ge 0} (-1)^{d-j} \binom{n}{d-j} j^{k-1}(j+1)^{n-k} \\ &= m! \sum_{j\ge 0} (-1)^{d-j} \binom{n}{d-j} \sum_{r=0}^{n-1} \binom{r+m}{r} j^r(j+1)^{n-1-r}\end{aligned}$$ (the last by setting $r=k-1$). But $(j+1)^{n-1-r} = \sum_{s=0}^{n-1-r} \binom{n-1-r}{s} j^s$. So let $\ell=r+s$ and we have $$\label{eq:muformcomplicated} \euler{n}{d}\mu_m = m! 
\sum_{j\ge 0} (-1)^{d-j} \binom{n}{d-j} \sum_{\ell=0}^{n-1} j^\ell \sum_{r=0}^\ell \binom{r+m}{r} \binom{n-1-r}{\ell-r}.$$

[Figure \[fig:latticepath\]: a north/east lattice path from $(0,0)$ to $(m+n-\ell,\ell)$, crossing the vertical line $x=m+\frac12$ at height $r$.]

Let $\phi$ be a north/east lattice path from $(0,0)$ to $(m+n-\ell,\ell)$ (see Figure \[fig:latticepath\]). The number of such paths is $\binom{m+n}{\ell}$. If $r$ is the height at which $\phi$ crosses the line $x=m+\frac12$, then $\phi$ consists of a path from $(0,0)$ to $(m,r)$, a horizontal segment, and a path from $(m+1,r)$ to $(m+n-\ell,\ell)$. Counting the possibilities for the parts yields the identity $$\label{eq:lpidentity} \sum_{r=0}^\ell \binom{r+m}{r} \binom{n-1-r}{\ell-r} = \binom{m+n}{\ell}.$$ Substituting into yields the desired result. Note that the last sum in is a truncated binomial expansion of $(j+1)^{m+n}$. \[thm:edpi1\] If $\pi$ is chosen uniformly from among those permutations of $n$ that have $d$ descents, the expected value of $\pi(1)$ is $d+1$ and the expected value of $\pi(n)$ is $n-d$. The expected value of $\pi(1)$ is $\mu_1$, and $$\begin{aligned} \euler{n}{d} \mu_1& = \sum_{j \ge 0} (-1)^{d-j} \binom{n}{d-j} \sum_{\ell=0}^{n-1} \binom{n+1}{\ell} j^{\ell} \\ & = \sum_{j \ge 0} (-1)^{d-j} \binom{n}{d-j} \paren{(j+1)^{n+1} - j^{n+1} - (n+1)j^n} \\ & = \sum_{j \ge 0} (-1)^{d-j} \binom{n}{d-j} (j+1)^{n+1} - \sum_{i \ge 0} (-1)^{d-i} \binom{n}{d-i}(n+1+i)i^n. \intertext{The term for $i=0$ is 0, so let $j=i-1$ and combine} & = \sum_{j \ge 0} (-1)^{d-j} (j+1)^n \bracket{\binom{n}{d-j}(j+1) + \binom{n}{d-j-1}(n+j+2)}. \intertext{The quantity in brackets simplifies to $(d+1)\binom{n+1}{d-j}$, so} \euler{n}{d} \mu_1& = (d+1) \sum_{j \ge 0} (-1)^{d-j} (j+1)^n \binom{n+1}{d-j} = (d+1) \euler{n}{d}.\end{aligned}$$ Therefore $$\mu_1 = \ex^{\des(\pi)=d} \pi(1) = d+1.$$ For the second part, $$\begin{aligned} \ex^{\des(\pi)=d} \pi(n) &= \frac1{\euler{n}{d}} \sum_k k\euler{n}{d}^k = \frac1{\euler{n}{d}} \sum_k k\euler{n}{n-1-d}_k \\ &= \ex^{\des(\pi)=n-1-d} \pi(1) = n-d.\end{aligned}$$ Application Using Stein’s Method {#sec:stein} ================================ Charles Stein developed a method for showing that the distribution of a random variable $W$ which meets certain criteria is approximately standard normal. His technique has come to be known as Stein’s method; see [@Stein] or [@SteinApplications] for more explanation than can be given here. In its most straightforward form, Stein’s method requires finding a “companion” random variable $W^*$ such that $(W,W^*)$ is an exchangeable pair, meaning that $$\label{eq:exchangeable} \pr(W=w,W^*=w^*) = \pr(W=w^*,W^*=w)$$ for all values of $w$ and $w^*$. If we can find such a $W^*$ and if, in addition, there is a $\lambda$ between 0 and 1 such that $$\label{eq:lambda} \ex^W W^* = (1-\lambda)W$$ (that is, the expected value of $W^*$ when $W$ is fixed at some value is $1-\lambda$ times that value), then we may apply Stein’s method. We are interested in showing that if $\pi$ is chosen uniformly from $S_n$, then the random variable $D = \des(\pi)$ is approximately normal.
This has been proven before, and in more generality; see [@Fulman1] for references. We will demonstrate the set-up for Stein’s method—that is, finding a companion variable and showing that it satisfies and . From there, applying the method would proceed as in [@Fulman1]. Often the companion variable in Stein’s method is defined by adding a little bit of randomness to the variable we are interested in. In this case, let $\tau$ be selected uniformly from among the transpositions in $S_n$, independently of $\pi$. Then $\tau\pi$ is uniformly distributed over $S_n$, and for any $u,v\in S_n$, $$\begin{aligned} \pr(\pi=u,\tau\pi=v) &= \pr(\pi=u,\tau=vu^{-1}) = \pr(\pi=u)\pr(\tau=vu^{-1}) \\ \pr(\pi=v,\tau\pi=u) &= \pr(\pi=v,\tau=uv^{-1}) = \pr(\pi=v)\pr(\tau=uv^{-1}).\end{aligned}$$ Both right-hand sides are $(n!)^{-1}\binom{n}{2}^{-1}$ if $vu^{-1}$ is a transposition and 0 otherwise, so $(\pi,\tau\pi)$ is an exchangeable pair. Let $D^* \assign \des(\tau\pi)$. Since $(\pi,\tau\pi)$ is an exchangeable pair, $(F(\pi),F(\tau\pi))$ is exchangeable for any function $F$. So $(D,D^*)$ is exchangeable. For $1 \le i \le n-1$ let $$D_i = [\pi(i) > \pi(i+1)] \andd D_i^* = [\tau\pi(i) > \tau\pi(i+1)]$$ be Bernoulli random variables; then $D = \sum_{i=1}^{n-1} D_i$ and $D^* = \sum_{i=1}^{n-1} D_i^*$. Fix $\pi$ and $i$ and let $a=\pi(i)$, $b=\pi(i+1)$. If $a<b$, the only ways for $\tau\pi(i)$ to be bigger than $\tau\pi(i+1)$ are if $\tau$ swaps $a$ with something bigger than $b$ ($n-b$ ways), if $\tau$ swaps $b$ with something smaller than $a$ ($a-1$ ways), or if $\tau$ swaps $a$ with $b$. So $$\ex^{D_i=0} (D_i^*-D_i) = \pr(D_i^*=1|D_i=0) = \frac{n+\pi(i)-\pi(i+1)}{\binom{n}{2}}$$ and similarly if $a>b$, $$\ex^{D_i=1} (D_i^*-D_i) = -\pr(D_i^*=0|D_i=1) = -\frac{n+\pi(i+1)-\pi(i)}{\binom{n}{2}}.$$ So in general $$\ex^{D_i} (D_i^*-D_i) = \frac{\pi(i)-\pi(i+1)}{\binom{n}{2}} + \frac{2(1-2D_i)}{n-1}.$$ Summing now over $i$ causes the $\pi(i)$ terms to telescope: $$\ex^{\pi} (D^*-D) = \sum_{i=1}^{n-1} \ex^{\pi} (D_i^*-D_i) = \frac{\pi(1)-\pi(n)}{\binom{n}{2}} + 2 - \frac{4D}{n-1}$$ which allows us to apply : $$\begin{aligned} \ex^D (D^*-D) &= \ex^D \ex^{\pi} (D^*-D) = \frac{\ex^D \pi(1) - \ex^D \pi(n)}{\binom{n}{2}} + 2 - \frac{4D}{n-1} \\ &= \frac{2}{n(n-1)}((D+1)-(n-D)) + 2 - \frac{4D}{n-1} = \frac{2(n-1)-4D}{n}.\end{aligned}$$ The mean and variance of $\des(\pi)$ are $\mu \assign (n-1)/2$ and $\sigma^2 \assign (n+1)/12$ respectively, so the variables $$W \assign \frac{\des(\pi)-\mu}{\sigma} \andd W^* \assign \frac{\des(\tau\pi)-\mu}{\sigma}$$ have mean 0 and variance 1. Then $(W,W^*)$ is an exchangeable pair and $$\ex^{W=w}(W^*-W) = \ex^{D=\sigma w + \mu} \paren{\frac{D^*-\mu}{\sigma}-\frac{D-\mu}{\sigma}} = \frac{1}{\sigma} \ex^{D=\sigma w + \mu} (D^*-D)$$ which is to say $$\ex^W (W^*-W) = \frac{2(n-1)-4(\sigma W + \mu)}{\sigma n} = -\frac4n W.$$ So if $W^*$ is obtained using the “random transposition” method described here, $(W,W^*)$ will be an exchangeable pair satisfying with $\lambda=4/n$. One can now proceed with Stein’s method and show that $W$ is close to being a standard normal random variable. Generating Functions {#sec:genfuncs} ==================== It follows from that $$a_n(x) \assign \sum_d \euler{n}{d} x^{d+1} = (1-x)^{n+1} \sum_{j \ge 0} j^n x^j$$ and therefore that $$\begin{aligned} A(x,z) :&= \sum_{n \ge 1} a_n(x) z^n/n! = \sum_{n \ge 1} (1-x)^{n+1} \sum_{j \ge 0} j^n x^j z^n / n! \\ &= (1-x) \sum_{j \ge 0} x^j \sum_{n \ge 1} \paren{j(1-x)z}^n / n! 
= (1-x) \sum_{j \ge 0} x^j \paren{e^{j(1-x)z} - 1} \\ &= (1-x) \paren{\frac{1}{1-xe^{(1-x)z}} - \frac{1}{1-x}} = \frac{1-x}{1-xe^{(1-x)z}} - 1 \\ &= \frac{xe^{-(1-x)z} - x}{x - e^{-(1-x)z}}.\end{aligned}$$ There is some disagreement in the literature about what $a_0$ should be. We have avoided the problem by not including it in the sum. There are various ways to define generating functions for the $\euler{n}{d}_k$, depending on which variables are kept constant. \[thm:gfall\] $$\label{eq:gfall} \sum_{n,d,k} \euler{n}{d}_k x^d y^k z^n/n! = \frac{1}{\theta} \int_\theta^{\theta^y} \frac{dt}{x - t^{1-1/y}}$$ where $\theta = \exp\left\{\paren{\frac{1-x}{y^{-1}-1}}z\right\}$. Let $B(x,y,z)$ be the left-hand side of . Note the sum is over all integers $n$, $d$, and $k$. So $$\begin{aligned} \label{eq:gfdea} \lefteqn{(y^{-1}-1)\del{B}{z} + (1-x)B = } \\ & & \sum_{n,d,k} \bracket{\euler{n+1}{d}_{k+1} - \euler{n+1}{d}_k + \euler{n}{d}_k - \euler{n}{d-1}_k} x^d y^k z^n/n! \nonumber\end{aligned}$$ Let $S(n,d,k)$ be the bracketed quantity. It is clearly 0 if $n<0$, and if $n=0$, $$S(0,d,k) = \begin{piecewise} \pwif{1}{d=0,k=0} \\ \pwif{-1}{d=0,k=1} \\ \pwot{0} \end{piecewise}$$ so $n=0$ contributes $1-y$ to the sum on the right-hand side of . If $n\ge 1$, then by , $S(n,d,k)$ is 0 unless $k=0$ or $k=n+1$, in which case $$S(n,d,0) = \euler{n+1}{d}_1 = \euler{n}{d} \andd S(n,d,n+1) = -\euler{n+1}{d}_{n+1} = -\euler{n}{d-1}.$$ Therefore $$\begin{aligned} (y^{-1}-1)\del{B}{z} + (1-x)B &= 1-y + \sum_{n \ge 1,d} \euler{n}{d} x^d z^n / n! - \sum_{n \ge 1,d} \euler{n}{d-1} x^d y^{n+1} z^n / n! \\ &= 1-y + x^{-1}A(x,z) - yA(x,yz) \\ &= \bracket{1 + x^{-1}\frac{xe^{-(1-x)z}-x}{x-e^{-(1-x)z}}} - y \bracket{1 + \frac{xe^{-(1-x)yz}-x}{x-e^{-(1-x)yz}}} \\ &= (1-x) \bracket{ \frac{ye^{-(1-x)yz}}{x-e^{-(1-x)yz}} - \frac{1}{x-e^{-(1-x)z}} }.\end{aligned}$$ Let $\alpha = \frac{1-x}{y^{-1}-1}$. Then $\theta$, as defined in the theorem, is $e^{\alpha z}$. Dividing by $y^{-1}-1$ and multiplying through by $\theta$ gives $$\theta \del{B}{z} + \alpha\theta B = \alpha\theta \bracket{ \frac{ye^{-(1-x)yz}}{x-e^{-(1-x)yz}} - \frac{1}{x-e^{-(1-x)z}} }$$ which is to say that $$\del{}{z}\paren{\theta B} = \alpha \bracket{ \frac{y\theta^y}{x-\theta^{y-1}} - \frac{\theta}{x-\theta^{1-1/y}} }.$$ Differentiating the integral on the right-hand side of , $$\begin{aligned} \del{}{z} \int_\theta^{\theta^y} \frac{dt}{x - t^{1-1/y}} &= \del{\theta^y}{z} \bracket{\frac{1}{x-\paren{\theta^y}^{1-1/y}}} - \del{\theta}{z} \bracket{\frac{1}{x-\theta^{1-1/y}}} \\ &= \frac{\alpha y \theta^y}{x-\theta^{y-1}} - \frac{\alpha\theta}{x-\theta^{1-1/y}} = \del{}{z}\paren{\theta B}.\end{aligned}$$ Since $\theta B$ and the integral have the same derivative with respect to $z$, and they both vanish when $z=0$, they are equal. Here are three more generating functions. They can all be found by plugging in and switching summation signs. $$\begin{aligned} \sum_d &\euler{n}{d}_k x^d = (1-x)^n \sum_{j\ge 0} j^{k-1}(j+1)^{n-k}x^j \label{eq:gfnk} \\ \sum_k &\euler{n}{d}_k y^k = y \sum_{j\ge 0} (-1)^{d-j} \binom{n}{d-j} \frac{(j+1)^n - (jy)^n}{j+1-jy} \label{eq:gfnd} \\ \sum_{d,k} &\euler{n}{d}_k x^d y^k = (1-x)^n y \sum_{j \ge 0} \frac{(j+1)^n - (jy)^n}{j+1-jy} x^j \label{eq:gfn}\end{aligned}$$ We can now prove a special case of the Neggers-Stanley conjecture. Define the descent polynomial of $A \subset S_n$ to be $$F_A(x) = \sum_{\pi\in A} x^{\des(\pi)}.$$ Let $P$ be a poset of $n$ elements with labels $1,2,\ldots,n$. 
A linear extension of $P$ is an ordering of $1,2,\ldots,n$ which preserves the ordering of $P$; that is, a $\pi\in S_n$ which is such that if $i <_P j$ then $i$ appears before $j$ in the list $\pi(1),\pi(2),\ldots,\pi(n)$. If $\mathcal{L}(P)$ denotes the set of linear extensions of $P$, then Neggers and Stanley [@StanleyConj p. 311] conjectured that for any poset, every zero of $F_{\mathcal{L}(P)}$ is real. The conjecture has been shown to be false in general [@Branden1; @Stembridge1]. But we can prove it is true in a certain special case. \[thm:neggers1\] If $P_{n,k}$ is the poset on $\set{1,2,\ldots,n}$ whose Hasse diagram places $k$ at the bottom, joined to each of $1,2,\ldots,k-1,k+1,k+2,\ldots,n$ (that is, $k <_P a$ for every $a$ other than $k$, with no other relations), then $F_{\mathcal{L}(P_{n,k})}$ has only distinct real roots. For $u,v \ge 0$ let $$c_{u,v} \assign \sum_d \euler{u+v+1}{d}_{u+1}x^d = \sum_{\substack{ \pi\in S_{u+v+1} \\ \pi(1)=u+1 }} x^{\des(\pi)}.$$ Then setting $u=k-1$, $v=n-k$ yields the polynomial in question. If $v=0$, $c_{u,v}$ counts the reversal permutation $\rho$, which has $(u+v+1)-1 = u$ descents. Otherwise, if $v>0$, $c_{u,v}$ doesn’t count $\rho$ but it does count the permutation $$u+1,u,u-1,\ldots,1,u+v+1,u+v,\ldots,u+2$$ which has $u+v-1$ descents. So $$\deg(c_{u,v}) = \begin{piecewise} \pwif{u}{v=0} \\ \pwif{u+v-1}{v>0.} \end{piecewise}$$ Similarly, if $u=0$, $c_{u,v}$ counts the identity permutation, which has no descents. Otherwise it doesn’t count the identity but it does count $$u+1,u+2,\ldots,u+v+1,1,2,\ldots,u$$ which has 1 descent. So $x \nmid c_{0,v}(x)$ and if $u>0$, $x \mid c_{u,v}(x)$ but $x^2 \nmid c_{u,v}(x)$. Now let $$h_{u,v} \assign \frac{c_{u,v}}{(1-x)^{u+v+1}}.$$ Note that $c_{u,v}(1) = \sizeof{\set{\pi\in S_{u+v+1}:\pi(1)=u+1}} = (u+v)!$, so $c_{u,v}$ does not have a zero at $x=1$. Therefore $h_{u,v}$ has exactly the same zeroes as $c_{u,v}$, plus a pole at $x=1$. By , $$h_{u,v}(x) = \sum_{j\ge 0} j^u (j+1)^v x^j.$$ If $D$ represents differentiation with respect to $x$, we have $$(xD) h_{u,v}(x) = h_{u+1,v}(x) \andd (Dx) h_{u,v}(x) = h_{u,v+1}(x)$$ and so $$h_{0,v}(x) = (Dx)^v h_{0,0}(x) \andd h_{u,v}(x) = (xD)^u h_{0,v}(x).$$

[Figure: the chain $h_{0,0}(x)=(1-x)^{-1}$, $h_{0,1}(x)=(1-x)^{-2}$, $h_{0,2}(x)=(1-x)^{-3}(1+x)$, $h_{0,3}(x)=(1-x)^{-4}(1+4x+x^2)$, $h_{1,3}(x)=(1-x)^{-5}(8x+14x^2+2x^3)$, $h_{2,3}(x)=(1-x)^{-6}(8x+60x^2+48x^3+4x^4)$, $h_{3,3}(x)=(1-x)^{-7}(8x+160x^2+384x^3+160x^4+8x^5)$, obtained by successive applications of $Dx$ and then $xD$, together with the locations of their real zeroes in $[-\infty,0]$.]

$h_{0,0}(x) = (1-x)^{-1}$ and $h_{0,1}(x) = (1-x)^{-2}$ both have no zeroes. Suppose $v \ge 1$ and $h_{0,v}$ has only distinct real zeroes.
Since $\deg(c_{0,v}) = v-1$ and $x \nmid c_{0,v}(x)$, $xc_{0,v}(x)$ and $xh_{0,v}(x)$ have $v$ distinct real zeroes. By Rolle’s Theorem, $(Dx)h_{0,v}$ must have $v-1$ distinct zeroes interlaced between those of $xh_{0,v}(x)$. Furthermore, the denominator of $xh_{0,v}(x)$ has degree $v+1$, so $xh_{0,v}(x)$ approaches 0 as $x\rightarrow\infty$. Therefore its graph must turn back toward the $x$-axis somewhere to the left of its leftmost zero, at which place there must be another zero of $(Dx)h_{0,v}$. So we have found $v$ real zeroes of $h_{0,v+1}$, and that accounts for all its zeroes. Applying the $xD$ operator goes similarly. Given that $h_{u,v}$ has $d$ distinct real zeroes, by Rolle’s Theorem $Dh_{u,v}(x)$ has $d-1$ interlaced zeroes. Since the numerator of $h_{u,v}$ has degree smaller than the denominator, $h_{u,v}$ must turn back toward the axis to the left of its leftmost zero, which accounts for one more zero of $Dh_{u,v}$. Finally, $(xD)h_{u,v}$ has one more zero at 0 (which is distinct from the others since $x^2 \nmid h_{u,v}$ and therefore $x \nmid Dh_{u,v}$). So we have found $d+1$ real zeroes of $h_{u+1,v}$, and that accounts for all of the zeroes. \[cor:upsidedown\] The same can be said for the poset obtained by turning $P_{n,k}$ upside-down, whose Hasse diagram places $k$ at the top, joined to each of $1,2,\ldots,k-1,k+1,k+2,\ldots,n$ (that is, $a <_P k$ for every $a$ other than $k$). The result of turning a poset upside-down is to reverse all its linear extensions, which changes ascents to descents and vice-versa. So if $F(x)$ is the descent polynomial of the original poset, the descent polynomial of the new poset is $x^{n-1}F(x^{-1})$. So the roots of the new polynomial are the inverses of the roots of the original. General Behavior {#sec:unimodal} ================ We can say in general how the sequence $\euler{n}{d}_n,\euler{n}{d}_{n-1},\ldots,\euler{n}{d}_1$ behaves. The set of numbers $\euler{n}{d}_k$, for $n$ fixed, is very nearly unimodal if arranged appropriately. \[thm:unimodal\] Fix $n$ and $d$. Then

  ------- --------------------------------- --------------------------------------------------------------------------------
  (i)     If $d=0$,                         $0 = \euler{n}{d}_n = \cdots = \euler{n}{d}_2 < \euler{n}{d}_1 = 1$
  (ii)    If $1 \le d \le (n-3)/2$,         $\euler{n}{d}_n < \euler{n}{d}_{n-1} < \cdots < \euler{n}{d}_1$
  (iii)   If $n$ is even and $d=(n-2)/2$,   $\euler{n}{d}_n < \cdots < \euler{n}{d}_2 = \euler{n}{d}_1$
  (iv)    If $n$ is odd and $d=(n-1)/2$,    $\euler{n}{d}_n < \cdots < \euler{n}{d}_{(n+1)/2} > \cdots > \euler{n}{d}_1$
  (v)     If $n$ is even and $d=n/2$,       $\euler{n}{d}_n = \euler{n}{d}_{n-1} > \cdots > \euler{n}{d}_1$
  (vi)    If $(n+1)/2 \le d \le n-2$,       $\euler{n}{d}_n > \euler{n}{d}_{n-1} > \cdots > \euler{n}{d}_1$
  (vii)   If $d=n-1$,                       $1 = \euler{n}{d}_n > \euler{n}{d}_{n-1} = \cdots = \euler{n}{d}_1 = 0$.
  ------- --------------------------------- --------------------------------------------------------------------------------

(i) follows from the fact that the identity is the only permutation with 0 descents. (v), (vi), and (vii) follow from (iii), (ii), and (i) respectively because $\euler{n}{d}_k = \euler{n}{n-1-d}_{n+1-k}$. Let $f_n(x) = \euler{n}{\floor{x/n} + 1}_{n\floor{x/n}+n-x}$, which means that $f_n(nd-k) = \euler{n}{d}_k$ if $0 \le d \le n-1$ and $1 \le k \le n$.
![The graphs of $f_6(x)$ and $f_7(x)$, where $f_n(nd-k) = \euler{n}{d}_k$, as defined in .[]{data-label="fig:graphs"}](caterpillar.eps){height="4in" width="4in"} Figure \[fig:graphs\] shows the graphs of $f_6(x)$ and $f_7(x)$. Each monochromatic section is a sequence of the form $\euler{n}{d}_n, \euler{n}{d}_{n-1}, \ldots, \euler{n}{d}_1$. Note the graphs plateau where one sequence meets the next. Since $\euler{n}{d}_1 = \euler{n-1}{d} = \euler{n}{d+1}_n$, each sequence begins where the previous one ends. The content of the theorem is that $f_n$ is basically unimodal. That is, the sequences on the left increase, those on the right decrease, and those in the middle behave according to (iii) through (v). The theorem is true for small $n$ by inspection. By , $$\euler{n+1}{d}_k = \sum_{\ell=1}^{k-1} \euler{n}{d-1}_\ell + \sum_{\ell = k}^n \euler{n}{d}_\ell = \sum_{\ell=1}^{k-1} f_n(n(d-1)-\ell) + \sum_{\ell=k}^n f_n(nd-\ell).$$ Let $i = \ell+n-k$ in the first sum and $\ell-k$ in the second and we have $$\euler{n+1}{d}_k = \sum_{i=n-k+1}^{n-1} f(n(d-1)-(i-n+k)) + \sum_{i=0}^{n-k} f(nd - (i+k)) = \sum_{i=0}^{n-1} f(nd - k - i).$$ So imagine a caterpillar of length $n$ crawling on the graph of $y = f_n(x)$, as shown in the top graph of Figure \[fig:graphs\]. If his head is at $x$-position $nd-k$, the equation above says that the sum of the heights of his segments (or his total potential energy) is $\euler{n+1}{d}_k$. If he were to take a step forward, his total energy would be $\euler{n+1}{d}_{k+1}$. That would be an increase in energy if the new height of his head is higher than the current height of his tail. The theorem now follows easily by induction. Behavior if $d \ll n$ {#sec:geometric} ===================== If $d$ is much less than $n$, and $\pi$ is selected at random from those permutations of $n$ letters which have $d$ descents, then the distribution of $\pi(1)$ approaches a geometric distribution uniformly, in the following sense. Fix an integer $d > 0$. Suppose $\pi_n$ is chosen uniformly from those permutations of $n$ letters which have $d$ descents. Then for any $\epsilon > 0$ there is an $N$ such that $$\abs{\frac{\pr(\pi_n(1) = k)}{(1-p)p^{k-1}} - 1} < \epsilon \label{eq:geomthm}$$ for all integers $n$ and $k$ with $n \ge N$ and $1 \le k \le n$, where $p = \frac{d}{d+1}$. For $0 \le j \le d$, let $P_j(n) = (-1)^{d-j}\binom{n}{d-j}$. Then by and , $$\begin{aligned} \euler{n}{d}_k &= d^{k-1}(d+1)^{n-k} \sum_{0 \le j \le d} P_j(n)\paren{\frac{j}{d}}^{k-1}\paren{\frac{j+1}{d+1}}^{n-k} \\ \euler{n}{d} &= (d+1)^n \sum_{0 \le j \le d} P_j(n+1)\paren{\frac{j+1}{d+1}}^n.\end{aligned}$$ Since $(1-p)p^{k-1} = d^{k-1}/(d+1)^k$, the left-hand side of is $$\abs{\frac{\sum_{0 \le j \le d} P_j(n) \paren{\frac{j}{d}}^{k-1} \paren{\frac{j+1}{d+1}}^{n-k}} {\sum_{0 \le j \le d} P_j(n+1) \paren{\frac{j+1}{d+1}}^{n}} - 1}$$ and since $P_d(n) = \binom{n}{0} = 1$, the last term of both sums is 1. Therefore we have $$\abs{\frac{\sum_{0 \le j < d} \bracket{ P_j(n) \paren{\frac{j}{d}}^{k-1} \paren{\frac{j+1}{d+1}}^{n-k} - P_j(n+1) \paren{\frac{j+1}{d+1}}^n}} {1 + \sum_{0 \le j < d} P_j(n+1) \paren{\frac{j+1}{d+1}}^n}}.$$ Since $j/d < (j+1)/(d+1)$ when $0 \le j < d$, that’s bounded above by $$\frac { \sum_{0 \le j < d} \bracket{ \abs{P_j(n)} \paren{\frac{j+1}{d+1}}^{n-1} + \abs{P_j(n+1)} \paren{\frac{j+1}{d+1}}^n } }{ \abs{1 + \sum_{0 \le j < d} P_j(n+1) \paren{\frac{j+1}{d+1}}^n} }.$$ Now each term in each sum is a polynomial in $n$ times a decaying exponential in $n$. 
So both sums go to 0 as $n$ goes to infinity. The total variation distance between the distribution of $\pi_n(1)$ and the geometric distribution with parameter $p = \frac{d}{d+1}$ approaches 0 as $n$ approaches infinity. If Both Ends Are Fixed {#sec:bothends} ====================== We might now ask about the number of permutations with $d$ descents whose first and last positions are fixed. Let $$\euler{n}{d}_k^\ell \assign \sizeof{\set{\pi \in S_n: \mbox{$\des(\pi)=d$, $\pi(1)=k$, and $\pi(n)=\ell$}}}.$$ Suppose $1 \le k < k+m \le n$. Then $$\euler{n}{d}_k^{k+m} = \euler{n-1}{d}^m \andd \euler{n}{d}_{k+m}^k = \euler{n-1}{d-1}_m.$$ \[thm:bothends\] Let $\psi \in S_n$ be the $n$-cycle $(n,n-1,\ldots,2,1)$. Then for any $\pi \in S_n$, $$\psi\pi(i) = \begin{piecewise} \pwif{\pi(i)-1}{\pi(i)>1} \\ \pwif{n}{\pi(i)=1.} \end{piecewise}$$ (Imagine a device like a car odometer, with a window and $n$ wheels, on each of which are painted the numbers 1 through $n$. $\pi$ can be represented by turning the $i$th wheel until $\pi(i)$ shows through the window, for all $i$. If one then rolls all the wheels backward a notch, $\psi\pi$ shows through the window. For this reason we will refer to the transformation $\pi \mapsto \psi\pi$ as a [**rollback**]{}.) If $1 \le i < n$, let $D_i(\pi) = [\pi(i)>\pi(i+1)]$. The pair $\pi(i),\pi(i+1)$ has one of four types:

  Type                         $D_i(\pi)$   $D_i(\psi\pi)$   $D_i(\psi\pi)-D_i(\pi)$
  ------ --------------------- ------------ ---------------- -------------------------
  A      $1<\pi(i)<\pi(i+1)$   0            0                0
  B      $1<\pi(i+1)<\pi(i)$   1            1                0
  C      $1=\pi(i)<\pi(i+1)$   0            1                1
  D      $1=\pi(i+1)<\pi(i)$   1            0                -1

Most pairs are of type A or B. $\pi$ will have one pair of type C unless $\pi(n)=1$ and one pair of type D unless $\pi(1)=1$. Therefore $$\des(\psi\pi)-\des(\pi) = \sum_{i=1}^{n-1} D_i(\psi\pi)-D_i(\pi) = \begin{piecewise} \pwif{1}{\pi(1)=1} \\ \pwif{-1}{\pi(n)=1} \\ \pwott{0} \end{piecewise}$$ Let $$\begin{aligned} P_a^b &\assign \set{\pi\in S_n:\mbox{ $\pi(1)=a$ and $\pi(n)=b$}} \\ Q^b &\assign \set{\pi\in S_{n-1}: \pi(n-1)=b}.\end{aligned}$$ Consider the following sequence of bijections: $$\begin{CD} P_k^{k+m} @>{\mbox{rollback}}>> P_{k-1}^{k-1+m} @>{\mbox{rollback}}>> \cdots @>{\mbox{rollback}}>> P_1^{1+m} @>{\mbox{rollback}}>> P_n^m @>{\mbox{shorten}}>> Q^m \end{CD}$$ where “shortening” a permutation means removing $n$. (See Figure \[fig:rollback\] for an example.) The first $k-1$ rollbacks all preserve $\des$, and the final one increments $\des$. But the shortening decrements it again, since it removes $n$ from the front of the permutation. Therefore the net effect, across the whole sequence, is to preserve $\des$. So $\euler{n}{d}_k^{k+m} = \euler{n-1}{d}^m$ for all $d$. The second part of the theorem follows from the bijective sequence $$\begin{CD} P_{k+m}^k @>{\mbox{rollback}}>> P_{k-1+m}^{k-1} @>{\mbox{rollback}}>> \cdots @>{\mbox{rollback}}>> P_{1+m}^1 @>{\mbox{rollback}}>> P_m^n @>{\mbox{shorten}}>> Q_m \end{CD}$$ where $Q_a = \set{\pi\in S_{n-1}:\pi(1)=a}$. Here the final rollback decrements $\des$, and the shortening leaves it unchanged. So $\euler{n}{d}_{k+m}^k = \euler{n-1}{d-1}_m$. \[fig:rollback\] ![Examples of the actions of the bijections described in Theorem \[thm:bothends\], for $n=9$. Vertical lines show the positions of descents. If $\pi(1)<\pi(n)$, as at the top left, then the permutation is “rolled back” until $n$ appears at the front, and then $n$ is removed.
In each of the rollbacks but the last, one of the internal bars moves one position to the right, to accommodate a 1 changing to an $n$, but the total number of descents stays the same. Only when the number in the first position changes from 1 to $n$ do we gain a descent, but it vanishes again when we remove $n$ in the last step. The procedure is similar when $\pi(1)>\pi(n)$, as on the right, but the last rollback eliminates a descent, and removing $n$ leaves the number of descents unchanged.](rollback.eps "fig:"){height="170pt" width="470pt"} If $1 \le k,\ell \le n$ and $P_{n,k}^\ell$ is the poset on $\set{1,2,\ldots,n}$ defined by $k <_P a <_P \ell$ for all $a$ other than $k$ and $\ell$, then the descent polynomial of $\mathcal{L}(P_{n,k}^\ell)$ has only distinct real zeroes. If $\ell = k+m$, then the polynomial in question is $$\sum_d \euler{n}{d}_k^\ell x^d = \sum_d \euler{n-1}{d}^m x^d$$ which was shown to have real distinct zeroes in . As in that corollary, it follows immediately that turning the poset upside-down inverts the roots of the polynomial, leaving them real. Remarks {#sec:remarks} ======= In we noted that if $\pi$ is uniformly distributed over $S_n$ then the distribution of $D = \des(\pi)$ is approximately normal. Thus the normal density function $$\frac1{\sqrt{2\pi}\sigma} \exp\curly{-\frac12\paren{\frac{d-\mu}{\sigma}}^2}$$ with $\mu = \frac{n-1}2$ and $\sigma = \sqrt{\frac{n+1}{12}}$ is a good approximation for $\frac1{n!} \euler{n}{d}$ when $d$ is close to $\mu$. However, it can be off by orders of magnitude when $d$ is very small or very large. Theorem \[thm:unimodal\] shows that the sequence $\euler{n}{d}_1, \euler{n}{d}_2, \ldots, \euler{n}{d}_n$ is an interpolation between $\euler{n-1}{d}$ and $\euler{n-1}{d-1}$, so it seems a reasonable hypothesis that if $d$ is close to $\frac{n-1}2$, then $\euler{n}{d}_k$ is well approximated by $$\frac{(n-1)!}{\sqrt{\frac{\pi n}6}} \exp\curly{-\frac12 \paren{\frac{d + \frac{n-k}{n-1} - \frac{n}2}{\frac{\sqrt{n}}{12}}}^2}.$$ Experimental evidence for $n\le 200$ suggests that this is in fact the case. So while the distribution of $\pi(1)$ given $\des(\pi)$ is by no means normal, it does seem to behave like a segment of the normal curve when $d$ is near $\frac{n-1}2$. More generally, there may be some underlying curve which the Eulerian numbers, properly normalized, can be said to approach as $n$ grows large. It will look like a bell curve, but not be exactly normal, since the normal approximation is not very good when $d\ll n$. If so, it seems likely that the refined Eulerian numbers presented in this paper can be said to approach points on the same curve. Acknowledgements ================ Sergi Elizalde found an alternate version of the generating function in , and also sketched an alternate proof for using generating functions. The author would also like to thank Persi Diaconis and Jason Fulman for introductions to Stein’s method, and Sergey Fomin, Mark Skandera, and Boris Pittel for explanations of the Neggers-Stanley problem. Bruce Sagan and Herbert Wilf encouraged this work forward. And the most thanks go to Divakar Viswanath, who discussed every aspect of this paper at every stage of development. [^1]: Department of Mathematics, University of Michigan, 525 East University Avenue, Ann Arbor, MI 48109-1109, USA. Email:
--- author: - | M. De Gerone$^a$[^1], F. Gatti$^{a,b}$, W. Ootani$^c$, Y. Uchiyama$^{c}$, M. Nishimura$^c$, S. Shirabe$^d$, P.W. Cattaneo$^e$, M. Rossella$^e$\ Istituto Nazionale di Fisica Nucleare, Sezione di Genova,\ Via Dodecaneso 33, 16146, Genova (GE), Italy\ Universitá degli Studi di Genova,\ Via Dodecaneso 33, 16146, Genova, Italy,\ International Center for Elementary Particle Physics, University of Tokyo\ 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan\ Department of physics, Kyushu University\ 6-10-1 Hakozaki, Higashi-ku, Fukuoka 812-8581, Japan\ Istituto Nazionale di Fisica Nucleare, Sezione di Pavia,\ Via Agostino Bassi, 6, 27100, Pavia (PV), Italy\ E-mail: title: 'Design and test of an extremely high resolution Timing Counter for the MEG II experiment: preliminary results ' --- =1 Introduction: the MEG experiment {#sec:intro} ================================ The MEG experiment has been running since 2008 at the Paul Scherrer Institut (Villigen, CH), looking for the $\mu\rightarrow e\gamma$ decay. The MEG collaboration recently published the results based on the analysis of data collected in the years 2009-2011: BR($\mu\rightarrow e\gamma$)$\le 5.7\times 10^{-13}$ @90$\%$ C.L. [@bib:MEG2013]. While the analysis of the 2012-2013 data is still ongoing, an upgrade program of the MEG experiment (MEG II) has unfolded since 2012 [@bib:MEGup], aiming to improve the experiment sensitivity by an order of magnitude, down to $\sim 5\times 10^{-14}$. In order to reach such a sensitivity, most of the current detectors have to be re-designed or modified. In this paper, the development of an extremely high resolution detector for the measurement of the positron timing is described in detail. The Timing Counter upgrade {#sec:TCup} ========================== The MEG detector [@bib:MEGdetector] is designed to measure with the highest possible resolution the kinematic variables that define the signature of the decay $\mu\rightarrow e\gamma$. Photons are detected by a Liquid Xenon detector placed outside the magnetic spectrometer where positrons are reconstructed (see figure \[fig:megold\]). The spectrometer is made of a superconductive magnet, a set of Drift CHambers (DCH) and the Timing Counter (TC). The DCH system, together with the specially designed field provided by the COBRA magnet, measures positron energy and emission angle, while the purpose of the TC is to measure the positron time of impact. ![Schematic view of the MEG detector: side and front views.[]{data-label="fig:megold"}](MEGPIE5Detector_Aug09_AllSideTopView.png){width="90.00000%"} ![Picture of the current Timing Counter. PMT and scintillator bars lodged in a black plastic socket are visible.[]{data-label="fig:currentTC"}](TC.jpg){width="70.00000%"} The current Timing Counter [@bib:TCpaper] is made of two identical arrays (placed inside the magnet up- and down-stream the target position) of 15 scintillating bars (Bicron BC404), with $80\times 4 \times 4\, \mathrm{cm}^{3}$ size arranged in a barrel-like shape (see figure \[fig:currentTC\]). Each bar is read-out on both sides by a fine mesh PhotoMultiplier Tube (PMT, Hamamatsu R5924). Signals from PMTs are processed to be fed into the trigger and DAQ system. 
The Timing Counter has been running since 2008, showing good and stable time resolution of $\sim65$ ps [@bib:TCcalib].\ Some issues suggest that the design of the detector has to be changed to increase the resolution: - the PMT operation in high magnetic field and helium environment degrades the PMT transit time spread and gain, even when fine mesh PMTs are used; - large size scintillator bars introduce uncertainties in the impact point reconstruction and a spread of the trajectories of the optical photons inside the scintillator itself; - the large amount of material crossed by the positron in the TC bar prevents the use of hits beyond the first one. These problems originate from the usage of PMTs and large size scintillator bars. Thus, the natural choice is to increase the detector granularity and to upgrade the read-out system, exploiting the recent development of fast, high gain solid state detectors like Silicon PhotoMultipliers (SiPMs). A detector consisting of many scintillator plates (from now on: pixel) with SiPM read-out allows one to overcome the limitations of the current TC (see figure \[fig:newTC\] for a possible layout): - magnetic field has no influence on SiPM operation; - higher granularity results in smaller uncertainties from impact position measurement; - thanks to the smaller amount of material along the positron trajectory, it is possible to take advantage of the information coming from all the pixels crossed by the particle. ![Schematic view of the new timing counter design. On the left: overview of the detector. On the right: detail of the single counter configuration.[]{data-label="fig:newTC"}](newTC_detail.png){width="1.\textwidth"} The last point is quite remarkable, because the time resolution is expected to improve as $1/\sqrt{N_{hit}}$, where $N_{hit}$ is the number of pixels crossed by the positron. Moreover, the small size of the single element results in a more flexible configuration of the detector, making it possible to tailor the position and the density of the pixels along the detector. Also, very short rise time scintillators (like Bicron BC422, see section \[sec:scintcomp\]) can be used even in the presence of a short attenuation length.\ Good single counter performance has already been demonstrated [@bib:pixelStoykov; @bib:pixelWataru]. In the following, the research and development work carried out on several prototypes in order to choose the best material and device is presented. Single counter optimisation {#sec:pixelopt} =========================== The optimisation of the single counter configuration started from a systematic study of the available SiPMs and scintillators, comparing the properties relevant for our application. SiPM comparison {#sec:sipmcomp} --------------- Silicon Photomultipliers are good candidates for the new TC, thanks to their characteristics: good time resolution, quite high gain, compactness. We tested different models from Hamamatsu Photonics, Advansid, Ketek and SensL. All these devices share some features: they have a size of $3\times3$ mm$^2$, in order to be easily coupled to few-mm-thick scintillator pixels, and a good sensitivity in the near ultraviolet range, in order to match the emission spectra of common plastic scintillators. The SiPM models under test are summarised in table \[tab:SiPM\].
  Manufacturer          Model                Type                       Note
  --------------------- -------------------- -------------------------- -----------------------
                        S10362-33-050C       Conventional (Old) MPPC    Ceramic package
                        S10931-050P                                     surface mount
  Hamamatsu Photonics   S12572-050C(X)       New (standard type) MPPC   Metal quench resistor
                        S12572-020C(X)                                  25 $\mu m$ pitch
                        S12652-050C(X)       Trench-type MPPC           Metal quench resistor
                        3X3MM50UMLCT-B                                  Improved fill factor
  Advansid                                   NUV type
  Ketek                 PM3350 prototype-A   Trench Type
  SensL                 MicroFB-30050-SMT    B-Type                     Fast output

  : Summary of the SiPM models tested.[]{data-label="tab:SiPM"}

For each model, the noise level (dark count rate and cross-talk) together with the Photon Detection Efficiency (PDE) has been evaluated. The dependence of the breakdown voltage on temperature has also been evaluated. Finally, the timing resolution has been measured on a prototype pixel of fixed size. #### Setup SiPMs are put in a thermal chamber, which keeps the device at a fixed temperature (23$^\circ$C in the following measurements). Signals are transmitted over a coaxial cable to a voltage amplifier (developed at PSI, based on the MAR-6SM amplifier [@bib:pixelStoykov]), then they are sampled at 5 Gs/s by a waveform digitiser (DRS4 evaluation board [@bib:DRSStefan; @bib:DRSStefanNIM] also developed at PSI). #### Dark noise and cross-talk The noise level of the device is evaluated by looking at the waveforms acquired by a random trigger. The dark count rate is calculated from the probability of observing zero photo-electrons P(N$_{\mbox{p.e.}}$= 0) in a fixed time window. The result is shown in figure \[fig:darkcount\] as a function of the applied over-voltage.\ The cross-talk probability is calculated from the P(N$_{\mbox{p.e.}}\ge 2$)/P(N$_{\mbox{p.e.}}\ge 1$) ratio including a correction for the accidental coincidence of dark pulses. The result is shown in figure \[fig:crosstalk\]. The cross-talk probability increases almost linearly with the over-voltage. The standard-type SiPMs, namely SiPMs without a trench structure, turned out to have worse performance than the trench type, whose improved structure strongly reduces the noise level. In any case, the typical energy release in a pixel should guarantee an adequate signal-to-noise ratio even for those SiPMs with higher dark count and cross-talk rates. #### PDE The PDE for Near UltraViolet (NUV) light is measured with an LED whose wavelength (370$-$410 nm) approximately matches the scintillator emission peak. The LED intensity is adjusted in such a way that the average number of observed photo-electrons ranges between 0.5 and 1.0. The relative PDE is then calculated from P(N$_{\mbox{p.e.}}$= 0) in accordance with Poisson statistics, and thus the measured PDE value does not include the effects of cross-talk or after-pulsing. The result is shown in figure \[fig:pde\]. The highest PDE is obtained with the Hamamatsu S12572 model with $50\, \mu m$ cell pitch. A more detailed description of the noise and PDE studies can be found in [@bib:IEEEYusuke]. \ #### Breakdown voltage versus temperature dependence The BreakDown voltage (BD) versus temperature dependence has been measured by plotting the I-V characteristic of each SiPM at different temperatures (see figure \[fig:BDcurve\]) in the range $20^\circ\div 45^\circ$C, resulting in a linear dependence. ![I-V curves for different temperatures acquired with Advansid NUV SiPM.[]{data-label="fig:BDcurve"}](ADV_IV_Tcomp_log.pdf){width="70.00000%"} #### Time resolution The basic setup for the timing resolution measurement is the same as described above.
A scintillator pixel with size $60\times30\times5$ mm$^{3}$ is read out on each side by an array of 3 SiPMs connected in series. SiPMs are coupled to the pixel with optical grease. A 35 ns coaxial cable (7.5 m) transports signals to amplifiers, simulating the final experimental conditions. Counters are excited by using a $^{90}$Sr $\beta$-source, providing electrons with 2.2 MeV endpoint energy. An external Reference Counter (RC) made of a small piece of scintillator (BC422, size: $5\times5\times5$ mm$^{3}$) coupled to a Hamamatsu S10362-33-050C SiPM is used for triggering purposes. The timing is extracted by applying a software constant fraction discrimination on the recorded waveform with discriminating fraction in the range $5\div10\%$ depending on the SiPM model.\ The time resolution of the system is evaluated as the width of the distribution $\Delta T = T_{RC}-\left( T_{0}+T_{1}\right)/2$, where $T_{RC}$ and $T_{i}$ are the times measured by the reference counter and by each SiPM array, respectively. The RC resolution is evaluated to be $\sigma(T_{RC}) =30$ ps and subtracted. The summary of the results is shown in figure \[fig:timereso\] as a function of the applied over-voltage. Scintillator comparison {#sec:scintcomp} ----------------------- Three types of ultra-fast plastic scintillator from Saint-Gobain Crystals, BC418, BC420 and BC422, were tested. The main characteristics of each scintillator are summarised in table \[tab:scint\], where the characteristics of BC404 are also listed. The test was performed using $60\times30\times5$ mm$^{3}$ pixels. The best resolution is obtained with BC422, the one with the fastest rise time.\

  Properties                   BC404   BC418   BC420   BC422
  ---------------------------- ------- ------- ------- -------
  Light Yield (% Anthracene)   68      67      64      55
  Rise Time (ns)               0.7     0.5     0.5     0.35
  Decay time (ns)              1.8     1.4     1.5     1.6
  Wavelength peak (nm)         408     391     391     370
  Attenuation length (cm)      140     100     110     8
  Measured resolution (ps)     -       48      51      43

  : Summary of the properties of plastic scintillators tested. Measured time resolutions on a $60\times30\times5$ mm$^{3}$ sample are also listed.[]{data-label="tab:scint"}

Beam test ========= In order to test the detector in experimental conditions similar to the final ones and check the multiple hit scheme, a small prototype was built and tested at the Beam Test Facility (BTF) at the INFN Laboratori Nazionali di Frascati [@bib:BTF]. The BTF beam can be tuned in such a way as to provide electrons with energy similar to the MEG signal ($48$ MeV in our test) with average bunch multiplicity lower than 1. We decided to test counters equipped with both Hamamatsu and Advansid SiPMs, the ones with the best trade-off between time resolution and temperature dependence. Setup {#setupBTF} ----- We prepared two sets of pixel prototypes with BC418 scintillator, with size $90\times40\times5$ mm$^{3}$, equipped with Hamamatsu S12572-050C(X) (8 counters) and Advansid NUV (6 counters) SiPMs.\ The pixels, wrapped with 3M Radiant Mirror Film, are mounted on a moving stage that controls the movement in the plane perpendicular to the beam. The whole system is mounted on an optical bench enclosed in a shielded black box. The same reference counter described in section \[sec:sipmcomp\] is placed along the beam trajectory in front of the pixels. A lead glass calorimeter is placed behind the pixels for beam monitoring. The whole system is aligned to the beam line by using a laser tracker. Signals from SiPMs are fed into six DRS4 evaluation boards and sampled at 2.5 Gs/s.
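To make the timing extraction described above more concrete, the following is a minimal sketch (not the actual MEG II analysis code) of a software constant fraction discrimination applied offline to a digitised waveform: the time stamp is taken where the leading edge crosses a fixed fraction of the pulse amplitude, with linear interpolation between samples. The function name, the toy pulse shape and the noise level are illustrative assumptions; the 10% fraction and the 0.4 ns sampling step (2.5 Gs/s) follow the values quoted in the text.

```python
import numpy as np

def cfd_time(samples, dt_ns, fraction=0.10, baseline_samples=20):
    """Toy constant-fraction timing: time at which the leading edge crosses
    `fraction` of the pulse amplitude, with linear interpolation between samples."""
    wf = samples - samples[:baseline_samples].mean()   # baseline subtraction
    threshold = fraction * wf.max()
    i = int(np.argmax(wf > threshold))                 # first sample above threshold
    frac = (threshold - wf[i - 1]) / (wf[i] - wf[i - 1])
    return (i - 1 + frac) * dt_ns                      # interpolated crossing time

# toy waveform sampled at 2.5 Gs/s (0.4 ns per sample): fast rise, slower decay, small noise
dt = 0.4
t = np.arange(0.0, 80.0, dt)
rng = np.random.default_rng(0)
pulse = (t > 30.0) * (1.0 - np.exp(-(t - 30.0) / 0.8)) * np.exp(-(t - 30.0) / 5.0)
pulse = pulse + rng.normal(0.0, 0.005, t.size)
print("extracted time: %.2f ns" % cfd_time(pulse, dt))
```

In the measurements described above the same procedure would be applied to both SiPM arrays of a pixel and to the reference counter, and the quantity $\Delta T$ built from the resulting time stamps.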
Data analysis {#sec:anal} ------------- #### Charge analysis Events are selected by cutting on the charge distribution of the first two pixels. An example of such a distribution is shown in figure \[fig:Q1vsQ2\], where the bunch multiplicity is clearly visible. We also applied a cut on the reference counter charge spectrum, by selecting the events around the Landau peak of the charge distribution. ![Charge distribution of the first couple of pixels. The selected events (single electron bunch) are marked in red.[]{data-label="fig:Q1vsQ2"}](Q1_vs_Q2_HAM_comp_low){width="70.00000%"} The timing resolution is then evaluated by taking the width of the $\Delta T$ distribution, defined in two different ways: $$\begin{aligned} \label{eq:DT1} \Delta T(N) &=& T_{\mbox{RC}} - \frac{1}{N}\sum^{N}_{i=1} T_{i} \,, \\ \label{eq:DT2} \Delta T(N) &=& \frac{1}{\sqrt{2}}\left[\frac{1}{N/2}\sum^{N/2}_{j=1} T_{a_j}-\frac{1}{N/2}\sum^{N/2}_{i=1} T_{b_i}\right] \,,\end{aligned}$$ where $T_{\mbox{RC}}$ and $T_{i}$ are the times measured by the reference counter and by the pixels respectively, with $a_j$ and $b_i$ running over even and odd indices respectively. In formula \[eq:DT2\] the sum is made over two different subgroups of pixels. In both cases, we can evaluate the timing resolution as a function of the number of hits used in the time averaging. #### DRS calibrations Dedicated runs were taken to evaluate the contribution from the electronics jitter. It was found to be 18.7 $\pm 1.2$ ps and 16.2 $\pm 1.2$ ps for pixels whose arrays are read out by the same or by different boards, respectively. The former contribution is higher because the jitters from channels on the same board are fully correlated. Results ------- We checked the multiple hit scheme by studying the time resolution versus the number of hits, relying on the approach in Eq. \[eq:DT1\] as in section \[sec:sipmcomp\]. The contribution from the electronics, described in section \[sec:anal\], is taken into account. The RC resolution, which was checked with dedicated runs and found to be $\sigma(T_{RC})=27$ ps, is also subtracted. As expected, the best result is obtained with the largest number of hits, with $\sigma(\Delta T) < 30$ ps. Preliminary resolutions are summarised in figure \[fig:multihit\], compared with the expected $1/\sqrt{N_{\mbox{hit}}}$ behaviour, which is also shown. An average $N_{\mbox{hit}} \sim 6.6$ is expected in the experiment. ![Summary of the measured resolutions versus the number of hits: HAM (Hamamatsu) and ADV (Advansid) relying on the comparison of different pixel sets and HAM-RC and ADV-RC relying on the Reference Counter (RC). The resolutions following the expected $1/\sqrt{N_{hit}}$ behaviour are also shown. []{data-label="fig:multihit"}](multi_hit_new.png){width=".7\textwidth"} Conclusions =========== We presented the R&D work on the upgrade of the Timing Counter for the MEG II experiment. The basic concepts of the new design, namely the good time resolution achievable with small scintillator counters read out by SiPMs and the improvement of the overall time resolution by averaging the time measurements over multiple hits, have been tested. Optimising the choice among different types of SiPM and scintillators leads to an extremely good time resolution with a single counter, down to $\sigma(\Delta T)\sim 43$ ps. A beam test performed at the Beam Test Facility in Frascati experimentally proved the multiple hit scheme. The analysis is still ongoing; a preliminary resolution $\sigma(\Delta T)<30$ ps with eight counters has been measured.
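As a toy numerical cross-check of the $1/\sqrt{N_{hit}}$ behaviour exploited by the multiple hit scheme (Eq. \[eq:DT1\]), the sketch below smears a common impact time with the $\sim43$ ps single-counter resolution quoted in section \[sec:scintcomp\] and looks at the spread of the averaged time as a function of the number of hits. It is only an illustration under the assumption of independent Gaussian single-counter errors; it neglects the electronics jitter and the reference-counter subtraction discussed in section \[sec:anal\], so the numbers are not directly comparable with the measured ones.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_single = 0.043   # ns: ~43 ps single-counter resolution (scintillator comparison)
n_events = 20000

for n_hit in (1, 2, 4, 6, 8):
    # each crossed pixel measures the same impact time with an independent Gaussian error
    t_pixels = rng.normal(0.0, sigma_single, size=(n_events, n_hit))
    t_avg = t_pixels.mean(axis=1)          # the average entering Eq. (eq:DT1)
    print("N_hit = %d : sigma = %.1f ps (naive 43/sqrt(N) = %.1f ps)"
          % (n_hit, 1e3 * t_avg.std(), 1e3 * sigma_single / np.sqrt(n_hit)))
```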
The authors thank the Beam Test Facility crew, the mechanical and electronics workshops at INFN Section of Genova and the Paul Scherrer Institute detector group for their valuable help. [19]{} J. Adam et al., \[MEG Collaboration\], *New Constraint on the Existence of the $\mu^{+}\rightarrow e^{+}\gamma$ Decay*, *Phys. Rev. Lett.* **110** (2013) 201801. A.M. Baldini et al., \[MEG Collaboration\], *MEG upgrade proposal*, \[arXiv:1301.7225\] \[physics.ins-det\]. J. Adam et al., \[MEG Collaboration\], *The MEG detector for $\mu^{+}\rightarrow e^{+}\gamma$ decay search*, *Eur. Phys. J. C* **73** (2013) 2365. M. De Gerone et al., *Development and Commissioning of the Timing Counter for the MEG Experiment*, *IEEE Trans. Nucl. Sci.* **59** (2012) 379. M. De Gerone et al., *The MEG timing counter calibration and performance*, *Nucl. Inst. Meth. A* **638** (2011) 41. A. Stoykov et al., *A time resolution study with a plastic scintillator read out by a Geiger-mode Avalanche Photodiode*, *Nucl. Inst. Meth. A* **695** (2012) 202. W. Ootani, *Development of Pixelated Scintillation Detector for Highly Precise Time Measurement in MEG Upgrade*, *Nucl. Inst. Meth. A* **732** (2013) 146. <http://www.psi.ch/drs/evaluation-board> S. Ritt et al., *Application of the DRS Chip for Fast Waveform Digitizing*, *Nucl. Inst. Meth. A* **623** (2010) 486. Y. Uchiyama \[MEG collaboration\], *Nuclear Science Symposium Conference Record, IEEE, Seoul, Korea, 2013.*, in press G. Mazzitelli et al., *Commissioning of the DA$\Phi$NE beam test facility*, *Nucl. Inst. Meth. A* **515** (2003) 524. [^1]: Corresponding author.
--- abstract: | The paper investigates the non-vanishing of $H^1({\mathcal{E}}(n))$, where ${\mathcal{E}}$ is a (normalized) rank two vector bundle over any smooth irreducible threefold $X$ of degree $d$ such that ${\mathrm{Pic}}(X) \cong {\mathbb{Z}}$. If $\epsilon$ is the integer defined by the equality $\omega_X = {\mathcal{O}}_X(\epsilon)$, and $\alpha$ is the least integer $t$ such that $H^0({\mathcal{E}}(t)) \ne 0$, then, for a non-stable ${\mathcal{E}}$ ($\alpha \le 0)$ the first cohomology module does not vanish at least between the endpoints $\frac{\epsilon-c_1}{2}$ and $-\alpha-c_1-1$. The paper also shows that there are other non-vanishing intervals, whose endpoints depend on $\alpha$ and also on the second Chern class $c_2$ of ${\mathcal{E}}$. If ${\mathcal{E}}$ is stable the first cohomology module does not vanish at least between the endpoints $\frac{\epsilon-c_1}{2}$ and $\alpha-2$. The paper considers also the case of a threefold $X$ with ${\mathrm{Pic}}(X) \ne {\mathbb{Z}}$ but ${\mathrm{Num}}(X) \cong {\mathbb{Z}}$ and gives similar non-vanishing results.\ **Keyword:** rank two vector bundles, smooth threefolds, non-vanishing of 1-cohomology.\ **MSC 2010:** 14J60, 14F05. author: - 'E. Ballico' - 'P. Valabrega' - 'M. Valenzano' title: | Non-vanishing theorems for rank two vector\ bundles on threefolds [^1] --- Introduction ============ In 1942 G. Gherardelli ([@Ghe]) proved that, if $C$ is a smooth irreducible curve in ${\mathbb{P}}^3$ whose canonical divisors are cut out by the surfaces of some degree $e$ and moreover all linear series cut out by the surfaces in ${\mathbb{P}}^3$ are complete, then $C$ is the complete intersection of two surfaces. Shortly and in the language of modern algebraic geometry: every $e$-subcanonical smooth curve $C$ in ${\mathbb{P}}^3$ such that $h^1({\mathcal{I}}_C(n)) = 0$ for all $n$ is the complete intersection of two surfaces. Thanks to the Serre correspondence between curves and vector bundles (see [@Hvb], [@H1], [@H2]) the above statement is equivalent to the following one: if ${\mathcal{E}}$ is a rank two vector bundle on ${\mathbb{P}}^3$ such that $h^1({\mathcal{E}}(n)) = 0$ for all $n$, then ${\mathcal{E}}$ splits. There are many improvements of the above result with a variety of different approaches (see for instance [@CV1], [@CV2], [@Ellia], [@P], [@RV]): it comes out that a rank two vector bundle ${\mathcal{E}}$ on ${\mathbb{P}}^3$ is forced to split if $h^1({\mathcal{E}}(n))$ vanishes for just one strategic $n$, and such a value $n$ can be chosen arbitrarily within a suitable interval, whose endpoints depend on the Chern classes and the least number $\alpha$ such that $h^0({\mathcal{E}}(\alpha)) \ne 0$. When rank two vector bundles on a smooth threefold $X$ of degree $d$ in ${\mathbb{P}}^4$ are concerned, similar results can be obtained, with some interesting difference. In 1998 Madonna ([@Madonna]) proved that on a smooth threefold $X$ of degree $d$ in ${\mathbb{P}}^4$ there are ACM rank two vector bundles (i.e. whose 1-cohomology vanishes for all twists) that do not split. And this can happen, for a normalized vector bundle ${\mathcal{E}}$ ($c_1\in\{0,-1\})$, only when $ 1-\frac{d+c_1}{2} < \alpha < \frac{d-c_1}{2}$, while an ACM rank two vector bundle on $X$ whose $\alpha$ lies outside of the interval is forced to split. 
The following non-vanishing results for a normalized non-split rank two vector bundle on a smooth irreducible threefold of degree $d$ in ${\mathbb{P}}^4$ are proved in [@Madonna]: if $\alpha \le 1-\frac{d+c_1}{2}$, then $h^1({\mathcal{E}}(\frac{d-3-c_1}{2}))\ne 0$ if $d+c_1$ is odd, $h^1({\mathcal{E}}(\frac{d-4-c_1}{2}))\ne 0, h^1({\mathcal{E}}(\frac{d-2-c_1}{2}))\ne 0$ if $d+c_1$ is even, while $h^1({\mathcal{E}}(\frac{d-c_1}{2}))\ne 0$ if $d+c_1$ is even and moreover $\alpha \le -\frac{d+c_1}{2}$; if $\alpha\ge \frac{d-c_1}{2}$, then $h^1({\mathcal{E}}(\frac{d-3-c_1}{2}))\ne 0$ if $d+c_1$ is odd, while $h^1({\mathcal{E}}(\frac{d-4-c_1}{2}))\ne 0$ if $d+c_1$ is even. In [@Madonna] it is also claimed that the same techniques work to obtain similar non-vanishing results on any smooth threefold $X$ with ${\mathrm{Pic}}(X) \cong {\mathbb{Z}}$ and $h^1({\mathcal{O}}_X(n)) = 0$, for every $n$. The present paper investigates the non-vanishing of $H^1({\mathcal{E}}(n))$, where ${\mathcal{E}}$ is a rank two vector bundle over any smooth irreducible threefold $X$ of degree $d$ such that ${\mathrm{Pic}}(X) \cong {\mathbb{Z}}$ and $H^1({\mathcal{O}}_X(n)) = 0, \forall n$. Actually we can prove that for such an ${\mathcal{E}}$ there is a wider range of non-vanishing for $h^1({\mathcal{E}}(n))$, thus improving the above results. More precisely, when ${\mathcal{E}}$ is (normalized and) non-stable ($\alpha \le 0$) the first cohomology module does not vanish at least between the endpoints $\frac{\epsilon-c_1}{2}$ and $-\alpha-c_1-1$, where $\epsilon$ is defined by the equality $\omega_X = {\mathcal{O}}_X(\epsilon)$ (and is $d-5$ if $X \subset {\mathbb{P}}^4$). But we can show that there are other non-vanishing intervals, whose endpoints depend on $\alpha$ and also on the second Chern class $c_2$ of ${\mathcal{E}}$. If, on the contrary, ${\mathcal{E}}$ is stable, the first cohomology module does not vanish at least between the endpoints $\frac{\epsilon-c_1}{2}$ and $\alpha-2$, but other ranges of non-vanishing can be produced. We give a few examples obtained by pull-back from vector bundles on ${\mathbb{P}}^3$. We must remark that most of our non-vanishing results do not exclude the range for $\alpha$ between the endpoints $1-\frac{d+c_1}{2}$ and $\frac{d-c_1}{2}$ (for a general threefold it becomes $-\frac{\epsilon+3+c_1}{2} < \alpha < \frac{\epsilon+5-c_1}{2})$. Actually [@Madonna] produces some examples of nonsplit ACM rank two vector bundles on smooth hypersurfaces in ${\mathbb{P}}^4$, but it can be seen that they do not conflict with our theorems. As to threefolds with ${\mathrm{Pic}}(X) \ne {\mathbb{Z}}$, we need to observe that a key point is a good definition of the integer $\alpha$. We are able to prove, by using a boundedness argument, that $\alpha$ exists when ${\mathrm{Pic}}(X) \ne {\mathbb{Z}}$ but ${\mathrm{Num}}(X) \cong {\mathbb{Z}}$. In this event the correspondence between rank two vector bundles and two-codimensional subschemes can be proved to hold. In order to obtain non-vanishing results that are similar to the results proved when ${\mathrm{Pic}}(X) \cong {\mathbb{Z}}$, we also need to use the Kodaira vanishing theorem, which holds in characteristic 0. We can extend the results to characteristic $p > 0$ if we assume a Kodaira-type vanishing condition.
Notation ======== We work over an algebraically closed field ${\mathbf{k}}$ of any characteristic.\ Let $X$ be a non-singular irreducible projective algebraic variety of dimension 3, for short a smooth threefold.\ We fix an ample divisor $H$ on $X$, so we consider the polarized threefold $(X,H)$.\ We denote with ${\mathcal{O}_{X}}(n)$, instead of ${\mathcal{O}_{X}}(nH)$, the invertible sheaf corresponding to the divisor $nH$, for each $n\in{\mathbb{Z}}$.\ For every cycle $Z$ on $X$ of codimension $i$ it is defined its degree with respect to $H$, i.e. $\deg(Z;H) := Z \cdot H^{3-i}$, having identified a codimension 3 cycle on $X$, i.e. a $0$-dimensional cycle, with its degree, which is an integer.\ From now on (with the exception of section 7) we consider a smooth polarized threefold $(X,{\mathcal{O}}_{X}(1))=(X,H)$ that satifies the following conditions: $\mathbf{(C1)}$ : ${\mathrm{Pic}}(X)\cong{\mathbb{Z}}$ generated by $[H]$, $\mathbf{(C2)}$ : $H^1(X,{\mathcal{O}_{X}}(n)) = 0$ for every $n\in{\mathbb{Z}}$, $\mathbf{(C3)}$ : $h^0(O_X(1)) \ne 0$. By condition $\mathbf{(C1)}$ every divisor on $X$ is linearly equivalent to $aH$ for some integer $a\in{\mathbb{Z}}$, i.e. every invertible sheaf on $X$ is (up to an isomorphism) of type ${\mathcal{O}_{X}}(a)$ for some $a\in{\mathbb{Z}}$, in particular we have for the canonical divisor $K_X \sim \epsilon H$, or equivalently $\omega_{X}\simeq{\mathcal{O}}_{X}(\epsilon)$, for a suitable integer $\epsilon$. Furthermore, by Serre duality condition $\mathbf{(C2)}$ implies that $H^2(X,{\mathcal{O}_{X}}(n)) = 0$ for all $n\in{\mathbb{Z}}$.\ Since by assumption $A^{1}(X)={\mathrm{Pic}}(X)$ is isomorphic to ${\mathbb{Z}}$ through the map $[H]\mapsto 1$, where $[H]=c_{1}({\mathcal{O}}_{X}(1))$, we identify the first Chern class $c_1({\mathcal{F}})$ of a coherent sheaf with a whole number $c_1$, where $c_1({\mathcal{F}}) = c_1 H$.\ The second Chern class $c_2({\mathcal{F}})$ gives the integer $c_2 = c_2({\mathcal{F}})\cdot H$ and we will call this integer the second Chern number or the second Chern class of ${\mathcal{F}}$.\ We set $$d := \deg(X;H) = H^3,$$ so $d$ is the degree of the threefold $X$ with respect to the ample divisor $H$.\ Let $c_1(X)$ and $c_2(X)$ be the first and second Chern classes of $X$, that is of its tangent bundle $TX$ (which is a locally free sheaf of rank 3); then we have $$c_1(X) = [-K_X] = -\epsilon [H],$$ so we identify the first Chern class of $X$ with the integer $-\epsilon$. Moreover we set $$\tau := \deg(c_2(X);H) = c_2(X) \cdot H,$$ i.e. $\tau$ is the degree of the second Chern class of the threefold $X$.\ In the following we will call the triple of integers $(d,\epsilon,\tau)$ the **characteristic numbers** of the polarized threefold $(X,{\mathcal{O}_{X}}(1))$. We recall the well-known Riemann-Roch formula on the threefold $X$ (see [@Valenzano], proposition 4). \[gRR\] Let ${\mathcal{F}}$ be a rank $r$ coherent sheaf on $X$ with Chern classes $c_{1}({\mathcal{F}})$, $c_{2}({\mathcal{F}})$ and $c_{3}({\mathcal{F}})$. 
Then the Euler-Poincaré characteristic of ${\mathcal{F}}$ is $$\begin{aligned} \chi({\mathcal{F}}) = & \frac{1}{6}\Big(c_{1}({\mathcal{F}})^{3} - 3 c_{1}({\mathcal{F}})\cdot c_{2}({\mathcal{F}}) + 3 c_{3}({\mathcal{F}})\Big) + \frac{1}{4}\Big(c_{1}({\mathcal{F}})^{2} - 2 c_{2}({\mathcal{F}})\Big)\cdot c_{1}(X) + \\ & + \frac{1}{12} c_{1}({\mathcal{F}})\cdot\Big(c_{1}(X)^{2} + c_{2}(X)\Big) + \frac{r}{24} c_{1}(X)\cdot c_{2}(X)\end{aligned}$$ where $c_{1}(X)$ and $c_{2}(X)$ are the Chern classes of $X$, that is the Chern classes of the tangent bundle $TX$ of $X$. So applying the Riemann-Roch Theorem to the invertible sheaf ${\mathcal{O}_{X}}(n)$, for each $n\in{\mathbb{Z}}$, we get the Hilbert polynomial of the sheaf ${\mathcal{O}_{X}}(1)$ $$\chi({\mathcal{O}_{X}}(n)) = \frac{d}{6} \left(n - \frac{\epsilon}{2}\right) \left[ \left(n - \frac{\epsilon}{2}\right)^2 + \frac{\tau}{2d} - \frac{\epsilon^2}{4} \right]\!.$$ Let ${\mathcal{E}}$ be a rank 2 vector bundle on the threefold $X$ with Chern classes $c_1({\mathcal{E}})$ and $c_2({\mathcal{E}})$, so with Chern numbers $c_1$ and $c_2$. We assume that ${\mathcal{E}}$ is normalized, i.e. that $c_1 \in\{0,-1\}$. It is defined the integer $\alpha$, the so called first relevant level, such that $h^0({\mathcal{E}}(\alpha)) \ne 0, h^0({\mathcal{E}}(\alpha-1)) = 0$. If $\alpha > 0$, ${\mathcal{E}}$ is called stable, non-stable otherwise. We set $$\vartheta = \frac{3c_2}{d} - \frac{\tau}{2d} + \frac{\epsilon^2}{4}-\frac{3c_1^2}{4}, \qquad \zeta_0 = \frac{\epsilon-c_1}{2}, \quad \text{and} \quad w_0 = [\zeta_0]+1,$$ where $[\zeta_0]=$ integer part of $\zeta_0$, so the Hilbert polynomial of ${\mathcal{E}}$ can be written as $$\chi({\mathcal{E}}(n)) = \frac{d}{3}\big(n - \zeta_0\big)\Big[\big(n - \zeta_0\big)^2 - \vartheta\Big].$$ If $\vartheta\ge0$ we set $$\zeta = \zeta_0 + \sqrt{\vartheta}$$ so in this case the Hilbert polynomial of ${\mathcal{E}}$ has the three real roots $\zeta' \le \zeta_0 \le \zeta$ where $\zeta' = \zeta_0 - \sqrt{\vartheta}, \zeta = \zeta_0 + \sqrt{\vartheta}$. We also define $\bar\alpha = [\zeta]+1$.\ The polinomial $\chi({\mathcal{E}}(n))$, as a rational polynomial, has three real roots if and only if $\vartheta\ge0$, and it has only one real root if and only if $\vartheta<0$.\ If ${\mathcal{E}}$ is normalized, we set $$\delta = c_2+d\alpha^2+c_1d\alpha.$$ We have $\delta = 0$ if and only if ${\mathcal{E}}$ splits (see [@VV], Lemma 3.13: the proof works in general). Unless stated otherwise, we work over the smooth polarized threefold $X$ and *${\mathcal{E}}$ is a normalized non-split rank two vector bundle on $X$*. About the characteristic numbers $\epsilon$ and $\tau$ ====================================================== In this section we want to recall some essentially known properties of the characteristic numbers of the threefold $X$ (see also [@SB] for more general statements). We start with the following remark. \[OX\] 1. For the fixed ample invertible sheaf ${\mathcal{O}_{X}}(1)$ we have $$h^0({\mathcal{O}_{X}}(n)) \begin{cases} = 0 & \quad\text{for } n < 0 \\ = 1 & \quad \text{for } n = 0 \\ \ne 0 & \quad\text{for } n > 0 \end{cases}$$ and also $h^0({\mathcal{O}_{X}}(m)) - h^0({\mathcal{O}_{X}}(n)) > 0$ for all $n,m\in{\mathbb{Z}}$ with $m > n \ge 0$.\ 2. 
It holds $$\chi({\mathcal{O}_{X}}) = h^0({\mathcal{O}_{X}}) - h^3({\mathcal{O}_{X}}) = 1 - h^0({\mathcal{O}_{X}}(\epsilon)),$$ so we have: $$\chi({\mathcal{O}_{X}}) = 1 \iff \epsilon < 0, \ \ \ \chi({\mathcal{O}_{X}}) = 0 \iff \epsilon = 0, \ \ \ \chi({\mathcal{O}_{X}}) < 0 \iff \epsilon > 0.$$ Let $(X,{\mathcal{O}_{X}}(1))$ be a smooth polarized threefold with [characteristic numbers $(d,\epsilon,\tau)$]{}. Then it holds: 1) : $\epsilon \ge -4$, 2) : $\epsilon = -4$ if and only if $X = {\mathbb{P}}^3$, i.e. $(d,\epsilon,\tau)=(1,-4,6)$ and so $\frac{\tau}{2d} - \frac{\epsilon^2}{4} = -1$, 3) : if $\epsilon = -3$, then $X$ is a hyperquadric in ${\mathbb{P}}^4$, so $(d,\epsilon,\tau)=(2,-3,8)$ and $\frac{\tau}{2d} - \frac{\epsilon^2}{4} = -\frac{1}{4}$, 4) : $\epsilon\tau$ is a multiple of $24$, in particular if $\epsilon < 0$ then $\epsilon \tau = -24$, 5) : if $\epsilon\ne 0$, then $\tau > 0$, 6) : if $\epsilon = 0$, then $\tau > -2d$, 7) : $\tau$ is always even, 8) : if $\epsilon$ is even, then $\frac{\tau}{2d} - \frac{\epsilon^2}{4} \ge -1$, 9) : if $\epsilon$ is odd, then $\frac{\tau}{2d} - \frac{\epsilon^2}{4} \ge -\frac{1}{4}$, 10) : if $\epsilon < 0$, then the only possibilities for $(\epsilon,\tau)$ are the following $$(\epsilon,\tau) \in \{(-4,6), \, (-3,8), \, (-2,12), \, (-1,24) \},$$ For statements **1)**, **2)**, **3)** see [@SB].\ **4)** Observe that $\chi({\mathcal{O}_{X}}) = - \frac{1}{24} \epsilon \tau$ is an integer, and moreover, if $\epsilon < 0$, then $\chi({\mathcal{O}_{X}})=1$.\ **5)** By Remark \[OX\] we have: if $\epsilon > 0$ then $- \frac{1}{24} \epsilon \tau < 0$, while if $\epsilon < 0$ then $- \frac{1}{24} \epsilon \tau > 0$. In both cases we deduce $\tau > 0$.\ **6)** If $\epsilon = 0$, then we have $$\chi({\mathcal{O}_{X}}(n)) = \frac{d}{6} n \left(n^2 + \frac{\tau}{2d} \right),$$ and also $$\chi({\mathcal{O}_{X}}(n)) = h^0({\mathcal{O}_{X}}(n)) > 0 \quad \forall n > 0,$$ therefore we must have $\frac{2d+\tau}{12} > 0$, so $\tau > -2d$.\ **7)** Assume that $\epsilon$ is even, then we have $$d\left(1- \frac{\epsilon}{2}\right)\left(1+ \frac{\epsilon}{2}\right) + \frac{\tau}{2} = d\left(1-\frac{\epsilon^2}{4}+\frac{\tau}{2d}\right) = 6\, \chi\left({\mathcal{O}_{X}}\left(\frac{\epsilon}{2}+1\right)\right) \in{\mathbb{Z}}$$ and moreover $d\left(1- \frac{\epsilon}{2}\right)\left(1+ \frac{\epsilon}{2}\right)\in{\mathbb{Z}}$, so $\tau$ must be even.\ If $\epsilon$ is odd, the proof is quite similar.\ **8)** Let $\epsilon$ be even. If it holds $$h^0\left({\mathcal{O}_{X}}\!\left(\frac{\epsilon}{2}+1\right)\right) - h^0\left({\mathcal{O}_{X}}\!\left(\frac{\epsilon}{2}-1\right)\right) = \chi\left({\mathcal{O}_{X}}\left(\frac{\epsilon}{2}+1\right)\right) < 0,$$ then we must have $h^0\left({\mathcal{O}_{X}}\left(\frac{\epsilon}{2}-1 \right)\right) \ne 0$, which implies $$h^0\left({\mathcal{O}_{X}}\!\left(\frac{\epsilon}{2}+1\right)\right) - h^0\left({\mathcal{O}_{X}}\!\left(\frac{\epsilon}{2}-1\right)\right) \ge 0,$$ a contradiction. So we must have:$$\chi\left({\mathcal{O}_{X}}\!\left(\frac{\epsilon}{2}+1\right)\right) = \frac{d}{6}\left(1 + \frac{\tau}{2d} - \frac{\epsilon^2}{4}\right) \ge 0,$$ therefore $$\frac{\tau}{2d} - \frac{\epsilon^2}{4} \ge -1.$$\ **9)** The proof is quite similar to the proof of **8**).\ **10)** If $\epsilon < 0$, then by **1)** we have $\epsilon\in\{-4,-3,-2,-1\}$, and moreover $\epsilon \tau = -24$ by **4)**, so we obtain the thesis. 
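As a quick numerical sanity check (not part of the argument), the sketch below evaluates the Hilbert polynomial $\chi({\mathcal{O}_{X}}(n)) = \frac{d}{6}\left(n - \frac{\epsilon}{2}\right)\left[\left(n - \frac{\epsilon}{2}\right)^2 + \frac{\tau}{2d} - \frac{\epsilon^2}{4}\right]$ recalled in section 2 for the two cases singled out in the proposition above, ${\mathbb{P}}^3$ with $(d,\epsilon,\tau)=(1,-4,6)$ and the hyperquadric in ${\mathbb{P}}^4$ with $(d,\epsilon,\tau)=(2,-3,8)$, and compares it with the Euler characteristics computed directly from the standard exact sequences. The helper names are ours and the script is only an illustration of the formula, not a replacement for the proofs.

```python
from fractions import Fraction

def chi_OX(n, d, eps, tau):
    # chi(O_X(n)) = (d/6) (n - eps/2) [ (n - eps/2)^2 + tau/(2d) - eps^2/4 ]
    t = Fraction(n) - Fraction(eps, 2)
    return Fraction(d, 6) * t * (t * t + Fraction(tau, 2 * d) - Fraction(eps * eps, 4))

def chi_P3(n):
    # chi(O_{P^3}(n)) = (n+1)(n+2)(n+3)/6
    return Fraction((n + 1) * (n + 2) * (n + 3), 6)

def chi_quadric(n):
    # hyperquadric Q in P^4: chi(O_Q(n)) = binom(n+4,4) - binom(n+2,4) = (n+1)(n+2)(2n+3)/6
    return Fraction((n + 1) * (n + 2) * (2 * n + 3), 6)

for n in range(-6, 7):
    assert chi_OX(n, 1, -4, 6) == chi_P3(n)       # (d, epsilon, tau) = (1, -4, 6)
    assert chi_OX(n, 2, -3, 8) == chi_quadric(n)  # (d, epsilon, tau) = (2, -3, 8)
print("chi(O_X(n)) reproduces P^3 and the hyperquadric for n in [-6, 6]")
```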
Non-stable vector bundles ($\alpha \le 0$) ========================================== **Case $\epsilon \ge 1$.**\ In this subsection we make the following assumptions: *${\mathcal{E}}$ is a normalized non-split rank two vector bundle with $\alpha \le 0$ and $\epsilon \ge 1$*.\ The case $\epsilon \le 0$ is investigated in the next subsection. \[nonstable0\] Assume that $\zeta_0 < -\alpha-c_1-1$. Then it holds: $$h^1({\mathcal{E}}(n))-h^2({\mathcal{E}}(n)) = (n-\zeta_0)\delta$$ for every $n$ such that $\zeta_0 < n \le -\alpha-c_1-1$. First we assume $c_1 = 0$. It is enough to observe that, from the inequality $n+\alpha \le 0$ and the exact sequence $$0 \to {\mathcal{O}}_X(n-\alpha) \to {\mathcal{E}}(n) \to {\mathcal{I}}(n+\alpha) \to 0$$ we obtain: $h^0({\mathcal{E}}(n)) = h^0({\mathcal{O}}_X(n-\alpha)) = \chi({\mathcal{O}}_X(n-\alpha))+h^0({\mathcal{O}}_X(\epsilon-n+\alpha)) = \chi({\mathcal{O}}_X(n-\alpha))$ since $\epsilon-n+\alpha \le -1$. We also have: $h^0({\mathcal{E}}(\epsilon-n)) = h^0({\mathcal{O}}_X(\epsilon-n-\alpha)) = \chi({\mathcal{O}}_X(\epsilon-n-\alpha))+h^0({\mathcal{O}}_X(n+\alpha)) = \chi({\mathcal{O}}_X(\epsilon-n-\alpha))$.\ Now it is enough to observe that $h^1({\mathcal{E}}(n))-h^2({\mathcal{E}}(n)) = h^0({\mathcal{E}}(n))-h^0({\mathcal{E}}(\epsilon-n)) -\chi({\mathcal{E}}(n)) = \chi({\mathcal{O}}_X(n-\alpha))- \chi({\mathcal{O}}_X(\epsilon-n-\alpha))-\chi({\mathcal{E}}(n))$. If we use the Riemann-Roch formulas for the Euler functions we obtain the required equality.\ Now we assume $c_1 = -1$. We recall that $h^3({\mathcal{E}}(n)) = h^0({\mathcal{E}}(\epsilon-n+1))$. As before we have $h^1({\mathcal{E}}(n))-h^2({\mathcal{E}}(n)) = \chi({\mathcal{O}}_X(n-\alpha))- \chi({\mathcal{O}}_X(\epsilon-n-\alpha+1))-\chi({\mathcal{E}}(n))$ and the computation is very similar. Observe that the statement of Proposition \[nonstable0\] still holds when $n = \zeta_0$, because the two sides of the equality vanish. \[nonstable1\] Let us assume that $\zeta_0 < -\alpha-c_1-1$ and let $n$ be such that $\zeta_0 < n \le -\alpha-1-c_1$. Then $h^1({\mathcal{E}}(n)) \ge (n-\zeta_0)\delta$. In particular $h^1({\mathcal{E}}(n)) \ne 0$. It is enough to observe that $$h^1({\mathcal{E}}(n))-h^2({\mathcal{E}}(n)) = (n-\zeta_0)\delta$$ and that the right side of the equality is strictly positive for a non-split vector bundle. Observe that the above theorem describes a non-empty set of integers if and only if $-\alpha -1 -c_ 1 > \zeta_0$; this means $\alpha < -\frac{\epsilon+2+c_1}{2}$, i.e. $\alpha \le -\frac{\epsilon+3+c_1}{2}$. So our assumption on $\alpha$ agrees with the bound of [@Madonna].\ Observe that the inequality on $\alpha$ implies that $\alpha \le -2$ if $\epsilon \ge 1$. \[nonstable2\] Assume that $\frac{6\delta}{d}-\frac{\tau}{2d}+\frac{\epsilon^2}{4}-\frac{3c_{1}^2}{4}\ge 0$. Let $n > \zeta_0$ be such that $ \epsilon-\alpha-c_1+1 \le n < \zeta_0+\sqrt{\frac{6\delta}{d}-\frac{\tau}{2d}+\frac{\epsilon^2}{4}-\frac{3c_{1}^2}{4}}$ and put $$S(n) =\frac{d}{6}\left(n-\frac{\epsilon-c_1}{2}\right)\!\left[(n-\frac{\epsilon-c_1}{2})^2-6\frac{c_2+d \alpha^2+c_1d\alpha}{d}+\frac{\tau}{2d}-\frac{\epsilon^2}{4}+\frac{3c_1^2}{4}\right]\!.$$ Then $h^1({\mathcal{E}}(n))\ge -S(n) > 0$. In particular $h^1({\mathcal{E}}(n))\ne 0$. Assume $c_1 = 0$. Under our hypothesis $h^0({\mathcal{E}}(\epsilon-n)) = 0$ and so $-h^1({\mathcal{E}}(n))+h^2({\mathcal{E}}(n)) = \chi({\mathcal{E}}(n)) - h^0({\mathcal{O}}_X(n-\alpha))$. 
Observe that $\chi({\mathcal{E}}(n)) - h^0({\mathcal{O}}_X(n-\alpha)) -S(n) = \frac{1}{2}nd\alpha(-\epsilon+n+\alpha))+\frac{1}{12}d\alpha(-3\epsilon \alpha+2\alpha^2+\epsilon^2+\frac{\tau}{d}) \le 0$. Therefore we have: $h^1({\mathcal{E}}(n)) \ge h^2({\mathcal{E}}(n))-S(n)$. Hence $h^1({\mathcal{E}}(n))$ may possibly vanish when $$\left(n-\frac{\epsilon}{2}\right)^2-6\frac{c_2+d \alpha^2}{d}+\frac{\tau}{2d}-\frac{\epsilon^2}{4} \ge 0.$$ When $S(n) < 0$, so $-S(n) > 0$, $h^1({\mathcal{E}}(n)) \ge -S(n) > 0$ and in particular it cannot vanish.\ If $c_1 = -1$ the proof is quite similar. Now we put $\frac{\tau}{2d}-\frac{\epsilon^2}{4} =\lambda$ and consider the following degree $3$ polynomial: $$F(X) = X^3+\left(\lambda-\frac{6\delta}{d}\right)X+\frac{6\alpha \delta}{d}.$$ It is easy to see that, if $\frac{6\delta}{d}-\frac{\tau}{2d}+\frac{\epsilon^2}{4} \le 0$, $F(X)$ is strictly increasing and so it has only one real root $X_0$. \[nonstable3\] Assume that $\frac{6\delta}{d}-\frac{\tau}{2d}+\frac{\epsilon^2}{4} \le 0$. Let $n$ be such that $ \epsilon-\alpha-c_1+1 \le n < -\alpha+X_0+\zeta_0$, where $X_0$ = unique real root of $F(X)$ . Then $h^1({\mathcal{E}}(n)) \ge -\frac{d}{6}F(n+\alpha-\zeta_0) > -\frac{d}{6}F(X_0) = 0$. In particular $h^1({\mathcal{E}}(n)) \ne 0$. Assume $c_1 = 0$, the proof being quite similar if $c_1 = -1$.\ It holds (see proposition \[nonstable0\]): $$\begin{aligned} h^1({\mathcal{E}}(n)) & - h^2({\mathcal{E}}(n)) = \chi({\mathcal{O}}_X(n-\alpha))-\chi({\mathcal{E}}(n)) = \\ & = \left(n-\frac{\epsilon}{2}\right)(c_2+d\alpha^2)+\chi({\mathcal{O}}_X(\epsilon-n-\alpha)) = \\ & = \left(n-\frac{\epsilon}{2}\right)(c_2+d\alpha^2)-\frac{d}{6}\left(n+\alpha-\frac{\epsilon}{2}\right)\left(\left(n+\alpha-\frac{\epsilon}{2}\right)^2+\lambda\right).\end{aligned}$$ If we put: $X = n+\alpha-\frac{\epsilon}{2}$, we obtain: $\frac{d}{6} F(X) = \frac{d}{6}(X^3+(\lambda-\frac{6\delta}{d})X+\frac{6\alpha \delta}{d}) = h^2({\mathcal{E}}(n))-h^1({\mathcal{E}}(n))$. Therefore $h^1({\mathcal{E}}(n)) > -\frac{d}{6}F(n+\alpha-\zeta_0) > -\frac{d}{6}F(X_0) = 0$. Observe that in Theorems \[nonstable2\] and \[nonstable3\] $\alpha$ can be $0$. **Case $\epsilon \le 0$.** In the event that $\epsilon \le -2$, we have $\epsilon-\alpha-c_1+1 \le -\alpha-c_1-1$. Therefore Theorems \[nonstable2\], \[nonstable3\] give something new only beyond $-\alpha-c_1-1.$ First of all we observe that Theorems \[nonstable1\], \[nonstable3\] obviously hold as they are stated also when $\epsilon \le 0$. So we discuss Theorem \[nonstable2\].\ A. $\epsilon \le -2$.\ In theorem \[nonstable2\] we need to know that $$\frac{1}{2}nd\alpha(-\epsilon+n+\alpha)+\frac{1}{12}d\alpha(\epsilon^2+\frac{\tau}{d}-3\epsilon \alpha+2\alpha^2) \le 0.$$ The first term of the sum is for sure negative; as for $$\frac{1}{12}d\alpha\left(\epsilon^2+\frac{\tau}{d}\right)+\frac{1}{12}d\alpha^2(-3\epsilon+2\alpha)$$ we observe that the quantity in brackets has discriminant $$\Delta = \epsilon^2-8\frac{\tau}{d} = 4\left(\frac{\epsilon^2}{4}-\frac{\tau}{2d}+\frac{\tau}{2d}-8\frac{\tau}{d}\right) \le 4(1-15) < 0.$$ Therefore it is positive for all $\alpha \le 0$ and the product is negative.\ B. $\epsilon = -1$.\ In theorem \[nonstable2\] we need to know that $$\frac{1}{2}nd\alpha(1+n+\alpha)+\frac{1}{12}d\alpha\left(1+\frac{\tau}{d}\right)+\frac{1}{12}d\alpha^2(3+2\alpha) \le 0.$$ If $\alpha \le -2$, then it is enough to observe that $\frac{\tau}{d}+3\alpha+2\alpha^2 \ge 0$. 
If $\alpha = -1$ we have to consider $-\frac{1}{2}n^2d+\frac{1}{12}d\frac{\tau}{d}$ and then we observe that $6n^2+\frac{\tau}{d} > 0$. If $\alpha = 0$ obviously the quantity is $0$.\ C. $\epsilon = 0$.\ In theorem \[nonstable2\] we need to know that $$\frac{1}{2}nd\alpha(n+\alpha)+\frac{1}{12}d\alpha\left(\frac{\tau}{d}\right)+\frac{1}{12}d\alpha^2(2\alpha) \le 0.$$\ It is enough to observe that $2\alpha^2+\frac{\tau}{d} > 0$ by Proposition 3.2, $\mathbf{6)}$ if $\alpha < 0$; otherwise we have a $0$ quantity, and that $n+\alpha \le 0$. Observe that the case $\alpha = 0$ in Theorem \[nonstable1\] can occur only if $\epsilon \le -c_1-3$. In theorem \[nonstable2\] we do not use the hypothesis $-\frac{\epsilon+3}{2} \ge \alpha$, but we assume that $6\frac{c_2+d \alpha^2}{d}-\frac{\tau}{2d}+\frac{\epsilon^2}{4}-1\ge 0$. In theorem \[nonstable3\] we do not use the hypothesis $-\frac{\epsilon+3}{2} \ge \alpha$, but we assume that $6\frac{c_2+d \alpha^2}{d}-\frac{\tau}{2d}+\frac{\epsilon^2}{4} < 0$. Moreover in both theorems there is a range for $n$, the left endpoint being $\epsilon-\alpha-c_1+1$ and the right endpoint being either $\zeta_0+\sqrt{6\frac{c_2+d \alpha^2}{d}-\frac{\tau}{2d}+\frac{\epsilon^2}{4}-1}$ (\[nonstable2\]) or $\zeta_0-\alpha+X_0$ (\[nonstable3\]).\ In [@Madonna] there are examples of ACM nonsplit vector bundles on smooth threefolds in ${\mathbb{P}}^4$, with $-\frac{\epsilon+3+c_1}{2} < \alpha < \frac{\epsilon+5-c_1}{2}$. We want to emphasize that our theorems do not conflict with the examples of [@Madonna]: if $C$ is any curve described in [@Madonna] and lying on a smooth threefold of degree $d$, then our numerical constraints cannot be satisfied (we have checked it directly in many but not all cases). Let us consider a smooth degree $d$ threefold $X \subset {\mathbb{P}}^4$.\ We have: $$\epsilon = d-5,\ \ \tau = d(10-5d+d^2),\ \ \theta = \frac{3c_2}{d}-\frac{d^2-5+3c_1^2}{4}$$ (see [@Valenzano]). As to the characteristic function of $O_X$ and ${\mathcal{E}}$, it holds: $$\chi({\mathcal{O}}_X(n)) = \frac{d}{6}\left(n-\frac{d-5}{2}\right)\!\left[\left(n-\frac{d-5}{2}\right)^2+\frac{d^2-5}{4}\right]\!,$$ $$\chi({\mathcal{E}}(n)) = \frac{d}{3} \left(n-\frac{d-5-c_1}{2}\right)\!\left[\left(n-\frac{d-5-c_1}{2}\right)^2+\frac{d^2}{4}-\frac{5}{4}+\frac{3c_1^2}{4}-\frac{3c_2}{d}\right]\!.$$ Then it is easy to see that the hypothesis of Theorem \[nonstable2\], i.e. $6\frac{\delta}{d}-\frac{d^2-5+3c_1^2}{4}\ge 0$ is for sure fulfilled if $c_2 \ge 0, \alpha \le -\frac{d-2+c_1}{2}$. In fact we have (for the sake of simplicity when $c_1 = 0)$: $-6\frac{6c_2+d\alpha^2}{d}+\frac{d^2-5}{4} \le \frac{d^2-5}{4}-6\frac{d^2-2d+1}{4} = -\frac{5d^2-12d+11}{4} < 0$. Condition $\mathbf{(C2)}$ holds for sure if $X$ is a smooth hypersurface of $ {\mathbb{P}}^4$. In general, for a characteristic $0$ base field, only the Kodaira vanishing holds ([@HAL], remark 7.15) and so, unless we work over a threefold $X$ having some stronger vanishing, we need assume, in Theorems \[nonstable1\], \[nonstable2\], \[nonstable3\] that $n-\alpha \notin \{0,...,\epsilon\}$ (which implies, by duality, that also $\epsilon -n+\alpha \notin \{0,...,\epsilon\}$). Observe that the first assumption ($n-\alpha \notin \{0,...,\epsilon\})$ in the case of Theorem \[nonstable1\] is automatically fulfilled because of the hypothesis $\zeta_0 < -\alpha-c_1-1$, and in Theorems \[nonstable2\] and \[nonstable3\] because of the hypothesis $\epsilon-\alpha-c_1+1 \le n$. In fact $n-\alpha$ is greater than $\epsilon$. 
But this implies that $\epsilon-n+\alpha < 0$ and so also the second condition is fulfilled, at least when $\epsilon \ge 0$. For the case $\epsilon < 0$ in positive characteristic see [@SB]. Observe that, if $\epsilon < 0$, Kodaira (and so $\mathbf{(C2)}$) holds for every $n$. For a general discussion, also in characteristic $p > 0$, of this question, see section 7, remark 7.8. In the above theorems we assume that ${\mathcal{E}}$ is a nonsplit bundle. If ${\mathcal{E}}$ splits, then (see section 2) $\delta = 0$. In Theorem \[nonstable1\] this implies $h^1({\mathcal{E}}(n))-h^2({\mathcal{E}}(n)) = 0$ and so nothing can be said about the non-vanishing. Let us now consider Theorem \[nonstable2\]. If $\delta = 0$, then we must have: $\zeta_0 < n < \zeta_0+\sqrt{-\frac{\tau}{2d}+\frac{\epsilon^2}{4}-\frac{3c_1^2}{4}} \le \zeta_0+1$ (the last inequality depending on Proposition 3.2, $\mathbf{8),9)})$. As a consequence $\zeta_0$ cannot be a whole number. Moreover, since we have $2\zeta_0-\alpha+1 \le n < \zeta_0+\sqrt{-\frac{\tau}{2d}+\frac{\epsilon^2}{4}-\frac{3c_1^2}{4}}$, we obtain that $\zeta_0 < \alpha \le 0$, hence $\epsilon-c_1 \le -1$. If $c_1 = 0$, $\epsilon \in \{-1, -3\}$. If $\epsilon = -3$, then $n$ must satisfy (see Proposition 3.2, $\mathbf{8})$ the following inequalities: $-\frac{3}{2} < n < -1$, which is a contradiction. If $\epsilon = -1$, then, by Proposition 3.2, $\mathbf{8})$ we have $-1+\alpha+1 < -\frac{1}{2}+ \frac{1}{2} = 0$, which implies $\alpha > 0$, a contradiction. If $c_1 = -1$, then $\epsilon \in \{-2, -4\}$. If $\epsilon = -4$, we have $\sqrt{-\frac{\tau}{2d}+\frac{\epsilon^2}{4}-\frac{3c_1^2}{4}} = \frac{1}{2}$, and so we must have: $-\frac{3}{2} < n < -1$, which is impossible. If $\epsilon = -2$, then $\zeta_0 = -\frac{1}{2}$ and so $-2-\alpha+2 < -\frac{1}{2}+\sqrt{1-\frac{3}{4}}$, which implies $-\alpha < 0$, hence $\alpha > 0$, a contradiction with the non-stability of ${\mathcal{E}}$. Then we consider Theorem \[nonstable3\]. The vanishing of $\delta$ on the one hand implies $\lambda > 0$ and $X_0 = 0$. But on the other hand from our hypothesis on the range of $n$ we see that $\zeta_0 \le -2$, hence $\epsilon = -4, c_1 = 0$. But this contradicts Proposition 3.2, $\mathbf{2)}$. Stable vector bundles ===================== In the present section we assume that $\alpha \ge \frac{\epsilon-c_1+5}{2}$, or equivalently that $c_1+2\alpha\ge\epsilon+5$. This means that $\alpha \ge 1$ in any event, so ${\mathcal{E}}$ is stable. The following lemma holds both in the stable and in the non-stable case. \[leftvanishing\] If $h^1({\mathcal{E}}(m)) = 0$ for some integer $m \le \alpha-2$, then $h^1({\mathcal{E}}(n)) = 0$ for all $n \le m$. First of all observe that, by our condition $\mathbf{(C3)}$, from the restriction exact sequence we can obtain in cohomology the exact sequence $$0 \to H^0({\mathcal{E}}(t-1)) \to H^0({\mathcal{E}}(t)) \to H^0({\mathcal{E}}_H(t)) \to 0.$$ Then we can follow the proof given in [@VV] for ${\mathbb{P}}^3$ (where condition $\mathbf{(C3)}$ is automatically fulfilled). \[stable1\] Let ${\mathcal{E}}$ be a rank 2 vector bundle on the threefold $X$ with first relevant level $\alpha$. If $\alpha\ge\frac{\epsilon+5-c_1}{2}$, then $h^1({\mathcal{E}}(n))\ne 0$ for $w_0\le n\le \alpha-2$. By the hypothesis it holds $w_0 \le \alpha-2$, so we have $h^0({\mathcal{E}}(n))=0$ for all $n\le w_0+1$. Assume $h^1({\mathcal{E}}(w_0))=0$, then by Lemma \[leftvanishing\] it holds $h^1({\mathcal{E}}(n))=0$ for every $n\le w_0$.
Therefore we have $$\chi({\mathcal{E}}(w_0)) = h^0({\mathcal{E}}(w_0)) + h^1({\mathcal{E}}(-w_0+\epsilon-c_1)) - h^0({\mathcal{E}}(-w_0+\epsilon-c_1)) = 0.$$ Now observe that the characteristic function has at most three real roots, that are symmetric with respect to $\zeta_0$. Therefore, if $w_0$ is a root, then $w_0 = \zeta_0+\sqrt{\theta}$ and the other roots are $\zeta_0$ and $ \zeta_0-\sqrt{\theta}$. This implies that $\chi({\mathcal{E}}(w_0+1)) > 0$. On the other hand $$\chi({\mathcal{E}}(w_0+1)) = - h^1({\mathcal{E}}(w_0+1)) \le 0,$$ a contradiction. So we must have $h^1({\mathcal{E}}(w_0))\ne0$, then by Lemma \[leftvanishing\] we obtain the thesis. If ${\mathcal{E}}$ is *ACM*, then $\alpha<\frac{\epsilon+5-c_1}{2}$. \[stable2\] Let ${\mathcal{E}}$ be a normalized rank 2 vector bundle on the threefold $X$ with $\vartheta\ge0$. Then the following hold: 1) : $h^1({\mathcal{E}}(n))\ne 0$ for $\zeta_0< n < \zeta$. 2) : $h^1({\mathcal{E}}(n))\ne 0$ for $w_0\le n \le \bar\alpha-2$, and also for $n=\bar\alpha-1$ if $\zeta\notin{\mathbb{Z}}$. 3) : If $\zeta\in{\mathbb{Z}}$ and $\alpha<\bar\alpha$, then $h^1({\mathcal{E}}(\bar\alpha-1))\ne 0$. **1)** The Hilbert polynomial of the bundle ${\mathcal{E}}$ is strictly negative for each integer $n$ such that $w_0\le n < \zeta$, but for such an integer $n$ we have $h^2({\mathcal{E}}(n))\ge0$ and $h^0({\mathcal{E}}(n))-h^0({\mathcal{E}}(-n+\epsilon-c_1)) \ge 0$ since $n\ge-n+\epsilon-c_1$ for every $n\ge w_0$, therefore we must have $h^1({\mathcal{E}}(n))\ne 0$.\ **2)** It is simply a restatement of 1) in terms of $\bar\alpha$, which is, by definition, $[\zeta]+1$.\ **3)** If $\zeta\in{\mathbb{Z}}$, then $\zeta=\bar\alpha-1$, so we have $\chi({\mathcal{E}}(\bar\alpha-1))=\chi({\mathcal{E}}(\zeta))=0$. Moreover $h^0({\mathcal{E}}(\bar\alpha-1))\ne0$ since $\alpha<\bar\alpha$, therefore $h^0({\mathcal{E}}(\bar\alpha-1))-h^3({\mathcal{E}}(\bar\alpha-1)) > 0$, and $h^1({\mathcal{E}}(n)) = 0$ implies $h^1({\mathcal{E}}(m)) = 0, \forall m \le n$; hence we must have $h^1({\mathcal{E}}(\bar\alpha-1))\ne 0$ to obtain the vanishing of $\chi({\mathcal{E}}(\bar\alpha-1))$. If ${\mathcal{E}}$ is *ACM*, then $\vartheta<0$. Observe that in this section we assume $\alpha \ge \frac{\epsilon-c_1+5}{2}$, in order to have $w_0 \le \alpha-2$ and so to have a non-empty range for $n$ in Theorem \[stable1\]. Observe that in the stable case we need not assume any vanishing of $h^1({\mathcal{O}}_X(n))$. Observe that split bundles are excluded in this section because they cannot be stable. Examples ======== We need the following. Let $X \subset {\mathbb{P}}^4$ be a smooth threefold of degree $d$ and let $f$ be the projection onto ${\mathbb{P}}^3$ from a general point of ${\mathbb{P}}^4$ not on $X$, and consider a normalized rank two vector bundle ${\mathcal{E}}$ on ${\mathbb{P}}^3$ which gives rise to the pull-back ${\mathcal{F}}= f^*({\mathcal{E}})$. We want to check that $f_\ast (\mathcal {O}_X) \cong \oplus _{i=0}^{d-1} \mathcal {O}_{\mathbb {P}^3}(-i)$.\ Since $f$ is flat and $\deg (f)=d$, $f_\ast (\mathcal {O}_X)$ is a rank $d$ vector bundle. The projection formula and the cohomology of the hypersurface $X$ show that $f_\ast (\mathcal {O}_X)$ is ACM. Thus there are integers $a_0\ge \cdots \ge a_{d-1}$ such that $f_\ast (\mathcal {O}_X) \cong \oplus _{i=0}^{d-1} \mathcal {O}_{\mathbb {P}^3}(a_i)$. Since $h^0(X,\mathcal {O}_X)=1$, the projection formula gives $a_0 = 0$ and $a_i<0$ for all $i>0$.
Since $h^0(X,\mathcal {O}_X(1)) = 5 = h^0(\mathbb {P}^3,\mathcal {O}_{\mathbb {P}^3}(1))+h^0(\mathbb {P}^3,\mathcal {O}_{\mathbb {P}^3})$, the projection formula gives $a_1=-1$ and $a_i \le -2$ for all $i \ge 2$. Fix an integer $t \le d-2$ and assume proved $a_i = -i$ for all $i\le t$ and $a_i < -t$ for all $i>t$. Since $h^0(X,\mathcal {O}_X(t+1))= \binom{t+5}{4} = \sum _{i=0}^{t} \binom{t+4-i}{3}$, we get $a_{t+1} = -t-1$ and, if $t+1 \le d-2$, $a_i<-t-1$ for all $i>t+1$. Since $f_\ast (\mathcal {O}_X) \cong \oplus _{i=0}^{d-1} \mathcal {O}_{\mathbb {P}^3}(-i)$, the projection formula gives the following formula for the first cohomology module: $$H^i({\mathcal{F}}(n)) = H^i({\mathcal{E}}(n)) \oplus H^i({\mathcal{E}}(n-1)) \oplus ...\oplus H^i({\mathcal{E}}(n-d+1))$$ all $i$. Observe that, as a consequence of the above equalitiy for $ i = 0$, we obtain that ${\mathcal{F}}$ has the same $\alpha$ as ${\mathcal{E}}$. Moreover the pull-back ${\mathcal{F}}= f^*({\mathcal{E}})$ and ${\mathcal{E}}$ have the same Chern class $c_1$, while $c_2({\mathcal{F}}) = dc_2({\mathcal{E}})$ and therefore $\delta({\mathcal{F}}) = d\delta({\mathcal{E}})$. **Examples** **1.** (a stable vector bundle with $c_1 = 0$, $c_2 = 4$ on a quadric hypersurface $X$).\ Choose $d = 2$ and take the pull-back ${\mathcal{F}}$ of the stable vector bundle ${\mathcal{E}}$ on ${\mathbb{P}}^3$ of [@VV], example 4.1. Then the numbers of ${\mathcal{F}}$ (see Notation) are: $c_1 = 0$, $c_2 = 4$, $\alpha = 1$, $\bar\alpha = 2$, $\zeta_0 = -\frac{3}{2}$, $w_0 = -1$, $\theta = \frac{25}{4}$, $\zeta = -\frac{3}{2}+\sqrt{\frac{25}{4}} = 1 \in {\mathbb{Z}}$. From [@VV], example 4.1, we know that $h^1({\mathcal{E}}) \ne 0$. Since $H^1({\mathcal{F}}(1)) = H^1({\mathcal{E}}(1)) \oplus H^1({\mathcal{E}})$, we have: $ h^1({\mathcal{F}}(1))\ne 0$, one shift higher than it is stated in Theorem \[stable2\], 2.\ **2.** (a non-stable vector bundle with $c_1 = 0$, $c_2 = 45$ on a hypersurface of degree $5$). Choose $d = 5$ and take the pull-back ${\mathcal{F}}$ of the stable vector bundle ${\mathcal{E}}$ on ${\mathbb{P}}^3$ of [@VV], example 4.5. Then the numbers of ${\mathcal{F}}$ (see Notation) are: $c_1 = 0$, $c_2 = 45$, $\alpha = -3$, $\delta = 90$, $\zeta_0 = 0$. From [@VV], theorem 3.8, we know that $h^1({\mathcal{E}}(12)) \ne 0$. Since $H^1({\mathcal{F}}(16)) = H^1({\mathcal{E}}(16)) \oplus \dots \oplus H^1({\mathcal{E}}(12))$, we have: $ h^1({\mathcal{F}}(16))\ne 0$ (Theorem \[nonstable2\] states that $h^1({\mathcal{F}}(10) \ne 0$).\ **3.** (a stable vector bundle with $c_1 = -1$, $c_2 = 2$ on a quadric hypersurface).\ Let ${\mathcal{E}}$ be the rank two vector bundle corresponding to the union of two skew lines on a smooth quadric hypersurface $Q \subset {\mathbb{P}}^4$. Then its numbers are : $c_1 = -1$, $c_2 = 2$, $\alpha = 1$ and it is known that $h^1({\mathcal{E}}(n)) \ne 0$ if and only if $n = 0$.\ Observe that in this case $\theta = \frac{5}{2} \ge 0, \zeta_0 = -1$, $\bar\alpha = 1$. Therefore theorem \[stable2\] states exactly that $h^1({\mathcal{E}}(0)) \ne 0$, hence this example is sharp.\ **4.** (a non-stable vector bundle with $c_1 = 0$, $c_2 = 8$ on a quadric hypersurface).\ Choose $d = 2$ and take the pull-back ${\mathcal{F}}$ of the non-stable vector bundle ${\mathcal{E}}$ on ${\mathbb{P}}^3$ of [@VV], example 4.10. Then the numbers of ${\mathcal{F}}$ (see Notation) are: $c_1 = 0$, $c_2 = 8$, $\alpha = 0$, $\zeta_0 = -\frac{3}{2}$, $\delta = 8$. 
We know (see [@VV], example 4.10) that $h^1({\mathcal{E}}(2)) \ne 0, h^1({\mathcal{E}}(3)) = 0$. Since $H^1({\mathcal{F}}(3)) = H^1({\mathcal{E}}(3)) \oplus H^1({\mathcal{E}}(2))$, we have: $ h^1({\mathcal{F}}(3)) \ne 0$, exactly the bound of Theorem \[nonstable2\]. The bounds for a degree $d$ threefold in ${\mathbb{P}}^4$ agree with [@VV], where ${\mathbb{P}}^3$ is considered. Threefolds with ${\mathrm{Pic}}(X) \ne {\mathbb{Z}}$ ==================================================== Let $X$ be a smooth and connected projective threefold defined over an algebraically closed field $\mathbb {K}$. Let ${\mathrm{Num}}(X)$ denote the quotient of ${\mathrm{Pic}}(X)$ by numerical equivalence. Numerical classes are denoted by square brackets $[\,\,]$. We assume ${\mathrm{Num}}(X) \cong \mathbb {Z}$ and take the unique isomorphism $\eta : {\mathrm{Num}}(X)\to \mathbb {Z}$ such that $1$ is the image of a fixed ample line bundle. Notice that $M\in {\mathrm{Pic}}(X)$ is ample if and only if $\eta ([M])>0$. \[boundedness\] Let $\eta : {\mathrm{Num}}(X) \to \mathbb {Z}$ be as before. Notice that every effective divisor on $X$ is ample and hence its $\eta$ is strictly positive. For any $t\in \mathbb {Z}$ set ${\mathrm{Pic}}_t(X):= \{L\in {\mathrm{Pic}}(X): \eta ([L])=t\}$. Hence ${\mathrm{Pic}}_0(X)$ is the set of all isomorphism classes of numerically trivial line bundles on $X$. The set ${\mathrm{Pic}}_0(X)$ is parametrized by a scheme of finite type ([@La], Proposition 1.4.37). Hence for each $t\in \mathbb {Z}$ the set ${\mathrm{Pic}}_t(X)$ is bounded. Let now ${\mathcal{E}}$ be a rank $2$ vector bundle on $X$. Since ${\mathrm{Pic}}_1(X)$ is bounded there is a minimal integer $t$ such that there is $B\in {\mathrm{Pic}}_t(X)$ and $h^0(E\otimes B) >0$. Call it $\alpha (E)$ or just $\alpha$. By the definition of $\alpha$ there is $B\in {\mathrm{Pic}}_{\alpha}(X)$ such that $h^0(X,{\mathcal{E}}\otimes B) >0$. Hence there is a non-zero map $j: B^\ast \to E$. Since $B^\ast$ is a line bundle and $j\ne 0$, $j$ is injective. The definition of $\alpha$ gives the non-existence of a non-zero effective divisor $D$ such that $j$ factors through an inclusion $B^\ast \to B^\ast (D)$, because $\eta ([D]) >0$. Thus the inclusion $j$ induces an exact sequence $$\label{eqa1} 0 \to B^\ast \to {\mathcal{E}}\to \mathcal {I}_Z\otimes B \otimes \det ({\mathcal{E}}) \to 0$$ in which $Z$ is a closed subscheme of $X$ with pure codimension $2$.\ Observe that $\eta([B]) = \alpha, \eta([B^*]) = -\alpha, \eta([B \otimes \det({\mathcal{E}})]) = \alpha +c_1$, hence the exact sequence is quite similar to the usual exact sequence that holds true in the case ${\mathrm{Pic}}(X) \cong {\mathbb{Z}}$. NOTATION:\ We set $\epsilon:= \eta ([\omega _X])$, $\alpha := \alpha ({\mathcal{E}})$ and $c_1:= \eta ([\det ({\mathcal{E}})])$. So we can speak of a normalized vector bundle ${\mathcal{E}}$, with $c_1 \in \{ 0,-1\}$. Moreover we say that ${\mathcal{E}}$ is stable if $\alpha > 0$, nonstable if $\alpha \le 0$. Finally, $\zeta_0, \zeta, w_0, \bar \alpha, \theta$ are defined as in section 2. Fix any $L \in {\mathrm{Pic}}_1(X)$ and set: $d = L^3 = $ degree of $X$. The degree $d$ does not depend on the numerical equivalence class. In fact, if $R$ is numerically equivalent to $0$, then $(L+R)^3 = L^3+R^3+3L^2R+3LR^2 = L^3+0+0+0 = L^3$.
Then it is easy to see that the formulas for $\chi({\mathcal{O}}_X(n))$ and $\chi({\mathcal{E}}(n))$ given in section 2 still hold if we consider ${\mathcal{O}}_X \otimes L^{\otimes n}$ and ${\mathcal{E}}\otimes L^{\otimes n}$ (see [@Valenzano]). \[a1\] (a) Assume the existence of $L\in {\mathrm{Pic}}(X)$ such that $\eta ([L]) =1$ and $h^0(X,L)>0$. Then for every integer $t>\alpha $ there is $M\in {\mathrm{Pic}}(X)$ such that $\eta ([M])=t$ and $h^0(X,E\otimes M)>0$. \(b) Assume $h^0(X,L)>0$ for every $L\in {\mathrm{Pic}}(X)$ such that $\eta ([L]) =1$. Then $h^0(X,E\otimes M)>0$ for every $M\in {\mathrm{Pic}}(X)$ such that $\eta ([M]) > \alpha$. \[a2\] Let ${\mathcal{E}}$ be a normalized rank two vector bundle and assume the existence of a spanned $R\in {\mathrm{Pic}}(X)$ such that $\eta ([R])=1$. If char $K > 0$, assume that $\vert R \vert$ induces an embedding of $X$ outside finitely many points. Assume $$\label{eqb1} 2\alpha \le -\epsilon-3-c_1$$ and $h^1(X,{\mathcal{E}}\otimes N)=0$ for every $N\in {\mathrm{Pic}}(X)$ such that $\eta ([N]) \in \{-\alpha -c_1-1,\alpha +2+e\}$. If $h^1(X,B)=0$ for every $B\in {\mathrm{Pic}}(X)$ such that $\eta ([B])=-2\alpha -c_1$, then ${\mathcal{E}}$ splits. If moreover $h^1(X,M)=0$ for every $M\in {\mathrm{Pic}}(X)$ then it is enough to assume that $h^1(X,{\mathcal{E}}\otimes N)=0$ for every $N\in {\mathrm{Pic}}(X)$ such that $\eta ([N]) = -\alpha -c_1-1$. By assumption there is $M\in {\mathrm{Pic}}(X)$ such that $\eta ([M]) = \alpha$ and $h^0(X,{\mathcal{E}}\otimes M)>0$. Set $A:= M^\ast$. We have seen in remark \[boundedness\] that ${\mathcal{E}}$ fits into an extension of the following type: $$\label{b2} 0 \to A \to {\mathcal{E}}\to \mathcal {I}_C\otimes \det ({\mathcal{E}})\otimes A^\ast \to 0$$ with $C$ a locally complete intersection closed subscheme with pure dimension $1$. Let $H$ be a general element of $\vert R\vert$ and $T$ the intersection of $H$ with another general element of $\vert R\vert$. Observe that $T$, under our assumptions, is generically reduced by Bertini’s theorem (see [@HAL], Theorem II, 8.18 and Remark II, 8.18.1). Since $R$ is spanned, $T$ is a locally complete intersection curve and $C\cap T=\emptyset$. Hence ${\mathcal{E}}\vert T$ is an extension of $\det ({\mathcal{E}})\otimes A^\ast \vert T$ by $A\vert T$. Since $T$ is generically reduced and locally a complete intersection, it is reduced. Hence $h^0(T,M^\ast ) = 0$ for every ample line bundle $M$ on $T$. Since $\omega _T \cong (\omega _X\otimes R^{\otimes 2})\vert T$, we have $\dim \mbox{Ext}^1(T,\det ({\mathcal{E}})\otimes A^\ast ,A) = h^0(T,(\det ({\mathcal{E}})\otimes (A^\ast)^{\otimes 2} \otimes \omega _X \otimes R^{\otimes 2})\vert T)= 0$ (indeed $\eta ([\det ({\mathcal{E}})\otimes (A^\ast )^{\otimes 2}\otimes \omega _X\otimes R^{\otimes 2}]) = 2\alpha +c_1+e+2 <0$). Hence ${\mathcal{E}}\vert T \cong A\vert T\oplus (\det ({\mathcal{E}})\otimes A^\ast )\vert T$. Let $\sigma$ be the non-zero section of $({\mathcal{E}}\otimes A\otimes \det ({\mathcal{E}})^\ast )\vert T$ coming from the projection onto the second factor of the decomposition just given. The vector bundle ${\mathcal{E}}\vert H$ is an extension of $\det ({\mathcal{E}})\otimes A^\ast \vert H$ by $A\vert H$ if and only if $C\cap H =\emptyset$. Since $R$ is ample, $C\cap H = \emptyset$ if and only if $C=\emptyset$.
Hence we get simultaneously $C\cap H=\emptyset$ and ${\mathcal{E}}\vert H \cong A\vert H\oplus \det ({\mathcal{E}})\otimes A^\ast \vert H$ if we prove the existence of $\tau \in H^0(H,({\mathcal{E}}\otimes (A\otimes \det ({\mathcal{E}})^\ast )\vert H)$ such that $\tau \vert T = \sigma$. To get $\tau$ it is sufficient to have $H^1(H,(E\otimes (A\otimes \det ({\mathcal{E}})^\ast \otimes R^\ast )\vert H) =0$. A standard exact sequence shows that $H^1(H,({\mathcal{E}}\otimes (A\otimes \det ({\mathcal{E}})^\ast \otimes R^\ast )\vert H) =0$ if $h^1(X,({\mathcal{E}}\otimes (A\otimes \det ({\mathcal{E}})^\ast \otimes R^\ast ) =0$ and $h^2(X,({\mathcal{E}}\otimes (A\otimes \det ({\mathcal{E}})^\ast \otimes R^\ast \otimes R^\ast) =0$. Since ${\mathcal{E}}^\ast \cong {\mathcal{E}}\otimes \det ({\mathcal{E}})^\ast$, Serre duality gives $h^2(X,(E\otimes (A\otimes \det ({\mathcal{E}})^\ast \otimes R^\ast \otimes R^\ast) =h^1(X,{\mathcal{E}}\otimes A\otimes R^{\otimes 2}\otimes \omega _X)$. Since $\eta ([A\otimes \det ({\mathcal{E}})^\ast \otimes R^\ast ])=-\alpha -c_1-1$ and $\eta ([A\otimes R^{\otimes 2}\otimes \omega _X] )= \alpha +e+2$, we get that $C=\emptyset$. The last sentence follows because $\eta ([A^{\otimes 2}\otimes \det ({\mathcal{E}})^\ast] )= -2\alpha -c_1$. \[a3\] Instead of the smoothness of $X$ we may assume that $X$ is locally algebraic factorial, i.e. that all local rings $\mathcal {O}_{X,P}$ are factorial. This assumption seems to be essential, because without it a non zero section of $E\otimes M$ with $\eta ([M]) = \alpha (E)$ could vanish on an effective Weil divisor and hence we could not claim the existence of the exact sequence (\[b2\]). \[a4\] Fix integers $t < z \le \alpha -2$. Assume the existence of $L \in {\mathrm{Pic}}(X)$ such that $\eta ([L])=z$ and $h^1(X,E\otimes L)=0$. If there is $R\in {\mathrm{Pic}}(X)$ such that $\eta ([R])=1$ and $h^0(X,R)>0$, then there exists $M\in {\mathrm{Pic}}(X)$ such that $\eta ([M]) =t$ and $h^1(X,E\otimes M)=0$. If $h^0(X,R)>0$ for every $R\in {\mathrm{Pic}}(X)$ such that $\eta ([R])=1$, then $h^1(X,E\otimes M)=0$ for every $M\in {\mathrm{Pic}}(X)$ such that $\eta ([M]) =t$. The proof can follow the lines of Lemma \[leftvanishing\]. In fact consider a line bundle $R$ with $\eta([R]) = 1$ and let $H$ be the zero-locus of a non-zero section of $R$; then we have the following exact sequence: $$0 \to {\mathcal{E}}\otimes L \to {\mathcal{E}}\otimes L \otimes R \to ({\mathcal{E}}\otimes L\otimes R)_H \to 0.$$ Now observe that the vanishing of $h^1(X,{\mathcal{E}}\otimes L)$ implies that $h^0({\mathcal{E}}\otimes L \otimes R)_H = 0$. And now we can argue as in Lemma \[leftvanishing\] (see also [@VV]). \[a6\] (a) Assume the existence of $L\in {\mathrm{Pic}}(X)$ such that $\eta ([L]) =1$ and $h^0(X,L)>0$. Then for every integer $t>\alpha $ there is $M\in {\mathrm{Pic}}(X)$ such that $\eta ([M])=t$ and $h^0(X,E\otimes M)>0$. \(b) Assume $h^0(X,L)>0$ for every $L\in {\mathrm{Pic}}(X)$ such that $\eta ([L]) =1$. Then $h^0(X,E\otimes M)>0$ for every $M\in {\mathrm{Pic}}(X)$ such that $\eta ([M]) > \alpha$. In all our results of sections 4 and 5 we use the vanishing of $h^1({\mathcal{O}}_X(n))$ (and by Serre duality of $h^2({\mathcal{O}}_X(n))$), $\forall n$ (or, at least, $\forall n\notin \{0,\cdots,\epsilon\}$), see Remark 4.12. 
From now on we need to use similar vanishing conditions and so we introduce the following condition: $\mathbf{(C4)}$ $h^1(X,L) =0$ for all $L\in {\mathrm{Pic}}(X)$ such that either $\eta ([L]) <0$ or $\eta ([L]) >\epsilon$.\ Observe that $\mathbf{(C4)}$ is always satisfied in characteristic 0 (by the Kodaira vanishing theorem). In positive characteristic it is often satisfied. This is always the case if $X$ is an abelian variety ([@Mumford], p. 150).\ Observe also that, if $\epsilon \le -1$, the Kodaira vanishing and our condition put no restriction on $n$ (see also Remark 4.12). **Example**. If (\[eqb1\]) holds, then $-2\alpha -c_1 > \epsilon$. Hence we may apply Proposition \[a2\] to $X$. In particular observe that, in the case of an abelian variety with ${\mathrm{Num}}(X) \cong \mathbb {Z}$ or in the case of a Calabi-Yau threefold with ${\mathrm{Num}}(X) \cong \mathbb {Z}$, we have $\epsilon = 0$. Notice that Proposition \[a2\] also applies to any threefold $X$ whose $\omega _X$ is a torsion sheaf. With the assumption of condition $\mathbf{(C4)}$ the proofs of Theorems \[nonstable1\], \[nonstable2\], \[nonstable3\] can be easily modified in order to obtain the statements below (${\mathcal{E}}$ is normalized, i.e. $\eta ([\det ({\mathcal{E}})]) \in \{-1,0\}$), where, for the sake of simplicity, we assume $\epsilon \ge 0$ (if $\epsilon < 0$, $\mathbf{(C4)}$, which holds by [@SB], implies that the vanishing of $h^1$ and $h^2$ holds for all $L\in {\mathrm{Pic}}(X)$). \[v1\] Assume $\mathbf{(C4)}$, $\alpha \le 0$, the existence of $R\in {\mathrm{Pic}}(X)$ such that $\eta ([R])=1$ and $\zeta _0 < -\alpha -c_1-1$. Fix an integer $n$ such that $\zeta _0 < n \le -\alpha - 1 -c_1$. Fix $L\in {\mathrm{Pic}}(X)$ such that $\eta ([L]) = n$. Then $h^1({\mathcal{E}}\otimes L) \ge (n-\zeta _0)\delta >0$. Observe that we should require the following conditions: $n-\alpha \notin \{0,\dots ,\epsilon \}, \epsilon-n+\alpha \notin \{0,\dots ,\epsilon \}$. But they are automatically fulfilled under the assumption that $\zeta_0 < -\alpha-c_1-1$. \[v2\] Assume $\mathbf{(C4)}$, $\alpha \le 0$, the existence of $R\in {\mathrm{Pic}}(X)$ such that $\eta ([R])=1$ and the same hypotheses of Theorem \[nonstable2\]. Fix $L\in {\mathrm{Pic}}(X)$ such that $\eta ([L]) = n$. Then $h^1({\mathcal{E}}\otimes L) \ge -S(n) >0$ ($S(n)$ being defined as in Theorem \[nonstable2\]). \[v3\] Assumptions as in Theorem \[nonstable3\]. Moreover assume $\mathbf{(C4)}$ and $n -\alpha \notin \{0,\dots ,\epsilon \}$. Fix $L\in {\mathrm{Pic}}(X)$ such that $\eta ([L]) = n$. Then $h^1({\mathcal{E}}\otimes L) \ge -\frac{d}{6}F(n+\alpha -\zeta _0)>0$ ($F$ being defined as in Theorem \[nonstable3\]). Observe that in Theorems \[v2\] and \[v3\] we should require $n -\alpha \notin \{0,\dots ,\epsilon \}$, but the assumption $\epsilon-\alpha-c_1+1 \le n$ implies that the requirement is automatically fulfilled. The proofs of the above theorems are based on the existence of the exact sequence (\[eqa1\]) and on the properties of $\alpha$. They follow the lines of the proofs given in the case ${\mathrm{Pic}}(X) \cong {\mathbb{Z}}$. Here and in section 4 we actually need only the Kodaira vanishing (true in characteristic 0 and assumed in characteristic $p > 0$) and no further vanishing of the first cohomology.
Also the stable case can be extended to a smooth threefold with ${\mathrm{Num}}(X) \cong {\mathbb{Z}}$. Observe that the proofs can follow the lines of the proofs given in the case ${\mathrm{Pic}}(X) \cong {\mathbb{Z}}$ and make use of Remark 7.6 (which extends \[leftvanishing\]). More precisely we have: \[v4\] Assumptions as in \[stable1\] and fix $L\in {\mathrm{Pic}}(X)$ such that $\eta ([L]) = n$. Then, if $\alpha\ge\frac{\epsilon+5-c_1}{2}$, then $h^1({\mathcal{E}}\otimes L)\ne 0$ for $w_0\le n\le \alpha-2$. \[v4\] Assumptions as in \[stable2\] and fix $L\in {\mathrm{Pic}}(X)$ such that $\eta ([L]) = n$. Then the following hold: 1) : $h^1({\mathcal{E}}\otimes L)\ne 0$ for $\zeta_0< n < \zeta$. 2) : $h^1({\mathcal{E}}\otimes L)\ne 0$ for $w_0\le n \le \bar\alpha-2$, and also for $n=\bar\alpha-1$ if $\zeta\notin{\mathbb{Z}}$. 3) : If $\zeta\in{\mathbb{Z}}$ and $\alpha<\bar\alpha$, then $h^1({\mathcal{E}}\otimes N) \ne 0$, for every $N$ such that $\eta([N]) = \bar\alpha-1$. \[a5\] The above theorems can be applied to any $X$ such that ${\mathrm{Num}}(X)$ $\cong \mathbb {Z}$, $\epsilon =0$ and $h^1(X,L)=0$ for all $L\in {\mathrm{Pic}}(X)$ such that $\eta ([L]) \ne 0$, for instance to $X = $ an abelian threefold with ${\mathrm{Num}}(X) \cong \mathbb {Z}$. If $X$ is any threefold (in characteristic $0$ or positive) such that $h^1(X,L) = 0, \forall L \in {\mathrm{Pic}}(X)$, then we can avoid the restriction $n-\alpha \notin \{0,...,\epsilon\}$. Not many threefolds, beside any $X \subset {\mathbb{P}}^4$, fulfil these conditions. Observe that in Theorem \[v4\] we do not assume $\mathbf{(C4)}$ (see also remark 5.8) Observe that also in the present case (${\mathrm{Num}}(X) \cong {\mathbb{Z}}$), we have: $\delta = 0$ if and only if ${\mathcal{E}}$ splits. Therefore Remarks 4.13 and 5.9 apply here. [00]{} L. Chiantini, P. Valabrega, Subcanonical curves and complete intersections in projective 3-space, Ann. Mat. Pura Appl. **138** (4) (1984) 309–330. L. Chiantini, P. Valabrega, On some properties of subcanonical curves and unstable bundles, Comm. Algebra **15** (1987) 1877–1887. PH. Ellia, Sur la cohomologie de certains fibr' es de rang deux sur ${\mathbb{P}}^3$, Ann. Univ. Ferrara **38** (1992) 217–227. G. Gherardelli, Sulle curve sghembe algebriche intersezioni complete di due superficie, Atti Reale Accademia d’Italia **4** (1943) 128–132. R. Hartshorne, Algebraic Geometry, GTM 52, Springer Verlag, New York, 1977. R. Hartshorne, Stable vector bundles of rank 2 on ${\mathbb{P}}^3$, Math. Ann. **238** (1978) 229–280. R. Hartshorne, Stable reflexive sheaves, Math. Ann. **254** (1980) 121–176. R. Hartshorne, Stable Reflexive Sheaves II, Invent. Math. **66** (1982) 165–190. R. Lazarsfeld, Positivity in Algebraic Geometry I, Springer, Berlin, 2004. C. Madonna, A Splitting Criterion for Rank 2 Vector Bundles on Hypersurfaces in ${\mathbb{P}}^4$, Rend. Sem. Mat. Univ. Pol. Torino **56** (2) (1998) 43–54. D. Mumford, Abelian Varieties, Oxford University Press, 1974. S. Popescu, On the splitting criterion of Chiantini and Valabrega, Rev. Roumaine Math. Pures Appl. **33** (10) (1988) 883–887. M. Roggero, P. Valabrega, Some vanishing properties of the intermediate cohomology of a reflexive sheaf on ${\mathbb{P}}^n$, J. Algebra **170** (1) (1994) 307–321. N. Shepherd-Barron, Fano Threefolds in positive characteristic, Compositio Math. **105** (1997) 237–265. P. Valabrega, M. Valenzano, Non-vanishing theorems for non-split rank $2$ bundles on ${\mathbb{P}}^3$: a simple approach, Atti Acc. 
Peloritana **87** (2009) 1–18 \[DOI:10.1478/C1A0901002\]. M. Valenzano, Rank $2$ reflexive sheaves on a smooth threefold, Rend. Sem. Mat. Univ. Pol. Torino **62** (2004) 235–254. <span style="font-variant:small-caps;">Edoardo BALLICO</span>\ Dipartimento di Matematica, Università di Trento\ via Sommarive 14, 38050 Povo (TN), Italy\ e-mail: `ballico@science.unitn.it`\ <span style="font-variant:small-caps;">Paolo VALABREGA</span>\ Dipartimento di Matematica, Politecnico di Torino\ Corso Duca degli Abruzzi 24, 10129 Torino, Italy\ e-mail: `paolo.valabrega@polito.it`\ <span style="font-variant:small-caps;">Mario VALENZANO</span>\ Dipartimento di Matematica, Università di Torino\ via Carlo Alberto 10, 10123, Torino, Italy\ e-mail: `mario.valenzano@unito.it` [^1]: The paper was written while all authors were supported by MIUR (PRIN grant) and by local funds of their Universities, and were members of INdAM-GNSAGA.
--- abstract: 'Estimates of the bulk metal abundance of the Sun derived from the latest generation of model atmospheres are significantly lower than the earlier standard values. In Paper I we demonstrated that a low solar metallicity is inconsistent with helioseismology if the quoted errors in the atmospheres models (of order 0.05 dex) are correct. In this paper we undertake a critical analysis of the solar metallicity and its uncertainty from a model atmospheres perspective, focusing on CNO. We argue that the non-LTE corrections for abundances derived from atomic features are overestimated in the recent abundance studies, while systematic errors in the absolute abundances are underestimated. If we adopt the internal consistency between different indicators as a measure of goodness of fit, we obtain intermediate abundances \[C/H\] = 8.44 +/- 0.06, \[N/H\] = 7.96 +/- 0.10 and \[O/H\] = 8.75 +/- 0.08. The errors are too large to conclude that there is a solar abundance problem, and permit both the high and low scales. However, the center-to-limb continuum flux variations predicted in the simulations appear to be inconsistent with solar data, which would favor the traditional thermal structure and lead to high CNO abundances of (8.52, 7.96, 8.80) close to the seismic scale. We argue that further empirical tests of non-LTE corrections and the thermal structure are required for precise absolute abundances. The implications for beryllium depletion and possible sources of error in the numerical simulations are discussed.' author: - | M.H. Pinsonneault$^1$ & Franck Delahaye $^{1,2}$\ 1 Department of Astronomy, The Ohio State University, Columbus OH 43210 USA\ 2 LUTH, (UMR 8102 associée au CNRS et à l’Université Paris 7), Observatoire de Paris, F-92195 Meudon, France. title: ' The Solar Heavy Element Abundances: II. Constraints from Stellar Atmospheres.' --- Introduction ============ The uncertainty in the absolute chemical composition of stars is the limiting factor in our ability to do high precision stellar astrophysics. Traditionally, we have had to rely on a small database of fundamental stellar parameters such as mass, distance, and radius. However, current and upcoming space missions promise a wealth of astrometric and photometric data. Large surveys undertaken primarily for other purposes (microlensing, planet searches and cosmology) have discovered thousands of eclipsing binaries, yielding numerous precise mass estimates. The rapidly developing field of optical interferometry has also permitted a growing number of direct radius estimates. Asteroseismology is also growing in importance, and missions such as COROT promise a wealth of detailed information on the pulsational properties of solar-like stars. Our stellar interiors models have become highly sophisticated and successful when compared with observational diagnostics. In particular, the resolution of the solar neutrino problem in favor of the solar model predictions and the agreement between theoretical predictions and helioseismic data are both encouraging signs. The combination of better observations and theory has opened the prospect of a new era of precision stellar astrophysics, which could have broad consequences for diverse subfields of astronomy. Stellar atmospheres theory has traditionally employed a series of approximations when deriving abundances. Classical models assume an ad hoc turbulent velocity field adjusted to yield abundances independent of excitation potential and line strength. 
Convection is usually treated in an approximate fashion, with the mixing length theory. Horizontal temperature fluctuations (granulation) are not included. The models also typically assume that the molecular and atomic levels are described by local thermodynamic equilibrium (LTE), e.g. by the local temperature alone. The compilations of solar abundances used for theoretical solar models [@ag1989; @gn1993; @gs1998] employed model atmospheres with approximations at the level described above. The mean thermal structure employed was semi-empirical [@hm1974 hereafter HM74]. When these approximations are relaxed, different conclusions about the abundances are obtained. Departures from LTE are expected at a modest level for solar conditions, and have been investigated by a number of authors [for example @c1986; @sh1990; @k1993; @stb2001; @w2001]. Numerical simulations of convection have matured to the level where they can be used to predict velocity fields, temperature fluctuations, and changes in the mean thermal structure of the upper atmosphere [@sn1998]. Abundances derived from these simulations yield a very different pattern, which has been developed in a series of papers [@pla2001; @pla2002; @asplund2000a; @asplund2000b; @agsak2004; @agsak2005]; papers by Lodders (2003) and @ags2005 summarize the revised abundance scale. The net effect is in the sense of systematically lower metal abundances. The downward revisions for the heavier elements (e.g. Fe and Si) are small, while the claimed reduction in the abundances of lighter species (especially CNO) is more dramatic. Models employing different treatments of granulation and non-LTE corrections [@h2001; @sh2002] predict smaller abundance reductions. The central temperature predicted by interiors models is sensitive to the abundances of the heavier elements, but not the lighter ones. As a result, the new abundance scale does not disturb the agreement between interiors models and observational data for purposes such as the mass-luminosity relationship and solar neutrino fluxes. However, the inferred solar sound speed profile, and the radii of interiors models, is sensitive to the bulk metallicity. Serious problems have emerged when comparing interiors models with the revised abundance scale. These discrepancies are evidence for problems in our understanding of stellar interiors, stellar atmospheres, or both. In Paper I [@dp2006] we investigated the errors in solar abundances predicted by the combination of stellar interiors models and helioseismic data. In this paper we examine the uncertainties in the abundance predictions from stellar atmospheres theory. We begin with a brief summary of the results from Paper I, and follow with a discussion of the motivation and main results from the revised stellar atmospheres models. In Section 2, we perform a critical analysis of the precision of the solar CNO abundances and discuss the implications for Be. We demonstrate in that section that the errors in the abundances are larger than previously estimated, and that there is evidence that the “best” current solar CNO abundances are intermediate between the new and old scales, with errors permitting both. We discuss the implications of our finding and future tests in Section 3. In particular, we argue that inconsistencies between the solar thermal structure and that predicted by the simulations would favor a higher abundance scale closer to the seismic value and discuss uncertainties in the numerical convection simulations. 
Constraints from Helioseismology -------------------------------- Helioseismology provides two powerful constraints on the solar composition: diagnostics of the internal solar temperature gradient and diagnostics of the equation of state. Inversions of the observed solar pulsation frequencies yield accurate measures of the sound speed as a function of depth. In turn, the gradient in the sound speed can be directly tied to the temperature gradient. Since the temperature gradient is related to the opacity, and thus the composition, information on the solar abundances is encoded in the seismic data for the radiative interior. One can even obtain meaningful constraints on the solar age from the helium abundance profile deduced in the deep interior. In addition to the vector information on the sound speed profile, there are also precise scalar quantities that can be extracted. The thermal structure at the base of the solar convection zone is nearly adiabatic, while the temperature gradient in the interior is radiative. As discussed in Paper I, Section 2.2 the resulting discontinuity in $\nabla$ generates a distinct signal that can be used to precisely localize the base of the solar convection zone [$R_{cz}= 0.7133 \pm 0.0005 R_{sun}$ @ba2004]. Seismology also sets strict limits on convective overshooting [$< 0.05 H_P$, @cdmt1995]. The depth of the solar convection zone is sensitive to the light metal abundances in the Sun but insensitive to most of the other uncertainties in solar interiors models (see Paper I for a detailed error budget). Ionization also induces a depression in the adiabatic temperature gradient, and the absolute abundances of the species in question can be inferred from the magnitude of the perturbation in the surface convection zone. An extremely precise surface helium abundance can be deduced from this effect ($Y_{surf}=0.2483 \pm 0.0046$, see Paper I, Section 2.3 for the sources used in this estimate). More recently, @ab2006 have demonstrated that the ionization signal of metals in the convection zone can be detected in the seismic data, leading to a bulk metallicity $Z=0.017 \pm 0.002$. Because the majority of the solar metals are in the form of CNO, this is primarily a constraint on their abundance. In principle one might be able to use this technique to solve for individual heavy element abundances by fitting the strength of distinct ionization stages. However, it is not yet clear that there is sufficient spatial resolution in the seismic data to permit such a detailed analysis. In Paper I we demonstrated that the combination of the surface convection zone depth and surface helium abundance constraints was a powerful diagnostic of the solar heavy element abundances. The surface helium abundance is tied to the initial solar helium abundance with a correction for gravitational settling. The initial helium is sensitive to the central opacity and the abundances of the heavier metals (especially iron). The convection zone depth is sensitive to the opacity at temperatures   2 million K, where bound-free opacity from light metals (CNONe) is an important contributor. The most significant new finding in Paper I was that the combination of the two scalar constraints could be used to rule out some abundance combinations with high statistical significance. 
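As a concrete, purely illustrative way to see how the two scalar seismic quantities act together, the sketch below combines a candidate model's predicted convection zone depth and surface helium abundance into a single chi-square, using only the observational values and uncertainties quoted above; the model inputs and the optional theoretical error terms are placeholders for illustration, not results from Paper I.

```python
import math

# Observed helioseismic scalars quoted in the text: (value, 1-sigma error).
OBS = {
    "Rcz":   (0.7133, 0.0005),   # convection-zone depth in units of R_sun
    "Ysurf": (0.2483, 0.0046),   # surface helium mass fraction
}

def scalar_chi2(model, model_err=None):
    """Chi-square of a candidate solar model against the two scalar constraints.

    `model` maps 'Rcz' and 'Ysurf' to predicted values; `model_err` optionally
    supplies theoretical uncertainties, added in quadrature (an assumption made
    here for illustration).
    """
    chi2 = 0.0
    for key, (obs, sigma_obs) in OBS.items():
        sigma_th = (model_err or {}).get(key, 0.0)
        sigma = math.hypot(sigma_obs, sigma_th)
        chi2 += ((model[key] - obs) / sigma) ** 2
    return chi2

# Example: a hypothetical model that is slightly too shallow and helium-poor
# (numbers are placeholders, not values taken from Paper I).
print(scalar_chi2({"Rcz": 0.7260, "Ysurf": 0.2390},
                  {"Rcz": 0.0015, "Ysurf": 0.0030}))
```

Because the two scalars respond differently to changes in the light and heavy element abundances, evaluating such a statistic over a grid of candidate mixtures is what allows particular abundance combinations to be excluded.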
The detailed sound speed profile adds additional information; we found that models consistent with the scalar constraints could be constructed with low oxygen and very high neon, but such models exhibited substantial sound speed deviations relative to solar data in the deep interior. The inferred solar oxygen and iron abundances ($[O/H]=8.86 \pm 0.05, [Fe/H]= 7.50 \pm 0.05$) are consistent with the @gs1998 absolute abundances, but strongly inconsistent with the new abundance scale within its quoted errors [@l2003; @ags2005]. Although there are potentially positive chemical evolution consequences for the revised abundance scale [@turck2004], it is not easy to generate interiors models that are consistent with both seismology and the low abundance scale. The most commonly cited possible explanations on the interiors side (high neon, enhanced gravitational settling, and errors in the high temperature radiative opacities) are all strongly disfavored. As previously mentioned, solutions with high neon degrade agreement with the sound speed profile, and are also problematic from a stellar atmospheres perspective [@s2005]. An increase in the degree of gravitational settling increases the convection zone depth but decreases surface helium, trading improved agreement with one diagnostic for worse agreement in another. Enhanced differential settling of metals with respect to helium is inconsistent with the underlying physics and would have to be extreme [@gwc2005]. Three independent quantum mechanical calculations yield extremely similar Rosseland mean opacities at the temperatures of interest for the base of the solar convection zone. As discussed in Paper I, both the atomic physics and equation of state are relatively simple in this regime, and the concordance between different calculations is thus not a surprise. Physical processes neglected in classical stellar models (such as rotational mixing and radiative acceleration) can be independently constrained by other data, and in any case they would tend to induce higher rather than lower surface abundances. The scalar constraints are insensitive to the other theoretical ingredients in standard solar models (e.g. convection theory, surface boundary conditions, low temperature opacities, equation of state, and nuclear reaction rates). The considerations above indicate that it is extremely challenging to reconcile a low solar metal abundance with current stellar interiors models and seismic data. This does not imply that a metal-poor Sun is impossible, but it certainly motivates an investigation of the uncertainty in the atmospheres models used to derive the abundances. Model Atmosphere Ingredients ---------------------------- The new solar abundance estimates are derived from a variety of changes in the model atmospheres. Changes in oscillator strengths and equivalent widths of spectral lines contribute for some diagnostics, and as discussed below we largely concur with the revised values. The magnitude of non-LTE corrections depends on the atomic model and the relative importance of photo-excitation and collisions on the level populations in the model atmosphere. The comprehensive re-examinations of the solar oxygen [@agsak2004 hereafter AGSAK04] and carbon [@agsab2005 hereafter AGSAB05] adopted a particular model for NLTE corrections, and we assess its accuracy and uncertainties by comparison with limb darkening data and other published calculations. 
In @pla2001 [@pla2002] the abundances derived from forbidden lines have been reduced by the application of blending corrections; AGSAK04 and AGSAB05 used the revised equivalent widths for C and O abundance studies. Both the uncertainty in these corrections and their central value is incorporated in our error analysis. Three other coupled changes in the atmosphere models are the treatment of convective velocity fields (“macro/microturbulence”), horizontal temperature fluctuations (granulation), and the impact of convective overshooting on the mean thermal stratification. All of these features are derived from numerical convection simulations; a good discussion can be found in @sn1998. Their combined impact can be deduced by the comparison of the results in the 3D case in the published studies with the results from the semi-empirical 1D HM74 thermal structure and the theoretical 1D MARCS models. Since the initial 3D model atmosphere is derived with physics similar to that in the MARCS code, the impact of the convection treatment can be indirectly inferred by comparing MARCS and 3D abundances. A comparison of HM74 with MARCS and 3D is a measure of the impact of different choices of the thermal structure. The numerical convection simulations predict line profiles in excellent agreement with the data for iron and silicon lines [@asplund2000a; @asplund2000b]. There are some trends with excitation potential that may be related to NLTE corrections [@stb2001] and issues with the initial generation of simulations when compared with line profiles in the outer solar photosphere [@scott2006]. The concordance of the predicted amplitude of horizontal temperature fluctuations with the solar granulation pattern is encouraging [@sn1998; @asplund2000a]. The validity of the steep solar temperature gradient in the simulations is not as clear; there is an apparent conflict between the simulations and the mean thermal structure of the solar atmosphere, as well as the degree of convective penetration for the upper atmospheric layers [@ay2006]. Since these effects are all tied together in the abundance studies, we focus on the agreement between different diagnostics of abundance as a valuable test of the precision of the results obtained from different atmospheric models. The Solar CNOBe Abundances ========================== Revised solar abundances have been derived for a number of species. In this paper we focus on CNO for several reasons. First, these are the elements where the difference in abundance is largest, and they are also the cases where there is the largest variety of distinct abundance indicators. The difference between interiors and atmospheres based abundances for heavier species, such as Fe and Si, are not statistically significant. Furthermore, details of the new abundance estimates and internally consistent comparison with prior work are published for only some of the heavier elements. The solar Be abundance is an important diagnostic of mixing, and the photospheric abundance is linked to the solar O (Balachandran & Bell 1998). We therefore also briefly discuss the implications of our result for Be. We begin with a description of our overall approach, and then follow with individual sections on oxygen, carbon, and nitrogen. Our primary references for O, C, N are respectively AGSAK04, AGSAB05, and @ags2005. We include other studies for external comparisons, and discuss newer results based on other models and diagnostics when available. 
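Since the comparison strategy described above isolates the convection treatment (3D versus MARCS) and the mean thermal structure (HM74 versus MARCS) by differencing the derived abundances, a minimal bookkeeping sketch is given below; the numerical inputs are placeholders rather than values from the cited studies.

```python
def decompose(a_3d, a_marcs, a_hm):
    """Split model-to-model abundance differences (dex) into a rough
    'convection treatment' piece (3D vs. MARCS, similar base physics) and a
    'thermal structure' piece (HM74 vs. MARCS). Purely illustrative bookkeeping."""
    return {
        "convection": a_3d - a_marcs,
        "thermal_structure": a_hm - a_marcs,
    }

# Placeholder abundances for a single indicator (not values from AGSAK04).
print(decompose(a_3d=8.70, a_marcs=8.78, a_hm=8.83))
```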
Overall Approach ---------------- Our primary metric for the accuracy of the abundances derived from the different model atmospheres is the consistency of the estimates derived from distinct classes of indicators. One might justifiably apply a different standard, noting for example the greater degree of sophistication in the input physics for the hydrodynamic simulations. However, it is not clear that the ingredients that induce the abundance changes (such as changes in the temperature gradient) are actually a necessary consequence of these improvements in the model, as the absolute errors in the first-principles theoretical models have not been quantified. We proceed in two steps. In the first, we compute relative abundances and errors within a given assumed model atmosphere for the atomic and molecular features. Since the abundances from atomic lines have smaller model-to-model differences (and thus smaller systematic errors), we adopt the atomic abundances for our base estimate. The difference between atomic and molecular abundances is then used to infer which of the different atomic scales should be adopted for the central value, and the uncertainty in the differential scales is used as a measure of the systematic error arising from the choice of model atmospheres. For the atomic line diagnostics, we include random errors from the dispersion of results from single permitted lines about the adopted mean. For the forbidden lines, uncertainties in oscillator strengths and blending corrections become the dominant random error source. We also include systematic errors (NLTE corrections and zero-point shifts in the average oscillator strength) by comparing the study values with external constraints and other published calculations. This is important when comparing the atomic and molecular indicators, since NLTE corrections are usually included in the former but not the latter. As a result, changes in the degree of NLTE corrections have a direct impact on goodness of fit. When available, we adopt a weighted mean of the permitted and forbidden atomic indicators and the error in the mean when comparing with the molecular data. For the molecular indicators, we include all measured lines of a given diagnostic and use the total dispersion (rather than the error in the mean) as a measure of goodness of fit. We adopt a weighted mean of the various indicators for an average molecular abundance. However, the error in the mean will understate the true uncertainty; it is frequently the case that the mean values for different molecular diagnostics differ by more than the dispersion within each indicator. We therefore compute the dispersion of each indicator about the adopted mean and average these values to obtain an error in the molecular abundances. One could alternately compute the dispersion in the molecular abundances and treat this as a measure of the systematic uncertainty, adding it to the error in the mean in quadrature; this procedure yields somewhat smaller errors. Although the latter approach may be practical when there are numerous molecular probes available, we prefer the former method for situations like oxygen (where there are 2 values, and thus an unreliable estimate of the uncertainty in the mean). We derive final abundance estimates for each species by comparing the mean abundances derived within each class of models for atomic and molecular indicators. 
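A minimal sketch of the statistical bookkeeping just described is given below. It assumes inverse-variance weights for the weighted means (the weighting scheme is not spelled out in the abundance papers, so that choice is an assumption) and uses placeholder line abundances rather than the actual entries of Tables 1-3.

```python
import math

def weighted_mean(values, errors):
    """Inverse-variance weighted mean and its formal error (an assumed weighting)."""
    w = [1.0 / e**2 for e in errors]
    mean = sum(wi * vi for wi, vi in zip(w, values)) / sum(w)
    return mean, math.sqrt(1.0 / sum(w))

def dispersion_about(values, mean):
    """RMS dispersion of individual line abundances about an adopted mean."""
    return math.sqrt(sum((v - mean) ** 2 for v in values) / (len(values) - 1))

def quad_sum(*sigmas):
    """Combine independent error components in quadrature."""
    return math.sqrt(sum(s * s for s in sigmas))

# Placeholder example: two molecular indicators with per-line abundances (dex).
band_vr = [8.78, 8.74, 8.80, 8.76]
band_rr = [8.86, 8.70, 8.92, 8.66]
means = [sum(b) / len(b) for b in (band_vr, band_rr)]
errs  = [dispersion_about(b, m) / math.sqrt(len(b)) for b, m in zip((band_vr, band_rr), means)]
mol_mean, _ = weighted_mean(means, errs)
# Goodness of fit: dispersion of each band about the adopted molecular mean, averaged over bands.
mol_err = 0.5 * (dispersion_about(band_vr, mol_mean) + dispersion_about(band_rr, mol_mean))
print(round(mol_mean, 3), round(mol_err, 3), round(quad_sum(0.05, 0.06), 3))
```

The same helpers cover both branches of the procedure: the error in the mean is the appropriate random error for the atomic indicators, while the dispersion of each molecular band about the adopted mean, averaged over bands, gives the more conservative molecular error described above, and `quad_sum` illustrates the quadrature combination of random and systematic components.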
In the case of oxygen, the 3D and HM74 models exhibit comparable differences with opposite sign, while the MARCS models have an internally consistent intermediate abundance. We therefore adopt a mean of the different derived oxygen abundances and an uncertainty from the scatter. In the case of nitrogen, the HM74 abundances are preferred. The case of carbon depends on the origin of systematic differences in abundances inferred from atomic features. If the low zero point of AGSAB05 is adopted, the 3D models are the favored solution. If the higher zero point of previous work is adopted, the situation is similar to that for oxygen. Oxygen Abundance Indicators --------------------------- AGSAK04 derived a low solar oxygen abundance ($8.66 \pm 0.05$) from four distinct indicators: atomic lines (forbidden and permitted) and two different classes of infrared molecular lines ((v,r) and (r,r)). All four indicators had formerly been used to obtain higher absolute oxygen abundance (8.83 to 8.90). Abundance estimates from all indicators are reduced in the theoretical atmospheres that include substantial overshooting because lines become stronger for a fixed abundance in the presence of a steeper temperature gradient. Molecular abundances are reduced more than atomic ones because the cooler atmospheric structure of the 3D hydro models changes the chemical equilibrium. AGSAK04 included other effects that reduced the abundances derived from atomic features without impacting the molecular indicators: a combination of changes in oscillator strengths, the inclusion of blending features, and their claim of large non-LTE effects for the permitted lines. In this section we discuss the uncertainties in each of these cases. We advocate a substantial decrease in the magnitude of the NLTE corrections for the \[O/H\] derived from the permitted atomic oxygen lines, and an increased error in both the abundance derived from the forbidden line (from uncertainties in the oscillator strength) and the permitted lines (from uncertainties in the NLTE corrections). We also derive an increased error from the \[O/H\] derived from the IR molecular lines from the internal scatter and trends with excitation potential, and argue that the correspondence between the trends in the MARCS and 3D models is evidence for errors in the common underlying model. Our basic results are summarized in Table 1. In Table 1, the upper part of the table repeats the mean abundances and errors for the 4 different indicators presented in AGSAK04. We present our revised estimates for the same three cases in the lower part. The last three rows in each sub-table give the mean discrepancy between atomic and molecular indicators. At the end of the section we synthesize this information to obtain our best estimate for the solar oxygen abundance. In each of the following subsections, the error estimates derived are internal ones. The differences between the results from the three classes of models are prima facia evidence that systematic errors are important. The systematic errors are discussed in the final subsection. ### Forbidden Oxygen Lines The forbidden oxygen line at 6300.3 Å  has traditionally yielded high oxygen abundances. @pla2001 argued that a nearby blended Ni line contributed significantly to the oxygen feature. They treated the continuum level, log (gfNi), and the oxygen abundance as free parameters, but assumed that the line profiles as given from the simulations were exact. 
The inclusion of the Ni feature induced a direct reduction of 0.13 dex in the inferred oxygen abundance. In addition, the usage of a 3D model atmosphere structure led to a further reduction of 0.08 dex in the oxygen abundance to an estimated \[O/H\] = 8.69 +/- 0.05. In AGSAK04 another forbidden line at 6363.7 Å  was considered as a second indicator. The authors reduced the equivalent width by 0.5 mÅ  for an estimated contribution from a blended CN feature to obtain an oxygen equivalent width of 1.4 mÅ , also implying a low abundance. We begin our analysis by noting that abundances derived from blended features are usually treated with great caution. The most conservative procedure is to ignore the blending feature and treat the derived abundance as an upper limit. When this is done for the two forbidden lines, the maximum abundance obtained for 3D, HM, and MARCS are (8.82, 8.8), (8.9, 8.88), (8.86, 8.84) respectively. In the section that follows, we include the reduction in abundances from estimates of the blending contribution. We adopt the AGSAK04 values for \[O/H\] derived from the 6300.3 Å  line, subject to the caution on the strength of the Ni blending feature below. For the 6363.7 Å  line, @m2004 argued that the (10,5) Q$_2$ 25.5 CN line is unblended in the solar spectrum and has the same oscillator strength as the feature blended with the forbidden line. He derived a smaller correction for the blended CN line (0.35 mÅ  rather than the 0.5 mÅ  value used in AGSAK04), which we adopt here. This leads to a modest 0.04 dex increase in the derived abundance from that line, which we also treat as an uncertainty in the \[O/H\] derived from this feature from the uncertainty in the contribution of CN to the blended feature. The error analysis for a blended feature is more complex than the one that can be employed for an isolated line. The derived \[O/H\] is sensitive to the continuum level, and an error component for this should be included; @pla2001 estimate this uncertainty at 0.02 dex, which we adopt for both lines. AGSAK04 adopted lower values for the oscillator strengths than those found in the NIST database, but their choice is well-supported by the improved atomic physics [see @sz2000] . However, the errors in individual theoretical log(gf) values are higher than those assigned in @pla2001 and subsequent papers; we adopt 0.04 dex for individual lines. @pla2001 also estimated that uncertainties in the underlying equation of state induces errors of 0.02 dex. Especially for the 6300 Å  line, the results depend heavily on the detailed line profiles, particularly in cases where the individual components cannot be directly disentangled. @pla2001 estimated uncertainties of 0.02 dex from the central wavelength of the Ni feature and a 0.04 dex uncertainty from the central wavelength of the \[O I\] line. We treat these errors as representative of the uncertainties in the line profiles, and adopt them for both forbidden lines. The treatment of the Ni line in the main forbidden line is more problematic. In the initial study, the oscillator strength was highly uncertain. @pla2001 treated log (gf) for Ni as a free parameter. The continuum level, $log (gf_{Ni})$, and \[O/H\] were treated as free parameters and the combination that produced the minimum $\chi^2$ was adopted. However, @j2003 have measured the oscillator strength of the Ni feature (log gf = -2.11), and the Ni abundance of the solar mixture is well-constrained by knowledge of the solar Si/Fe and the relative meteoritic abundances. 
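As a rough consistency check of the weak-line bookkeeping used in the discussion that follows, the sketch below scales the Ni contribution to the blend in proportion to gf times the Ni abundance and asks how the oxygen share of a fixed total equivalent width, and hence the inferred oxygen abundance, responds. It ignores saturation and the response of the profile fit, so it is only an order-of-magnitude illustration.

```python
import math

def oxygen_shift_from_blend(ni_fraction, delta_log_gfN):
    """Weak-line estimate of the change in the inferred O abundance when the
    Ni blend strengthens by delta_log_gfN dex at fixed total equivalent width
    and continuum. Returns (new Ni fraction of the blend, d[O/H] in dex)."""
    ni_new = ni_fraction * 10 ** delta_log_gfN      # W proportional to gf*N in the weak-line limit
    o_old, o_new = 1.0 - ni_fraction, 1.0 - ni_new  # oxygen share of the blended equivalent width
    return ni_new, math.log10(o_new / o_old)

# Ni initially ~25% of the blend; gf*N(Ni) increased by 0.23 dex.
print(oxygen_shift_from_blend(0.25, 0.23))   # roughly (0.43, -0.12)
```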
With the new gf value and \[Ni/H\]=6.25 the blending feature would be 0.23 dex stronger than the best $\chi^2$ value obtained in the 2001 paper. AGSAK04 did not report the inferred strength of the blending feature in their fit, but the similarity in the absolute abundance suggests that it would be comparable. We believe that it is no longer appropriate to treat this as a free parameter, and the same method should be used as is done for other blended features: namely, the strength of the Ni line should be held fixed (and varied within its uncertainty) while the free parameters are the oxygen abundance and continuum level. The direct effect of increasing a blending contribution is usually to decrease the abundance, but the present case is more complicated. For the quoted values in the AGSAK04 fit, the Ni line contributes 25% of the total equivalent width of the line. An increase of 0.23 dex in the strength of the feature would imply a total Ni contribution of 43% of the blended equivalent width at a fixed continuum level; such a combination would be a poor fit to the line shape and would yield a reduction in the inferred oxygen of 0.12 dex. An increase in the continuum level would be required to restore the agreement with the line profile, which would in turn lead to an increased total equivalent width. The net impact on the derived oxygen is not obvious, and not necessarily in the negative direction. @r1998 invoked log gf=-1.95 for the Ni feature and obtained \[O/H\]=8.75 for the forbidden line when his oscillator strength for the forbidden line was adjusted to the same value as that employed by Allende Prieto et al. In the absence of other information, we assign an additional error component of 0.04 dex for the strength of the Ni feature, slightly higher than the value advocated by @m2004. Adding the errors in quadrature, we obtain an uncertainty of 0.078 per line (0.055 in the average) and abundances derived from the forbidden lines systematically 0.02 dex higher than those found in AGSAK04. ### Permitted Atomic Oxygen Lines The permitted atomic oxygen lines have relatively high oscillator strengths, but very high excitation potentials. The most commonly used features are the OI triplet at 7771.8, 7774.2, 7775.4 Å , with an excitation potential of 9.15 eV. AGSAK04 also considered three other atomic features (6158.1 Å , 8446.7 Å , and 9266 Å ). The primary reason for the low oxygen abundance inferred by AGSAK04 from the triplet is a large non-LTE correction. Because these lines arise from such a high energy state, non-LTE effects must be included. However, the quoted values of the non-LTE corrections in the literature vary drastically. For the triplet, the average is -0.06 dex for @h2001, -0.22 to -0.28 for AGSAK04, and -0.16 dex for @pafb2004. These variations can be partially traced to different assumptions about the importance of collisional excitation (as opposed to photoionization), but larger differences at the 0.10 dex level remain even for cases that make similar assumptions about hydrogen collisions. AGSAK04 neglected hydrogen collisions in their estimate of the non-LTE effects. They justified this by noting that for some well-studied lines, the classical @d1968 formulism overestimates the collision rate. However, an inspection of their Figure 6 indicates that the neglect of collisional effects in their model yields changes as a function of limb darkening that differ from the observed solar values. 
This impression is confirmed by the more detailed study of @pafb2004, who found that models including hydrogen collisions (their $S_H=1$ case) were a better fit to the solar data. We therefore conclude that the non-LTE corrections in AGSAK04 are overestimated, which has a significant effect on the concordance of the different oxygen indicators. AGSAK04 applied larger downward reductions to the HM model than to the 3D and MARCS models, which we do not believe to be justified. @pafb2004 indicate that similar corrections are obtained for Kurucz and 3D models in their detailed study of the triplet as a function of limb darkening. This is particularly important because the discordance between the oxygen derived for the 1D models from atomic and molecular lines was used as a primary argument for the superiority of the 3D models, and this discrepancy can be directly traced to the assignment of very large NLTE corrections to the HM model. By very similar logic, the internal dispersion in abundance for the atomic lines in the 1D case arises from the assignment of large NLTE corrections to some of the lines; the internal agreement of the HM case is improved (and that of the 3D and MARCS cases degraded) with smaller NLTE effects. We illustrate this point in Figure 1. For the range of NLTE corrections that we consider reasonable (shaded band) the internal dispersion for the 3D models exceeds that of either 1D model. This emphasizes the importance of the substantial NLTE corrections in the scientific conclusions of AGSAK04. In order to quantify this effect, we normalized the non-LTE corrections in Table 3 of AGSAK04 to an average for the triplet of three values: 0.16 dex (the best fit from @pafb2004), 0.11 dex (the mean between Holweger 2001 and @pafb2004), and 0.06 dex [@h2001]. We adopt the 0.11 dex level as our best case for reasons outlined below. The NLTE corrections for the other lines were scaled by the linear ratio of the average NLTE corrections for the triplet and the target values. We present revised abundance estimates for the permitted lines in AGSAK04 computed in this manner in Table 2. In Table 2, the first set of values contains the LTE results. The next three sets represent calculations where the NLTE corrections were normalized to obtain a mean triplet correction of 0.06 dex, 0.11 dex (our adopted mean), and 0.16 dex. The final set of results includes the NLTE corrections originally applied in AGSAK04. This procedure yields non-LTE \[O/H\] abundances of (8.69, 8.73, and 8.76) for 3D models, 1D HM74, and 1D MARCS respectively in the 0.16 dex case and (8.73, 8.76, 8.80) in the 0.11 dex case. Since the NLTE corrections are significant for the triplet, the uncertainty in these corrections is a major ingredient in the error budget. Even the reduced NLTE corrections of Allende Prieto et al. (2004) for the triplet are substantially larger than the corrections used by Holweger (2001), who found an average NLTE correction of -0.06 dex, 0.10 dex lower than the value reported by AGSAK04. Surprisingly, none of the authors involved commented on the origin of the difference. Holweger (2001) obtained average LTE and NLTE triplet abundances of 8.78 and 8.72 respectively; his NLTE abundance is close to that obtained for the two 1D models in AGSAK04. From @pafb2004, the case with no hydrogen collisions was ruled out at the $3 \sigma$ level, and LTE models were ruled out with high confidence.
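The renormalization described above amounts to multiplying each line's non-LTE correction by the ratio of the desired mean triplet correction to the published mean triplet correction. A minimal sketch of that linear rescaling is given below; the corrections listed are placeholders, not the actual Table 3 entries of AGSAK04.

```python
def rescale_nlte(corrections, triplet_keys, target_triplet_mean):
    """Scale per-line NLTE corrections (in dex, negative = downward) so that the
    mean correction of the triplet lines equals target_triplet_mean."""
    current = sum(corrections[k] for k in triplet_keys) / len(triplet_keys)
    ratio = target_triplet_mean / current
    return {line: c * ratio for line, c in corrections.items()}

# Placeholder corrections keyed by wavelength (not the published values).
nlte = {"7771.8": -0.25, "7774.2": -0.23, "7775.4": -0.21, "6158.1": -0.05, "8446.7": -0.10}
triplet = ("7771.8", "7774.2", "7775.4")
for target in (-0.06, -0.11, -0.16):   # the three normalizations considered in the text
    print(target, rescale_nlte(nlte, triplet, target))
```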
However, the authors did not consider whether even lower NLTE corrections than their $S_H = 1$ case would have provided improved fits to the data. In cases such as this, we see no justification for simply adopting one NLTE correction (0.16 dex) over another published value (0.06 dex), and adopt the average of the two (0.11 dex). We note that our central values are close to what we would infer if we simply took the triplet alone as an oxygen abundance indicator and assigned a 0.16 dex NLTE correction. We used the dispersion in the abundances derived from individual lines as a base random error. A change of 0.05 dex in the triplet NLTE correction yields an average change in \[O/H\] of 0.035 dex, which we include as an additional systematic error. Finally, the log (gf) values from AGSAK04 are lower than previously published values by @bhgvf1991; we therefore add another 0.025 dex systematic error, following a similar error analysis by @m2004. The net effect is a total error estimate (summarized in Table 1) of 0.064 to 0.074 dex. ### Infrared Molecular Lines The IR molecular oxygen lines have been the primary abundance indicator used in previous compilations of solar abundances [e.g. @gs1998], and 1D model atmospheres yield relatively high absolute oxygen abundances. The absolute abundances are a strong function of the thermal structure of the model atmospheres, and the different thermal structure of the 3D model of AGSAK04 yields much lower predicted abundances than the 1D models. We note that @h2001 discounted CNO abundances derived from molecular lines because of their high temperature sensitivity. AGSAK04 considered two sets of OH lines: (v,r) and (r,r). The abundances predicted as a function of excitation potential from the 3D hydro simulations are compared with those from the 1D Holweger-Mueller and MARCS codes in Figures 2 and 3. On these figures we have also indicated the average atomic abundances for each class of models. We note the presence of striking trends with excitation potential in the 3D and MARCS models for the (r,r) lines. The correspondence between the MARCS and 3D trends indicates that the origin of these features is common to both models, which indicates that it is a feature of the base atmospheric treatment rather than being induced by the convection simulation. In the context of 1D models, trends such as those seen in the 3D models would be interpreted as a problem with the thermal structure or the assumed microturbulence. AGSAK04 noted this trend, and claimed that it could be removed by invoking an outer atmosphere structure even cooler than the one predicted by the simulations. Figure 1 suggests an alternate explanation, namely that the thermal structure in the outer layers is closer to the hotter semi-empirical HM74 model. We will return to this point when we consider more recent work on CO abundances in the outer solar atmosphere. AGSAK04 discarded the (r,r) data for the weaker and stronger lines, in effect deriving an abundance from the valley in Figures 2 and 3. There is no better justification for discarding the high than the low points in this figure; such a procedure is not required for the 1D models. We therefore derived average abundances using all of the features for all three models and both indicators; the standard deviation about the mean is an indicator of the quality of the fit for the individual bands. Our results are summarized in Table 1. 
The average molecular abundances were obtained with a weighted mean, but a simple averaging of the errors underestimates the dispersion. We therefore computed $\sigma$ for each band around the weighted molecular mean in Table 1 and averaged the (v,r) and (r,r) values to obtain the total error in the molecular abundances presented there. @m2004 considered a third molecular oxygen abundance indicator, and derived relative abundance patterns comparable to what AGSAK04 found. We do not include this in Table 1 because it is not clear that systematic errors between model atmospheres codes can be properly accounted for in a differential analysis. If we had included the @m2004 indicator, the mean molecular abundances would have been minimally altered for 3D and HM. The average would be reduced for MARCS and the internal error in the MARCS \[O/H\] would be dramatically increased. This provides further evidence that there is an underlying issue in the thermal structure of the MARCS model. Melendez also computed abundances with the same indicators as AGSAK04 for a Kurucz model atmosphere, and derived abundances similar to those that would be found for the HM74 model. @ay2006 present evidence for a high solar oxygen derived from CO studies; we postpone a discussion of this interesting result to our conclusion, in the context of tests of the solar thermal structure. ### Oxygen Abundance and Error Analysis Our overall result from the reanalysis of the AGSAK04 oxygen indicators is that the abundances derived from the atomic indicators are systematically increased for all models. In the original paper, the 3D abundance estimators were found to yield consistent abundances, while the 1D abundances from different methods were highly discordant. This conclusion no longer holds when the reduced NLTE corrections inferred from limb darkening studies are employed. In fact, one would obtain very similar conclusions to those presented in Table 1 from the triplet abundance of 8.72 presented in Allende Prieto et al. (2004) for the 3D model (e.g. the 0.16 dex case presented in Table 2). Rather than simply adopting one model or another as correct, we interpret the difference between the HM74 and 3D abundances as evidence that the thermal structure of the Sun is intermediate between the two. The difference between the atomic and molecular abundances is roughly equal in magnitude and opposite in sign between these models; the MARCS model yields an intermediate abundance where the two classes of indicators give the same abundance, but with a larger error. The error in (Atomic-Molecular) is substantial; these differences are formally significant only at slightly more than $1 \sigma$. We therefore argue that the mean of the derived atomic abundances (8.75) is a reasonable estimator of what one would obtain from a model with a thermal structure capable of reproducing the atomic and molecular data; one would obtain 8.74 from comparing HM and 3D, and 8.76 from MARCS alone (which is already internally consistent). We have a random error of 0.05 dex for the mean atomic abundance, but this is insufficient for a total error because of the presence of strong systematic differences. Adopting the consistency between atomic and molecular indicators as a measure of goodness of fit, $\sim 1\sigma$ deviations could make either the 3D model (\[O/H\]=8.68) or the HM model (8.80) consistent.
We treat this as a $1 \sigma = 0.06 $ systematic error, and note that it is comparable to the zero-point shift that we obtain for the atomic indicators relative to AGSAK04 estimated below. We can also examine systematic errors by comparing the AGSAK04 values with abundance estimates by other authors. These are most easily analyzed by comparing LTE abundances for the triplet. Holweger (2001) derived an average LTE triplet abundance of 8.78 for his standard model and 8.85 for the alternate VAL model. @bhgvf1991 reported 8.84 for the triplet for a HM model and 8.78 for a MACKKL atmosphere. These should be compared with 8.89, 8.87, and 8.93 for the three LTE cases in AGSAK04. We note that the 1D cases in AGSAK04 used the equivalent widths obtained with the line broadening of the 3D hydro models, so there will be differences between their results and those obtained with other 1D codes. The average LTE abundance is 8.85, with $\sigma = 0.055 dex$. Systematic differences at the 0.06 dex level are thus a reasonable estimate of the current state of the art for oxygen abundances when estimated with different techniques. Adding systematic (0.06) and random (0.05) errors in quadrature, we obtain 8.75 +/- 0.08 as our final oxygen abundance estimate for the Sun. This is less than $1.6 \sigma$ below the helioseismic abundance, and therefore we conclude that the existence of a solar oxygen problem has not been demonstrated with high statistical significance. Carbon Abundance Indicators --------------------------- The overall story for carbon follows a similar path to the changes in the inferred oxygen abundance, and the comprehensive reanalysis of AGSAB05 for carbon has a similar logical structure to the 2004 oxygen paper. Although both the carbon and oxygen are reduced, the C/O ratio is preserved. A cool outer solar atmosphere in the 3D models yields substantially reduced abundances from molecular indicators, while blending features and non-LTE effects reduce the carbon abundance inferred from atomic features. What distinguishes the carbon from the oxygen case is that the NLTE effects are smaller, and as a result one might anticipate a smaller offset in the atomic line abundances than the change in atomic oxygen abundance indicators. However, this is not the actual published result; if anything, the derived carbon from atomic features is lower than the molecular value for all of the models presented in AGSAB05. Furthermore, we will demonstrate that this effect cannot be explained by any of the effects used to explain the differences in the comparison with prior work given by AGSAB05. Until the origin of this difference is understood, we therefore have to consider two different systematic sets of abundances for atomic features, and the best choice of model hinges on which set is correct. In this section, we discuss the three classes of indicators (forbidden and allowed atomic lines, and molecular) in turn, and as for the oxygen abundance synthesize our final best estimate and error in the fourth subsection. Our overall estimates are presented in Table 3. The top set of values represents the original AGSAB05 values for the different indicators. The middle set is what we would obtain with the low AGSAB05 normalization of the atomic abundances, while the bottom set is what we obtain with the higher Biemont/Holweger normalization. ### Forbidden Lines The \[CI\] line at 8727 Å  was the subject of a detailed analysis by @pla2002. 
They incorporated blending from a nearby Si feature to reduce the equivalent width attributable to carbon from 6.5 to 5.3 mÅ, with a corresponding reduction in the inferred abundance. They also employ a lower oscillator strength than previous studies. Unlike the case of oxygen, the carbon abundance derived from the forbidden line is almost as temperature sensitive as that derived from molecular features, so the atomic versus molecular diagnostic is less powerful for carbon than for oxygen. We adopt the AGSAB05 central values for our base case, but note that there may be unexplained systematics in the atomic carbon abundances in AGSAB05, which we discuss below. We include their error estimate for uncertainties in the equation of state (0.02 dex), but assign a larger uncertainty to the atomic physics (0.04 dex) in accord with the quoted theoretical uncertainties. Our principal reservation on the error budget is the uncertainty in the continuum level and the contribution to the equivalent width of the blend from the wing of the Si feature. Their reduced $\chi^2$ permits only small deviations (of order 0.01 dex) in the derived carbon abundance, but the base model relies upon the assumption that the underlying velocity field is exact. Although the overall agreement with Fe [@asplund2000a] and Si [@a2000] line profiles is good, it is not error-free. We cannot evaluate this ingredient directly, but an estimate based upon the mean deviation observed in clean lines would seem to be a worthwhile exercise. For the present, we therefore assign the same blending uncertainty of 0.03 dex adopted by @pla2002 to obtain a total uncertainty of 0.054 dex.

### Permitted Atomic Lines

AGSAB05 considered a subset of the permitted atomic features used in previous solar abundance studies [e.g. @bhgv1993; @sh1990]. @sh1990 found that small NLTE corrections are required for CI, and that the strength of the correction depends on equivalent width. They found an average of -0.05 dex; if restricted to the weaker lines included in AGSAB05, their average NLTE correction would be -0.02 dex. AGSAB05 computed non-LTE corrections for 1D models, and the MARCS corrections were applied to the 3D models. Hydrogen collisions were not included in the NLTE corrections; this resulted in larger downward abundance revisions (an average of -0.08 to -0.09) than @sh1990. AGSAB05 note that the case of carbon should be an analog of oxygen, and we concur. As a result, we contend that the case with hydrogen collisions should be included in the base model. Both sources indicate that including hydrogen collisions roughly halves the expected NLTE correction. We therefore considered two cases for NLTE corrections: a maximum of half the AGSAB05 value (corresponding to their hydrogen collision case, average -0.04 dex) and a minimum of one quarter of the AGSAB05 value (corresponding to the SH90 case, average of -0.02 dex). Our best value is the average between the two (a mean of -0.03 dex), and the error induced by uncertainties in NLTE corrections is 0.01 dex; adopting the AGSAB05 hydrogen collision case would only have changed our mean value by 0.01 dex. We applied these proportional NLTE corrections to the AGSAB05 LTE results for their three classes of models (middle values, Table 3). There is a small reduction in the dispersion (and mean trend with equivalent width) for the 1D models and a corresponding increase in both for the 3D model; none of these features, however, are drastic.
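Stated explicitly, the NLTE prescription adopted above is simply the midpoint of the two bracketing cases, with the half-range taken as the associated error (our own restatement, for clarity only): $$\Delta_{\mathrm{NLTE}} = \tfrac{1}{2}\left[(-0.04) + (-0.02)\right] = -0.03 \ \mathrm{dex}, \qquad \sigma_{\mathrm{NLTE}} = \tfrac{1}{2}\left|(-0.04) - (-0.02)\right| = 0.01 \ \mathrm{dex}.$$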
The internal dispersion in the permitted atomic abundances is of order 0.03 dex. A more substantial issue emerges when we compare the AGSAB05 abundances with prior work, and this is true even for the LTE estimates. The mean LTE abundance for the @bhgv1993 sample is 8.56 for the lines in common with AGSAB05; this should be compared with a HM LTE value of 8.48 for the latter compilation. This offset of 0.08 dex is comparable to the average difference between atomic and molecular abundance indicators. The mean difference in equivalent width and oscillator strength for the lines in common is negligible, and would yield an offset of less than 0.01 dex if applied under the assumption that all of the lines are on the linear part of the curve of growth. We illustrate the differences in Figure 4, defined in the sense (Biemont $-$ AGSAB05). In this figure we have corrected the Biemont abundances to the AGSAB05 equivalent widths and oscillator strengths. The differences are significant even for weak lines, suggesting that differences in the classical line broadening are probably not responsible. A similar, but smaller, effect is present in the forbidden line. @pla2002 inferred a HM abundance of 8.48, which would also be obtained from @sh1990 when a blending correction is made to the equivalent width. AGSAB05 could not trace a comparable difference (0.06 dex) relative to the earlier work of Lambert. The only obvious explanation that we can identify is a note by Stuerenburg & Holweger that they corrected their atomic abundances for the fraction of C tied up in CO, which could be of the right order to explain the differences. Until the origin of this discrepancy (which is not present for oxygen) is explained, we have to treat this as a systematic uncertainty in the atomic abundance scale. Abundances derived under this scale are the last set of values in Table 3.

### Molecular Lines

We consider the same four molecular indicators that were included in AGSAB05. They chose to disregard one of them (CH electronic lines) in their derived mean abundances, on the grounds that they are located in a crowded portion of the spectrum and sensitive to the treatment of line broadening. However, the formal errors in the CH electronic abundances are similar to those for the other molecular species, and as such we see no obvious reason to exclude them. We do treat the CH (v,r) abundances as being more reliable, as they are based on many more lines than the other diagnostics. We therefore assigned double weight to the CH values and single weight to both the C$_2$ electronic and CH electronic values. As for oxygen, the mean was derived by a weighted average of the carbon obtained with different molecular indicators, and the scatter of the individual line measurements for all diagnostics around the adopted mean was taken as a measure of the random error. We did not include carbon (or oxygen) abundances derived from CO line studies, because there are complex correlated errors. Had we included them, the net effect would have been to increase the molecular abundances relative to the atomic values.

### Carbon Abundance and Error Analysis

Our final inference concerning carbon depends on which atomic abundance scale is adopted. If we take the low scale of AGSAB05, the 3D model atomic and molecular abundances (8.40, 8.42) are closer than those for HM74 (8.45, 8.55); 1D MARCS abundances are also consistent (8.40, 8.44). We would estimate a mean value of 8.41–8.42, with a random error of 0.04 dex.
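For orientation, the pairwise (atomic $-$ molecular) differences on this low scale are (our own arithmetic on the values just listed) $$8.40 - 8.42 = -0.02 \ (\mathrm{3D}), \qquad 8.45 - 8.55 = -0.10 \ (\mathrm{HM74}), \qquad 8.40 - 8.44 = -0.04 \ (\mathrm{MARCS}),$$ to be compared with the 0.04 dex random error.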
The HM74 average of 8.50 would be an effective $2 \sigma$ internal inconsistency, implying a 0.04 dex systematic uncertainty for a total abundance of $8.41 \pm 0.06$. Adopting the higher scale would give pairwise results of (8.44, 8.42), (8.50, 8.55), (8.44, 8.44); all three models are internally consistent within the errors, and a mean abundance would be 8.47 with a total error of 0.05 (0.04 random, 0.03 systematic). We adopt the mean of these approaches (8.44), and estimate an error of 0.04 (random) and 0.04 (systematic) for a total of 0.06 dex when combined in quadrature.

Nitrogen Abundance Indicators
-----------------------------

Our discussion of nitrogen is necessarily briefer than that of oxygen and carbon, largely because the published results are preliminary and incomplete. Holweger (2001) derived a non-LTE \[N/H\] $= 8.0 \pm 0.11$, comparable to results in previous compilations of solar abundances from @gs1998. The compilation of models in @ags2005 yields atomic and molecular nitrogen abundance estimates of $(7.85 \pm 0.08, 7.73 \pm 0.05)$, $(7.97 \pm 0.08, 7.95 \pm 0.05)$, and $(7.94 \pm 0.08, 7.82 \pm 0.05)$ for 3D, HM, and MARCS, respectively. The same correspondence between 3D and MARCS that was seen in oxygen is replicated in nitrogen, but the internal consistency in the HM model is higher than that in the other models. The formal significance of the disagreement in the 3D models is under $2 \sigma$, however, so we cannot exclude the possibility that they may be consistent. We therefore adopt the HM result as the central value ($7.96 \pm 0.06$), and treat the difference with the 3D result ($7.78 \pm 0.06$) as a $2 \sigma$ systematic error. This yields a total uncertainty in \[N/H\] of 0.10 dex dominated by systematic uncertainties.

The Solar Beryllium Abundance
-----------------------------

There is an interesting linkage between the solar O and Be abundances. In stellar interiors Be is destroyed at modest temperatures (of order 3.5 million K). It can therefore be used as a diagnostic of mixing in stars, especially in conjunction with the more fragile light element Li [@mhp1997]. Traditional model atmosphere studies [@cbm1975] yield a solar photospheric beryllium abundance roughly half of the meteoritic abundance. However, the only accessible Be feature is located in a crowded portion of the spectrum in the near UV, and the continuum opacity is uncertain in this regime (largely from the contribution of numerous weak iron lines). Since the strength of a line is a function of the ratio of the line to the continuous opacity, a higher photospheric Be could be derived if the continuous opacity background was higher than that of the model. In an important paper, @bb1998 pointed out that nearby OH lines could be used to test the continuous opacity close to beryllium. They found that the UV OH lines were too strong if they used the absolute oxygen abundance obtained from the IR OH lines, and interpreted this as evidence that the continuous opacity is underestimated in the spectral window relevant for Be. Similar conclusions for 3D models were obtained by @a2004. Following @l2003, we note that the uncertainties in the ad hoc corrections are substantial. Asplund (2004) quotes photospheric and meteoritic abundance errors of 0.09 and 0.08 dex, respectively, implying that his zero net photospheric depletion has a $1 \sigma$ uncertainty of 0.12 dex.
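For reference, the quoted uncertainty is simply the quadrature sum of the two errors, and doubling it gives the bound used immediately below (our own arithmetic): $$\sigma = \sqrt{0.09^2 + 0.08^2} \simeq 0.12 \ \mathrm{dex}, \qquad 2\sigma \simeq 0.24 \ \mathrm{dex}.$$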
Even if the Balachandran and Bell argument is entirely correct, the data sets a $2 \sigma$ limit of 0.24 dex on beryllium depletion and does not require that it be zero. There is also the possibility of substantial NLTE corrections to the UV OH lines, which would reduce or even eliminate the requirement for a mechanism to reduce the strength of the lines. We also note that the value of the oxygen used by @bb1998 for the IR lines (8.91) in the HM74 model is larger than the value derived from other molecular and atomic indicators, and even slightly larger than the value from the (v,r) transitions in the same model from the work of Asplund and collaborators. We contend that this promising approach still has substantial errors, including large uncertainties in the absolute solar oxygen abundance. We therefore believe that the approach of @l2003 is the best current picture of the degree of beryllium depletion in the Sun: namely, there is a substantial uncertainty in the degree of solar beryllium depletion, and that further work is required before powerful observational bounds can be used to constrain interiors calculations. Conclusions and Future Tests ============================ Our basic conclusion is simple: the difference between the solar CNO abundances as derived from model atmospheres and model interiors considerations is not statistically significant. The systematic errors in photospheric abundance indicators will have to be reduced before a “solar abundance problem” can be established (or ruled out) with confidence. However, the disagreement between the solar thermal structure and that of the simulations would favor the higher abundance scale, and there is some recently published evidence to that effect. If this is confirmed, it switches the nature of the problem from being a question of the correct abundance scale to a question of the uncertainties in numerical convection simulations. We begin with a synthesis and explanation of our findings. We then divide our conclusions into two parts. We recommend steps to more firmly establish the photospheric abundance scale, and contend that accurate solar abundances require tests of the thermal structure of the models and the magnitude of non-LTE abundance corrections. In our final subsection we then gather together evidence that the atmospheric abundance scale problem may be tied to the limited resolution in the convection simulations or errors in the underlying model atmosphere treatment. The consequences for the solar beryllium abundance, which is a useful diagnostic of internal mixing, are also explored. The two main justifications for the superiority of the 3D hydro atmospheres are the treatment of line broadening and the inclusion of granulation. Both of these represent genuine improvements in the atmospheric physics. However, neither of these effects is actually primarily responsible for the difference in the solar abundance scale. Many of the abundance indicators are insensitive to the effective microturbulence. If temperature fluctuations are imposed on a semi-empirical Holweger-Mueller atmosphere, the resulting granulation corrections are usually smaller than the 3D convection effects reported by Asplund and collaborators, and frequently opposite in sign [@h2001]. The main driver behind the systematic reductions in abundance derived from the 3D models is a theoretically predicted change in the thermal structure, coupled with large assumed non-LTE corrections for atomic features. 
Neither of these changes is directly supported by observational tests. Instead, the argument for the superiority of the abundances derived from the newer model atmospheres is an indirect one, focused on the concordance of abundances derived from different indicators. A consistent chain of logic emerges from the comprehensive studies of oxygen (AGSAK04) and carbon (AGSAB05). Classical LTE model atmospheres tend to yield internally consistent, and high, carbon and oxygen abundances for atomic and molecular indicators. The application of a different thermal structure in the 3D hydro atmospheres drastically reduces the abundances inferred from highly temperature sensitive molecular indicators, but has a smaller effect on atomic features. Large NLTE corrections are then applied to the abundances derived from permitted atomic features for both 1D and 3D models. The net result is that the abundance estimates from 1D models become internally inconsistent (atomic indicators yield lower abundances than molecular ones), while abundances derived from the 3D models are internally consistent. The abundances derived from forbidden lines are insensitive to NLTE effects, but they are reduced in the newer generation of models by the inclusion of blending features. As a secondary argument, the fits to individual indicators are argued to be superior in the 3D models when compared to the fits to individual indicators in the 1D models. This approach is appealing on the surface, but when examined in detail the picture is decidedly more ambiguous. If anything, the hints from the data would lean towards the opposite conclusion. The abundances derived from forbidden lines have the smallest systematic errors, but errors in both the theoretical oscillator strengths and the treatment of blending features result in non-negligible random errors. More to the point, the internal consistency of abundances derived from forbidden and molecular lines is actually similar in the 3D and 1D cases. From Table 1, the forbidden and molecular oxygen abundances are (8.71, 8.64) for 3D and (8.77, 8.84) for the HM; the differences are identical. Given the errors, neither discrepancy is statistically significant with high confidence. The abundances reported for permitted atomic features in AGSAK04 and AGSAB05 are significantly lower for 1D models than the corresponding molecular abundances, while the reported 3D results are in agreement. In the case of oxygen, this rests completely on the assignment of large NLTE corrections. These corrections were obtained under the assumption that hydrogen collisions were unimportant. Detailed studies of the response of the triplet to limb-darkening indicate that models including hydrogen collisions are favored, and the inferred NLTE corrections decrease. As a result, the internal consistency of the oxygen indicators is comparable for the different classes of atmospheres. Nitrogen is consistent for HM74 models and inconsistent (but at less than $2 \sigma$) for the 3D case. In the case of carbon, the situation is made more complex by significant zero-point offsets between earlier studies of carbon abundances that are not explained. Again, the assignment of larger NLTE corrections is uncertain (and, unlike the case of oxygen, not directly tested against limb-darkening data). A clean distinction between models on the basis of consistency is not obtained. However, the 3D models do yield different molecular and atomic abundances for both N and O, and might also do so for C. 
One might then hope to find distinct differences in the quality of the fits to different molecular indicators. The usual patterns, unfortunately, manifest themselves as simple zero-point shifts. For every case where there are issues with the 1D models (e.g. small trends with excitation potential in the \[O/H\] derived from (v,r) OH transitions in the HM model) there are comparable or larger effects for the 3D models (e.g. substantial trends in the \[O/H\] derived from (r,r) OH transitions). In a recent preprint, @scott2006 examined CO indicators, and the resulting pattern is illustrative. The 3D models yielded similar results for two of the three features studied, while the 1D models performed better in a different pair of indicators. The $C^{12}/C^{13}$ ratio from the 1D models ranges from 69 to 84, while the same ratio for the 3D models ranges from 83 to 108. These values should be contrasted with the expected terrestrial ratio of 89. Scott et al. (2006) choose comparisons that favor the 3D models, while an advocate of the traditional models might reasonably stress the other cases. In our view, the best choice of models is not clearly distinguishable from the CNO abundance studies. We recommend caution when extrapolating these model results to other stars, where the differential effects can be even more drastic. Establishing the Absolute Photospheric Abundance Scale ------------------------------------------------------ The single most important test that is required for atmospheres theory is a discriminant between the different proposed thermal structures of the solar atmosphere. The recent paper by Ayres et al. (2006) makes an important contribution by making direct comparisons of solar data with the thermal properties of the simulations. They present evidence that the solar center-to-limb variations in continuum flux are inconsistent with the predictions of the 3D hydro simulations. They also note that the predicted magnitude of fluctuations in the upper atmosphere from the simulations appears to be larger than the observed pattern. Ayres et al. then construct an empirical model of the atmosphere and derive a high oxygen abundance (8.85) from CO molecular features under the assumption of a fixed C/O ratio. In retrospect this conclusion is not surprising. The HM model is not a purely theoretical exercise; it was constructed to reproduce the mapping of the source function as a function of optical depth inferred from limb darkening studies of continuum flux and strong lines [see also @arg1998]. The relative trends we have inferred from atomic and molecular abundance indicators support the conclusions of Ayres et al., but the current errors make our evidence in this matter suggestive but not conclusive. It would also be highly beneficial to repeat the HM exercise with the full 3D models as opposed to the restricted form of them that Ayres et al. (2006) had available to them. Ultimately, the absolute accuracy of photospheric abundances is directly tied to the absolute accuracy of the thermal structure. This suggests that an approach similar to that of @sh2002 may be the optimal one. In their paper they examined the impact of temperature fluctuations around an assumed mean empirical thermal structure, which in their case was the HM74 model. Interestingly, the abundance corrections that they derive would act in the sense of increasing the concordance between abundance indicators. 
Oxygen abundances from atomic indicators would be slightly increased; although they did not consider molecular features directly, the net effect would certainly have the same sign as that obtained from 3D hydro models, namely a decrease in the inferred abundance. In such a differential approach, deviations between the mean structure of the simulations and the empirical data would be used as guidance concerning the underlying physics. In contrast, the 3D model abundances assume that the ab initio profile is correct. A similar approach could be employed for the velocity field that replaces the microturbulence and macroturbulence in traditional 1D atmospheres. A second ingredient that must be tested empirically, rather than by theoretical assertion, is the magnitude of NLTE corrections. The available evidence suggests that NLTE corrections are in general small for the Sun, but for the level of precision required in the absolute abundance scale these small corrections are significant. Studies of different spectral features yield different conclusions about the physical model employed in NLTE studies. This implies that there are significant uncertainties in absolute theoretical calculations. Fortunately, NLTE corrections can be constrained by the response of line strength to limb darkening in the Sun. It should be possible to develop improved theoretical models with a sufficient database of information developed in this fashion. One other stringent test of NLTE effects may be to focus on the species whose relative abundances can be reliably inferred from meteoritic data. For example, NLTE effects may be significant for iron [@stb2001] but less so for Si [@w2001]. @h2001 noted that there may be a conflict between the photospheric and meteoritic Fe/Si ratio, albeit one of marginal significance. A similar situation may exist for Na [@ags2005]. Another tractable problem is the absolute error for the forbidden C and O lines. In these cases, uncertainties in the line profiles and continuum levels should be included. Better atomic data (such as oscillator strengths for both the lines and the blending features) would also be useful. The accuracy of the theoretically predicted turbulent velocity field as a function of optical depth should also be subjected to a more rigorous analysis. @scott2006 present evidence that the generation of simulations used for the abundance analysis yielded poor fits to the line bisectors of CO lines. Higher resolution simulations gave better line profile fits, but for (unspecified) unrealistically high C/O abundances. The higher resolution simulations were not employed in the CO abundance analysis in that paper. It is worth keeping in mind that line profiles are integral quantities, and as a result the uniqueness of the solutions is not established by individual cases of good fits. This is particularly true when the abundance itself is treated as a free parameter. It would be extremely useful if future papers on abundances derived using numerical simulations illustrated individual line fits, as well as quantifying the actual impact of the “effective microturbulence” on the abundance estimates. It is useful to separate out the impact of velocity broadening from the effect of granulation and temperature gradient changes. This can be done by using the mean thermal structure and temperature fluctuations from the simulations and a more traditional micro/macroturbulence model to infer abundances, and comparing the results with the full 3D models. 
@scott2006 constructed such a test case (their 1DAV model), and found only small abundance offsets, of order 0.04 dex for oxygen derived from IR OH lines. They also inferred carbon abundances from CO; in this case O was held fixed and the carbon was adjusted to fit different molecular indicators. The deviations in the derived carbon abundances relative to the 3D case ranged from small (0.01 dex for the LE lines) to modest (0.06 dex for the weak $\Delta \nu = 1$ lines) to large (0.14 dex for the $\Delta \nu = 2$ lines). These deviations may explain the changes in excitation potential that @ay2006 needed to obtain consistent abundances within a 1D framework. This exercise implies that the impact of the improved microphysics varies substantially for different indicators, and is worth quantifying across the board. An alternate exercise (using the revised velocity field and relative temperature fluctuations while adopting a HM74 mean thermal structure) might also be illuminating.

Uncertainties in Numerical Convection Simulations
-------------------------------------------------

First-principles theoretical model atmosphere calculations have undeniable strengths. The ability to naturally reproduce line widths and include granulation is a powerful addition to our ability to reliably interpret stellar and solar spectra. The principal difficulty with such models is that errors in the input physics generate absolute errors in the inferred atmospheric structure that cannot be calibrated away in the absence of explicit free parameters. This phenomenon is the major reason why numerical convection simulations have not replaced the simple mixing length theory in stellar interiors calculations. Interiors models that can reproduce observed stellar radii are simply more useful for most purposes than models with a better physical treatment of convection that fail to do so. Before the results from such models are adopted as the new abundance standard, it will be necessary to perform an extensive theoretical error analysis and to compare the models with the strongest observational constraints. We believe that accurate solar abundance calculations must reproduce the observed solar thermal structure, and from the Ayres et al. (2006) paper the Asplund models employing numerical convection simulations appear to yield a temperature gradient steeper than that of the real Sun. This could be caused by errors in the background (1D) stellar atmosphere treatment; for example, uncertainties in the equation of state and continuous opacities will induce absolute errors in the thermal structure. An approach similar to that employed in interiors models would be useful for assessing the uncertainties in the thermal structure and abundance predictions, and this should be included in the error budget for abundances. It is more likely, however, that the major error source in 3D hydro model atmospheres is related to uncertainties in the numerical convection simulations. The approximations in hydro simulations of giant planet atmospheres have been demonstrated to be strongly affected by the quality of the assumed physics [@eg2004]. @zs2006 also provides a good summary of the uncertainties in the related problem of terrestrial and solar dynamo models. Another phenomenon that could be related is the issue of convective overshooting below surface convection zones.
Numerical simulations have tended to favor extensive overshooting, and the early models had a substantial nearly adiabatic overshoot region, in conflict with the stringent limits set by seismology (less than $0.05 H_p$). More recent 3D [@bct2002] and 2D [@rg2005; @rg2006] calculations found that the filling factor for plumes is smaller than previously thought, which implies that the earlier models overestimated the changes in the thermal structure induced by overshooting. The newer simulations predict strongly subadiabatic overshooting (effectively, overmixing without changing the thermal structure), which is consistent with the seismic limits. However, they still produce a substantial mixed region below the surface convection zone of order $0.4 H_p$. Since even a small overmixing of $0.05 H_p$ drastically increases pre-MS lithium depletion [@mhp1997], which is already too efficient relative to stellar data [@ptc2002], it is likely that even this reduced overshooting is too large to be compatible with stellar constraints. We argue that there is a common pattern in both “undershooting” and “overshooting” above and below convective regions. In both cases, the numerical simulations may be overestimating the degree of mixing and the impact on the thermal structure of convection outside the formal bounds set by the Schwarzschild criterion. There are two plausible error sources that should be investigated. The treatment of heat transfer in the atmosphere convection simulations is necessarily simplified, and this may be leading to an artificial inhibition of energy transport between turbulent cells projected into the radiative atmosphere and their surroundings. Resolution effects, however, may be even more important. Even the highest resolution simulations available today are many orders of magnitude away from being able to reproduce the characteristic Reynolds numbers in the Sun. Scott et al. (2006) found significant changes in line bisectors for the outer layers of their solar model when they increased their resolution, and these changes were in the sense of reducing the temperature contrast in the upper atmosphere and improving the shape of the bisectors relative to data. Numerical tests with substantially increased resolution may shed some interesting light on the sensitivity of the predictions to the underlying numerics; 2D convection simulations may be useful in this regard. We are optimistic that the net effect of such testing will be a greatly improved understanding of the strengths and weaknesses of theoretical atmosphere models, just as we are confident that the net result of the solar abundance controversy will be a far more secure knowledge of stellar abundances.

We would like to thank Martin Asplund for providing tables of the abundances derived from molecular oxygen abundance indicators. We would also like to thank Don Terndrup, Jennifer Johnson, Andreas Korn, Chris Sneden, and Hans Ludwig for helpful discussions on stellar abundance determinations. FD would like to thank Claude Zeippen for discussions on the uncertainties in the atomic data.

Allende Prieto, C., Asplund, M., & Fabiani Bendicho, P. 2004, , 423, 1109
Allende Prieto, C., Lambert, D. L., & Asplund, M. 2001, , 556, L63
Allende Prieto, C., Lambert, D. L., & Asplund, M. 2002, , 573, L137
Allende Prieto, C., Ruiz Cobo, B., & Garcia Lopez, R. J. 1998, , 502, 951
Anders, E., & Grevesse, N. 1989, , 53, 197
Antia, H. M., & Basu, S. 2006, , in press (astro-ph/0603001)
Asplund, M. 2004, , 417, 769
Asplund, M., Nordlund, Å., Trampedach, R., & Stein, R. F. 2000, , 359, 743
Asplund, M., Nordlund, Å., Trampedach, R., Allende Prieto, C., & Stein, R. F. 2000, , 359, 729
Asplund, M. 2000, , 359, 755
Asplund, M., Grevesse, N., & Sauval, A. J. 2005, ASP Conf. Ser. 336: Cosmic Abundances as Records of Stellar Evolution and Nucleosynthesis, 336, 25
Asplund, M., Grevesse, N., Sauval, A. J., Allende Prieto, C., & Kiselman, D. 2005, , 435, 339
Asplund, M., Grevesse, N., Sauval, A. J., Allende Prieto, C., & Blomme, R. 2005, , 431, 693
Asplund, M., Grevesse, N., Sauval, A. J., Allende Prieto, C., & Kiselman, D. 2004, , 417, 751
Ayres, T. R., Plymate, C., & Keller, C. U. 2006, , in press
Balachandran, S. C., & Bell, R. A. 1998, , 392, 791
Basu, S., & Antia, H. M. 2004, , 606, L85
Biemont, E., Hibbert, A., Godefroid, M., & Vaeck, N. 1993, , 412, 431
Biemont, E., Hibbert, A., Godefroid, M., Vaeck, N., & Fawcett, B. C. 1991, , 375, 818
Brummell, N. H., Clune, T. L., & Toomre, J. 2002, , 570, 825
Carlsson, M., Uppsala Astronomical Observatory Report No. 33
Chmielewski, Y., Brault, J. W., & Mueller, E. A. 1975, , 42, 37
Christensen-Dalsgaard, J., Monteiro, M. J. P. F. G., & Thompson, M. J. 1995, , 276, 283
Delahaye, F., & Pinsonneault, M. 2006, , 647, in press
Drawin, H. W., Z. Phys., 211, 404
Evonak, M., & Glatzmaier, G. 2004, Geophysical and Astrophysical Fluid Dynamics, 98, 241
Grevesse, N., & Noels, A. 1993, Physica Scripta Volume T, 47, 133
Grevesse, N., & Sauval, A. J. 1998, Space Science Reviews, 85, 161
Guzik, J. A., Watson, L. S., & Cox, A. N. 2005, , 627, 1049
Holweger, H. 2001, AIP Conf. Proc. 598: Joint SOHO/ACE workshop “Solar and Galactic Composition”, 598, 23
Holweger, H., & Mueller, E. A. 1974, , 39, 19
Johansson, S., Litzén, U., Lundberg, H., & Zhang, Z. 2003, , 584, L107
Kiselman, D. 1993, , 275, 269
Lodders, K. 2003, , 591, 1220
Meléndez, J. 2004, , 615, 1042
Piau, L., & Turck-Chièze, S. 2002, , 566, 419
Pinsonneault, M. 1997, , 35, 557
Reetz, J. 1998, PhD thesis, Ludwig-Maximilians Univ.
Rogers, T. M., & Glatzmaier, G. A. 2005, , 620, 432
Rogers, T. M., & Glatzmaier, G. A. 2006, submitted to ApJ (astro-ph/0601668)
Schmelz, J. T., Nasraoui, K., Roames, J. K., Lippner, L. A., & Garst, J. W. 2005, , 634, L197
Scott, P. C., Asplund, M., Grevesse, N., & Sauval, A. J. 2006, in press (astro-ph/0605116)
Shchukina, N., & Trujillo Bueno, J. 2001, , 550, 970
Steffen, M., & Holweger, H. 2002, , 387, 258
Stein, R. F., & Nordlund, A. 1998, , 499, 914
Storey, P. J., & Zeippen, C. J. 2000, , 312, 813
Stuerenburg, S., & Holweger, H. 1990, , 237, 125
Turck-Chièze, S., Couvidat, S., Piau, L., Ferguson, J., Lambert, P., Ballot, J., García, R. A., & Nghiem, P. 2004, Physical Review Letters, 93, 211102
Wedemeyer, S. 2001, , 373, 998
Zhang, K., & Schubert, G. 2006, Reports of Progress in Physics, 69, 1581

(Tables 1–3 are provided in the separate files Table1.tex, Table2.tex, and Table3.tex.)
---
abstract: 'Let $F$ be a totally real field in which $p$ is unramified. Let ${\overline{r}}: G_F {\rightarrow}{\mathrm{GL}}_2(\overline{{\mathbb{F}}}_p)$ be a modular Galois representation which satisfies the Taylor–Wiles hypotheses and is generic at a place $v$ above $p$. Let ${\mathfrak{m}}$ be the corresponding Hecke eigensystem. Then the ${\mathfrak{m}}$-torsion in the mod $p$ cohomology of Shimura curves with full congruence level at $v$ coincides with the ${\mathrm{GL}}_2(k_v)$-representation $D_0({\overline{r}}|_{G_{F_v}})$ constructed by Breuil and Paškūnas. In particular, it depends only on the local representation ${\overline{r}}|_{G_{F_v}}$, and its Jordan–Hölder factors appear with multiplicity one. This builds on and extends work of the author with Morra and Schraen and independently of Hu–Wang, which proved these results when ${\overline{r}}|_{G_{F_v}}$ was additionally assumed to be tamely ramified. The main new tool is a method for computing Taylor–Wiles patched modules of integral projective envelopes using multitype tamely potentially Barsotti–Tate deformation rings and their intersection theory.'
author:
- Daniel Le
bibliography:
- 'multonewild.bib'
title: Multiplicity one for wildly ramified representations
---

Introduction
============

Let $F/{\mathbb{Q}}$ be a totally real field which is unramified at a rational prime $p$. Let ${\mathbb{F}}$ be a finite extension of ${\mathbb{F}}_p$. Suppose that ${\overline{r}}: G_F {\rightarrow}{\mathrm{GL}}_2({\mathbb{F}})$ is a Galois representation occurring in the ${\mathbb{F}}$-cohomology of a Shimura curve $X_{/F}$ with corresponding Hecke eigensystem ${\mathfrak{m}}$ (see §\[sec:main\]). Suppose that the corresponding quaternion algebra $D$ splits at $p$. Let $v$ be a place of $F$ dividing $p$, let $K^v$ be a compact open subgroup of $(D\otimes_F {\mathbb{A}}_F^{\infty,v})^\times$, and let $K_v(n)$ be the $n$-th principal congruence subgroup at $v$. One expects that the analogues of the mod $p$ local Langlands correspondence for ${\mathrm{GL}}_2({\mathbb{Q}}_p)$ and mod $p$ local-global compatibility for ${\mathrm{GL}}_2({\mathbb{Q}})$ describe the ${\mathrm{GL}}_2(F_v)$-representation $$\pi' = {\mathrm{Hom}}_{G_F}({\overline{r}},\varinjlim_n H^1(X(K^vK_v(n)),{\mathbb{F}})[{\mathfrak{m}}_{{\overline{r}}}])$$ in the completed cohomology of $X$, at least up to multiplicities, in terms of ${\overline{\rho}}{\stackrel{\textrm{\tiny{def}}}{=}}{\overline{r}}|_{G_{F_v}}$. In fact, we study a related representation $\pi = (M^{\mathrm{min}})^*$ (see §\[sec:main\]), which is minimal with respect to multiplicities. These analogues are unknown at present, although [@breuil; @EGS] show that if ${\overline{r}}$ satisfies the usual Taylor–Wiles hypotheses and ${\overline{\rho}}$ is generic, then $\pi$ contains one of infinitely many ${\mathrm{GL}}_2(F_v)$-representations constructed by [@BP]. The idea behind the constructions in [@BP], as explained in [@breuil], is that if one can show that the restriction of $\pi$ to the maximal compact subgroup ${\mathrm{GL}}_2({\mathcal{O}}_{F_v})$ satisfies certain multiplicity one properties, then $\pi$ must contain a Diamond diagram of the form $D({\overline{\rho}},\iota)$. These multiplicity one properties, which one might view as minimalist conjectures, were established in [@EGS]. That the family of representations containing a diagram $D({\overline{\rho}},\iota)$ is infinite is unfortunate and warrants the further investigation of $\pi$.
One part of a Diamond diagram $D({\overline{\rho}},\iota)$ is a ${\mathrm{GL}}_2(k_v)$-representation denoted $D_0({\overline{\rho}})$, which is a subrepresentation of $\pi|_{{\mathrm{GL}}_2({\mathcal{O}}_{F_v})}$ (see [@breuil Proposition 9.3]), and thus a subrepresentation of the invariants of $\pi$ under the first principal congruence subgroup $K_v(1)$ of ${\mathrm{GL}}_2({\mathcal{O}}_{F_v})$. Our main result is the following. \[intro:mainthm\] If ${\overline{r}}$ satisfies the Taylor–Wiles hypotheses and ${\overline{\rho}}$ is generic $($see Definition \[def:gen\]$)$, then the ${\mathrm{GL}}_2(k_v)$-representation $\pi^{K_v(1)}$ is isomorphic to $D_0({\overline{\rho}})$. In particular, it only depends on ${\overline{\rho}}$ and is multiplicity free. One can view this result as showing that $\pi$ satisfies a minimality property. A similar result has been announced by Hu–Wang. The main tool in the proof of Theorem \[intro:mainthm\] is the Taylor–Wiles patching method. Diamond and Fujiwara [@D; @F] discovered that the Cohen–Macaulay property of patched modules could be combined with local algebra results of Auslander, Buchsbaum, and Serre to rederive and generalize mod $p$ multiplicity one results of Mazur for modular forms with level away from $p$. [@EGS] proved similar results for modular forms with level at $p$ by introducing two gluing methods to calculate patched modules from smaller ones to which the Diamond–Fujiwara trick applied. The first method is a version of Nakayama’s lemma and uses the submodule structure of mod $p$ reductions of Deligne–Lusztig representations. The second method combines the submodule structure above with the intersection theory of special fibers of tamely potentially Barsotti–Tate deformation rings. When ${\overline{\rho}}$ is tamely ramified, [@HW; @LMS] show that the patched modules of projective envelopes of irreducible ${\mathbb{F}}[{\mathrm{GL}}_2(k_v)]$-modules are cyclic modules by describing the submodule structure of these projective envelopes and using the Nakayama method of [@EGS] (cf.  Proposition \[prop:oldcyc\]). However, the gluing methods of [@EGS] are insufficient when ${\overline{\rho}}$ is wildly ramified. Indeed, these methods only glue together characteristic $p$ patched modules, but there is more than one isomorphism class of ${\mathbb{F}}[{\mathrm{GL}}_2(k_v)]$-modules satisfying the multiplicity one properties for $\pi^{K_v(1)}$ established by [@EGS] when ${\overline{\rho}}$ is wildly ramified. We introduce a variant of the intersection theory method of [@EGS], which uses the intersection theory of integral tamely potentially Barsotti–Tate deformation rings. Let $W({\mathbb{F}})$ denote the Witt vectors of ${\mathbb{F}}$. The first step (Proposition \[prop:oldcyc\]) is to show that the methods of [@EGS] still apply to certain quotients of generic $W({\mathbb{F}})[{\mathrm{GL}}_2(k_v)]$-projective envelopes (which are projective envelopes in the abelian category of $W({\mathbb{F}})[{\mathrm{GL}}_2(k_v)]$-modules generated by lattices in some fixed set of Deligne–Lusztig representations). If such a quotient is not irreducible rationally, then it can be written as a submodule of the direct sum of two smaller quotients with $p$-torsion cokernel (see Proposition \[prop:exseq\]). 
This reflects a kind of transversality: while these subcategories do not give a direct product decomposition of the category of $W({\mathbb{F}})[{\mathrm{GL}}_2(k_v)]$-modules, if two subquotients of lattices in two distinct Deligne–Lusztig representations are isomorphic, they must be $p$-torsion. By exactness of patching and this exact sequence, it turns out that the patched modules of $W({\mathbb{F}})[{\mathrm{GL}}_2(k_v)]$-projective envelopes are then determined by the patched modules of these quotients (this depends crucially on the fact that all such patched modules turn out to be cyclic). It remains to actually compute these patched modules using intersection theory in a multitype Barsotti–Tate framed deformation space, which we define to be the Zariski closure in the unrestricted framed deformation space of ${\overline{\rho}}$ of potentially Barsotti–Tate Galois representations with tame inertial type in some fixed set. That the resulting patched module is cyclic comes from the fact that the multitype Barsotti–Tate deformation rings exhibit a similar kind of transversality: two lattices in potentially Barsotti–Tate Galois representations of two distinct generic tame inertial types can be congruent modulo $p$, but never modulo $p^2$. We now give a brief overview of the following sections. In §2, we generalize some of the results of [@LMS] and prove the key result (Proposition \[prop:exseq\]) gluing integral projective envelopes from their quotients. In §3, we define and calculate multitype Barsotti–Tate deformation rings—this is the other key technical input. To compare Kisin modules for varying tame types, it is much more convenient to choose eigenbases for Kisin modules which are not always gauge bases in the sense of [@EGS §7.3]. This requires generalizing [@LLLM Theorem 4.1]. In §4, we calculate the abstract patched modules of projective envelopes using the Nakayama method and our integral intersection theory method. In §5, we apply the results of §4 to the cohomology of Shimura curves using the Taylor–Wiles method. Acknowledgments --------------- Lemma \[lemma:ca\] originally appeared in [@LLM], and we thank Bao Le Hung and Stefano Morra for allowing us to reproduce it here. The idea to use multitype Barsotti–Tate deformation rings grew out of the joint work ([@LLM]). We thank Bao Le Hung and Stefano Morra for this collaboration and other useful discussions on Kisin modules and étale $\varphi$-modules. The author was supported by the Simons Foundation under an AMS-Simons travel grant, by the National Science Foundation under the Mathematical Sciences Postdoctoral Research Fellowship No.  1703182, and by the Centre International de Rencontres Mathématiques under the Research in Pairs program No.  1877. We thank CIRM for providing hospitality and excellent working conditions while part of this work was carried out. Notation {#subsec:not} -------- If $F$ is any field, we write $\overline{F}$ for a separable closure of $F$ and $G_F:= \mathrm{Gal}(\overline{F}/F)$ for the absolute Galois group of $F$. Let $f\in {\mathbb{N}}$ and $q = p^f$. Let ${\mathcal{O}}_K$ be the Witt vectors $W({\mathbb{F}}_q)$ of ${\mathbb{F}}_q$. Let $K = {\mathcal{O}}_K[p^{-1}]$ be the unramified extension of ${\mathbb{Q}}_p$ of degree $f$. Let $E$ be an extension of $K$ with ring of integers ${\mathcal{O}}$, uniformizer $\varpi$, and residue field ${\mathbb{F}}$. This induces embeddings ${\mathcal{O}}_K{\hookrightarrow}{\mathcal{O}}$ and $\iota_0: {\mathbb{F}}_q {\hookrightarrow}{\mathbb{F}}$. 
For $i\in {\mathbb{Z}}/f$, let $\iota_i = \iota_0\circ \varphi^i$ be the $i$-th Frobenius twist of $\iota_0$. We fix an embedding ${\mathbb{F}}{\hookrightarrow}\overline{{\mathbb{F}}}_q$. We will denote by $(\cdot)^*$ the ${\mathbb{F}}$-linear dual. Let $G$ (resp. $G^{\mathrm{der}}$) be the algebraic group ${\mathrm{Res}}_{{\mathbb{F}}_q/{\mathbb{F}}_p} {\mathrm{GL}}_2$ (resp. ${\mathrm{Res}}_{{\mathbb{F}}_q/{\mathbb{F}}_p} {\mathrm{SL}}_2$), and let $T\subset G$ (resp. $T^{\mathrm{der}} \subset G^{\mathrm{der}}$) be the diagonal torus. Let $X^*(T)$ (resp. $X^*(T^{\mathrm{der}})$) denote the group of characters of $T$ (resp.  $T^{\mathrm{der}}$). By the embeddings $\iota_i$, this group is identified with $X^*(T \times_{{\mathbb{F}}_p} {\mathbb{F}}) \cong X^*(\prod_{i\in {\mathbb{Z}}/f} \mathbb{G}_m^2)$, which is identified with $({\mathbb{Z}}^2)^{{\mathbb{Z}}/f}$ in the usual way. For a character $\mu\in X^*(T)$, we write $\mu_i$ as the $i$-th factor of $\mu$ so that $\mu = \sum_{i\in {\mathbb{Z}}/f} \mu_i$. Let $\eta^{(i)} \in X^*(T)$ (resp.  $\alpha^{(i)} \in X^*(T)$) be the dominant fundamental character (resp.  the positive root) represented by $(1,0)$ (resp.  $(1,-1)$) in the $i$-th factor and $0$ elsewhere. Let $\eta = \sum_{i\in {\mathbb{Z}}/f} \eta^{(i)}$. Let $\omega^{(i)}$ be the restriction of $\eta^{(i)}$ to $T^{\mathrm{der}}$. Let $W$ be the Weyl group of $G$ and $G^{\mathrm{der}}$, which is similarly identified with $S_2^{{\mathbb{Z}}/f}$. Here, $S_2$ denotes the permutation group on two elements. We denote the trivial element of $S_2$ by ${\mathrm{id}}$. Then $W$ acts naturally on $X^*(T)$ and $X^*(T^{\mathrm{der}})$. Let $F$ be the $p$-power Frobenius morphism which acts naturally on $X^*(T)$ and $W$. For a dominant character $\mu\in X^*(T)$ we write $V(\mu)$ for the Weyl module defined in [@JantzenBook II.2.13(1)]. It has a unique simple $G$-quotient $L(\mu)$. If $\mu = \sum_i \mu_i$ is $p$-restricted (i.e. $0\leq \langle \mu,\alpha^{(i)}\rangle \leq p$ for all $i$), then $L(\mu) = \otimes_i L(\mu_i)$ by the Steinberg tensor product theorem as in [@Herzig Theorem 3.9]. Let $F(\mu)$ be the restriction of $L(\mu)$ to ${\mathrm{GL}}_2({\mathbb{F}}_q)$, which remains irreducible by [@Herzig A.1.3]. Note that $F(\mu) \cong F(\lambda)$ if and only if $\mu \cong \lambda \mod{(p-\pi)X^0(T)}$, where $X^0(T)$ is the kernel of the restriction map $X^*(T){\rightarrow}X^*(T^{\mathrm{der}})$. Every irreducible ${\mathrm{GL}}_2({\mathbb{F}}_q)$-representation is of this form, and we call such a representation a [*Serre weight*]{}. Quotients of generic ${\mathrm{GL}}_2({\mathbb{F}}_q)$-projective envelopes {#sec:proj} =========================================================================== Suppose that $\mu \in X^*(T)$ and that $1\leq \langle \mu-\eta,\alpha^{(i)}\rangle \leq p-1$ for all $i\in {\mathbb{Z}}/f$. Let $\sigma$ be $F(\mu-\eta)$. Let ${\widetilde{R}}_\mu$ (resp.  $R_\mu$) be the projective ${\mathcal{O}}_K[{\mathrm{GL}}_2({\mathbb{F}}_q)]$-envelope (resp.  the projective ${\mathbb{F}}_q[{\mathrm{GL}}_2({\mathbb{F}}_q)]$-envelope) of $\sigma$. Let $S$ be the set $\{\pm\omega^{(i)}\}_i$ and let $I$ be a subset of $S$. Recall from [@LMS Definition 3.5] that we attach to a subset $J \subset S$ a Serre weight $\sigma_J$. Let $R_{\mu,I}$ be the universal object among quotients of $R_\mu$ that do not contain $\sigma_{\{\omega\} }$ as a Jordan–Hölder factor for all $\omega$ in $I$. 
Recall from [@LMS §3] that there is a filtration $\operatorname{Fil}^{\mathbf{k}}$ on $R_\mu$ which induces a filtration $\operatorname{Fil}^{\mathbf{k}}$ on $R_{\mu,I}$. Let $W_{{\mathbf{k}},I}$ be $\operatorname{gr}^{\mathbf{k}}R_{\mu,I}$. \[prop:Wk\] We have an isomorphism $W_{{\mathbf{k}},I} \cong \oplus_{J \subset S, {\mathbf{k}}(J) = {\mathbf{k}}, J \cap I = \emptyset} \sigma_J$. This follows from [@LMS Proposition 3.6 and Theorem 3.14]. If $I$ is a subset of $S$ such that $I \cap \{\pm \omega^{(i)}\}$ has size at most one for all $i$, let $T_{\sigma,I}$ be the set of Deligne–Lusztig representations over $K$ of the form $R_w(\mu-w\eta)$ where $w_i = {\mathrm{id}}$ (resp. $w_i \neq {\mathrm{id}}$) if $\omega^{(i)} \in I$ (resp. $-\omega^{(i)} \in I$). Fix an embedding ${\widetilde{R}}_\mu {\hookrightarrow}\oplus_{\sigma(\tau) \in T_{\sigma,\emptyset}} \sigma(\tau)$. Let ${\widetilde{R}}_{\mu,I}$ be the quotient of ${\widetilde{R}}_\mu$ isotypic for the set $T_{\sigma,I}$ (which does not depend on the above embedding). Note that ${\widetilde{R}}_{\mu,\emptyset}$ is equal to ${\widetilde{R}}_\mu$. \[prop:projred\] The reduction of ${\widetilde{R}}_{\mu,I}$ modulo $p$ is $R_{\mu,I}$. For each $\omega\in I$, $\sigma_{\{\omega\} } \notin {\mathrm{JH}}({\overline{\sigma}}(\tau))$ for all $\sigma(\tau) \in T_{\sigma,I}$. Thus, there is a canonical quotient map $R_{\mu,I} {\rightarrow}\overline{R}_{\mu,I}$, where $\overline{R}_{\mu,I}$ is the reduction of ${\widetilde{R}}_{\mu,I}$. By Proposition \[prop:Wk\], $R_{\mu,I}$ has length $2^{2f-\#I}$. Since $\overline{R}_{\mu,I}$ is the reduction of a lattice in the direct sum of $2^{f-\#I}$ types, each of whose reductions has length $2^f$ (see [@diamond]), it also has length $2^{2f-\#I}$. Since both objects have the same length, this surjection must be an isomorphism. Again, let $I\subset S$. Let $W_{{\mathbf{k}},{\mathbf{k}}+1,I}$ be $\operatorname{Fil}^{\mathbf{k}}R_{\mu,I}/(\operatorname{Fil}^{{\mathbf{k}}+2} R_{\mu,I} \cap \operatorname{Fil}^{\mathbf{k}}R_{\mu,I})$. Note that $W_{{\mathbf{k}},{\mathbf{k}}+1,I}$ is multiplicity free since $W_{{\mathbf{k}},{\mathbf{k}}+1,\emptyset}$ (which is $W_{{\mathbf{k}},{\mathbf{k}}+1}$ in [@LMS §3]) is by [@LMS Proposition 3.6 and Lemma 3.7]. \[prop:ext\] Suppose that $J \subset J'$, $\#J'\setminus J = 1$, and $J' \cap I = \emptyset$. Let ${\mathbf{k}}$ and ${\mathbf{k}}'$ be ${\mathbf{k}}(J)$ and ${\mathbf{k}}(J')$, respectively. Then there is a subquotient of $W_{{\mathbf{k}},{\mathbf{k}}+1,I}$ which is the unique (up to isomorphism) nontrivial extension of $\sigma_J$ by $\sigma_{J'}$. This follows immediately from Proposition \[prop:Wk\] and [@LMS Proposition 3.8]. \[prop:exseq\] Suppose that the size of $I \cap \{\pm\omega^{(i)}\}$ is at most one for all $i$ and that $I \cap \{\pm\omega^{(j)}\} = \emptyset$ for some $j$. Then there is an exact sequence $$0 {\rightarrow}{\widetilde{R}}_{\mu,I} {\rightarrow}{\widetilde{R}}_{\mu,I\cup \{\omega^{(j)}\} } \oplus {\widetilde{R}}_{\mu,I\cup \{-\omega^{(j)}\} } {\rightarrow}R_{\mu,I\cup \{\pm\omega^{(j)}\} } {\rightarrow}0,$$ where the second (resp. third) map is the sum (resp. difference) of the natural projections. The second map is clearly injective since it is injective after inverting $p$ and ${\widetilde{R}}_{\mu,I}$ is ${\mathcal{O}}_K$-flat. We claim that the cokernel of this map is $p$-torsion.
Let $\sigma_{\{\omega^{(j)}\} } = F(\mu'-\eta)$ and consider a map ${\widetilde{R}}_{\mu'} {\rightarrow}{\widetilde{R}}_{\mu,I}$ such that the composition with the projection ${\widetilde{R}}_{\mu,I} {\twoheadrightarrow}R_{\mu,I} {\twoheadrightarrow}R_{\mu,I}/\operatorname{Fil}^2 R_{\mu,I}$ is nonzero. The composition of ${\widetilde{R}}_{\mu'} {\rightarrow}{\widetilde{R}}_{\mu,I}$ with the natural surjection ${\widetilde{R}}_{\mu,I} {\twoheadrightarrow}{\widetilde{R}}_{\mu,I\cup \{\omega^{(j)}\} }$ is zero since $\sigma_{\{\omega\} }\notin {\mathrm{JH}}(R_{\mu,I\cup \{\omega^{(j)}\} })$. On the other hand, we claim that the image of the composition ${\widetilde{R}}_{\mu'} {\rightarrow}{\widetilde{R}}_{\mu,I}$ with the natural surjection ${\widetilde{R}}_{\mu,I} {\twoheadrightarrow}{\widetilde{R}}_{\mu,I\cup \{-\omega^{(j)}\} }$ contains $p{\widetilde{R}}_{\mu,I\cup \{-\omega^{(j)}\} }$. By symmetry, we would see that the image of ${\widetilde{R}}_{\mu,I} {\rightarrow}{\widetilde{R}}_{\mu,I\cup \{\omega^{(j)}\} } \oplus {\widetilde{R}}_{\mu,I\cup \{-\omega^{(j)}\} }$ contains $p{\widetilde{R}}_{\mu,I\cup \{\omega^{(j)}\} }$, and thus $p{\widetilde{R}}_{\mu,I\cup \{\omega^{(j)}\} } \oplus p{\widetilde{R}}_{\mu,I\cup \{-\omega^{(j)}\} }$. Fix a map ${\widetilde{R}}_\mu{\rightarrow}{\widetilde{R}}_{\mu'}$ such that the composition with the projection to $R_{\mu'}/\operatorname{Fil}^2 R_{\mu'}$ is nonzero. Then we claim that the image, denoted $S$, of the composition of ${\widetilde{R}}_\mu{\rightarrow}{\widetilde{R}}_{\mu'}$ with the above ${\widetilde{R}}_{\mu'} {\rightarrow}{\widetilde{R}}_{\mu,I} {\twoheadrightarrow}{\widetilde{R}}_{\mu,I\cup \{-\omega^{(j)}\} }$ is $p{\widetilde{R}}_{\mu,I\cup \{-\omega^{(j)}\} }$. On the one hand, we see that $S$ is in $p{\widetilde{R}}_{\mu,I\cup \{-\omega^{(j)}\} }$ by reducing modulo $p$ and using Propositions \[prop:projred\] and \[prop:ext\]. On the other hand, the projection of $S$ to $\sigma^\circ(\tau)$ contains $p \sigma^\circ(\tau)$ for any $\sigma(\tau)\in T_{\sigma,I \cup \{-\omega^{(j)}\}}$ by [@EGS Theorem 5.1.1]. Thus, the composition $S \subset p{\widetilde{R}}_{\mu,I\cup \{-\omega^{(j)}\} } {\twoheadrightarrow}p \sigma^\circ(\tau)$ is an isomorphism upon taking cosocles. We see that $S$ must equal $p{\widetilde{R}}_{\mu,I\cup \{-\omega^{(j)}\} }$. If we let $R$ be the cokernel of the second map, then the exact sequence $$\label{eqn:char0} 0 {\rightarrow}{\widetilde{R}}_{\mu,I} {\rightarrow}{\widetilde{R}}_{\mu,I\cup \{\omega^{(j)}\} } \oplus {\widetilde{R}}_{\mu,I\cup \{-\omega^{(j)}\} } {\rightarrow}R {\rightarrow}0$$ induces an exact sequence $$\label{eqn:charp} R_{\mu,I} {\rightarrow}R_{\mu,I\cup \{\omega^{(j)}\} } \oplus R_{\mu,I\cup \{-\omega^{(j)}\} } {\rightarrow}R {\rightarrow}0.$$ On cosocles, the second map in (\[eqn:char0\]) is the sum of two isomorphisms. Thus, on cosocles, the third map in (\[eqn:char0\]) is necessarily the sum of two isomorphisms. We conclude that the third map in (\[eqn:char0\]) is the sum of two quotient maps. By definition, the maximal representation which is a quotient of both $R_{\mu,I\cup \{\omega^{(j)}\} }$ and $R_{\mu,I\cup \{-\omega^{(j)}\} }$ is $R_{\mu,I\cup \{\pm\omega^{(j)}\} }$. Thus, there is a surjection $R_{\mu,I\cup \{-\omega^{(j)}\} }{\twoheadrightarrow}R$. 
On the other hand, it is easy to see that the composition $R_{\mu,I} {\rightarrow}R_{\mu,I\cup \{\omega^{(j)}\} } \oplus R_{\mu,I\cup \{-\omega^{(j)}\} } {\rightarrow}R_{\mu,I\cup \{\pm\omega^{(j)}\} }$ is zero, where the second map is the difference of the natural projections. Thus, there is a surjection $R {\twoheadrightarrow}R_{\mu,I\cup \{\pm\omega^{(j)}\} }$. Since $R$ and $R_{\mu,I\cup \{\pm\omega^{(j)}\} }$ are finite length objects, they must be isomorphic. Multitype Barsotti–Tate deformation rings {#sec:defring} ========================================= Some integral $p$-adic Hodge theory {#subsec:hodge} ----------------------------------- Let $K_\infty$ be the infinite extension $K((-p)^{1/p^\infty})$ of $K$. Let ${\mathcal{O}}_{{\mathcal{E}},K}$ denote the $p$-adic completion of ${\mathcal{O}}_K(\!(v)\!)$, and let ${\mathcal{O}}_{{\mathcal{E}}^{\mathrm{un}},K}$ denote a maximal connected étale extension of ${\mathcal{O}}_{{\mathcal{E}},K}$. Fontaine defined an exact anti-equivalence of tensor categories $$\mathbb{V}^*: \Phi\textrm{-}\mathrm{Mod}^{\operatorname{et}}(R) {\rightarrow}\mathrm{Rep}_{G_{K_{\infty}}}(R)$$ by $\mathbb{V}^*({\mathcal{M}}) = {\mathrm{Hom}}_{\Phi\textrm{-}\mathrm{Mod}}({\mathcal{M}},{\mathcal{O}}_{{\mathcal{E}}^{\mathrm{un}},K})$. Let $I({\overline{\rho}},\mu)$ be a subset of $S = \{\pm \omega^{(i)}\}_i$ with $\#(I({\overline{\rho}},\mu)\cap \{\pm\omega^{(i)}\}) \leq 1$ for all $i$. Let $\alpha$, $\alpha' \in {\mathbb{F}}^\times$ and $a_i \in {\mathbb{F}}$ for $i\in {\mathbb{Z}}/f$ such that $a_i = 0$ if and only if $\omega^{(i)} \in I({\overline{\rho}},\mu)$. Let $\mu\in X^*(T)$ be such that $\mu_i = (c_i,1)$ with $2 < c_i < p-1$. Let ${\mathcal{M}}= \prod_i {\mathbb{F}}((v)){\mathfrak{e}}^i \oplus {\mathbb{F}}((v)){\mathfrak{f}}^i$ be the $\varphi$-module defined by $$\begin{aligned} -\omega^{(f-i)}\notin I({\overline{\rho}},\mu) : & & \begin{cases} \varphi({\mathfrak{e}}^{i-1}) & = v^{c_{f-i}}{\mathfrak{e}}^i+a_{i-1}v^{c_{f-i}}{\mathfrak{f}}^i \\ \varphi({\mathfrak{f}}^{i-1}) & = v{\mathfrak{f}}^i \end{cases} \\ -\omega^{(f-i)}\in I({\overline{\rho}},\mu) : & & \begin{cases} \varphi(\mathfrak{e}^{i-1}) & =v^{c_{f-i}}{\mathfrak{f}}^i \\ \varphi(\mathfrak{f}^{i-1}) & = v{\mathfrak{e}}^i \end{cases} \\\end{aligned}$$ for $i \neq 0$ and $$\begin{aligned} -\omega^{(0)}\notin I({\overline{\rho}},\mu) : & & \begin{cases} \varphi({\mathfrak{e}}^{f-1}) & = \alpha v^{c_0}{\mathfrak{e}}^0+\alpha a_{f-1}v^{c_0}{\mathfrak{f}}^0 \\ \varphi({\mathfrak{f}}^{f-1}) & = \alpha' v{\mathfrak{f}}^0 \end{cases} \\ -\omega^{(0)}\in I({\overline{\rho}},\mu) : & & \begin{cases} \varphi(\mathfrak{e}^{f-1}) & = \alpha v^{c_0}{\mathfrak{f}}^0 \\ \varphi(\mathfrak{f}^{f-1}) & = \alpha' v{\mathfrak{e}}^0. \end{cases} \\\end{aligned}$$ To describe tamely potentially Barsotti–Tate deformation rings, we will use the theory of Kisin modules with descent datum. Let $\tau$ be the tame principal series type $\eta_1\oplus \eta_2:I_K {\rightarrow}{\mathrm{GL}}_2({\mathbb{F}}_q)$ where $\eta_k = \omega_f^{-\mathbf{a}_k^{(0)}}$ for $k=1$ and $2$ and $$\mathbf{a}_k^{(j)} = \sum_{i=0}^{f-1} a_{k,-j+i}p^i,$$ where $a_{k,i} \in {\mathbb{Z}}$. We will suppose throughout that $2\leq |a_{1,i}-a_{2,i}| \leq p-3$ for all $i\in {\mathbb{Z}}/f$ and call such a tame principal series type [*generic*]{}. We will say a tame inertial type $\tau'$ is generic if its restriction to the quadratic unramified extension of $K$ is a generic principal series type. 
The [*orientation*]{} of $(\mathbf{a}_1,\mathbf{a}_2)$ is the element $s\in W$ such that $\mathbf{a}^{(j)}_{s_j(1)}>\mathbf{a}^{(j)}_{s_j(2)}$. By an abuse of notation, we say that the orientation of $(\mathbf{a}_1,\mathbf{a}_2)$ is an orientation for $\tau$ if $\tau$ can be expressed in terms of $(\mathbf{a}_1,\mathbf{a}_2)$ as above. Let $R$ be an ${\mathcal{O}}$-algebra. For a principal series type $\tau$, we will consider Kisin modules over $R$ with descent datum of type $\tau$ (see [@LLLM Definition 2.4]). We will say that such a Kisin module ${\mathfrak{M}}_R$ is in $Y^{(0,1),\tau}(R)$ if the cokernels of $\phi_{{\mathfrak{M}}_R}:\varphi^*({\mathfrak{M}}_R) {\rightarrow}{\mathfrak{M}}_R$ and $\phi_{\det {\mathfrak{M}}_R}:\varphi^*(\det {\mathfrak{M}}_R) {\rightarrow}\det {\mathfrak{M}}_R$ are annihilated by $E(u) = u^{q-1}+p$. Let $v$ be $u^{q-1}$. Let $s$ be an orientation for a generic tame principal series type $\tau$ and ${\mathfrak{M}}_R$ be an element of $Y^{(0,1),\tau}(R)$. Then ${\mathfrak{M}}_R$ can be described by the matrices $\operatorname{Mat}_{\beta}(\phi^{(i)}_{{\mathfrak{M}}_R \otimes_R {\mathbb{F}},s_{i+1}(2)})$ after choosing an eigenbasis $\beta$ (see [@LLLM Definition 2.11]). The following is a generalization of [@LLLM Theorem 4.1] in the case of ${\mathrm{GL}}_2$, where $\beta$ is allowed to have a slightly more general form than a gauge basis. \[thm:fhdef\] Let $\tau$ be a tame generic principal series type and let $s = (s_i)_i \in W$ be an orientation for $\tau$. Let $R$ be a complete local Noetherian ${\mathcal{O}}$-algebra with residue field ${\mathbb{F}}$. Let ${\mathfrak{M}}_R\in Y^{(0,1),\tau}(R)$ with $\operatorname{Mat}_{\overline{\beta}}(\phi^{(i)}_{{\mathfrak{M}}_R \otimes_R {\mathbb{F}},s_{i+1}(2)})$ given by $$\overline{A}_1 = \begin{pmatrix} v & \\ a_iv & 1 \\ \end{pmatrix}, \overline{A}_2 = \begin{pmatrix} & 1 \\ v & \\ \end{pmatrix}, \textrm{ or } \overline{A}_3 = \begin{pmatrix} & 1 \\ v & a_i \\ \end{pmatrix}$$ for $i\neq 0$ and $\overline{A}_j\begin{pmatrix} \alpha & \\ & \alpha' \\ \end{pmatrix}$ for $i=0$, where $\overline{\beta}$ is an eigenbasis for ${\mathfrak{M}}_R \otimes_R {\mathbb{F}}$. Then there is a unique eigenbasis $\beta$ of ${\mathfrak{M}}_R$ up to scaling lifting $\overline{\beta}$ such that $\operatorname{Mat}_\beta(\phi^{(i)}_{{\mathfrak{M}}_R,s_{i+1}(2)})$ is given by $$A_1 = \begin{pmatrix} v+p & \\ (X_i+[a_i])v & 1 \\ \end{pmatrix}, A_2 = \begin{pmatrix} -Y_i & 1 \\ v & X_i \\ \end{pmatrix}, \textrm{ or } A_3 = \begin{pmatrix} -p(X_i+[a_i])^{-1} & 1 \\ v & X_i+[a_i] \\ \end{pmatrix},$$ respectively, for $i\neq 0$ and $A_jD(\alpha,\alpha')$ for $i=0$, where $[\cdot]$ denotes the Teichmüller lift, $X_iY_i = p$ for $A_2$, and $D(\alpha,\alpha') = \begin{pmatrix} [\alpha]+ X_\alpha& \\ & [\alpha']+X_{\alpha'} \\ \end{pmatrix}$. The proof is similar to the proof of [@LLLM Theorems 4.1 and 4.16] which prove existence and uniqueness of $\beta$, respectively. We describe some of the key points. We modify [@LLLM Definition 4.2], defining $d_R(P) = \min_k 2v_R(r_k)+k$ if $P = \sum_k r_k v^k \in R[\![v]\!]$. Then the analogue of [@LLLM Proposition 4.3] holds (see [@LLLM Remark 4.4]). The entry in the middle column of [@LLLM Table 5] becomes $$\begin{pmatrix} 1^* & \\ v(\leq 0) & 0^* \\ \end{pmatrix}, \begin{pmatrix} \leq 0 & 0^* \\ 1^* & \leq 0 \\ \end{pmatrix}, \textrm{ or } \begin{pmatrix} \leq 0 & 0^* \\ 1^* & \leq 0 \\ \end{pmatrix},$$ respectively, and we modify [@LLLM Definition 4.5] appropriately. 
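As a quick plausibility check on the three shapes above (an illustration only, not part of the proof that follows), one can verify symbolically that each of $A_1$, $A_2$, $A_3$ has determinant $\pm(v+p)$, i.e. divisible by $E(u)=v+p$ as demanded by the condition defining $Y^{(0,1),\tau}$; in the sketch below the symbols `X`, `Y`, `a` stand in for $X_i$, $Y_i$ and the Teichmüller lift $[a_i]$, and the relation $X_iY_i=p$ is imposed for $A_2$ by substitution.

```python
# Symbolic sanity check (SymPy) that det(A_1), det(A_2), det(A_3) equal +-(v + p),
# i.e. are divisible by E(u) = v + p.  X, Y, a are stand-ins for X_i, Y_i and the
# Teichmueller lift [a_i]; X*Y = p is imposed for A_2 by substitution.
import sympy as sp

v, p, X, Y, a = sp.symbols('v p X Y a')

A1 = sp.Matrix([[v + p, 0], [(X + a) * v, 1]])
A2 = sp.Matrix([[-Y, 1], [v, X]])
A3 = sp.Matrix([[-p / (X + a), 1], [v, X + a]])

print(sp.simplify(A1.det()))                 # v + p
print(sp.simplify(A2.det().subs(X * Y, p)))  # -(v + p), once X*Y = p
print(sp.simplify(A3.det()))                 # -(v + p)
```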
The analogues of [@LLLM Proposition 4.6, Lemma 4.10, and Proposition 4.11] hold with the following two caveats. 1. We define the pivots in the case of $\overline{A}_3$ to be the same as the pivots in the case of $\overline{A}_2$. 2. The second equation of [@LLLM Lemma 4.10] is changed to $A_{22}^{(i)} = vP_{22} + [a_i] + Q_{22}$ when $\overline{A}^{(i)} = \overline{A}_3$. Then the analogues of [@LLLM Proposition 4.13 and Lemma 4.14] give the eigenbasis $\beta$. We deduce the forms of $A_i$ from the condition that $v+p$ must divide the determinant. Finally, the analogue of [@LLLM Theorem 4.16] proves the uniqueness of $\beta$ up to scaling. \[prop:1defring\] Suppose that ${\overline{\rho}}:G_K {\rightarrow}{\mathrm{GL}}_2({\mathbb{F}})$ is a Galois representation such that the restriction ${\overline{\rho}}|_{G_{K_\infty}}$ is isomorphic to $\mathbb{V}^*({\mathcal{M}})$, and that $\tau$ is the tame generic inertial type with $\sigma(\tau) = R_s(\mu-s\eta)$. Let $R$ be the ring ${\mathcal{O}}[\![ (X_i,Y_i)_{i=0}^{f-1}, X_\alpha,X_{\alpha'}]\!]/(f_i)$ where $f_i = X_iY_i-p$ if $-s_i\omega^{(i)} \in I({\overline{\rho}},\mu)$ and $Y_i$ otherwise. Let ${\mathcal{M}}_R = \prod_i R((v)){\mathfrak{e}}^i \oplus R((v)){\mathfrak{f}}^i$ be the $\varphi$-module defined by $$\begin{aligned} s_{f-i} = {\mathrm{id}}, -\omega^{(f-i)} \notin I({\overline{\rho}},\mu) : & & \begin{cases} \varphi({\mathfrak{e}}^{i-1}) & = v^{c_{f-i}-1}(v+p){\mathfrak{e}}^i+(X_{i-1}+[a_{i-1}])v^{c_{f-i}}{\mathfrak{f}}^i \\ \varphi({\mathfrak{f}}^{i-1}) & = v{\mathfrak{f}}^j \end{cases} \\ s_{f-i} = {\mathrm{id}}, -\omega^{(f-i)}\in I({\overline{\rho}},\mu) : & & \begin{cases} \varphi(\mathfrak{e}^{i-1}) & = -Y_{i-1}v^{c_{f-i}-1}\mathfrak{e}^i + v^{c_{f-i}}{\mathfrak{f}}^i \\ \varphi(\mathfrak{f}^{i-1}) & = v{\mathfrak{e}}^i + X_{i-1}v {\mathfrak{f}}^i. \end{cases} \\ s_{f-i} \neq {\mathrm{id}}, \{\pm\omega^{(f-i)}\}\cap I({\overline{\rho}},\mu) = \emptyset : & & \begin{cases} \varphi(\mathfrak{e}^{i-1}) & = v^{c_{f-i}}\mathfrak{e}^i + (X_{i-1}+[a_{i-1}])v^{c_{f-i}}{\mathfrak{f}}^i \\ \varphi(\mathfrak{f}^{i-1}) & = -p(X_{i-1}+[a_{i-1}])^{-1}{\mathfrak{e}}^i+v\mathfrak{f}^i \end{cases} \\ s_{f-i} \neq {\mathrm{id}}, \omega^{(f-i)}\in I({\overline{\rho}},\mu) : & & \begin{cases} \varphi(\mathfrak{e}^{i-1}) & = v^{c_{f-i}}\mathfrak{e}^i + X_{i-1}v^{c_{f-i}}{\mathfrak{f}}^i \\ \varphi(\mathfrak{f}^{i-1}) & = -Y_{i-1}{\mathfrak{e}}^i + v {\mathfrak{f}}^i \end{cases} \\ s_{f-i} \neq {\mathrm{id}}, -\omega^{(f-i)}\in I({\overline{\rho}},\mu) : & & \begin{cases} \varphi(\mathfrak{e}^{i-1}) & = v^{c_{f-i}}{\mathfrak{f}}^i \\ \varphi(\mathfrak{f}^{i-1}) & = (v+p){\mathfrak{e}}^i + X_{i-1}v {\mathfrak{f}}^i, \end{cases} \\\end{aligned}$$ with the usual modification for $i=0$. Then $\mathbb{V}^*({\mathcal{M}}_R)$ is the restriction to $G_{K_\infty}$ of a versal potentially Barsotti–Tate deformation of ${\overline{\rho}}$ of type $\tau$. Define $w\in W$ and $s_\tau \in S_2$ to be the unique elements such that $w_0 = {\mathrm{id}}$ and $F^{-1}(w) s w^{-1} = (s_\tau,{\mathrm{id}},\ldots, {\mathrm{id}})$. Then the Deligne–Lusztig representations $R_s(\mu-s\eta)$ and $R_{(s_\tau,{\mathrm{id}},\ldots, {\mathrm{id}})}(F^{-1}(w)\cdot(\mu-s\eta))$ are isomorphic by [@Herzig Lemma 4.2]. Define $s' = (s'_i)_i$ by $s'_i = w_{f-i}$ for $i\in {\mathbb{Z}}/f$. Then one easily checks that $s'$ is an orientation for $F^{-1}(w)\cdot(\mu-s\eta)$. 
By Theorem \[thm:fhdef\] and the analogue of [@LLLM §5.2 and §6], the Kisin module ${\mathfrak{M}}_R$ (with quadratic unramified descent) of tame inertial type (the quadratic unramified base change of) $\tau(s_\tau,-F^{-1}(w)\cdot(\mu-s\eta))$ defined by $A^{(i-1)} = \operatorname{Mat}_\beta(\phi^{(i-1)}_{{\mathfrak{M}}_R,s_{i}'(2)}) = A_1$ (resp.  $A_2$ and $A_3$) in the first and fifth cases (resp.  in the second and fourth cases and in the third case) has the property that (the quadratic unramified descent of) $T_{dd}^*({\mathfrak{M}}_R)$ is the restriction to $G_{K_\infty}$ of a versal potentially Barsotti–Tate deformation of ${\overline{\rho}}$ of type $\tau$. Let $L = K((-p)^{\frac{1}{e}})$. We claim that ${\mathfrak{M}}_R \otimes_{{\mathcal{O}}_{{\mathcal{E}},K}} {\mathcal{O}}_{{\mathcal{E}},L} = {\mathcal{M}}_R$. Let $v^\lambda$ denote the torus element obtained by applying the coweight $\lambda$ to $v$. By [@LLLM2 Proposition 3.1.2], we see that a Kisin module (with quadratic unramified descent) of tame inertial type (the quadratic unramified base change of) $\tau$ with $\operatorname{Mat}_\beta(\phi^{(i)}_{{\mathfrak{M}},s_{i+1}'(2)})$ given by $A^{(i)}$ (resp. $A^{(i)}s_0^{-1} D(\alpha,\alpha')s_0$) for $i\neq f-1$ (resp. for $i=f-1$) gives a $\varphi$-module ${\mathcal{M}}= \prod_i {\mathbb{F}}((v)){{\mathfrak{e}}'}^i \oplus {\mathbb{F}}((v)){{\mathfrak{f}}'}^i$ with $\varphi({{\mathfrak{e}}'}^{i-1},{{\mathfrak{f}}'}^{i-1}) = M_{i-1}'({{\mathfrak{e}}'}^i,{{\mathfrak{f}}'}^i)$ where $$\begin{aligned} M_i' &= s_{i+1}' A^{(i)} v^{(s'_{i+1})^{-1}w_{f-i}(\mu_{f-1-i} - s_{f-1-i}\eta)}(s'_{i+1})^{-1} \\ &= w_{f-1-i} A^{(i)} v^{w_{f-1-i}^{-1}w_{f-i}(\mu_{f-1-i} - s_{f-1-i}\eta)}w_{f-1-i}^{-1} \\ &= w_{f-1-i} A^{(i)} v^{s_{f-1-i}^{-1}(\mu_{f-1-i} - s_{f-1-i}\eta)}w_{f-1-i}^{-1}\end{aligned}$$ for $i<f-1$ and $M_{f-1}' = A^{(f-1)}s_0^{-1} D(\alpha,\alpha')s_0s_\tau^{-1}v^{w_1(\mu_0 - s_0\eta)}$. Changing to the bases $({\mathfrak{e}}^i,{\mathfrak{f}}^i) = ({{\mathfrak{e}}'}^i,{{\mathfrak{f}}'}^i)w_{f-1-i}$, we see that ${\mathcal{M}}$ is given by $(M_i)_i$ where $$\begin{aligned} M_i &= A^{(i)} v^{s_{f-1-i}^{-1}(\mu_{f-1-i} - s_{f-1-i}\eta)}w_{f-1-i}^{-1} w_{f-i} \\ &= A^{(i)} v^{s_{f-1-i}^{-1}(\mu_{f-1-i} - s_{f-1-i}\eta)}s_{f-1-i}^{-1} \\ &= A^{(i)} s_{f-1-i}^{-1} v^{\mu_{f-1-i} - s_{f-1-i}\eta}\end{aligned}$$ for $i < f-1$ and $$\begin{aligned} M'_{f-1} &= A^{(f-1)}s_0^{-1} D(\alpha,\alpha')s_0s_\tau^{-1}v^{w_1(\mu_0 - s_0\eta)}w_1 D(\alpha,\alpha')\\ &= A^{(f-1)}s_0^{-1} D(\alpha,\alpha')s_0s_\tau^{-1}w_1 v^{\mu_0 - s_0\eta} D(\alpha,\alpha')\\ &= A^{(f-1)}s_0^{-1} v^{\mu_0 - s_0\eta}D(\alpha,\alpha').\end{aligned}$$ The proposition is now deduced from substituting for $A^{(i)}$, $s$, and $\mu$. Let ${\overline{\rho}}:G_K {\rightarrow}{\mathrm{GL}}_2({\mathbb{F}})$ is a Galois representation. If $\tau$ is an inertial type, let $R^\tau$ parameterize potentially Barsotti–Tate liftings of ${\overline{\rho}}$ of type $\tau$. If $T$ is a set of inertial types for $K$, then we let ${\mathrm{Spec}\ }R^T$ be the Zariski closure of $\cup_{\tau \in T} {\mathrm{Spec}\ }R^\tau[p^{-1}]$ in the universal lifting space ${\mathrm{Spec}\ }R_{{\overline{\rho}}}^\square$ of ${\overline{\rho}}$. \[thm:defring\] Suppose that ${\overline{\rho}}:G_K {\rightarrow}{\mathrm{GL}}_2({\mathbb{F}})$ is a Galois representation such that the restriction ${\overline{\rho}}|_{G_{K_\infty}}$ is isomorphic to $\mathbb{V}^*({\mathcal{M}})$. 
There is an isomorphism from $R^{T_{\sigma,\emptyset}}$ to a formal power series ring over ${\mathcal{O}}[\![ (X_i,Y_i)_{i=0}^{f-1}]\!] /(Y_ig_i)_i$, where $g_i = Y_i-p$ $($resp.  $g_i = X_iY_i-p)$ if $I({\overline{\rho}},\mu)\cap \{\pm\omega^{(i)} \} = \emptyset$ $($resp.  $I({\overline{\rho}},\mu)\cap \{\pm\omega^{(i)} \} \neq \emptyset)$ such that if $I \subset S$ with $\# (I \cap \{\pm \omega^{(i)}\})\leq 1$ for all $i$, then $R^{T_{\sigma,I}}$ is the quotient of $R^{T_{\sigma,\emptyset}}$ by the ideal $(f_i(I))_i$ where $$\begin{aligned} f_i(I) = & & \begin{cases} 0 &\textrm{ if }\{\pm \omega^{(f-1-i)}\} \cap I = \emptyset \\ X_iY_i-p &\textrm{ if } \{\pm \omega^{(f-1-i)}\} \subset I\cup I({\overline{\rho}},\mu) \\ Y_i &\textrm{ if } \omega^{(f-1-i)} \in I \cap I({\overline{\rho}},\mu) \\ Y_i-p &\textrm{ if } -\omega^{(f-1-i)} \in I \cap I({\overline{\rho}},\mu) \end{cases}\end{aligned}$$ Since $R^{T_{\sigma,I}}$ is naturally a quotient of $R_{{\overline{\rho}}|_{G_{K_\infty}}}^\square$ by [@EGS Lemma 7.4.3], it suffices to compute the Zariski closure of $\cup_{\tau \in T_{\sigma,I}} {\mathrm{Spec}\ }R^\tau[p^{-1}]$ in ${\mathrm{Spec}\ }R_{{\overline{\rho}}|_{G_{K_\infty}}}^\square$. Let $R$ be the ring ${\mathcal{O}}[\![ (X_i,Y_i)_{i=0}^{f-1}, X_\alpha,X_{\alpha'}]\!] /(Y_ig_i)_i$ and consider the deformation ${\mathcal{M}}_R = \prod_i R((v)){\mathfrak{e}}^i \oplus R((v)){\mathfrak{f}}^i$ of ${\mathcal{M}}$ defined by $$\begin{aligned} \{\pm\omega^{(f-i)}\}\cap I({\overline{\rho}},\mu) = \emptyset : & & \begin{cases} \varphi({\mathfrak{e}}^{i-1}) & = v^{c_{f-i}-1}(v+p-Y_{i-1}){\mathfrak{e}}^i+v^{c_{f-i}}(X_{i-1}+[a_{i-1}]){\mathfrak{f}}^i \\ \varphi({\mathfrak{f}}^{i-1}) & = -Y_{i-1}(X_{i-1}+[a_{i-1}])^{-1}{\mathfrak{e}}^i+v{\mathfrak{f}}^i \end{cases} \\ \omega^{(f-i)}\in I({\overline{\rho}},\mu) : & & \begin{cases} \varphi(\mathfrak{e}^{i-1}) & = v^{c_{f-i}-1}(v+p-X_{i-1}Y_{i-1})\mathfrak{e}^i + X_{i-1}v^{c_{f-i}}{\mathfrak{f}}^i \\ \varphi(\mathfrak{f}^{i-1}) & = -Y_{i-1}{\mathfrak{e}}^i + v\mathfrak{f}^i \end{cases} \\ -\omega^{(f-i)}\in I({\overline{\rho}},\mu) : & & \begin{cases} \varphi(\mathfrak{e}^{i-1}) & = -Y_{i-1}v^{c_{f-i}-1}{\mathfrak{e}}^i + v^{c_{f-i}}\mathfrak{f}^i \\ \varphi(\mathfrak{f}^{i-1}) & = (v + p-X_{i-1}Y_{i-1}){\mathfrak{e}}^i + X_{i-1} v {\mathfrak{f}}^i. \end{cases} \\\end{aligned}$$ Define the deformation functor $D^\square$ by $D^\square(A) = \{(f:R{\rightarrow}A,b_A)\}/{\cong}$ for $A$ a complete local Noetherian ${\mathcal{O}}$-algebra, where $b_A$ is a basis for the free rank two $A$-module $\mathbb{V}^*(f^*({\mathcal{M}}_R))$ whose reduction modulo ${\mathfrak{m}}_A$ gives ${\overline{\rho}}$. Then the natural map $D^\square {\rightarrow}{\mathrm{Spf}}R$ is formally smooth. Let $D^\square$ be ${\mathrm{Spf}}R^\square$. One can rescale ${\mathfrak{e}}^0$ and ${\mathfrak{f}}^0$ by units, and rescale the other basis vectors appropriately so that the coefficients in the definition of $\varphi$ which are $1$ remain $1$. This gives a $\widehat{\mathbf{G}}_m^2$-action on $R$ where the diagonal acts trivially, and orbits give isomorphic $\varphi$-modules. We claim that the natural map ${\mathrm{Spf}}R^\square/\widehat{\mathbf{G}}_m^2 {\rightarrow}{\mathrm{Spf}}R_{{\overline{\rho}}|_{G_{K_\infty}}}^\square$ is a closed embedding. It suffices to show injectivity on reduced tangent spaces. 
Suppose that $t$ is a reduced tangent vector of ${\mathrm{Spf}}R^\square/\widehat{\mathbf{G}}_m^2$ which maps to zero in ${\mathrm{Spf}}R_{{\overline{\rho}}|_{G_{K_\infty}}}^\square$. By formal smoothness, we can extend this to a map $t: R^\square {\rightarrow}{\mathbb{F}}[\varepsilon]/(\varepsilon^2)$. Let ${\mathcal{M}}_t$ be ${\mathcal{M}}_R \otimes_R {\mathbb{F}}[\varepsilon]/(\varepsilon^2)$ so that ${\mathcal{M}}_t$ and ${\mathcal{M}}$ are isomorphic. Let $M_i$ (resp.  $M_{t,i}$) be the matrices such that $\varphi({\mathfrak{e}}^i\otimes_R {\mathbb{F}},{\mathfrak{f}}^i\otimes_R {\mathbb{F}}) = M_i({\mathfrak{e}}^{i+1}\otimes_R {\mathbb{F}},{\mathfrak{f}}^{i+1}\otimes_R {\mathbb{F}})$ (resp.  $\varphi({\mathfrak{e}}^i\otimes_R {\mathbb{F}}[\varepsilon]/(\varepsilon^2),{\mathfrak{f}}^i\otimes_R {\mathbb{F}}[\varepsilon]/(\varepsilon^2)) = M_{t,i}({\mathfrak{e}}^{i+1}\otimes_R {\mathbb{F}}[\varepsilon]/(\varepsilon^2),{\mathfrak{f}}^{i+1}\otimes_R {\mathbb{F}}[\varepsilon]/(\varepsilon^2))$). Then there are matrices $D_i \in {\mathrm{GL}}_2({\mathbb{F}}(\!(v)\!) )$ such that $$({\mathrm{id}}_2+\varepsilon D_i)M_i \varphi({\mathrm{id}}_2-\varepsilon D_{i-1}) = M_{t,i}$$ for all $i \in {\mathbb{Z}}/f$, where ${\mathrm{id}}_2$ is the $2 \times 2$ identity matrix (we can assume without loss of generality that the terms without $\varepsilon$ are ${\mathrm{id}}_2$ by multiplying by their inverses). We first claim that $D_i\in {\mathrm{GL}}_2({\mathbb{F}}[\![v]\!])$ for all $i \in {\mathbb{Z}}/f$. For each $i$, let $k_i \in {\mathbb{Z}}$ be the minimal integer such that $v^{k_i}D_i \in \operatorname{Mat}_2({\mathbb{F}}[\![v]\!])$. Then $v^{c_{f-1-i}+k_i}\varphi({\mathrm{id}}_2-\varepsilon D_{i-1}) = v^{c_{f-1-i}+k_i}M_i^{-1} ({\mathrm{id}}_2-\varepsilon D_i) M_{t,i}\in \operatorname{Mat}_2({\mathbb{F}}[\![v]\!])$, and thus $c_{f-1-i}+k_i\geq pk_{i-1}$. Since $c_{f-1-i} < p-1$, $k_i\geq 2 + p(k_{i-1}-1)$. If $k_{i-1} \geq n\geq 1$, then $k_i\geq n+1$, from which we derive the contradiction that $k_i\geq n$ for every $n\in {\mathbb{N}}$. Hence $k_i \leq 0$ for all $i$. We next claim that if $\{\pm \omega^{f-1-i}\} \cap I({\overline{\rho}},\mu) = \emptyset$ for some $i\in {\mathbb{Z}}/f$, then $t(Y_i) = 0$. Suppose for the sake of contradiction that $\{\pm \omega^{f-1-i}\} \cap I({\overline{\rho}},\mu) = \emptyset$ and $t(Y_i) \neq 0$. Let $N_i \in \operatorname{Mat}_2({\mathbb{F}}[\![v]\!])$ be such that $\varepsilon N_i = M_{t,i} - M_i$. Then by the formulas for $M_i$ and $M_{t,i}$, the first (resp.  second) entry in the top row of $N_i$ is exactly divisible by $v^{c_{f-1-i}-1}$ (resp.  $v^0$). On the other hand, since $D_i M_i - M_i \varphi(D_{i-1}) = N_i$, the first (resp.  second) entry in the top row of $N_i$ is divisible by $v^{c_{f-1-i}}$ (resp.  $v$), which is a contradiction. Thus $t$ is a reduced tangent vector of $$({\mathrm{Spf}}R^\square/(Y_i:\{\pm \omega^{f-1-i}\} \cap I({\overline{\rho}},\mu) = \emptyset))/\widehat{\mathbf{G}}_m^2.$$ Let $\tau$ be the tame inertial type such that $\sigma(\tau) = R_s(\mu-s\eta)$ where $s\in W$ with $s_i \neq {\mathrm{id}}$ if and only if $\omega^{(i)} \in I({\overline{\rho}},\mu)$. Then the natural map from the quotient of $${\mathrm{Spf}}R^\square/(\varpi,\{Y_i:\{\pm \omega^{f-1-i}\} \cap I({\overline{\rho}},\mu) = \emptyset\},\{X_iY_i:\{\pm \omega^{f-1-i}\} \cap I({\overline{\rho}},\mu) \neq \emptyset\})$$ by $\widehat{\mathbf{G}}_m^2$ to ${\mathrm{Spf}}R^\tau/\varpi$ is formally smooth by Proposition \[prop:1defring\]. 
In fact, it is an isomorphism since the domain and codomain both have the dimension $3f+1$ over ${\mathcal{O}}$ by [@kisin Theorem 3.3.4]. Since the map $$\begin{aligned} {\mathrm{Spf}}R^\square/(\varpi,\{Y_i:\{\pm \omega^{f-1-i}\} \cap I({\overline{\rho}},\mu) = \emptyset\},\{X_iY_i:\{\pm \omega^{f-1-i}\} \cap I({\overline{\rho}},\mu) \neq \emptyset\}) \\ {\rightarrow}{\mathrm{Spf}}R^\square/(Y_i:\{\pm \omega^{f-1-i}\} \cap I({\overline{\rho}},\mu) = \emptyset)\end{aligned}$$ is an isomorphism on reduced tangent spaces, $t$ is a reduced tangent vector of ${\mathrm{Spf}}R^\tau$. Since ${\mathrm{Spf}}R^\tau {\rightarrow}{\mathrm{Spf}}R_{{\overline{\rho}}|_{G_{K_\infty}}}^\square$ is injective on reduced tangent spaces, $t$ is zero. Finally, since $R$ is $p$-flat, it suffices to show that for $s\in W$ and $I = \{s_i\omega^{(i)}\}$, $\mathbb{V}^*({\mathcal{M}}/(f_i(I))_i)$ is the restriction to $G_{K_\infty}$ of a versal potentially Barsotti–Tate deformation of ${\overline{\rho}}$ of type $\tau$ where $\sigma(\tau) = R_s(\mu-s\eta)$. This follows from Proposition \[prop:1defring\]. Modular Serre weights {#subsec:serre} --------------------- Let ${\overline{\rho}}:G_K {\rightarrow}{\mathrm{GL}}_2({\mathbb{F}})$ be a Galois representation. [@BDJ] attaches to ${\overline{\rho}}$ a set $W({\overline{\rho}})$ of Serre weights (see also [@breuil §4] with the notation $\mathcal{D}({\overline{\rho}})$). \[def:gen\] We say that ${\overline{\rho}}:G_K {\rightarrow}{\mathrm{GL}}_2({\mathbb{F}})$ is generic if for all $F(\mu-\eta)\in W({\overline{\rho}})$, $1 < \langle \mu-\eta,\alpha^{(i)} \rangle < p-2$ for all $i\in {\mathbb{Z}}/f$. Note that if ${\overline{\rho}}$ is generic, then it is generic in the sense of [@EGS Definition 2.1.1]. Let ${\overline{\rho}}:G_K {\rightarrow}{\mathrm{GL}}_2({\mathbb{F}})$ be a generic Galois representation with $F(\mu-\eta) \in W({\overline{\rho}})$. Then there is a subset $I'({\overline{\rho}},\mu)$ of $S=\{\pm\omega^{(i)}\}_i$ with $\#(I'({\overline{\rho}},\mu)\cap \{\pm\omega^{(i)}\}) \leq 1$ for all $i$ such that $W({\overline{\rho}}) = \{\sigma_J:J \subset I'({\overline{\rho}},\mu)\}$. There is a tame inertial type $\tau$ such that $W({\overline{\rho}}^{\mathrm{ss}}) = {\mathrm{JH}}(\overline{\sigma}(\tau))$ by the proof of [@EGS Proposition 3.5.2]. Then $\sigma(\tau)$ is of the form $R_s(\lambda-s\eta)$ with $s_i = {\mathrm{id}}$ for $i\neq 0$. Then by [@EGS Theorem 7.2.1(1)], $W({\overline{\rho}})$ is $\overline{\sigma}_J(\tau)$ where $J_{\min} \subset J \subset J_{\max}$ (using notation therein). The result now follows from [@LMS Proposition 2.4], noting that $\sigma_J(\tau)$ (in the notation of [@EGS §5.1]) is $\sigma_{J'}$ as defined with respect to $\lambda$ in §\[sec:proj\] with $-s_i\omega^{(i)}\in J'$ if and only if $i-1 \in J$. \[prop:phi\] Let ${\overline{\rho}}:G_K {\rightarrow}{\mathrm{GL}}_2({\mathbb{F}})$ be a generic Galois representation with $F(\mu-\eta) \in W({\overline{\rho}})$ such that for each $i\in {\mathbb{Z}}/f$, $\mu_i = (c_i,1)$. Then there exists an étale $\varphi$-module ${\mathcal{M}}$ as in §\[subsec:hodge\] such that $\mathbb{V}^*({\mathcal{M}}) \cong {\overline{\rho}}$. Moreover, $I({\overline{\rho}},\mu)$ is $I'({\overline{\rho}},\mu)$. Let $s\in W$ be such that $s_i \neq {\mathrm{id}}$ if and only if $\omega^{(i)} \in I'({\overline{\rho}},\mu)$, and let $\tau$ be the generic tame inertial type such that $\sigma(\tau) = R_s(\mu-s\eta)$. 
Then $W({\overline{\rho}})$ is a subset of ${\mathrm{JH}}(\overline{\sigma}(\tau))$ (this follows from [@Herzig Theorem 5.1] or the proof of [@LLLM2 Proposition 2.2.7]). Let $s_\tau\in S_2$ and $w\in W$ be as in the proof of Proposition \[prop:1defring\], so that we again have that $R_s(\mu-s\eta)$ is isomorphic to $R_{(s_\tau,{\mathrm{id}},\ldots,{\mathrm{id}})}(F^{-1}(w)\cdot (\mu-s\eta))$. Then ${\overline{\rho}}$ has a potentially Barsotti–Tate lift of type $\tau = \tau(s_\tau,F^{-1}(w)\cdot (\mu-s\eta))$ by [@EGS Theorem 7.2.1(1)]. Thus there exists a Kisin module ${\mathfrak{M}}$ (with quadratic unramified descent data) of type (the quadratic unramified base change of) $\tau$. In the notation of [@EGS], $F^{-1}(w)\cdot (\mu-s\eta) = ({\mathbf{m}},{\mathbf{m}}')$ defines an ordered pair of characters $(\eta,\eta')$ with $\eta = \omega_f^{-\sum_{i=0}^{f-1} m_i p^i}$ and $\eta' = \omega_f^{-\sum_{i=0}^{f-1} m'_i p^i}$ if $s_\tau = {\mathrm{id}}$ and $\eta = \omega_{2f}^{-\sum_{i=0}^{f-1} m_i p^i - p^f\sum_{i=0}^{f-1} m'_i p^i}$ and $\eta' = \eta^{p^f}$ otherwise. If $s' = (s'_i)_i \in W$ is defined by $s'_i = w_{f-i}$, then $s'$ is the orientation of $F^{-1}(w)\cdot (\mu-s\eta)$. Then as in the proof of Proposition \[prop:1defring\], we let $A^{(i-1)}$ be $\operatorname{Mat}_\beta(\phi^{(i-1)}_{{\mathfrak{M}}_R,s_{i}'(2)})$ for an eigenbasis $\beta$, so that if ${\mathcal{M}}= \prod_i {\mathbb{F}}((v)){\mathfrak{e}}^i \oplus {\mathbb{F}}((v)){\mathfrak{f}}^i$ is the $\varphi$-module with $\varphi({{\mathfrak{e}}}^{i-1},{{\mathfrak{f}}}^{i-1}) = M_{i-1}({{\mathfrak{e}}}^i,{{\mathfrak{f}}}^i)$ and $$\label{eqn:phimat} M_i = A^{(i)}s_{f-1-i}^{-1}v^{\mu_i-s_i\eta^{(i)}},$$ then ${\overline{\rho}}\cong \mathbb{V}^*({\mathcal{M}})$ by a calculation similar to the one in the second paragraph of the proof of Proposition \[prop:1defring\]. There exists an eigenbasis $\beta$, which we now fix, such that for $A^{(i-1)}$ is $\overline{A}_j$ for $1\leq j\leq 3$ as in Theorem \[thm:fhdef\]. This follows from the analogue of [@LLLM Theorem 2.21] and the fact that the matrices of the form $\overline{A}_j$ form a set of representatives for the double coset $I{\widetilde{w}}_jI$ where $I$ is the upper triangular Iwahori subgroup and ${\widetilde{w}}_j$ is $$\begin{pmatrix} v & \\ & 1 \\ \end{pmatrix}, \begin{pmatrix} & 1 \\ v & \\ \end{pmatrix}, \textrm{ and } \begin{pmatrix} 1 & \\ & v \\ \end{pmatrix},$$ for $j=1$, $2$, and $3$, respectively. Moreover, using [@EGS Lemma 7.4.1] and its analogue for cuspidal $\tau$, $A^{(i-1)}$ is $\overline{A}_2$ if and only if $\#(I'({\overline{\rho}},\mu) \cap \{\pm\omega^{(f-i)}\}) = 1$. We claim that for all $i$, $A^{(i-1)}$ is not $\overline{A}_1$. We would then have that $A^{(i-1)}$ is $\overline{A}_2$ if $\#(I'({\overline{\rho}},\mu) \cap \{\pm\omega^{(f-i)}\}) = 1$ and $\overline{A}_3$ otherwise. Both claims would then follow from (\[eqn:phimat\]) and a direct calculation. That $A^{(i-1)}$ is not $\overline{A}_1$ comes from the fact that $F(\mu-\eta) \in W({\overline{\rho}})$. Again using [@EGS Lemma 7.4.1] and its analogue for cuspidal $\tau$, since $F(\mu-\eta) \in W({\overline{\rho}})$, if $w_{i+1} = {\mathrm{id}}$, then $i-1\notin I_\eta$ in the notation of [@EGS]. Since $s'_{f-1-i} = w_{i+1} = {\mathrm{id}}$, this means that $ A^{(f-1-i)} \neq \overline{A}_1$. Similarly, if $w_{i+1} \neq {\mathrm{id}}$, then $i-1\notin I_{\eta'}$ in the notation of [@EGS]. 
Since $s'_{f-1-i} = w_{i+1} \neq {\mathrm{id}}$, this also means that $ A^{(f-1-i)} \neq \overline{A}_1$. Patching functors and multiplicity one ====================================== Let ${\overline{\rho}}:G_K{\rightarrow}{\mathrm{GL}}_2({\mathbb{F}})$ be a generic Galois representation and $\sigma {\stackrel{\textrm{\tiny{def}}}{=}}F(\mu-\eta)$ be a Serre weight in $W({\overline{\rho}})$ ($W({\overline{\rho}})$ is recalled in §\[subsec:serre\]). Define $w\in W$ and $I({\overline{\rho}},\mu) \subset S_w$ as in §\[sec:defring\]. Let $M_\infty(\cdot)$ be a minimal fixed determinant patching functor over ${\mathcal{O}}$ (see [@EGS Definition 6.1.3]). For an ${\mathcal{O}}_K[{\mathrm{GL}}_2({\mathcal{O}}_K)]$-module $N$, we will denote $M_\infty(N\otimes_{{\mathcal{O}}_K} {\mathcal{O}})$ by $M'_\infty(N)$, where the tensor product is over the map ${\mathcal{O}}_K{\hookrightarrow}{\mathcal{O}}$ in §\[subsec:not\]. \[lemma:patch2\] The $R_\infty$-module $M'_\infty(R_{\mu}/\operatorname{Fil}^2 R_{\mu})$ is cyclic. Let $\tau$ be the tame type such that $\sigma(\tau) = R_w(\mu-w\eta)$. Let $\sigma^\circ(\tau) \subset \sigma(\tau)$ be the unique lattice up to homothety with cosocle isomorphic to $\sigma$ (see [@EGS Lemma 4.1.1]). Let $\overline{\sigma}^\circ(\tau)$ be the reduction of $\sigma^\circ(\tau)$. Then the natural map $R_\mu {\twoheadrightarrow}\overline{\sigma}^\circ(\tau)$ induces a map $R_\mu/\operatorname{Fil}^2 R_\mu {\twoheadrightarrow}\overline{\sigma}^\circ(\tau)/{\mathrm{rad}}^2 \overline{\sigma}^\circ(\tau)$. By [@EGS Theorem 5.1.1] and [@LMS Proposition 3.2], the kernel of this map contains no Jordan–Hölder factors in $W({\overline{\rho}})$. Thus, the induced map $M'_\infty(R_\mu/\operatorname{Fil}^2 R_\mu) {\twoheadrightarrow}M'_\infty(\overline{\sigma}^\circ(\tau)/{\mathrm{rad}}^2 \overline{\sigma}^\circ(\tau))$ is an isomorphism. As $M'_\infty(\overline{\sigma}^\circ(\tau))$ is a cyclic $R_\infty$-module by [@EGS Theorem 10.1.1], so is $M'_\infty(\overline{\sigma}^\circ(\tau)/{\mathrm{rad}}^2 \overline{\sigma}^\circ(\tau))$. \[lemma:covering\] Suppose that $I \subset S$ such that $\#(I \cap \{\pm \omega^{(i)}\}) + \#(I({\overline{\rho}},\mu) \cap \{\pm \omega^{(i)}\}) = 1$. Let $N$ be a submodule of $\operatorname{Fil}^k R_{\mu,I}/\operatorname{Fil}^{k+2} R_{\mu,I}$ such that the cokernel of the projection of $N$ onto $\operatorname{gr}^k R_{\mu,I}$ contains no Serre weights in $W({\overline{\rho}})$. Then the quotient $(\operatorname{Fil}^k R_{\mu,I}/\operatorname{Fil}^{k+2} R_{\mu,I})/N$ contains no Jordan–Hölder factors in $W({\overline{\rho}})$. It suffices to show that $\operatorname{gr}^{k+1} R_{\mu,I}/\operatorname{gr}^{k+1} N$ contains no Jordan–Hölder factors in $W({\overline{\rho}})$, since by assumption $\operatorname{gr}^k R_{\mu,I}/\operatorname{gr}^k N$ contains no Jordan–Hölder factors in $W({\overline{\rho}})$. In fact, it suffices to show that $\operatorname{gr}^{k+1} W_{{\mathbf{k}},{\mathbf{k}}+1,I}/(N \cap \operatorname{gr}^{k+1} W_{{\mathbf{k}},{\mathbf{k}}+1,I})$ contains no Jordan–Hölder factors in $W({\overline{\rho}})$ since $\sum_{|{\mathbf{k}}| = k} \operatorname{gr}^{k+1} W_{{\mathbf{k}},{\mathbf{k}}+1,I} = \operatorname{gr}^{k+1} R_{\mu,I}$. By Proposition \[prop:Wk\], a Jordan–Hölder factor of $\operatorname{gr}^{k+1} W_{{\mathbf{k}},{\mathbf{k}}+1,I}$ has the form $\sigma_{J'}$ where $J' \cap I = \emptyset$ and there is a $j\in {\mathbb{Z}}/f$ such that if ${\mathbf{k}}(J') = {\mathbf{k}}'$ then $k'_i = k_i$ for all $i\neq j$ and $k'_j = k_j+1$. 
Suppose that $\sigma_{J'}\in W({\overline{\rho}})$. If $k'_j = 2$, then let $J = J' \setminus \{-w_j\omega^{(j)}\}$. Otherwise, $J' \cap \{\pm \omega^{(j)}\} = \{w_j\omega^{(j)}\}$ since we assumed that $\sigma_{J'} \in W({\overline{\rho}})$. In this case, let $J = J' \setminus \{w_j\omega^{(j)}\}$. Then $\sigma_J \in W({\overline{\rho}})$ and is thus a Jordan–Hölder factor of $N \cap W_{{\mathbf{k}},{\mathbf{k}}+1,I}$. By Proposition \[prop:ext\], $\sigma_{J'}$ is a Jordan–Hölder factor of $N$. The following lemma generalizes [@EGS Lemma 10.1.13], one of the methods used to compute patched modules. \[lemma:ca\] Let $R$ be a local ring, and $M'' \subset M' \subset M$ be $R$-modules such that $M'/M''$ and $M'$ are minimally generated by the same finite number of elements. Then $M'' \subset \mathfrak{m}M$. If, moreover, $M$ is finitely generated over $R$, then $M/M''$ and $M$ are minimally generated by the same number of elements. By Nakayama’s lemma, that $M'/M''$ and $M'$ are minimally generated by the same finite number of elements implies that $M'' \subset \mathfrak{m}M'$ and thus $M'' \subset \mathfrak{m}M$. If $M$ is finitely generated, then another application of Nakayama’s lemma implies that $M/M''$ and $M$ are minimally generated by the same number of elements. The following proposition generalizes the results and methods of [@HW; @LMS] by combining Lemmas \[lemma:patch2\], \[lemma:covering\], and \[lemma:ca\]. \[prop:oldcyc\] Suppose that $I \subset S$ such that $\#(I \cap \{\pm \omega^{(i)}\}) + \#(I({\overline{\rho}},\mu) \cap \{\pm \omega^{(i)}\}) = 1$. Then $M'_\infty({\widetilde{R}}_{\mu,I})$ is a cyclic $R_\infty$-module. By Nakayama’s lemma, it suffices to show that $M'_\infty(R_{\mu,I})$ is a cyclic $R_\infty$-module. We will show that $M'_\infty(R_{\mu,I}/\operatorname{Fil}^{k+1} R_{\mu,I})$ is a cyclic $R_\infty$-module by induction on $k$. If $k=1$, then the result follows from Lemma \[lemma:patch2\]. Now suppose that $M'_\infty(R_{\mu,I}/\operatorname{Fil}^{k+1} R_{\mu,I})$ is a cyclic $R_\infty$-module. Let $\mathfrak{J}$ be $\{J\subset S:k(J) = k,J \cap I = \emptyset,\sigma_J \in W({\overline{\rho}})\}$. For each $J\in \mathfrak{J}$, let $\overline{V}_{J,I}$ be the image of $\overline{V}_J$ in $R_{\mu,I}/\operatorname{Fil}^{k+2} R_{\mu,I}$ where $\overline{V}_J$ is defined before [@LMS Proposition 3.9]. Note that $M'_\infty(\overline{V}_{J,I})$ is a cyclic $R_\infty$-module by Lemma \[lemma:patch2\]. Let $\overline{V}$ be $\sum_{J\in \mathfrak{J}} \overline{V}_{J,I} \subset \operatorname{Fil}^k R_{\mu,I}/\operatorname{Fil}^{k+2} R_{\mu,I}$. By Lemma \[lemma:covering\], the quotient $(\operatorname{Fil}^k R_{\mu,I}/\operatorname{Fil}^{k+2} R_{\mu,I})/\overline{V}$ does not contain any Jordan–Hölder factors in $W({\overline{\rho}})$. Thus the natural inclusion $M'_\infty(\overline{V}) \subset M'_\infty(\operatorname{Fil}^k R_{\mu,I}/\operatorname{Fil}^{k+2} R_{\mu,I})$ is an equality. In particular, $M'_\infty(\operatorname{Fil}^k R_{\mu,I}/\operatorname{Fil}^{k+2} R_{\mu,I})$ is generated by no more than $\# \mathfrak{J}$ elements. On the other hand, $M'_\infty(\operatorname{gr}^k R_{\mu,I}) \cong \oplus_{J\in \mathfrak{J}} M'_\infty(\sigma_J)$ is generated by (at least) $\#\mathfrak{J}$ elements. 
By Lemma \[lemma:ca\] with $M = M'_\infty(R_{\mu,I}/\operatorname{Fil}^{k+2} R_{\mu,I})$, $M' = M'_\infty(\operatorname{Fil}^k R_{\mu,I}/\operatorname{Fil}^{k+2} R_{\mu,I})$, and $M'' = \operatorname{gr}^{k+1} R_{\mu,I}$, $M'_\infty(R_{\mu,I}/\operatorname{Fil}^{k+2} R_{\mu,I})$ is a cyclic $R_\infty$-module. \[prop:supp\] The scheme-theoretic support of $M'_\infty({\widetilde{R}}_{\sigma,I})$ is ${\mathrm{Spec}\ }(R_\infty \widehat{\otimes}_{R_{{\overline{\rho}}}^\square} R^{T_{\sigma,I}})$. Since $M'_\infty({\widetilde{R}}_{\sigma,I})[p^{-1}]$ is isomorphic to $\oplus_{\sigma(\tau) \in T_{\sigma,I}} M'_\infty(\sigma(\tau))$, the scheme-theoretic support of $M'_\infty({\widetilde{R}}_{\sigma,I})[p^{-1}]$ is $\cup_{\sigma(\tau) \in T_{\sigma,I}} {\mathrm{Spec}\ }(R_\infty \widehat{\otimes}_{R_{{\overline{\rho}}}^\square} R^\tau)[p^{-1}]$ by the proof of [@EGS Theorem 9.1.1]. Since $M'_\infty({\widetilde{R}}_{\sigma,I})$ is ${\mathcal{O}}$-flat by definition of a patching functor, the scheme-theoretic support of $M'_\infty({\widetilde{R}}_{\sigma,I})$ is the Zariski closure of that of $M'_\infty({\widetilde{R}}_{\sigma,I})[p^{-1}]$. The result now follows from the definition of ${\mathrm{Spec}\ }R^{T_{\sigma,I}}$. In order to weaken the hypotheses on $I$ in Proposition \[prop:oldcyc\], we compute an integral scheme intersection, of which the following lemma is the key example. \[lemma:glue\] There is an exact sequence $$0 {\rightarrow}{\mathcal{O}}[\![Y]\!]/(Y(Y-p)) {\rightarrow}{\mathcal{O}}[\![Y]\!]/(Y) \oplus {\mathcal{O}}[\![Y]\!]/(Y-p) {\rightarrow}{\mathcal{O}}[\![Y]\!]/(Y,p) {\rightarrow}0,$$ where the second and third maps are the sum and difference, respectively, of the natural projections. Given a ring $R$ and ideals $I$ and $J\subset R$, the sequence $$0 {\rightarrow}R/ (I \cap J) {\rightarrow}R/I \oplus R/J {\rightarrow}R/(I+J) {\rightarrow}0,$$ where the second and third maps are the sum and difference, respectively, of the natural projections, is exact. The lemma follows from this exact sequence and the relations $(Y) \cap (Y-p) = (Y(Y-p))$ and $(Y)+(Y-p) = (Y,p)$ in ${\mathcal{O}}[\![Y]\!]$. The following is our main result in the setting of patching functors. \[thm:multone\] Suppose that $I \subset S$ such that $\#(I \cap \{\pm \omega^{(i)}\}) + \#(I({\overline{\rho}},\mu) \cap \{\pm \omega^{(i)}\}) \leq 1$. Then $M'_\infty({\widetilde{R}}_{\mu,I})$ is a cyclic $R_\infty$-module. We proceed by induction on $k := f - \#I({\overline{\rho}},\mu) - \#I$. The case $k=0$ follows from Proposition \[prop:oldcyc\]. Suppose that $k>0$ and that $(I\cup I({\overline{\rho}},\mu))\cap \{\pm\omega^{(j)}\} = \emptyset$. Then there is an exact sequence $$0 {\rightarrow}{\widetilde{R}}_{\mu,I} {\rightarrow}{\widetilde{R}}_{\mu,I \cup \{\omega^{(j)}\} } \oplus {\widetilde{R}}_{\mu,I \cup \{-\omega^{(j)}\} } {\rightarrow}R_{\mu,I \cup \{\pm\omega^{(j)}\} } {\rightarrow}0,$$ which induces an exact sequence $$0 {\rightarrow}M'_\infty({\widetilde{R}}_{\mu,I}) {\rightarrow}M'_\infty({\widetilde{R}}_{\mu,I \cup \{\omega^{(j)}\} }) \oplus M'_\infty({\widetilde{R}}_{\mu,I \cup \{-\omega^{(j)}\} }) {\rightarrow}M'_\infty(R_{\mu,I \cup \{\pm\omega^{(j)}\} }) {\rightarrow}0,$$ where the third map is the sum of two surjections by exactness of $M'_\infty(\cdot)$. 
By the inductive hypothesis and Proposition \[prop:supp\], $M'_\infty({\widetilde{R}}_{\mu,I \cup \{\omega^{(j)}\} })$ and $M'_\infty({\widetilde{R}}_{\mu,I \cup \{-\omega^{(j)}\} })$ are cyclic $R_\infty$-modules with scheme-theoretic support ${\mathrm{Spec}\ }R_\infty \widehat{\otimes}_{R_{{\overline{\rho}}}^\square} R^{T_{\sigma,I \cup \{\omega^{(j)}\} }}$ and ${\mathrm{Spec}\ }R_\infty \widehat{\otimes}_{R_{{\overline{\rho}}}^\square}R^{T_{\sigma,I \cup \{-\omega^{(j)}\} }}$, respectively. The scheme-theoretic support of $M'_\infty(R_{\mu,I \cup \{\pm\omega^{(j)}\} })$ is thus a closed subscheme of the intersections of ${\mathrm{Spec}\ }R_\infty \widehat{\otimes}_{R_{{\overline{\rho}}}^\square} R^{T_{\sigma,I \cup \{\omega^{(j)}\} }}$ and ${\mathrm{Spec}\ }R_\infty \widehat{\otimes}_{R_{{\overline{\rho}}}^\square} R^{T_{\sigma,I \cup \{-\omega^{(j)}\} }}$, which is ${\mathrm{Spec}\ }R_\infty \widehat{\otimes}_{R_{{\overline{\rho}}}^\square} R^{T_{\sigma,I \cup \{\omega^{(j)}\} }}/p$ by Theorem \[thm:defring\] and Proposition \[prop:phi\] (we can assume without loss of generality that $\mu$ has the form in §\[sec:defring\] by twisting). Since $M'_\infty(R_{\mu,I \cup \{\pm\omega^{(j)}\} })$ is a cyclic $R_\infty$-module, there is a surjection $$R_\infty \widehat{\otimes}_{R_{{\overline{\rho}}}^\square} R^{T_{\sigma,I \cup \{\omega^{(j)}\} }}/p {\twoheadrightarrow}M'_\infty(R_{\mu,I \cup \{\pm\omega^{(j)}\} }).$$ Since $\{\pm \omega^{(j)} \} \cap I({\overline{\rho}},\mu) = \emptyset$, from Proposition \[prop:Wk\] we see that $M'_\infty(R_{\mu,I \cup \{\omega^{(j)}\} })$ and $M'_\infty(R_{\mu,I \cup \{\pm\omega^{(j)}\} })$ have the same Hilbert–Samuel multiplicity. Thus, both sides of the map $R_\infty \widehat{\otimes}_{R_{{\overline{\rho}}}^\square} R^{T_{\sigma,I \cup \{\omega^{(j)}\} }}/p {\twoheadrightarrow}M'_\infty(R_{\mu,I \cup \{\pm\omega^{(j)}\} })$ have the same Hilbert–Samuel multiplicity. Since $R^{T_{\sigma,I \cup \{\omega^{(j)}\} }}/p$ contains no embedded primes, this map is an isomorphism (see the argument of [@le Lemma 6.1.1]). In summary, there is an exact sequence $$0 {\rightarrow}M'_\infty({\widetilde{R}}_{\mu,I}) {\rightarrow}R_\infty \widehat{\otimes}_{R_{{\overline{\rho}}}^\square} R^{T_{\sigma,I \cup \{\omega^{(j)}\} }} \oplus R_\infty \widehat{\otimes}_{R_{{\overline{\rho}}}^\square} R^{T_{\sigma,I \cup \{-\omega^{(j)}\} }} {\rightarrow}R_\infty \widehat{\otimes}_{R_{{\overline{\rho}}}^\square} R^{T_{\sigma,I \cup \{\omega^{(j)}\} }}/p {\rightarrow}0,$$ where the third map is the sum of two surjections. Any lift of a generator under a surjection between two cyclic modules over a local ring is again a generator by Nakayama’s lemma. Hence, we can assume that the third map is the difference of the natural projections. Then by Theorem \[thm:defring\] and Proposition \[prop:phi\], this exact sequence is obtained from taking a completed tensor product with the exact sequence in Lemma \[lemma:glue\]. Hence, we see that $M'_\infty({\widetilde{R}}_{\mu,I}) \cong R_\infty \widehat{\otimes}_{R_{{\overline{\rho}}}^\square} R^{T_{\sigma,I}}$, and in particular that $M'_\infty({\widetilde{R}}_{\mu,I})$ is a cyclic $R_\infty$-module. Global results {#sec:main} ============== Let $F$ be a totally real field in which $p$ is unramified. Let $D_{/F}$ be a quaternion algebra which is unramified at all places dividing $p$ and at most one infinite place, and let ${\overline{r}}:G_F {\rightarrow}{\mathrm{GL}}_2({\mathbb{F}})$ be a Galois representation. 
If $D_{/F}$ is indefinite and $K = \prod_w K_w \subset (D\otimes_F {\mathbb{A}}_F^\infty)^\times$ is an open compact subgroup, then there is a smooth projective curve $X_K$ defined over $F$ and we define $S(K,{\mathbb{F}})$ to be $H^1((X_K)_{/\overline{F}},{\mathbb{F}})$. If $D_{/F}$ is definite, then we let $S(K,{\mathbb{F}})$ be the space of $K$-invariant continuous functions $$f: D^\times\backslash (D\otimes_F \mathbb{A}_F^\infty)^\times {\rightarrow}{\mathbb{F}}.$$ Let $S$ be the union of the set of places in $F$ where ${\overline{r}}$ is ramified, the set of places in $F$ where $D$ is ramified, and the set of places in $F$ dividing $p$. Let $\mathbb{T}^{S,\mathrm{univ}}$ be the commutative polynomial algebra over ${\mathcal{O}}$ generated by the formal variables $T_w$ and $S_w$ for each $w \notin S\cup \{w_1\}$ where $w_1$ is chosen as in [@EGS §6.2]. Then $\mathbb{T}^{S,\mathrm{univ}}$ acts on $S({\mathbb{F}})$ with $T_w$ and $S_w$ acting by the usual double coset action of $$\big[ {\mathrm{GL}}_2({\mathcal{O}}_{F_w}) \begin{pmatrix} \varpi_w & \\ & 1 \\ \end{pmatrix}{\mathrm{GL}}_2({\mathcal{O}}_{F_w}) \big]$$ and $$\big[ {\mathrm{GL}}_2({\mathcal{O}}_{F_w}) \begin{pmatrix} \varpi_w & \\ & \varpi_w \\ \end{pmatrix}{\mathrm{GL}}_2({\mathcal{O}}_{F_w}) \big],$$ respectively. Let $\mathbb{T}^{S,\mathrm{univ}}{\rightarrow}{\mathbb{F}}$ be the map such that the image of $X^2 - T_w X + (\mathbb{N}w) S_w$ in ${\mathbb{F}}[X]$ is the characteristic polynomial of ${\overline{\rho}}({\mathrm{Frob}}_w)$, and let the kernel be ${\mathfrak{m}}_{{\overline{r}}}$. For the rest of the section, suppose that 1. ${\overline{r}}$ is modular, i.e. that there exists $K$ such that $S(K,{\mathbb{F}})_{{\mathfrak{m}}_{{\overline{r}}}}$ is nonzero; 2. ${\overline{r}}|_{G_{F(\zeta_p)}}$ is absolutely irreducible; 3. if $p=5$ then the image of ${\overline{r}}(G_{F(\zeta_p)})$ in ${\mathrm{PGL}}_2({\mathbb{F}})$ is not isomorphic to $A_5$; 4. ${\overline{r}}|_{G_{F_w}}$ is generic (Definition \[def:gen\]) for all places $w|p$; and 5. ${\overline{r}}|_{G_{F_w}}$ is non-scalar at all finite places where $D$ ramifies. Let $v|p$ be a place of $F$, and let ${\overline{\rho}}$ be ${\overline{r}}|_{G_{F_v}}$. We define $S^{\mathrm{min}}$ to be $S(K^v,\otimes_{w\in S,w\neq v} L_w)_{{\mathfrak{m}}'_{{\overline{r}}}}$ as in [@EGS §6.5]. We define $M^{\mathrm{min}}$ to be the linear dual of $S^{\mathrm{min}}$, factoring out the Galois action in the indefinite case (see [@EGS §6.2]). \[thm:K1multone\] Suppose that ${\overline{r}}:G_F {\rightarrow}{\mathrm{GL}}_2({\mathbb{F}})$ is a Galois representation satisfying $(1)$-$(5)$. If $\sigma \in W({\overline{\rho}})$ and $R_\sigma$ is the ${\mathcal{O}}[{\mathrm{GL}}_2({\mathbb{F}}_q)]$-projective envelope of $\sigma$, then ${\mathrm{Hom}}_{{\mathcal{O}}[{\mathrm{GL}}_2({\mathbb{F}}_q)]}(R_\sigma,(M^{\mathrm{min}})^*)$ is one-dimensional. Let $\sigma = F(\mu-\eta) \in W({\overline{\rho}})$. Then $R_\sigma$ is $R_\mu \otimes_{{\mathbb{F}}_q} {\mathbb{F}}$. Let $M_\infty$ be the minimal fixed determinant patching functor defined in [@EGS §6.5]. By construction, if ${\mathfrak{m}}_{R_\infty}$ is the maximal ideal of $R_\infty$, then ${\mathrm{Hom}}_{{\mathrm{GL}}_2({\mathbb{F}}_q)}(R_\sigma,(M^{\mathrm{min}})^*)$ is the dual of $M_\infty(R_\sigma)/{\mathfrak{m}}_{R_\infty} = M_\infty'(R_\mu)/{\mathfrak{m}}_{R_\infty}$, which is one dimensional since $M'_\infty(R_\mu)$ is a cyclic $R_\infty$-module by Theorem \[thm:multone\]. 
Let $M^{\mathrm{min}}(K_v(1))$ denote the coinvariants $(M^{\mathrm{min}})_{K_1}$. Note that $M^{\mathrm{min}}(K_v(1))$ is isomorphic to the dual of $S(K^vK_v(1),\otimes_{w\in S,w\neq v} L_w)_{{\mathfrak{m}}'_{{\overline{r}}}}$, factoring out the Galois action in the indefinite case, by a standard spectral sequence argument using that ${\mathfrak{m}}_{{\overline{r}}}'$ is non-Eisenstein. \[cor:main\] Suppose that ${\overline{r}}:G_F {\rightarrow}{\mathrm{GL}}_2({\mathbb{F}})$ is a Galois representation satisfying $(1)$-$(5)$. Then the ${\mathrm{GL}}_2({\mathbb{F}}_q)$-representation $(M^{\mathrm{min}}(K_v(1)))^*$ is isomorphic to $D_0({\overline{\rho}})$. In particular, $(M^{\mathrm{min}}(K_v(1)))^*$ depends only on ${\overline{\rho}}$ and is multiplicity free. There is an injection $D_0({\overline{\rho}}) {\hookrightarrow}(M^{\mathrm{min}}(K_v(1)))^*$ by [@breuil Proposition 9.3]. Fix an ${\mathbb{F}}[{\mathrm{GL}}_2({\mathbb{F}}_q)]$-injective hull $(M^{\mathrm{min}}(K_v(1)))^* {\hookrightarrow}I$. Since $${\mathrm{Hom}}_{{\mathrm{GL}}_2({\mathbb{F}}_q)}(R_\sigma,(M^{\mathrm{min}}(K_v(1)))^*)$$ is one-dimensional for all $\sigma \in W({\overline{\rho}})$ by Theorem \[thm:K1multone\], this injective hull factors through $D_0({\overline{\rho}})$ by [@BP Theorem 1.1(i)]. Since $D_0({\overline{\rho}})$ and $(M^{\mathrm{min}}(K_v(1)))^*$ are finite length ${\mathbb{F}}[{\mathrm{GL}}_2({\mathbb{F}}_q)]$-modules, they must be isomorphic. Finally, note that $D_0({\overline{\rho}})$ is multiplicity free by [@BP Theorem 1.1(ii)].
{ "pile_set_name": "ArXiv" }
--- abstract: | Summary: ======== Biospectrogram is open-source software for the spectral analysis of DNA and protein sequences. The software can fetch (from the NCBI server), import, and manage biological data. One can analyze the data using Digital Signal Processing (DSP) techniques, since the software allows the user to convert the symbolic data into numerical data using $23$ popular encodings, apply popular transformations such as the Fast Fourier Transform (FFT), and export the result. The ability to export both encoding files and transform files as MATLAB .m files gives the user the option to apply a wide variety of DSP techniques. The user can also perform window analysis (with windows that either slide in the forward or backward direction or remain stagnant) with different window sizes, and search for meaningful spectral patterns with the help of the exported MATLAB files in a dynamic manner by choosing a time delay for the plot in Biospectrogram. Random encodings and user-defined encodings allow the software to explore many possibilities in spectral space. Availability: ============= Biospectrogram is written in Java and is freely available for download from http://www.guptalab.org/biospectrogram. The software has been optimized to run on Windows, Mac OS X and Linux. A user manual and a YouTube (product demo) tutorial are also available on the website. We are in the process of acquiring an open-source license for it. Contact: ======== [mankg@computer.org](mankg@computer.org) address: '$^{1}$ Laboratory of Natural Information Processing, Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, Gujarat, 382007 India. ' author: - | Nilay Chheda$^{1,\footnote{These authors contributed equally to the project and should be considered co-first authors}}$, Naman Turakhia $^{1,\footnotemark[1]}$, Manish K. Gupta $^1$[^1], Ruchin Shah$^{1}$ and Jigar Raisinghani$^{1}$ bibliography: - 'dspgen.bib' nocite: '[@1195219; @SilvermanLinsker; @zhangzhang; @citeulike:4180094; @939833; @1346354; @liao; @Yau2003; @citeulike:6778043; @4365821; @cristea; @rosen; @chakravarthy; @1227391; @DanCristea:2003:LSF:774474.774488; @Vaidyanathan05genomicsand]' title: 'Biospectrogram: a tool for spectral analysis of biological sequences' --- Introduction ============ Molecular biology has shown tremendous progress in the last decade because of the various genome projects producing vast amounts of biological data. This has resulted in the ENCODE project (http://encodeproject.org), which classifies the basic DNA elements of the human genome. This also gives us new insight into numerous molecular mechanisms. In order to understand this digital biological data, people use techniques from mathematics, computer science, etc. Digital signal processing (DSP) is a fundamental concept in information and communication technology (ICT). A natural question arises: “Can DSP techniques help us to understand digital biology?" It turns out that DSP techniques are playing a major role in biology and have given birth to a new branch called genomic signal processing [@shmulevich2007genomic]. To analyse genomic data, researchers first convert the symbolic data (e.g., DNA or protein sequences) into numerical data by applying a suitable map [@Kwan; @Arniker2012], and then apply signal processing transforms, such as the Fourier transform, to study the desired biological properties [@citeulike:3895919]. 
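To make the encode-then-transform workflow just described concrete, the following minimal Python sketch (not part of Biospectrogram itself; the example sequence is an arbitrary toy string) maps a DNA string to the four binary indicator (Voss) sequences discussed in the next section and computes their summed Fourier power spectrum:

```python
# Minimal sketch of the encode-then-transform workflow: a DNA string is mapped to
# the four binary indicator (Voss) sequences and their summed FFT power spectrum
# is computed.  Illustration only; independent of the Biospectrogram code base.
import numpy as np

def voss_encode(seq):
    """Return binary indicator arrays x_A, x_C, x_G, x_T for a DNA string."""
    arr = np.array(list(seq.upper()))
    return {base: (arr == base).astype(float) for base in "ACGT"}

def power_spectrum(indicators):
    """S[k] = sum over bases of |FFT(x_base)[k]|^2."""
    return sum(np.abs(np.fft.fft(x)) ** 2 for x in indicators.values())

dna = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"   # arbitrary toy sequence
spectrum = power_spectrum(voss_encode(dna))
print(spectrum[: len(dna) // 2])                  # one-sided power spectrum
```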
In this work, we present a tool, Biospectrogram, which helps researchers apply different encodings to biological data and apply various transformations to perform spectral analysis. The user can also export the files (encoded or transformed) to the popular MATLAB software [@MATLAB:2010] for direct analysis. Implementation and Features =========================== The tool Biospectrogram has four major components, viz. data collector, encode, transforms, and export $\&$ plot. One can use the tool in DNA or protein mode by using the switch button. The tool has two main windows, viz. a display window that shows the data (collected or encoded) and a work window that shows the output of the current operation (encoded or transformed data). The data collector module provides direct fetching of DNA data (in both FASTA and GenBank file formats) from the National Center for Biotechnology Information (NCBI) server, given an accession number from the user; the fetched sequence can then be encoded using the encode button. The user can also import files from a local machine or network, and can select a portion of the data in the window for further processing. One popular encoding map is the Voss representation [@1195219], which maps the nucleotides $A, C, G,$ and $T$ from DNA space into the four binary indicator sequences $x_A[n]$, $x_C[n]$, $x_G[n]$, and $x_T[n]$ showing the presence (e.g. $1$) or absence (e.g. $0$) of the respective nucleotides. Similar indicator maps are available for protein space. ![image](biospecwitharrows.png) The possible encodings ($23$ available in our tool) and transformations ($6$ available in our tool) are shown in Figure  \[fig:02\]. While applying an encoding, the user has to select the fetched file from the first drop-down list and the encoding scheme from the second. The FASTA file of the DNA sequence is shown in the display window and the encoded output is shown in the work window. After encoding the fetched DNA or protein sequence, one can apply the suitable transforms available in the tool (see Figure  \[fig:02\]). To apply other transforms and filters not available in our tool, one can export the encoded files as MATLAB .m files and carry out further analysis there. For an exhaustive search for a pattern, a window analysis can be done with our tool. The window button allows the user to set a window size; with the sliding window option the window is moved along the sequence in either direction (forward or backward), whereas with the stagnant window option the user selects a fixed portion of the sequence and obtains the power spectrum from all of its indicator sequences. By choosing an appropriate delay time in the preferences and exporting the transformation files as MATLAB .m files, one can plot the output of the transformations and observe the signal evolving automatically with the time delay set by the user. [^1]: to whom correspondence should be addressed
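A rough sketch of the sliding-window analysis described above (again independent of the Java implementation; the window size and step below are arbitrary illustrative choices) recomputes the indicator power spectrum on a fixed-size window moved along the sequence:

```python
# Sliding-window power spectra: the Voss-indicator spectrum is recomputed on a
# window of fixed size moved forward along the sequence, so spectral features can
# be tracked position by position.  Window and step values are arbitrary here.
import numpy as np

def voss_indicators(seq):
    arr = np.array(list(seq.upper()))
    return {base: (arr == base).astype(float) for base in "ACGT"}

def sliding_spectra(seq, window=120, step=30):
    ind = voss_indicators(seq)
    out = []
    for start in range(0, len(seq) - window + 1, step):
        s = sum(np.abs(np.fft.fft(x[start:start + window])) ** 2
                for x in ind.values())
        out.append((start, s))
    return out   # list of (window start position, power spectrum of that window)
```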
{ "pile_set_name": "ArXiv" }
--- abstract: 'The optical conductivity of graphene nanoribbons is derived analytically and exactly. It is shown that the absence of translational invariance along the transverse direction allows considerable intra-band absorption in a narrow frequency window that varies with the ribbon width, and lies in the THz range for ribbons 10–100nm wide. In this spectral region the absorption anisotropy can be as high as two orders of magnitude, which renders the medium strongly dichroic, and allows for a very high degree of polarization (up to $\sim85\%$) with just a single layer of graphene. Using a cavity for impedance enhancement, or a stack of few-layer nanoribbons, these values can reach almost $100\%$. This opens the prospect of employing graphene ribbon structures as efficient polarizers at far-IR and THz frequencies.' author: - 'F. Hipolito' - 'A. J. Chaves' - 'R. M. Ribeiro' - 'M. I. Vasilevskiy' - 'Vitor M. Pereira' - 'N. M. R. Peres' bibliography: - 'ribbon\_dichroism.bib' title: Enhanced Optical Dichroism of Graphene Nanoribbons --- Introduction {#introduction .unnumbered} ============ Dichroism refers to the ability of some materials to absorb light differently, depending on the polarization state of the incoming wave, and leads to effects such as the rotation of the plane of polarization of light transmitted through them [@BW]. This characteristic is the basis of several elementary optical elements like polarizers, wave retarders, etc., which are essential building blocks in optics, photo-electronics and telecommunications. Dichroism, as an intrinsic property of certain materials and substances, is also widely relevant for substance characterization in fields ranging from spectroscopy, to chemistry, to life sciences. A grid of parallel metallic wires is a well-known textbook example of a dichroic system, where unpolarized radiation becomes polarized perpendicularly to the wires if the wavelength is much larger than the wire separation [@Fizeau:1861]. This example shows how geometrical anisotropy can be engineered to induce dichroism in otherwise isotropic media. Here we unveil the *intrinsic* dichroic properties of graphene nanoribbons (GNR), and assess how effectively grids of GNRs can be used as polarizing elements. To our knowledge, the intrinsic anisotropic absorption characteristics of GNR have not been explored as we discuss here. ![ Illustration of the geometry under consideration, and potential device application, consisting of a grid of parallel GNRs perpendicular to the incoming wave. The grid can be in vacuum, at the interface between two different dielectric media (1,2), or even inside a metallic waveguide with sectional area $a\times b$. A plane-polarized incoming wave has its polarization rotated by an angle $\theta$ upon crossing the nanoribbon grating or, alternatively, an unpolarized wave emerges linearly polarized. []{data-label="fig:Illustration"}](1.pdf){width="48.00000%"} The motivation to explore GNRs in this context comes from a convergence of several critical properties. *First*, the optical absorption spectrum of pristine graphene is roughly constant over an enormous band of frequencies[@Science_Nair; @RMP10], from the THz to the near UV. This opens the unprecedented prospect of exploring its optical response to develop optical elements that can operate predictably and consistently in such broad frequency bands. 
Broadband polarizers, for example, are a much needed element in photonic circuits for telecommunications, and graphene can play here an important role [@KianPing:2011]. *Second*, the optical absorption of graphene is easily switched on and off by varying the electronic density, which can be easily achieved by electrostatic gating [@Basov:2008]. *Third*, due to the record breaking stiffness of the crystal lattice, one can suspend a graphene sheet and cut a grating of the thinnest nanowires (currently of the order of 10nm [@Lemme:2009]), which opens new avenues in ultra-narrow gratings, and upon which we base the system depicted in [Fig. \[fig:Illustration\]]{}. *Fourth*, since graphene is metallic and possesses no bulk (it is a pure surface), the rich phenomenology associated with surface plasmons-polaritons (SPP) is certainly unavoidable, further broadening the horizon of possibilities for optical applications [@Ju2011]. *Finally*, the atomic thickness of graphene results in a transparency of 97.7%. Hence, even if one is able to induce strong absorption along one direction, the overall transmissivity will still be large, which is important to maintain losses under control. Dichroism mechanism {#dichroism-mechanism .unnumbered} =================== The natural first step towards such possibilities consists in analyzing the intrinsic optical response of GNRs, to which we dedicate the remainder of this paper. We are interested in how the finite transverse dimension affects the optical absorption spectrum at low frequencies (IR and below), which is rather featureless in bulk graphene[@Science_Nair] (except for the $\omega=0$ Drude peak), but turns out to be much richer in nanoribbons. The situation we envisage is depicted in [Fig. \[fig:Illustration\]]{}, and consists in passing an electromagnetic wave across a grid of GNRs. For definiteness and technical simplicity we restrict our analysis to armchair (AC) nanoribbons, although our results do not depend on the specific chirality, as will be clear later. An important aspect to consider in GNRs has to do with how large edge disorder is expected to be, and to what extent it might mask the phenomena under discussion. To address this, while at the same time keeping as much analytical control over the results as possible, our calculations involve two steps. First, the frequency-dependent conductivity tensor $\sigma_{\alpha\beta}(\omega), \,(\alpha,\beta = x,y)$ of an AC GNR is derived exactly for free electrons governed by a nearest-neighbor tight-binding Hamiltonian (see below). We then perform ensemble averages of such $\sigma_{\alpha\beta}(\omega)$, where the ribbon width is the fluctuating parameter, and thus extract the overall response of the system accounting for “disorder”. This procedure hinges on the assumption that the leading impact of disorder in the optical response is captured by the broadening of the quasi 1D electronic bands, which is also achieved with an ensemble average of ribbons with fluctuating width. As discussed below, other generic disorder mechanisms (such as carrier density inhomogeneity or strain) are supposed to produce only small (of the order of a few percent) relative fluctuations of the observable properties of the ribbons. Moreover, such ensemble averaging over a distribution of ribbon’s widths is also close to the experimental situation, insofar as even state-of-the-art fabrication cannot control ribbon widths with atomic precision [@Xu:2011]. 
Thus, an array of ribbons cut out of a graphene sheet will always display a distribution of widths around a predefined target value ${\ensuremath{\langle{W}\rangle}}=W_0$. Technically, the conductivity of such an array of GNRs is given by ${\ensuremath{\langle{\sigma_{\alpha\beta}(\omega)}\rangle}}=\sum_{W}f(W)\sigma_{\alpha\beta} ^W(\omega)$, where $f(W)$ is the normal distribution for the ribbon width $W$, and $\sigma_{\alpha\alpha}^W(\omega)$ is the conductivity of a single ribbon of width $W$. Overall parametrizations are as follows. The natural energy scale is the hopping amplitude in bulk graphene: $t\simeq 2.7$eV [@RevModPhys_RevModPhys.81.109]. The chemical potential, $\mu$, determines the free carrier response and also sets the spectral limit for [*inter*-band ]{}transitions (at $T=0$K). Non-zero free carrier densities are the norm, and their amount depends on the fabrication and sample treatment procedure. They can range from $n_e\sim 10^{10}\,\text{cm}^{-2}$ to a few $10^{12}\,\text{cm}^{-2}$. Such densities correspond to $\mu$ varying roughly between $0.01t$ to $0.1t$, which is the interval we focus on below. Gating allows the carrier density to be easily tuned via field effect [@Novoselov:2004]. Ribbons are interchangeably characterized by their absolute width $W$, or by $N$, which counts the number of dimer rows along the transverse direction, and $W=\sqrt{3}(N-1)a/2\simeq 0.12\,N$nm, where $a\simeq 1.42$Årepresents the C–C distance. For the purposes of ensemble averaging, ribbon widths are uniformly distributed with a standard deviation that we take as constant: $\langle N^2 - {\ensuremath{\langle{N}\rangle}}^2 \rangle^{1/2}=10\,(\simeq1.2\,\text{nm})$. This is done to mimic the experimental limitations associated with the minimum feature size that can be achieved by lithographic means, and is presumably a constant number. All the calculations discussed below have been done for $T=300$K. We use the terms *intra-* or *inter-*band in reference to transitions occurring among subbands with the *same* or *opposite* sign of energy, respectively. The hopping amplitude sets the energy scale, and all quantities with dimensions of energy will be expressed in terms of $t$. For $\mu>0.1t$, and $N>100$ (18nm), the finite width of the ribbon does not significantly alter the relation between $\mu$ and $n_e$ from the one in bulk graphene. Hence, $n_e \simeq 7\times10^{14} (\mu/t)^2 \,\text{cm}^{-2}$. To be definite, for illustration purposes we will take $\mu=0.1t$ in most of the plots[@EndNote-5]. Conductivities are normalized to the universal value $\sigma_0=\pi e^2 / 2h$ of clean 2D graphene at low frequencies, and the incoming radiation has a wavelength much larger than the ribbon width $W$. Derivation of the Conductivity Tensor {#derivation-of-the-conductivity-tensor .unnumbered} ===================================== The derivation of the conductivity tensor of an armchair graphene ribbon starts with the consideration of the nearest neighbor tight-binding Hamiltonian describing the $\pi$ bands of graphene, and characterized by a hopping amplitude $t\simeq 2.7$eV [@RevModPhys_RevModPhys.81.109]. 
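The bookkeeping behind this parametrization and the ensemble average can be sketched as follows; `sigma_single` is a placeholder for the single-ribbon conductivity derived in the next section, and the numerical constants simply restate the relations quoted above:

```python
# Width/doping conversions and Gaussian ensemble weighting used in the text.
# sigma_single(omega, N) stands for the single-ribbon conductivity derived below.
import numpy as np

T_HOP = 2.7      # hopping t, in eV
A_CC = 0.142     # C-C distance a, in nm

def width_nm(N):
    """W = sqrt(3) (N - 1) a / 2, i.e. roughly 0.12 N nm."""
    return np.sqrt(3.0) * (N - 1) * A_CC / 2.0

def carrier_density_cm2(mu_over_t):
    """Bulk-like estimate n_e ~ 7e14 (mu/t)^2 cm^-2, valid for wide ribbons."""
    return 7e14 * mu_over_t ** 2

def ensemble_average(sigma_single, omega, N_mean=150, N_std=10):
    """<sigma>(omega) = sum_N f(N) sigma_single(omega, N), f a normal weight."""
    Ns = np.arange(N_mean - 4 * N_std, N_mean + 4 * N_std + 1)
    f = np.exp(-0.5 * ((Ns - N_mean) / N_std) ** 2)
    f /= f.sum()
    return sum(w * sigma_single(omega, N) for w, N in zip(f, Ns))

print(width_nm(150), carrier_density_cm2(0.1))   # width (nm) and n_e (cm^-2)
```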
The ribbon eigenstates have the analytical form [@Katsunori; @Zozoulenko] $ {\left|}\Psi_{\ell,q,\lambda} {\right\rangle}= \mathcal{N} \sum_{n,m}e^{-iq(m+n/2)}\sin \left(k_\ell n\right) \times \left({\left|}A,n,m{\right\rangle}+ \lambda e^{-i\theta_{\ell,q}} {\left|}B,n,m{\right\rangle}\right) $, where $k_\ell=\pi\ell/(N+1)$ is the quantum number associated with transverse quantization ($\ell=1,2,\ldots,N$), $\mathcal{N}=1/\sqrt{N+1}$, $\lambda=\pm 1$ defines the valence ($\lambda=-1$) or conduction ($\lambda=+1$) bands, ${\left|}A,n,m{\right\rangle}$ is the Wannier state at sub-lattice $A$ of the unit cell at position $\bm R=n\,\bm{n}+m\,\bm{m}$ (see [Fig. \[fig:Illustration\]]{}), $N$ is the number of unit cells along the finite $\bm{n}$ direction, and $q$ is the dimensionless momentum along $\bm{m}$, whose value is within the range $-\pi<q\le\pi$. The phase difference between sub-lattice amplitudes is $$\theta_{\ell,q}=\arctan\frac{2\cos k_\ell\sin\left(q/2\right)} {1+2\cos k_\ell\cos\left(q/2\right)} \label{eq:theta} .$$ This is sufficient to determine the optical conductivity from Kubo’s formula [@PRL10]: $$\begin{gathered} \sigma_{\alpha\beta} \!=\! \frac{2ie^2}{\omega S} \! \sum_{\ell_1,\ell_2,q}\sum_{\lambda_1,\lambda_2}\! \frac{f(E_{\ell_1,q,\lambda_1})-f(E_{\ell_2,q,\lambda_2})} {\hbar\omega-(E_{\ell_2,q,\lambda_2}-E_{\ell_1,q,\lambda_1})+i0^+} \\ \times {\left\langle}\Psi_{\ell_1,q,\lambda_1}{\right|}v_{\alpha}{\left|}\Psi_{\ell_2,q,\lambda_2}{\right\rangle}{\left\langle}\Psi_{\ell_2,q,\lambda_2}{\right|}v_{\beta}{\left|}\Psi_{\ell_1,q,\lambda_1}{\right\rangle}\label{eq:Kubo} ,\end{gathered}$$ where $S$ is the area of the ribbon, $f(x)$ the Fermi distribution function, and ${\left\langle}\Psi_{\ell,q,\lambda}{\right|}v_{\alpha}{\left|}\Psi_{\ell',q,\lambda'}{\right\rangle}$ is the matrix element of the $\alpha$ component of the velocity operator [@PhysRevB.78.085432]. Since the energy scale is determined by $t$, let us introduce a dimensionless energy parameter $\Omega = \hbar\omega/t$. Translation invariance along the longitudinal direction dictates that the matrix elements of the velocity $v_x$ are diagonal in $q$ and $\ell$, leading to $\sigma_{xx}$ of the form $$\Re\frac{\sigma_{xx}}{\sigma_0} = \mathcal{N}_x \sum_{\ell_0}\delta f_{q_0,\ell_0} M^2_x(q_0,\ell_0) , \label{eq:sigxx}$$ where $\delta f_{q_0,\ell_0} = f(E_{\ell_0,q_0,-})- f(E_{\ell_0,q_0,+})$, $\mathcal{N}_x = 4/3\sqrt{3}(N-1)$, and $q_0$ is given by $$q_0 = 2\arccos\frac{(\Omega/2)^2-1-4\cos^2k_{\ell_0}}{4\cos k_{\ell_0}} . \label{eq:q0xx}$$ The sum in [Eq. (\[eq:sigxx\])]{} is restricted to those values of $\ell_0$ such that $q_0 \in \mathbb{R}$. Finally, $M^2_x(q_0,\ell_0)$ reads $$M^2_x(q_0,\ell_0) = \frac{ \bigl[\cos\theta_{\ell_0,q_0} \!\!-\! \cos(\theta_{\ell_0,q_0} \!\!\! - \!q_0/2) \cos k_{\ell_0} \bigr]^2 }{ \sin(q_0/2)\cos k_{\ell_0} } \label{eq:Mx} .$$ Only *inter*-band transitions (from the sub-bands with $\lambda=-1$ to $\lambda=+1$) contribute to $\sigma_{xx}$. The analytical expression for $\sigma_{yy}$ is slightly more cumbersome than the previous one, due to the absence of translation invariance along that direction. As a result, (i) the matrix elements of the operator $v_y$ are non-diagonal in the sub-band index $\ell$, and (ii) there are both *intra*-band ($\lambda=\lambda'$) and *inter*-band ($\lambda\ne\lambda'$) contributions to the transverse conductivity.
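Before turning to $\sigma_{yy}$, the longitudinal response above can already be evaluated directly. The sketch below is my own (not the authors' code) and follows Eqs. (\[eq:sigxx\])–(\[eq:Mx\]) literally, with a few stated assumptions: the subband energies are taken as $E_{\ell,q,\lambda}=\lambda\,t\,\epsilon_{\ell,q}$, the prefactor is read as $\mathcal{N}_x=4/[3\sqrt{3}(N-1)]$, the phase is evaluated with a two-argument arctangent, and an absolute value is placed on the denominator of $M_x^2$ to keep the spectral weight positive for subbands with $\cos k_\ell<0$.

```python
# Sketch (not the authors' code) of Re[sigma_xx]/sigma_0 for a single armchair
# ribbon, following Eqs. (sigxx), (q0xx) and (Mx).  Assumed conventions noted above.
import numpy as np

def eps(k, q):                     # dimensionless subband energy eps_{l,q}
    return np.sqrt(1 + 4*np.cos(k)*np.cos(q/2) + 4*np.cos(k)**2)

def theta(k, q):                   # sublattice phase, resolved with arctan2
    return np.arctan2(2*np.cos(k)*np.sin(q/2), 1 + 2*np.cos(k)*np.cos(q/2))

def fermi(E, mu, kT):
    z = np.clip((E - mu)/kT, -60.0, 60.0)
    return 1.0/(np.exp(z) + 1.0)

def sigma_xx(Omega, N=150, mu=0.1, kT=0.0258/2.7):
    """Re sigma_xx / sigma_0 at photon energy Omega = hbar*omega/t (all in units of t)."""
    total = 0.0
    for l in range(1, N + 1):
        k = np.pi*l/(N + 1)
        ck = np.cos(k)
        if abs(ck) < 1e-12:
            continue
        x = ((Omega/2)**2 - 1 - 4*ck**2)/(4*ck)
        if not 0.0 <= x < 1.0:     # no real q_0 inside the Brillouin zone
            continue
        q0 = 2*np.arccos(x)
        df = fermi(-eps(k, q0), mu, kT) - fermi(eps(k, q0), mu, kT)
        Mx2 = (np.cos(theta(k, q0)) - np.cos(theta(k, q0) - q0/2)*ck)**2 \
              / abs(np.sin(q0/2)*ck)
        total += df*Mx2
    return 4.0/(3*np.sqrt(3)*(N - 1))*total

if __name__ == "__main__":
    for Om in (0.1, 0.2, 0.3, 0.5):
        print(Om, sigma_xx(Om))
```

The $\delta f$ factor suppresses transitions below $\hbar\omega\approx2\mu$, which is what produces the temperature-broadened step onset discussed later.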
The calculation is, nevertheless, straightforward, yielding $$\Re\frac{\sigma_{yy}}{\sigma_0} = \mathcal{N}_y \sum_{\ell_1,\ell_2}\sum_{\lambda,\lambda'} \mathcal{P}_{\ell_1,\ell_2} \, \delta f_{q_0,\ell_1,\ell_2}^{\lambda,\lambda'} \, M^2_y(q_0,\ell_1,\ell_2) \label{eq:sigyy} ,$$ where $\mathcal{N}_y=4/\sqrt{3}(N+1)(N^2-1)$, $\delta f_{q_0,\ell_1,\ell_2}^{\lambda,\lambda'}=n_F(E_{\ell_1,q_0,\lambda })- n_F(E_{\ell_2,q_0,\lambda'})$, and $\mathcal{P}_{\ell_1,\ell_2}=1-(-1)^{\ell_1+\ell_2}$. This latter factor entails the selection rule for transitions among sub-bands $\ell_1+\ell_2=$ odd. The last factor is $$\begin{gathered} M^2_y(q_0,\ell_1,\ell_2) = \frac{ \sin^2k_{\ell_1}\sin^2k_{\ell_2} }{ \sin^2[(k_{\ell_1}+k_{\ell_2})/2]\sin^2[(k_{\ell_1}-k_{\ell_2})/2] }\\ \times \frac{\epsilon_{\ell_1,q_0}\epsilon_{\ell_2,q_0} \vert\sin(q_0/2)\vert^{-1} (\hbar\omega)^{-1} } {\left\vert \cos k_{\ell_1}\epsilon_{\ell_2,q_0}+\lambda\lambda' \cos k_{\ell_2}\epsilon_{\ell_1,q_0} \right\vert} \times \mathcal{C}_{q_0,\ell_1,\ell_2} \label{eq:My} ,\end{gathered}$$ where $\mathcal{C}_{q_0,\ell_1,\ell_2} = 1 + \lambda\lambda'\cos(\theta_{\ell_1,q_0}+\theta_{ \ell_2,q_0}-q_0)$, and $$q_0 = 2 \arccos \frac{(a_2-a_1)Q_b+\Omega^2(b_1+b_2)\pm Q_c}{(b_1-b_2)^2} , \label{eq:q0_yy}$$ with $Q_c = 2\sqrt{\Omega^4 b_1 b_2+\Omega^2 Q_b Q_a}$, $Q_b = b_1 - b_2$, $Q_a = b_1 a_2 - b_2 a_1$, $a_i = 1 + 4\cos^2k_{\ell_i}$ and $b_i = 4\cos k_{\ell_i}$. The sum in [Eq. (\[eq:sigyy\])]{} is also restricted to those $\ell_1,\ell_2$ such that $q_0\in\mathbb{R}$, and to $\lambda\le\lambda'$ (photon absorption only). The expressions in Eqs. (\[eq:sigxx\]) and (\[eq:sigyy\]) are our central result, and from them follow all the averages and other physical quantities described and analyzed below. ![ The three non-zero contributions for ${\ensuremath{\langle{\sigma_{\alpha\alpha}(\omega)}\rangle}}$ discussed in the text, showing a very strong anisotropy in the infrared. In this example the optical conductivities are calculated for an ensemble of ribbons having $ {\ensuremath{\langle{N}\rangle}} = 150$ ($\simeq 18.5$nm), $\sqrt{{\left\langle}N^2 -{\ensuremath{\langle{N}\rangle}}^2{\right\rangle}} = 10$ ($\simeq 1.2$nm). We further used $T=300K$ and $\mu = 0.1$ ($\simeq 0.3$eV). The [*inter*-band ]{} contributions essentially follow the bulk 2D behavior, with the expected temperature-broadened step onset at $\hbar\omega = 2\mu$. In contrast, the [*intra*-band ]{}contribution for the transverse conductivity ($\sigma_{yy}$) is strongly peaked at low energies. Also note that the vertical axis is *truncated* for clarity, and that $\sigma_{yy}$ peaks at nearly $28\sigma_0$ for this ensemble. The inset shows the same three quantities, but for a single ribbon of $N=150$, rather than the ensemble. []{data-label="fig:Sigma"}](3.pdf){width="50.00000%"} Anisotropic Optical Absorption {#anisotropic-optical-absorption .unnumbered} ============================== Lateral confinement reduces the energy spectrum of GNRs to a set of subbands, each reflecting the dispersion of an effective 1D mode $\ell$ ($\ell=1,2,\ldots,N$) propagating longitudinally with momentum $q$: $E_{\ell,q,\lambda}=\lambda\,t\,\epsilon_{\ell,q}$, where $\lambda=\pm1$ defines the valence and conduction subbands, $$\epsilon_{\ell,q} = \sqrt{1+4\cos k_\ell\cos(q/2)+4\cos^2k_\ell} \label{eq:Dispersion} ,$$ and $k_\ell$ is the transverse quantized momentum: $k_\ell=\pi\ell/(N+1)$.
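For concreteness, the subband structure entering both conductivities can be tabulated directly from Eq. (\[eq:Dispersion\]); the short sketch below does this for a few modes near the band center of an $N=150$ ribbon (reading the energies as $E_{\ell,q,\lambda}=\lambda\,t\,\epsilon_{\ell,q}$ is my assumption, consistent with the tight-binding model above).

```python
# Sketch: subband energies from Eq. (Dispersion) for an armchair ribbon with N = 150,
# in units of t.  The band edge at q = 0 is |1 + 2*cos(k_l)|.
import numpy as np

def eps(N, l, q):
    k = np.pi*l/(N + 1)
    return np.sqrt(1 + 4*np.cos(k)*np.cos(q/2) + 4*np.cos(k)**2)

N = 150
for l in (99, 100, 101, 102, 103):
    e0, epi = eps(N, l, 0.0), eps(N, l, np.pi)
    print(f"l={l:3d}  eps(q=0)={e0:.4f}  eps(q=pi)={epi:.4f}")
```

The subbands with edges closest to zero, around $\ell\approx2(N+1)/3$, are the ones that straddle a small chemical potential and control the intra-band absorption analyzed below.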
Consequently, the density of states is dominated by Van Hove singularities (VHS) that develop at $q=0$ for each subband [@Fujita:1996; @Katsunori; @Wakabayashi:1999]. Such sharp spectral features translate into strong optical absorption for ideal GNRs, but are readily smoothed out by edge or bulk disorder and/or temperature in real systems [@EndNote-4]. Our ensemble averaging has the same effect. In [Fig. \[fig:Sigma\]]{} we show the averages ${\ensuremath{\langle{\sigma_{xx}}\rangle}}$ and ${\ensuremath{\langle{\sigma_{yy}}\rangle}}$ for an ensemble with ${\ensuremath{\langle{N}\rangle}}=150$, and finite chemical potential: $\mu=0.1$. This particular value of chemical potential was chosen to allow a clear distinction between the [*inter*-band ]{}and [*intra*-band ]{}contributions to the conductivity, so as to better illustrate the main features of the absorption spectrum. As a consequence of time reversal symmetry, only the diagonal components of $\sigma_{\alpha\beta}$ in the coordinate system of [Fig. \[fig:Illustration\]]{} are non-zero. Translation invariance along the longitudinal ($x$) direction implies that only [*inter*-band ]{}transitions contribute to $\sigma_{xx}(\omega)$, as derived explicitly above. Consequently, ${\ensuremath{\langle{\sigma_{xx}(\omega)}\rangle}}$ reproduces the bulk 2D behavior, as is clearly seen in [Fig. \[fig:Sigma\]]{}. For the analysis of the transverse conductivity, ${\ensuremath{\langle{\sigma_{yy}(\omega)}\rangle}}$, it is convenient to isolate the *inter-* and *intra-*band contributions: ${\ensuremath{\langle{\sigma_{yy}(\omega)}\rangle}} = {\ensuremath{\langle{\sigma_{yy}^\text{inter}(\omega)}\rangle}} + {\ensuremath{\langle{\sigma_{yy}^\text{intra}(\omega)}\rangle}}$ (the latter is allowed since along the transverse direction the electron scatters off the ribbon edges). Whereas ${\ensuremath{\langle{\sigma_{yy}^\text{inter}}\rangle}}$ featurelessly follows ${\ensuremath{\langle{\sigma_{xx}}\rangle}}$ (and hence the bulk 2D behavior), its [*intra*-band ]{}counterpart displays a rather strong feature at low energies which, for this specific example, nearly reaches 30 times the universal value $\sigma_0$. Some aspects of [Fig. \[fig:Sigma\]]{} are worth underlying. Firstly, it is evident that, despite averaging to the same step-wise $\omega$-dependence, ${\ensuremath{\langle{\sigma_{yy}^\text{inter}(\omega)}\rangle}}$ is much smoother than ${\ensuremath{\langle{\sigma_{xx}^\text{inter}(\omega)}\rangle}}$, even though the averages are over the same ensemble. This can be traced to the fact that, for each individual ribbon, only $N$ symmetric transitions ($-E \to +E$) contribute to $\sigma_{xx}^\text{inter}(\omega)$, whereas $\sigma_{yy}^\text{inter}(\omega)$ includes $\mathcal{O}(N^2)$ transitions among almost all pairs of subbands. Consequently, the latter has many more absorption singularities, but much weaker, by conservation of spectral weight (this is explicitly shown in the inset of [Fig. \[fig:Sigma\]]{}). The averaging is thus more efficient in washing out the structure of VHSs in ${\ensuremath{\langle{\sigma_{yy}^\text{inter}(\omega)}\rangle}}$. Secondly, the low-energy peak in ${\ensuremath{\langle{\sigma_{yy}^\text{intra}(\omega)}\rangle}}$ can be already identified from a single ribbon (inset). Its origin is simple to understand with reference to [Fig. \[fig:Spectrum\]]{}. 
Since the band structure consists of a set of discrete subbands, the chemical potential will always be straddled by two of them at $q=0$, such that $E_{\ell,q=0,\lambda}<\mu<E_{\ell+1,q=0,\lambda}$. Given that transitions $\ell\to\ell+1$ are allowed in $\sigma_{yy}^\text{intra}$, one expects an absorption peak at $\hbar\omega \approx |E_{\ell,q,\lambda}-E_{\ell+1,q,\lambda}|$. Moreover, as per [Eq. (\[eq:My\])]{} the matrix element decays rapidly with the difference in band index, so that the transitions between the two bands closest to $\mu$ completely dominate $\sigma_{yy}^\text{intra}$. From [Fig. \[fig:Spectrum\]]{} it is clear that there are always two pairs of such bands, whose energy difference at $q=0$ is $\hbar\,\delta\omega_{1,2}\approx\pi \sqrt{3-\mu^2\pm2\mu}/(N+1)$. Since we are interested in situations where $\mu\ll 1$, the [*intra*-band ]{}peaks are solely determined by the ribbon geometry: $\hbar\omega_\text{max}\approx\pi \sqrt{3}/(N+1)$. This can be confirmed in the inset of [Fig. \[fig:P-vs-N\]]{} for ensembles with different ${\ensuremath{\langle{N}\rangle}}$, and introduces an element of *predictability* and *tunability* with respect to the frequency band where the optical absorption is highly enhanced. In other words, given the frequency of operation desired for a given application, one can select the appropriate average ribbon width that yields the strongest optical anisotropy at that target frequency. Another relevant detail to notice is that, as seen in [Fig. \[fig:Sigma\]]{}, the absorption peak in $\sigma_{yy}^\text{intra}$ is much more resilient to the ensemble averaging (or level broadening) than all the other transitions coming from [*inter*-band ]{}processes: the averaging readily washes out the VHS features, but leaves the peak in ${\ensuremath{\langle{\sigma_{yy}^\text{intra}}\rangle}}$ quite well defined and intense. In the case shown in [Fig. \[fig:Sigma\]]{}, ${\ensuremath{\langle{\sigma_{yy}^\text{intra}}\rangle}}$ peaks at a few dozen times the value of the longitudinal ${\ensuremath{\langle{\sigma_{xx}}\rangle}}$ The reason for this is very simple to understand qualitatively, and is twofold. On the one hand, since there are always two resonant conditions very close in frequency (for example, $\delta\omega_1$ and $\delta\omega_2$ in [Fig. \[fig:Spectrum\]]{}), the shape of the feature in $\sigma_{yy}^\text{intra}$ has a double peak structure. To show this explicitly, in [Fig. \[fig:DoublePeak\]]{} we present a close-up of $\sigma_{yy}^\text{intra}$ for the single ribbon with $N=150$ previously shown in the inset of [Fig. \[fig:Sigma\]]{}: the double peak structure is self-evident. In addition to that, the transition processes contributing to $\sigma_{yy}^\text{intra}$ are quite different from the ones that contribute to $\sigma_{xx}$, or $\sigma_{yy}^\text{inter}$. In a single independent ribbon, the longitudinal conductivity is dominated by [*inter*-band ]{}transitions among sub-bands which have an inverted dispersion with respect to each other (see [Fig. \[fig:DoublePeak\]]{}(b) for an illustration). Consequently the resonant condition occurs only at the van Hove point, leading to the very sharp van Hove absorption peaks in $\sigma_{xx}$ that we see in the inset of [Fig. \[fig:Sigma\]]{}. In contrast, the processes contributing the most to $\sigma_{yy}^\text{intra}$ involve transitions among *nearly parallel* sub-bands \[[Fig. 
\[fig:DoublePeak\]]{}(b)\], thus allowing a finite density of momentum states to contribute to the resonance, and implying a larger joint density of states. This makes the absorption feature in $\sigma_{yy}^\text{intra}$ broader than the van Hove-type peaks associated with $\sigma_{xx}$. The consequence of this is that, when one considers the ensemble averaging, the sharp van Hove peaks in the longitudinal conductivity will be slightly displaced with the changing $N$ within the ensemble, and are rapidly washed out. The double-peak structure, combined with the broader *parallel*-dominated absorption, *protects* the transverse absorption peak with respect to the level broadening, thereby resulting in an absorption feature that is much more robust. ![ The degree of polarization $\cal P(\omega)$ in the low energy region, for ribbons of different average width (in unit cells) ${\ensuremath{\langle{N}\rangle}}$ and $\mu=0.1$, $T=300K$ (for reference, ${\ensuremath{\langle{N}\rangle}}=\{75,\,150,\,375,\,750,\,1500\}\;\Leftrightarrow {\ensuremath{\langle{W}\rangle}}=\{9,\,18,\,46,\,92,\,184\}$nm). The inset shows the position of the most prominent peak in ${\ensuremath{\langle{\sigma_{yy}^\text{intra}}\rangle}}$ as a function of ${\ensuremath{\langle{N}\rangle}}$ and $\mu$. The $\mu$ dependence is expectedly weak, while the peak position is seen to follow the analytical form described in the text. []{data-label="fig:P-vs-N"}](5.pdf){width="50.00000%"} To assess the polarizing efficiency of a single graphene ribbon we calculate the optical transmission amplitude, which is the ratio of the electric field amplitudes of the incoming and transmitted fields: $t_\alpha(\omega) = E^{(t)}_\alpha / E^{(i)}_\alpha $, ($\alpha=x,y$). For radiation impinging normally upon an ensemble of GNRs separating medium 1 and medium 2 ([Fig. \[fig:Illustration\]]{}), the transmission amplitude reads explicitly $$t_\alpha(\omega) = \frac{2\,Z^{(2)}} {Z^{(1)}+Z^{(2)} [1 + Z^{(1)}{\ensuremath{\langle{\sigma_{\alpha\alpha}(\omega)}\rangle}} ] } \label{eq:Transmission} ,$$ where $Z=\sqrt{\mu_0\mu/\epsilon_0\epsilon}$ is the impedance of each medium. This result is obtained in the conventional way, by assuming that the system of graphene ribbons is a metallic sheet of zero thickness, and imposing the boundary conditions of the electromagnetic field at the interface. Knowledge of $t_\alpha(\omega)$ allows for the calculation of the *degree of polarization* (DP, $\mathcal{P}(\omega)$), or the rotation of the plane of linear polarization ($\theta = \theta_f-\theta_i$): $$\mathcal{P}(\omega) = \frac{|t_x|^2 - |t_y|^2}{|t_x|^2 + |t_y|^2} ,\qquad \tan\theta_f = \frac{t_y(\omega)}{t_x(\omega)} \tan\theta_i \label{eq:Polarizability} ,$$ This definition is useful for unpolarized incoming light where $\mathcal{P} = \pm 1$ reflects full polarization of the incoming wave. For an already polarized incoming wave, the second equation shows that the effect naturally depends on the orientation of the incoming polarization with respect to the ribbon principal directions. With $\mathcal{P}(\omega)$ we can immediately identify the degree of dichroism by how close $|\mathcal{P}(\omega)|$ is to unity (i.e. how close to an ideal polarizer are we). In [Fig. \[fig:P-vs-N\]]{} we plot $\mathcal{P}(\omega)$ for different ribbon widths. It can be clearly seen that, DP in excess of 50% can be achieved already with ribbons $45$nm wide. 
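To make the magnitude of the effect concrete, the following sketch evaluates Eq. (\[eq:Transmission\]) and $\mathcal{P}(\omega)$ for a free-standing grid in vacuum; the peak conductivity values and the use of the array-averaged ${\ensuremath{\langle{\sigma}\rangle}}$ directly as a sheet conductivity are my simplifications, with the numbers read off [Fig. \[fig:Sigma\]]{} rather than recomputed.

```python
# Sketch of Eq. (Transmission) and of the degree of polarization for a free-standing
# grid in vacuum.  Conductivities are supplied in units of sigma_0; the peak values
# used below are representative numbers, not computed here.
import numpy as np

SIGMA0 = np.pi*(1.602176634e-19)**2/(2*6.62607015e-34)   # pi e^2 / 2h, in siemens
Z_VAC  = 376.730                                         # impedance of free space, ohm

def transmission(sigma_over_sigma0, Z1=Z_VAC, Z2=Z_VAC):
    s = sigma_over_sigma0*SIGMA0
    return 2*Z2/(Z1 + Z2*(1 + Z1*s))

def degree_of_polarization(sxx, syy, Z1=Z_VAC, Z2=Z_VAC):
    tx = transmission(sxx, Z1, Z2)
    ty = transmission(syy, Z1, Z2)
    return (abs(tx)**2 - abs(ty)**2)/(abs(tx)**2 + abs(ty)**2)

print(degree_of_polarization(sxx=1.0, syy=28.0))   # ~0.26 for these inputs
```

With ${\ensuremath{\langle{\sigma_{yy}}\rangle}}\simeq28\,\sigma_0$ and ${\ensuremath{\langle{\sigma_{xx}}\rangle}}\simeq\sigma_0$ this gives $\mathcal{P}\approx0.26$, comparable to the ${\ensuremath{\langle{N}\rangle}}=150$ curve of [Fig. \[fig:P-vs-N\]]{}; the much larger transverse peaks of wider ensembles are what push $\mathcal{P}$ past the 50% quoted above.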
We underline that *this is the degree of polarization produced by an atomically thin ensemble of ribbons*, which makes the magnitude of the effect even more striking! Even though the transparency of infinite 2D graphene is as large as 97.7%, the confinement-induced anisotropy can be so large as to almost completely suppressing one of the field projections. The same figure also confirms that the optimum DP is achieved at a width-dependent frequency $\omega_\text{max}$ which, as discussed above, has a simple form (inset of [Fig. \[fig:P-vs-N\]]{}). However it is also clear that this tunability is at the expense of the absolute amount of DP (narrower ribbons $\to$ larger $\omega_\text{max}$ $\to$ smaller $\mathcal{P}(\omega_\text{max})$). Nevertheless, it has been experimentally confirmed that the optical absorption of $N$-layer graphene is simply proportional to $N$, from the bilayer to graphite[@EndNote-3] for most of the low energy range [@Kuzmenko:2008; @Science_Nair; @Heinz:2010]. This means that the effect reported here can be significantly magnified by using few-layer graphene ribbons, or simply superimposing a few independent layers onto each other. In addition, the form of [Eq. (\[eq:Transmission\])]{} given in terms of the impedance of the media suggests that additional parameter freedom can be achieved if the wave propagates inside a metallic waveguide. As is well known, electromagnetic propagation in waveguides is restricted to normal TEM, TM or TE modes. Each of the latter two has a characteristic dispersion that is different from the free-space relation $\omega = c k / n$. For the purpose of analyzing transmission and reflection amplitudes in a situation as depicted in [Fig. \[fig:Illustration\]]{}, the effect of the waveguide can be absorbed in a renormalized and frequency-dependent impedance, $Z(\omega)$. For example, the mode TE$_{mn}$ has a characteristic impedance [@Jackson] $Z_{mn}(\omega)=Z \omega / \sqrt{\omega^2 - \omega_{mn}^2}$, where $\omega_{mn}^2 = (c^2\pi^2 / \mu\epsilon) [(m/a)^2+(n/b)^2]$. Hence each mode can only propagate if $\omega$ is beyond the mode cut-off frequency $\omega_{mn}$, and this is frequently used to select/restrict the propagating modes by adapting the geometry of the waveguide. In our example we could take a square cross section ($a=b$), in which case the two degenerate modes TE$_{10}$ and TE$_{01}$ can be combined into an arbitrary incoming plane polarization [@EndNote-1]. In that case, if $\omega_{10} < \omega < \omega_{11}$, only the modes TE$_{10,01}$ propagate in the waveguide, and $Z_{10}(\omega)=Z \omega / \sqrt{\omega^2 - \omega_{10}^2}$. The cavity setup is interesting and useful for two reasons, which can be understood by inspection of [Fig. \[fig:P-cavity\]]{}: (i) on one hand, by tuning the cavity dimensions so that $\omega_{10}\lesssim\omega_\text{max}(N)$ one can precisely cut-off the DP below $\omega_{10}$, creating a well defined band of frequencies where the system displays high DP; (ii) on the other hand, since $Z_{10}(\omega) > Z$ (and, in particular $Z_{10}(\omega\gtrsim\omega_{10}) \gg Z$), the cavity highly magnifies the DP, even for a monolayer system. Taking as illustration the ribbon ensemble with ${\ensuremath{\langle{N}\rangle}} = 750$ shown in [Fig. \[fig:P-cavity\]]{}, proper tuning of the cut-off frequency can introduce a clear and well defined band filter for $\mathcal{P}(\omega)$, while simultaneously amplifying the magnitude of $\mathcal{P}(\omega)$ in comparison with the value for a free wave. 
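A rough numerical illustration of this cavity enhancement (again a sketch with assumed numbers, not a reproduction of [Fig. \[fig:P-cavity\]]{}): replacing the vacuum impedance on both sides of the grid by the TE$_{10}$ impedance just above cut-off multiplies $Z\sigma$ by $Z_{10}/Z\gg1$ and correspondingly boosts $\mathcal{P}$.

```python
# Sketch: the same transmission problem with the waveguide TE10 impedance
# Z_10(w) = Z * w / sqrt(w^2 - w10^2) on both sides of the grid, evaluated just
# above cut-off.  The 5% detuning and the conductivity values are assumptions.
import numpy as np

SIGMA0 = np.pi*(1.602176634e-19)**2/(2*6.62607015e-34)   # S
Z_VAC  = 376.730                                         # ohm

def dp(sxx, syy, Z):
    t = lambda s: 2*Z/(2*Z + Z*Z*s*SIGMA0)               # Eq. (Transmission) with Z1 = Z2 = Z
    tx, ty = t(sxx), t(syy)
    return (abs(tx)**2 - abs(ty)**2)/(abs(tx)**2 + abs(ty)**2)

w_over_w10 = 1.05
Z10 = Z_VAC*w_over_w10/np.sqrt(w_over_w10**2 - 1)        # ~1.2 kOhm
print(dp(1.0, 28.0, Z_VAC), dp(1.0, 28.0, Z10))          # free wave vs waveguide
```

Operating at $\omega=1.05\,\omega_{10}$ raises the free-wave example of the previous sketch from $\mathcal{P}\approx0.26$ to roughly $0.6$, consistent with the amplification just described.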
($\mathcal{P}(\omega)$ climbs beyond 80% in the entire frequency window). Lastly, this enhancement of the impedance can also make $\mathcal{P}(\omega)$ more *step-like* within the strongly amplified regime, rather than *peak-like*, as implied by the right panel of [Fig. \[fig:P-cavity\]]{}. ![ The effect of a metallic waveguide of square cross-section in the degree of polarization $\mathcal{P}(\omega)$ for two ensembles of ribbons (${\ensuremath{\langle{N}\rangle}}=150,\,750$). Each panel shows $\mathcal{P}(\omega)$ for an incoming wave made of a combination [@EndNote-1] of the two lowest degenerate modes TE$_{10,01}$, in vacuum (black) and in waveguides (colors) with different geometries, *i.e.* different cut-off frequency $\omega_{10}$. Each $\omega_{10}$ is marked by a dot at the corresponding w in the horizontal axis and a unique color. []{data-label="fig:P-cavity"}](6.pdf){width="50.00000%"} Discussion {#discussion .unnumbered} ========== The optical absorption of a ribbon is seen here to be highly anisotropic on account of the new [*intra*-band ]{}channel made possible by the finite transverse direction, and the resulting electron scattering at the ribbon edges. Recent experiments do show that the transmission spectrum of graphene ribbon arrays is rather different for light polarized parallel and perpendicularly to the ribbon length, with the latter dominated by a plasmon absorption resonance at $\sim 3$THz [@Ju2011]. However, these experiments pertain to ribbons much wider ($\gtrsim1\,\mu\text{m}$) than the ones envisaged here ($\lesssim 50\,\text{nm}$), such that their spectrum is effectively continuous. Naturally, in the limit of wide ribbons ($N\to\infty$), the peak of $\sigma_{yy}$ in [Fig. \[fig:Sigma\]]{} simultaneously narrows and moves towards $\omega=0$, where it becomes the Drude singularity that we expect for an infinite and disorder-free system. Indeed, the easiest way to understand the sharp feature of $\sigma_{yy}$ at low energies is to see it as a usual Drude peak that has been shifted to finite $\omega$ by making the system finite along the transverse direction, thus allowing [*intra*-band ]{}transitions of finite frequency. The issue of how to actually manufacture a grid of narrow GNRs with consistent and predictable width has been addressed earlier. It can be achieved by means of high precision patterning using a He-ion beam microscope in lithography mode [@Lemme:2009], or more standard etch masks able to cut down to the 10nm scale [@Bai:2009]. An alternative to cutting ribbons out of graphene sheets is the recently developed technique of unzipping carbon nanotubes (CNTs) [@Kosynkin:2009; @Hongjie:2010; @Crommie:2011]. Nowadays it is possible to produce batches of CNTs with similar radius [@HongjieDai:2007], and so this would allow for the production of high quality ribbons without edge disorder. Another alternative, that completely bypasses patterning, consists in inducing effective nanoribbons by engineering a periodic distribution of strain in a bulk graphene sheet, such that the strain-induced confinement mimics the ribbon quantization features [@vitorbreak]. As always in the context of GNRs, the role of disorder needs to be addressed, and perhaps electron-electron interactions as well [@PRL10]. 
It is known that disorder can affect and even destroy many intrinsic features, such as the edge modes in zig-zag (ZZ) GNRs [@Fujita:1996], the spontaneous spin polarization expected for ideal ZZ ribbons [@Wakabayashi:1999; @Son:2006], the width scaling of the gap [@Stampfer:2009; @GoldhaberGordon:2010], or their conductance [@Mucciolo:2009]. In our case, disorder can modify the intrinsic optical anisotropy in different ways, depending on the causes: (i) inhomogeneities of the free carrier density caused by various external effects (e.g., substrate inhomogeneities, adsorbates, charged impurities); (ii) spatial fluctuations of the site energy and hopping parameters leading to broadening of mini-bands and carrier scattering, which in turn broadens and shifts the [*intra*-band ]{}absorption peaks; (iii) adsorbates and other impurities can introduce spurious features in the absorption spectrum; (iv) edge disorder can lead to localization of some electronic states [@Mucciolo:2009]. Concerning (i), typical electron density fluctuations in graphene on representative substrates, such as SiO$_2$, have been evaluated experimentally [@Martin2008], and seen to be of the order of $\delta n_e\sim4\times10^{10}\mathrm{cm}^{-2}$ in relatively clean systems. Such effects will presumably have little impact when the overall carrier density is between $10^{11}$ and $10^{12}\,\mathrm{cm}^{-2}$, which are the densities targeted in our study. The effects of diagonal and non-diagonal disorder (ii) are expected to be less important for narrower ribbons, simply because the anisotropy is induced by *intra*-subband absorption, and the separation of the subbands scales as $\propto1/N$ (and so the narrower the ribbon the less significant become local fluctuations of the potential energy, or the hopping amplitudes). Therefore, it is expected that the necessary anisotropy in $\sigma(\omega)$ might be achieved in practice. Regarding (iii), post-patterning annealing techniques have been progressively improved, and proven quite efficient in removing such sources of disorder [@Moser:2007]; alternatively, encapsulation of graphene has been shown to significantly reduce environmental contamination and to reduce electronic scattering [@Geim:2011]. With respect to (iv), much depends on the fabrication technique, and the CNT unzipping method (or perhaps the strain-engineering route) would be preferred to mitigate edge disorder. If present in a strong degree, however, edge disorder might bring about new effects not considered here. In particular, experiments show that edge disorder arising from conventional lithographic procedures leads to strong electron localization, and the emergence of a system of effective coupled quantum dots, where charging and interaction effects can be important [@Sols:2007; @Stampfer:2009]. The extent to which these features modify the absorption spectrum is not known experimentally and, theoretically, a realistic description of the problem is out of reach of the fully analytical approach that we seek and use here. These effects will be addressed in future work. Another issue to consider is the low frequency absorption characteristic of any metal, associated with disorder-induced [*intra*-band ]{}transitions, and accounted for by the Drude model. In the case of graphene, the Drude conductivity is given by $$\label{Drude} \frac{\sigma_D}{\sigma_0}=\frac{4\left|\mu\right|}{\pi}\frac{1}{ \hbar\left(\gamma-i\omega\right) }$$ where $\gamma$ is the Drude scattering rate. For nanoribbons, such a term would have to be added to $\sigma_{xx}$.
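Taking the displayed Drude expression literally (note that it carries the $1/\pi$ of the graphene Drude weight, which the order-of-magnitude estimate quoted in the next paragraph appears to drop), the sketch below compares its real part at the intra-band resonance $\hbar\omega_\text{max}\approx\pi\sqrt{3}\,t/(N+1)$ for several ribbon widths; the scattering rate $\hbar\gamma=0.005\,t$ is the $\sim100\,\text{cm}^{-1}$ value used there.

```python
# Sketch: real part of the Drude term above (all energies in units of t), evaluated
# at the intra-band resonance hbar*w_max ~ pi*sqrt(3)/(N+1), with hbar*gamma = 0.005 t.
import numpy as np

def drude_re(Omega, mu=0.1, hbar_gamma=0.005):
    return (4*abs(mu)/np.pi)*hbar_gamma/(hbar_gamma**2 + Omega**2)

for N in (150, 375, 750, 1500):
    Om_peak = np.pi*np.sqrt(3)/(N + 1)
    print(f"N={N:5d}  hbar*w_max/t={Om_peak:.4f}  w_max/gamma={Om_peak/0.005:.1f}  "
          f"Re[sigma_D]/sigma_0 at peak = {drude_re(Om_peak):.2f}")
```

For the target widths the Drude background at the resonance stays small compared with the transverse peak, and only for ${\ensuremath{\langle{N}\rangle}}\sim1500$ does $\omega_\text{max}$ drop below the scattering rate itself, which is the criterion invoked in the discussion that follows.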
The appearance of a Drude peak at $\omega = 0$ is not expected to drastically affect the absorption peaks discussed so far, which occur at $\omega=\omega_\text{max}$ (finite). A similar conclusion was drawn in recent experiments measuring optical absorption in nanoribbons much wider than our target widths (and so quantization effects disappear there), which show anisotropic absorption features dominated by plasmon absorption, which are vastly insensitive to the Drude component [@Ju2011]. At any rate, to be more quantitative, the typical Drude scattering rate lies in the vicinity of $100\,\text{cm}^{-1}$ ($= 0.005 t$) [@Horng:2011]. Thus, in view of the results of [Fig. \[fig:P-vs-N\]]{}, the Drude regime should only dominate for ribbons of average width above 184nm (${\ensuremath{\langle{N}\rangle}}\gtrsim 1500$). Such ribbons are too wide anyway for the sort of dimensions we are primarily interested in, which lie around 50nm or below $\bigl({\ensuremath{\langle{N}\rangle}}\lesssim 375\bigr)$, and for which we find DP in excess of 50% already. In addition, [Fig. \[fig:Sigma\]]{} shows that the magnitude of the peak in the transverse conductivity easily reaches 10-20 times the value $\sigma_0$. For wider ribbons than the one shown (184nm) the peak easily surpasses a factor of 100, even after an ensemble average has been performed (see, e.g., [Fig. \[fig:MuDependence\]]{}). The Drude peak, on the other hand, has a magnitude given by $\Re[\sigma_D(\omega =0) / \sigma_0] = 4|\mu|/(\hbar\gamma) \approx 800\,|\mu|/t$. For $\mu=0.1t$ this means that $\Re[\sigma_D(\omega=0) / \sigma_0] \approx 80 $. However, if we lower the Fermi energy by a factor of 10 to $\mu=0.01t$, its magnitude will be 10 times smaller, of course, but the change in the transverse absorption peak is not so significant. An example of this is shown in [Fig. \[fig:MuDependence\]]{}, where we show the effect of decreasing $\mu$ (i.e. lowering the carrier density), both on the degree of polarization, and on ${\ensuremath{\langle{\sigma_{yy}^\text{intra}(\omega)}\rangle}}$. At $\mu=0.01t$ ($n_e \simeq 7\times10^{10}\,\text{cm}^{-2}$) both the polarizability and the transverse conductivity peak remain significant. In other words, one can suppress the amplitude of the Drude peak at lower densities while not suppressing much the anisotropy and polarizability. As we pointed out already, by considering the response of an ensemble of GNRs with fluctuating widths, we are introducing considerable broadening effects already \[compare, for example, the peak in ${\ensuremath{\langle{\sigma_{yy}(\omega)}\rangle}}$ for an ensemble in [Fig. \[fig:Sigma\]]{}, with the five times more intense peak of a single ribbon (inset)\]. For these reasons, we believe that the dichroism of GNRs remains considerably enhanced in the presence of realistic moderate disorder. It is worth highlighting also the fact that, since the dichroism stems here from purely spectral considerations, the chirality of the GNRs should be immaterial. In fact, all ribbons have the same scaling of the spectral features with $N$, irrespective of their chirality, and so we expect the dichroism to remain when the ensemble comprises GNRs of arbitrary chirality. Finally, having in mind the scheme depicted in [Fig. \[fig:Illustration\]]{} where we propose a grating of GNRs, we point out that the dichroism discussed here so far is intrinsic to each element of the grating, as it were. 
This is a departure from the conventional situation where the grating is made from a normal (isotropic) metal, and the polarizing effect arises from the geometry only, not from some intrinsic anisotropy of the metallic comb itself. In fact, it might have been noted that, whereas a conventional metallic grating polarizes *perpendicularly* to the slit direction, the dichroism of the individual GNRs favors polarization *along* the ribbon direction. The actual overall polarizing characteristics of a periodic grating based on GNRs would have to be determined by the combination of this intrinsic dichroism with the geometrical effect (just as in a conventional grating), and for which the surface plasmon-polariton (SPP) physics may play an important role [@Pendry:1999]. However, SPP excitations contribute to the optical absorption only if: (i) the incoming wave’s frequency coincides with the band where those excitations are allowed, and not damped; (ii) the grating is strictly periodic; (iii) all elements of the grating are metallically connected so as to maintain coherence of the excitations across the system as a whole; (iv) the incoming wave impinges the grating at oblique incidence. Given that we consider only *normal-incidence* (which is the one typically most straightforward and efficient from an experimental/applications point of view), the last condition (iv) is violated from the outset, and corrections to the DP arising from SPP are not expected. Moreover, one crucial reason for the existence frequency bands of strong SPP absorption (or transmission) in 3D metallic gratings arises from the coupling between those modes at the two opposing surfaces [@Pendry:1999]. Being a strict 2D metallic system (in effect a metallic boundary condition for the propagation of electromagnetic waves), SPP cannot decay into the (non-existent) bulk of graphene. This points to the peculiarities of the SPP physics in this 2D Dirac metal, which have been addressed in detail in reference . In particular, this reference identifies the conditions for the existence of SPP modes, concluding that they are only allowed in a the range of frequencies close to the DC limit, where the optical response is dominated by the Drude peak. Hence, with respect to point (i) above, *even if one considers the possibility of oblique incidence*, the conditions for excitation of SPP are rather narrow, and not expected to play a role at the finite frequencies where the DP effect of the ribbon system is most effective (see more below). Points (ii) and (iii) strongly depend on the fabrication process leading to the ribbons and/or their integration in the final gratings, and are easily controllable. The main message we wish to underline in this context is then that, effects associated with increased absorption within certain frequency bands arising from SPP are not expected in the context of our proposed setup, and will not influence the DP. But they could as well be explored by enforcing the conditions enumerated above, and possibly allow even more versatility and richness to the polarizing characteristics of nanoribbon-based gratings. Such considerations are, however, out of the scope of this report. Conclusions {#conclusions .unnumbered} =========== Having derived the exact optical conductivity tensor of GNRs, we studied the optical absorption response of ensembles of ribbons with fluctuating width. 
One verifies that the optical absorption can be made highly anisotropic within a frequency band that is tunable via the ribbon average width, and/or via the impedance characteristics of the embedding medium. Physically, the origin of such strong anisotropy lies in a resonant feature that is simultaneously very strong and resilient to level broadening, in comparison with the conventional van Hove-type absorption singularities, which quickly wash out in the presence of width fluctuations and/or disorder. Quantitative analysis reveals that an ensemble of monolayer GNRs can show a very high degree of polarization, $\sim85\%$. This value can be enhanced by placing the ribbon in a cavity, so that the real part of the impedance is increased in the appropriate region of the spectrum. In such situations the degree of polarization can be close to 100%, which is quite remarkable given the atomic thickness of the polarizing element. The current analysis focuses on the intrinsic absorption anisotropy of GNRs, where disorder effects are mimicked by the fluctuating ribbon widths. We are currently exploring routes to study the influence of more specific disorder models, and combining the intrinsic absorption response of GNRs with the geometric effects expected to arise in a GNR grating setup. Likewise, the interplay of the anisotropy induced here by space quantization and plasmons likely to be excited in such finite-sized geometries should be addressed in the future. Given the recent developments in precision patterning and growth of narrow GNRs, and given the technological interest in optical elements operating in the IR and THz bands, we trust these results can motivate further theoretical and experimental investigation of GNRs and other graphene-derived structures towards such applications. We acknowledge insightful discussions with A. H. Castro Neto and J. M. B. Lopes dos Santos. FH acknowledges partial support from grant UMINHO/BI/001/2010. AJC, RMR, MIV, and NMRP acknowledge support from FEDER-COMPETE, and from FCT grant PEst-C/FIS/UI0607/2011. VMP acknowledges the support of NRF-CRP award “Novel 2D materials with tailored properties: beyond graphene” (R-144-000-295-281).
--- abstract: | The function $S_n (t) = \pi \left( \frac{3}{2} - {\ensuremath{\operatorname{frac}}} \left( \frac{\vartheta (t)}{\pi} \right) + \left( \lfloor \frac{t \ln \left( \frac{t}{2 \pi e} \right)}{2 \pi} + \frac{7}{8} \rfloor - n \right) \right)$ is conjectured to be equal to $S (t_n) = \arg \zeta \left( \frac{1}{2} + i t_n \right)$ when $t = t_n$ is the imaginary part of the $n$-th zero of $\zeta$ on the critical line. If $S (t_n) = S_n (t_n)$ then the exact transcendental equation for the Riemann zeros has a solution for each positive integer $n$ which proves that Riemann’s hypothesis is true since the counting function for zeros on the critical line is equal to the counting function for zeros on the critical strip if the transcendental equation has a solution for each $n$. author: - 'Stephen Crowley &lt;stephencrowley214@gmail.com&gt;' title: 'An Expression For The Argument of $\zeta$ at Zeros on the Critical Line' --- Introduction ============ The Riemann-Siegel $\vartheta (t)$ Function ------------------------------------------- The Riemann-Siegel $\vartheta$ function is defined by $$\begin{array}{cl} \vartheta (t) & = - \frac{i}{2} \left( \ln \Gamma \left( \frac{1}{4} + \frac{i t}{2} \right) - \ln \Gamma \left( \frac{1}{4} - \frac{i t}{2} \right) \right) - \frac{\ln (\pi) t}{2}\\ & = \arg \left( \Gamma \left( \frac{1}{4} + \frac{i t}{2} \right) \right) - \frac{\ln (\pi) t}{2} \end{array}$$ Let $$\tilde{\vartheta} (t) = \frac{t \ln \left( \frac{t}{2 \pi e} \right)}{2} - \frac{\pi}{8}$$ be the approximate $\vartheta$ function where the $\Gamma$ function has been replaced with its Stirling approximation. $$\Gamma (s) \simeq \sqrt{2 \pi} s^{s - \frac{1}{2}} e^{- s}$$ The $\vartheta (t)$ function is not invertible but the inverse of its approximation $\tilde{\vartheta} (t)$ can be written in closed form in terms of the Lambert W function as $$\tilde{\vartheta}^{- 1} (t) = \frac{\pi + 8 t}{4 W \left( \frac{\pi + 8 t}{8 \pi e} \right)}$$ Let ${\ensuremath{\operatorname{frac}}} (x) = \left\{ \begin{array}{ll} x - \lfloor x \rfloor & x \geqslant 0\\ x - \lceil x \rceil & x < 0 \end{array} \right. \forall x \in \mathbbm{R}$ be the function which gives the fractional part of a real number by subtracting either the floor $\lfloor x \rfloor$ or the ceiling $\lceil x \rceil$ of $x$ from $x$, depending upon its sign. Furthermore, let $$S (t) = \arg \left( \zeta \left( \frac{1}{2} + {\ensuremath{\operatorname{it}}} \right) \right) = \lim_{\varepsilon \rightarrow 0} \frac{1}{2} \left( S (\rho + i \varepsilon) - S (\rho - i \varepsilon) \right)$$ be the argument of $\zeta$ on the critical line. \[ec\]The exact transcendental equation for the imaginary part of the $n$-th zero of $\zeta \left( \frac{1}{2} + i t \right)$[[@z0t Equation 20]]{} $$\vartheta (t_n) + S (t_n) = \left( n - \frac{3}{2} \right) \pi \label{ee}$$ has a solution for each integer $n \geqslant 1$ where $t_n$ enumerate the zeros of $Z$ on the real line and the zeros of $\zeta$ on the critical line $$\zeta \left( \frac{1}{2} + i t_n \right) = 0 \forall n \in \mathbbm{Z}^+$$ where $\mathbbm{Z}^+$ denotes the positive integers.
[[@z0t Equation 14]]{} The Gram Points --------------- The $n$-th Gram point is defined to be the solution of the equation $$\vartheta (t) = (n - 1) \pi$$ A very accurate approximation $\tilde{g} (n)$ to the Gram points $g (n)$ is found by inverting $\tilde{\vartheta} (t)$ to get the exact solution $$\begin{array}{cl} \tilde{g} (n) & = \{ t : \tilde{\vartheta} (t) - (n - 1) \pi = 0 \}\\ & = \left\{ t : \left( \frac{t \ln \left( \frac{t}{2 \pi e} \right)}{2} - \frac{\pi}{8} \right) - (n - 1) \pi = 0 \right\}\\ & = \frac{(8 n - 7) \pi}{4 W \left( \frac{8 n - 7}{8 e} \right)}\\ & = g (n) + O (\delta_n) \end{array}$$ where $W$ is the Lambert W function, and the approximation bound $\delta_n$ for $n = 1$ is $\delta_1 = 0.00223698 \ldots$, followed by $\delta_2 = 0.00137812 \ldots$, and it decreases monotonically with increasing $n$, that is, $\delta_{n + 1} < \delta_n$.  The inverse of $\tilde{g}$ is given by $$\begin{array}{ll} \tilde{g}^{- 1} (t) & = \{ n : \tilde{g} (n) - t = 0 \}\\ & = \frac{t \ln \left( \frac{t}{2 \pi e} \right)}{2 \pi} + \frac{7}{8} \end{array}$$ Now define the infinite sequence of functions indexed by $n \in \mathbbm{Z}^+$ $$\begin{array}{cl} T_n (t) & = 1 + \lfloor \tilde{g}^{- 1} (t) \rfloor - n\\ & = 1 + \lfloor \frac{t \ln \left( \frac{t}{2 \pi e} \right)}{2 \pi} + \frac{7}{8} \rfloor - n \end{array}$$ Near each “bad” Gram point there will be a corresponding zero on the critical line which has an argument not on the principal branch. The function $T_n (t)$ determines how many multiples of $\pi$ to add or subtract to $- \frac{1}{2} - \lfloor \frac{\vartheta (t)}{\pi} \rfloor$ so that it agrees with the argument of $\zeta$ at a zero on the critical line where it is discontinuous, having the value $\lim_{\varepsilon \rightarrow 0} \frac{1}{2} \left( S (\rho + i \varepsilon) - S (\rho - i \varepsilon) \right)$ when $\zeta (\rho) = 0$. Let $$\begin{array}{ll} S_n (t) & = \pi \left( \frac{1}{2} - {\ensuremath{\operatorname{frac}}} \left( \frac{\vartheta (t)}{\pi} \right) - T_n (t) \right)\\ & = \pi \left( \frac{1}{2} - {\ensuremath{\operatorname{frac}}} \left( \frac{\vartheta (t)}{\pi} \right) - (\lfloor \tilde{g}^{- 1} (t) \rfloor - n + 1) \right)\\ & = \pi \left( \frac{3}{2} - {\ensuremath{\operatorname{frac}}} \left( \frac{\vartheta (t)}{\pi} \right) - (\lfloor \tilde{g}^{- 1} (t) \rfloor - n) \right)\\ & = \pi \left( \frac{3}{2} - {\ensuremath{\operatorname{frac}}} \left( \frac{\vartheta (t)}{\pi} \right) + \left( \lfloor \frac{t \ln \left( \frac{t}{2 \pi e} \right)}{2 \pi} + \frac{7}{8} \rfloor - n \right) \right) \end{array}$$ Let $s_{\vartheta} (t) = \frac{\vartheta (t)}{| \vartheta (t) |}$ be the sign of $\vartheta (t)$. \[c\]The argument $S (t)$ of $\zeta \left( \frac{1}{2} + i t \right)$ at the n-th non-trivial zero  $\zeta \left( \frac{1}{2} + i t_n \right) = 0 \forall n \geqslant 1$ on the critical strip is equal to $s_{\vartheta} (t_n) S_n (t_n)$, that is $$\begin{array}{cl} S (t_n) & = s_{\vartheta} (t_n) S_n (t_n) = \frac{1}{2} (\lim_{t \rightarrow t^-_n} S (t) + \lim_{t \rightarrow t^+_n} S (t))\\ & = \frac{\vartheta (t_n)}{| \vartheta (t_n) |} \pi \left( \frac{3}{2} - {\ensuremath{\operatorname{frac}}} \left( \frac{\vartheta (t_n)}{\pi} \right) + \left( \lfloor \frac{t_n \ln \left( \frac{t_n}{2 \pi e} \right)}{2 \pi} + \frac{7}{8} \rfloor - n \right) \right) \end{array} \label{een}$$ If Conjecture \[c\] is true then Conjecture \[ec\] is true and, due to ${\ensuremath{\operatorname{Lemma}}}$ \[fl\], so is Conjecture \[RH\], the Riemann hypothesis.
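The closed forms above are easy to probe numerically. The sketch below is mine, not part of the paper; it relies on mpmath’s `siegeltheta`, `grampoint` and `zetazero`, implements the text’s definition of ${\ensuremath{\operatorname{frac}}}$, checks the Lambert-W inverse of $\tilde\vartheta$, compares $\tilde g(n)$ with the true Gram points, and evaluates $S_n(t_n)$ at the first few zeros for comparison with Conjecture \[c\].

```python
# Numerical sketch (mpmath): the Lambert-W inverse of the approximate theta function,
# the approximate Gram points, and S_n(t) evaluated at the first few zeta zeros.
from mpmath import (mp, mpf, siegeltheta, zetazero, grampoint, lambertw,
                    floor, ceil, log, pi, e)

mp.dps = 25

def theta_approx(t):                     # Stirling form of theta(t)
    return t/2*log(t/(2*pi*e)) - pi/8

def theta_approx_inv(x):                 # its exact inverse via Lambert W
    return (pi + 8*x)/(4*lambertw((pi + 8*x)/(8*pi*e)))

def g_approx(n):                         # approximate n-th Gram point, theta = (n-1)*pi
    return (8*n - 7)*pi/(4*lambertw((8*n - 7)/(8*e)))

def frac_part(x):                        # fractional part as defined in the text
    return x - floor(x) if x >= 0 else x - ceil(x)

def S_n(t, n):                           # the proposed expression for arg zeta at t_n
    return pi*(mpf(3)/2 - frac_part(siegeltheta(t)/pi)
               + (floor(t*log(t/(2*pi*e))/(2*pi) + mpf(7)/8) - n))

t = mpf(1000)
print(theta_approx_inv(theta_approx(t)))   # recovers t, confirming the inverse
for n in range(1, 6):
    tn = zetazero(n).imag
    # mpmath's grampoint(m) solves theta(g) = m*pi, so the text's g(n) is grampoint(n-1)
    print(n, g_approx(n) - grampoint(n - 1), S_n(tn, n))
```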
If $s_{\vartheta} (t_n) S_n (t_n) = S (t_n)$ then $S (t_n)$ is well-defined $\forall n \geqslant 1$ since $s_{\vartheta} (t_n) S_n (t_n)$ is well-defined $\forall n \geqslant 1$. $\arg \left( \zeta \left( \frac{1}{2} + i g (n) \right) \right) = 0 \forall n \in \mathbbm{Z}^+$. The argument of any positive number $x$ with ${\ensuremath{\operatorname{Im}}} (x) = 0$ is equal to $0$ and ${\ensuremath{\operatorname{Im}}} \left( \zeta \left( \frac{1}{2} + i g (n) \right) \right) = 0$. $S (t) - f_0 (t) \in \{ - 1, 0, 1 \} \forall t \in \mathbbm{R}$. That is $${\ensuremath{\operatorname{frac}}} \left( \frac{\vartheta (t)}{\pi} \right) + \frac{1}{\pi} \arg \left( \zeta \left( \frac{1}{2} + {\ensuremath{\operatorname{it}}} \right) \right) \in \{ - 1, 0, 1 \} \forall 0 < t \in \mathbbm{R}$$ Appendix ======== Transcendental Equations Satisfied By The Nontrivial Riemann Zeros ------------------------------------------------------------------ The critical line is the line in the complex plane defined by ${\ensuremath{\operatorname{Re}}} (t) = \frac{1}{2}$. The critical strip is the strip in the complex plane defined by $0 < {\ensuremath{\operatorname{Re}}} (t) < 1$. The asymptotic equation for the $n$-th zero of the Hardy $Z$ function is $$\frac{t_n}{2 \pi} \ln \left( \frac{t_n}{2 \pi e} \right) + S (t_n) = n - \frac{11}{8} \label{ae}$$ [[@z0t Equation 20]]{} \[le\]If the limit $$\lim_{\delta \rightarrow 0^+} \arg \left( \zeta \left( \frac{1}{2} + \delta + i t \right) \right)$$ exists and is well-defined $\forall t$ then the left-hand side of Equation (\[ae\]) is well-defined $\forall t$, and due to monotonicity, there must be a unique solution for every $n \in \mathbbm{Z}^+$. [[@z0t II.A]]{} The number of solutions of Equation (\[ae\]) over the interval $[0, t]$ is given by $$N_0 (t) = \frac{t}{2 \pi} \ln \left( \frac{t}{2 \pi e} \right) + \frac{7}{8} + S (t) + O (t^{- 1}) \label{N0}$$ which counts the number of zeros on the critical line. \[RH\](The Riemann hypothesis) All solutions $t$ of the equation $$\zeta (t) = 0$$ besides the trivial solutions $t = - 2 n$ with $n \in \mathbbm{Z}^+$ have real part $\frac{1}{2}$, that is, ${\ensuremath{\operatorname{Re}}} (t) = \frac{1}{2}$ when $\zeta (t) = 0$ and $t \neq - 2 n$. The Riemann-von-Mangoldt formula makes use of Cauchy’s argument principle to count the number of zeros $0 < {\ensuremath{\operatorname{Im}}} (\rho_n) < t$ where $\zeta (\sigma + i \rho_n) = 0$ with $0 < \sigma < 1$: $$N (t) = \frac{t}{2 \pi} \ln \left( \frac{t}{2 \pi e} \right) + \frac{7}{8} + S (t) + O (t^{- 1})$$ and this definition does not depend on the Riemann hypothesis (Conjecture \[RH\]). This equation has exactly the same form as the asymptotic Equation \[ae\]. [[@z0t Equation 15]]{} \[fl\]If the exact Equation (\[ee\]) has a unique solution for each $n \in \mathbbm{Z}^+$ then Conjecture \[RH\], the Riemann hypothesis, follows. If the exact equation has a unique solution for each $n$, then the zeros obtained from its solutions on the critical line can be counted since they are enumerated by the integer $n$, leading to the counting function $N_0 (t)$ in Equation (\[N0\]). The number of solutions obtained on the critical line would saturate the counting function of the number of solutions on the critical strip so that $N (t) = N_0 (t)$ and thus all of the non-trivial zeros of $\zeta$ would be enumerated in this manner. If there are zeros off of the critical line, or zeros with multiplicity $m \geqslant 2$, then the exact Equation (\[ee\]) would fail to capture all the zeros on the critical strip which would mean $N_0 (t) < N (t)$. [[@z0t IX]]{} [1]{} Guilherme Fran[ç]{}a and Andr[é]{} LeClair.
Transcendental equations satisfied by the individual zeros of Riemann zeta, Dirichlet and modular L-functions. [[*[Communications in Number Theory and Physics]{}*]{}]{}, 2015.
--- abstract: 'We present a theory to predict the structure and thermodynamics of mixtures of colloids of different diameters, building on our earlier work \[J. Chem. Phys. 145, 074904 (2016)\] that considered mixtures with all particles constrained to have the same size. The patchy, solvent particles have short-range directional interactions, while the solute particles have short-range isotropic interactions. The hard-sphere mixture without any association site forms the reference fluid. An important ingredient within the multi-body association theory is the description of clustering of the reference solvent around the reference solute. Here we account for the physical, multi-body clusters of the reference solvent around the reference solute in terms of occupancy statistics in a defined observation volume. These occupancy probabilities are obtained from enhanced sampling simulations, but we also present statistical mechanical models to estimate these probabilities with limited simulation data. Relative to an approach that describes only up to three-body correlations in the reference, incorporating the complete reference information better predicts the bonding state and thermodynamics of the physical solute for a wide range of system conditions. Importantly, analysis of the residual chemical potential of the infinitely dilute solute from molecular simulation and theory shows that whereas the chemical potential is somewhat insensitive to the description of the structure of the reference fluid the energetic and entropic contributions are not, with the results from the complete reference approach being in better agreement with particle simulations.' author: - Artee Bansal - Arjun Valiya Parambathu - 'D. Asthagiri' - 'Kenneth R. Cox' - 'Walter G. Chapman' title: 'Thermodynamics of mixtures of patchy and spherical colloids of different sizes: a multi-body association theory with complete reference fluid information' --- [^1] Introduction ============ In thermodynamic perturbation theory of association involving short range interactions between molecules, the properties of the reference fluid plays a central role. In the typical situation when the reference is a hard-sphere fluid, perturbation theories usually use information about two body, and at times three body, correlations in the reference fluid to describe the physical (associating) system. For example, Wertheim’s theory [@wertheim_fluids_1984; @wertheim_fluids_1984-1] and its extensions based on the statistical associating fluid theory (SAFT) [@chapman_new_1990] use pair correlation information at contact to estimate extent of association between pairs of molecules. In SAFT, for the hard-sphere reference either the Carnahan-Starling [@carnahan_equation_1969] equation for a single component fluid or the Boublik-Mansoori-Carnahan-Starling-Leland [@boublik_hardsphere_1970; @mansoori_equilibrium_1971] equation for a mixture are used to describe the pair-correlation information at contact. The structure of Wertheim’s theory or SAFT is such that one can obtain accurate extent of association and thermodynamics even for systems with strong inter-particle interactions provided the representation of the reference is adequate. However, as the complexity of the interaction increases in the physical system, such as may result from multiple bonding and size asymmetries, information about two or three body correlations in the reference no longer suffices. 
In our previous work [@bansal_structure_2016], we studied the multi-body correlation functions of a symmetric hard sphere reference fluid in terms of the probabilities of observing $n$ molecules in the bonding region. These occupancy probabilities were obtained from enhanced sampling Monte Carlo simulations for the hard sphere fluid. We developed a procedure to use this information within the Marshall-Chapman formalism [@marshall_molecular_2013; @marshall_thermodynamic_2013] to describe multiple association of solvent molecules to a solute molecule. This *complete reference* approach proved successful in predicting the bonding state and thermodynamics of a colloidal solute in a patchy solvent for a wide range of system conditions [@bansal_structure_2016]. Here we study mixtures where the solute diameter is as small as half to as large as twice the diameter of the solvent. The solvent particles are spheres with directional interaction sites and the solute particles are spheres with isotropic interactions, and the solute is capable of bonding with multiple solvent particles. The structure and thermodynamics of mixture of hard spheres with different diameters has been studied in detail before [@torquato_microstructure_1986; @reiss_statistical_1959; @mayer_integral_1947; @torquato_microstructure_1982; @torquato_microstructure_1983; @torquato_microstructure_1985], but a compact form for the correlations beyond the contact value is still unavailable. Further, for systems with large size asymmetries even the pair-correlation information obtained using the Boublik-Mansoori-Carnahan-Starling-Leland equation is inadequate [@feng_contact_2011]. Our approach of including multi-body correlations rests on using the occupancy statistics [@reiss_upper_1981; @pratt_quasichemical_2001; @pratt_selfconsistent_2003; @bansal_structure_2016] of the hard-sphere solvent around the hard-sphere solute. We find that representing multi-body correlation functions in terms of occupancy statistics in physically reasonable observation volumes accurately captures the multi-bonding effects in asymmetric mixtures. These occupancy statistics are obtained from particle simulations. Importantly, we also present a physically transparent, statistical mechanical model to describe the occupancy probabilities in symmetric and asymmetric hard sphere fluids for different packing fractions. This model corrects multi-body effects obtained for isolated clusters by incorporating the role of the cluster-bulk interface and the bulk medium effects. We also investigate the energy-entropy decomposition of the chemical potential of the solute in a model system with only solute-solvent interactions to better appreciate the role of the reference fluid. Throughout, theoretical results are validated versus molecular simulations The rest of the paper is organized in the following way. In Section \[sc:bentheory\] we discuss the association potential of the system and describe how packing effects are important for the given potential. The Marshall-Chapman [@marshall_thermodynamic_2013] theory is briefly introduced to show the multi-density representation of the free energy, and based on our previous work [@bansal_structure_2016], an improved representation of multi-body correlations(*complete reference*) is presented. In Section \[sc:HStheory\], we examine hard sphere packing around a reference particle and develop models based on statistical mechanics [@reiss_upper_1981] and hard sphere simulation data for different densities. 
We apply our complete reference approach for different asymmetric mixtures of solute and solvent and present results in Section \[sc:res\_asso\]. In Section \[sc:res\_hs\] we present results for the hard sphere reference system (symmetric and asymmetric mixtures) based on the correlation developed in section \[sc:HStheory\]. We also provide simulation results for isolated cluster probabilities in asymmetric hard sphere mixtures in the appendix (Section \[sc:appen\]). Theory ====== Asymmetric mixtures with different association geometries {#sc:bentheory} ---------------------------------------------------------- The focus of our study is asymmetric mixtures containing molecules with short range attractive interactions. The short range association potential is the same as that in previous work [@bansal_structure_2016]: the solute molecule can associate with multiple solvent molecules isotropically and the patchy solvent has directional interactions. The total potential is a sum of hard sphere and association contributions $$u{(r)}=u_{HS}{(r)}+u_{AS}{(r)} \label{eq:potT}$$ The association potential for patchy-patchy $(p,p)$ and spherical-patchy $(s,p)$ particles is: $$u_{AB}^{(p,p)}{(r)}= \begin{cases} -\epsilon_{AB}^{(p,p)}, r<r_c \,\text{and}\, \theta_A\leq \theta_c^{(A)}\,\text{and}\,\theta_B \leq \theta_c^{(B)} \\ 0 \text{ \ \ \ \ \ otherwise} \\ \end{cases} \label{eq:potential1}$$ $$u_A^{(s,p)}{(r)}= \begin{cases} -\epsilon_A^{(s,p)}, r<r_c\, \text{and}\, \theta_A\leq \theta_c^{(A)} \\ 0 \text{ \ \ \ \ \ otherwise} \\ \end{cases} \label{eq:potential2}$$ where the subscripts $A$ and $B$ represent the type of site and $\epsilon$ is the association energy; $r$ is the distance between the particles; and $\theta_A$ is the angle between the vector connecting the centers of two molecules and the vector connecting association site $A$ to the center of that molecule (Fig. \[fig:1\]). The critical distance beyond which particles do not interact is $r_c$ and $\theta_c$ is the solid angle beyond which sites cannot bond. Fig. \[fig:1\] shows examples of solute-solvent and solvent-solvent short range interaction geometries for different sizes of solute particles. ![Association between solute and solvent (a) and solvent molecules (b). Different Cases with solute larger (middle) and smaller (left) than solvent molecules are studied. $r$ is the center-to-center distance and $\theta_A$ and $\theta_B$ are the orientation of the attractive patches $A$ and $B$ relative to line connecting the centers. The solute (colored red) can only interact with patch $A$ (colored red).[]{data-label="fig:1"}](figure1) Since the solute can associate with multiple solvent molecules (Eq. \[eq:potential2\]), it is important to study the multi-body correlations that determine the packing of solvent particles around the solute in the reference fluid [@bansal_structure_2016]. The difficulty in determining these interactions arises due to the limited knowledge in describing multi-body correlation functions for $n\ge 3$. But the volume integral of the multi-body correlation has a clear physical meaning in terms of average number of $n$-solvent clusters ($F^{(n)}$, Fig. \[fig:Fn\]). 
In particular, for the distinguished solute, $$\begin{aligned} {F^{(n)}} & = &\frac{{{\rho_p^n}}}{{n!}}\int\limits_{v} {d{{\vec r}_1} \cdots \int\limits_{v} d{{\vec r}_n}{g_{HS}}\left( {{{\vec r}_1} \cdots {{\vec r}_n}|0} \right)} \nonumber \\ & = & {\sum\limits_{m = n}^{{n^{\max }}} {C^m_n p_m}} \, , \label{eq:Fn}\end{aligned}$$ where $\rho_p$ is the density of solvent particles, $p_n$ is the probability of observing exactly $n$ solvent particles in the observation volume of the solute ($v$) defined by the spherical region of radius $r_c$, $C^m_n = m!/[(m-n)!\, n!]$ is the binomial coefficient, and $g_{HS}({\vec r}_1 \cdots {\vec r}_n|0)$ is the distribution function of the $n$-solvent particles around the solute at the center of the observation volume, indicated by $(\ldots | 0)$. $n^{\max}$ is the maximum number of solvent molecules that can occupy the observation volume around the reference solute. ![image](figure2) In Wertheim’s multi-density formalism [@wertheim_fluids_1986; @wertheim_fluids_1986-1], the free energy due to association ($A^{AS}$) is expressed as $$\frac{{A^{AS}}}{{V{k_{\rm B}}T}} = \sum {\left( {{\rho_k}\ln \frac{{\rho ^{(0)}_{k }}}{{{\rho_{ k }}}} + {Q^{\left( k \right)}} + {\rho _{ k }}} \right)} - \frac{ \Delta c^{(0)}}{V} \label{eq:3}$$ where $k_{\rm B}$ is the Boltzmann constant, $T$ is the temperature, the summation is over the species ($k=s,p$), $\rho$ is the number density, $\rho^{(0)}$ is the monomer density, $Q^{(k)}$ is obtained from the Marshall-Chapman development [@marshall_thermodynamic_2013] and $\Delta c^{(0)}$ is the contribution to the graph sum due to association between the solvent-solvent $(p,p)$ and solute-solvent $(s,p)$ molecules, i.e. $$\Delta c_{}^{\left( 0 \right)} = \Delta c_{pp}^{\left( 0 \right)} + \Delta c_{sp}^{\left( 0 \right)} \label{eq:4}$$ Marshall and Chapman [@marshall_molecular_2013; @marshall_thermodynamic_2013] extended Wertheim’s theory beyond the single bonding condition to incorporate multi-body effects in a solution consisting of an isotropic solute and solvent with directional interactions. The contribution to free energy due to association between solute and solvent molecules was obtained as $$\frac {\Delta c_{sp}^{(0)}}{V} = \sum\limits_{n = 1}^{{n^{\max }}} {\frac{\Delta c_n^{( 0)}}{V} } \label{eq:5}$$ where the sum is over different coordination states of the solute and $\Delta c_n^{(0)}$ is given by: $$\begin{aligned} \Delta c_n^{(0)} & = & \frac{\rho ^{(0)}_s {( {\rho_p X_A^{(p)}})}^n} {\Omega^{n + 1} n!} \int d(1)\cdots d(n + 1) \, g_{HS}( 1 \cdots n + 1) \cdot \prod\limits_{k = 2}^{n + 1} {( f_{A}^{(s,p)} ( 1,k))} \, . \label{eq:14} \end{aligned}$$ In Eq. \[eq:14\], $\rho_p=\rho\cdot x^{(p)}$ is the density of solvent molecules obtained from the mole fraction of solvent ($x^{(p)}$) and the total density ($\rho$), $X_A^{(p)}$ is the fraction of solvent molecules not bonded at site A, $\Omega =4\pi$ is the total number of orientations, $f_{A}^{( {s,p})}(1,k) = (\exp (\varepsilon_A^{(s,p)}/k_BT) - 1)$ is the Mayer function for association between $p$ and $s$ molecules corresponding to the potential in Eq. \[eq:potential2\], and the integral is over all the orientations and positions of the $n+1$ particles. 
By taking the average association strength and acceptable orientations out of the integral and fixing the solute at the origin, the above integral can be rewritten as $$\begin{aligned} \frac{ \Delta c_n^{(0)}}{V} & = & \frac{\rho _s^{(0)} {( {{\rho}_{p}X_A^{(p)}}f_A^{(s,p)}\sqrt {\kappa _{AA}})}^n} {n!} \int_{v} d\vec r_1 \cdots \int_{v} d\vec r_n \, g_{HS}(\vec r_1 \cdots \vec r_n |0) \, . \label{eq:3} \end{aligned}$$ Marshall and Chapman[@marshall_molecular_2013; @marshall_thermodynamic_2013] approximated the integral in Eq. \[eq:3\] as $$\begin{aligned} \int_{v} d\vec r_1 \cdots \int_{v} d\vec r_n\, g_{HS}(\vec r_1 \cdots \vec r_n |0) \approx y_{HS}^n( \sigma) \delta ^{(n)} \Xi ^{(n)} \, , \label{eq:MCA} \end{aligned}$$ where $\Xi^{(n)}$ is the partition function for an isolated cluster of $n$ solvent hard-spheres around a solute hard-sphere, $y_{HS}( \sigma)$ is (pair) cavity correlation function at contact, and $\delta^{\left(n\right)}$ corrects the superposition of cavity correlation functions for three body interactions. We will hereafter refer to Eq. \[eq:MCA\] as the Marshall-Chapman approximation (MCA). As shown earlier [@bansal_structure_2016], MCA fails for high densities and high association energies, conditions where multi-body interactions are important. But recognizing that the integral in Eq. \[eq:3\] is related to $F^{(n)}$ (Eq. \[eq:Fn\]) we have [@bansal_structure_2016] $$\begin{aligned} \frac{ \Delta c_n^{(0)}}{V} = {\rho_s^{(0)} {( {x ^{(p)}X_A^{(p)}}f_A^{(s,p)}\sqrt {\kappa _{AA}})}^n} F^{(n)} \, . \label{eq:Cn_new}\end{aligned}$$ It can be observed that all the multi-body correlation information is subsumed in $F^{(n)}$ which is obtained from the occupancy distribution $\{p_n\}$. We follow our earlier work [@bansal_structure_2016] to estimate this distribution. Importantly, since $\{p_n\}$ forms the basis of our *complete reference* approach, we also develop an analytical model to describe these distribution functions. Finally, with the above information, and based on the Marshall-Chapman theory [@marshall_thermodynamic_2013], the fraction of solute associated with $n$ solvent molecules is $$X_n^{\left( s \right)} = \frac{ {( {x ^{(p)}X_A^{(p)}}f_A^{(s,p)}\sqrt {\kappa _{AA}})}^n F^{(n)} }{{1 + \sum\limits_{n = 1}^{{n^{\max }}} {( {x ^{(p)}X_A^{(p)}}f_A^{(s,p)}\sqrt {\kappa _{AA}})}^n} F^{(n)} } \, , \label{eq:301}$$ and the fraction of solute not bonded to any solvent molecule is $$X_0^{\left( s \right)} = \frac{1}{{1 + \sum\limits_{n = 1}^{{n^{\max }}} {( {x ^{(p)}X_A^{(p)}}f_A^{(s,p)}\sqrt {\kappa _{AA}})}^n} F^{(n)} } \, . \label{eq:300}$$ Using these distributions for associating mixture, the average number of solvent associated with the solute is given by: $$n_{avg} = \sum\limits_n {n\cdot {X^{(s)}_n}} \, , \label{eq:81}$$ The fraction of solvent not bonded at site $A$ and site $B$ can be obtained by simultaneous solution of the following equations: $$X_A^{\left( p \right)} = \frac{1}{{1 + \xi {\kappa _{AB}}f_{AB}^{\left( {p,p} \right)}{\rho_p}X_B^{(p)} + \frac{{{\rho_s}}}{{{\rho_p}}}\frac{{ n_{avg} }}{{X_A^{(p)}}}}} \, ,$$ $$X_B^{\left( p \right)} = \frac{1}{{1 + \xi {\kappa _{AB}}f_{AB}^{\left( {p,p} \right)}{\rho_p}X_A^{(p)}}} \, . 
\label{eq:82}$$ where $$\begin{aligned} \xi & = & 4\pi {\sigma^2}\left( {{r_c} - \sigma} \right){y_{HS}}(\sigma) \\ \kappa_{AB} & = &\left[1-\cos(\theta_c)\right]^2/{4} \\ f_{AB}^{({p,p})} & = & \exp ( \varepsilon _{AB}^{({p,p})}/k_{\rm B}T)-1 \, .\end{aligned}$$

Occupancy distribution $\{p_n\}$ for the hard-sphere fluid {#sc:HStheory}
----------------------------------------------------------

Consider a hard sphere fluid with one solute and $N$ solvent particles in a volume $V$ at temperature $T$. We are interested in the occupancy statistics $\{p_n\}$ of the solvent in the coordination volume around the solute. To this end, consider the reaction $$S{P_{n = 0}} + {P_n} \rightleftharpoons S{P_n} \, , \label{eq:20}$$ with the equilibrium constant $${K_n} = \frac{{{\rho _{S{P_n}}}}}{{{\rho _{S{P_{n = 0}}}}\rho _p^n}} \, , \label{eq:21}$$ where $\rho_{SP_n}$ is the density of species $SP_n$ and $\rho_p$ is the density of the solvent. Clearly, we have [@pratt_quasichemical_2001; @lrp:book; @lrp:cpms] $$\frac{p_n}{p_0} = K_n \rho_p^n \, , \label{eq:pnp0}$$ where $p_0$ is the probability that the coordination volume is empty of solvent particles. Following earlier work in studying clusters with quasichemical theory [@pratt_quasichemical_2001; @pratt_selfconsistent_2003; @merchant_thermodynamically_2009; @merchant:jcp11b], we can show that $$K_n = \frac{(e^{ \beta\mu^{\rm ex}_{p} })^n}{n!}\int\limits_v d{\vec r}_1\ldots\int\limits_v d{\vec r}_n e^{-\beta U_{SP_n}(R^n)} e^{-\beta \phi(R^n;\beta)} \, , \label{eq:kn1}$$ where $U_{SP_n}(R^n)$ is the potential energy of the $n$-solvent cluster (with the solute $S$ fixed at the center of the cluster), $\beta = (k_{\rm B}T)^{-1}$, $\phi(R^n;\beta)$ is the free energy of interaction of the cluster with the rest of the bulk medium for a given configuration $R^n$ of the cluster, and $\beta\mu^{\rm ex}_{p}$ is the excess chemical potential of the solvent particle. (For completeness, in appendix A we derive the above expression for $K_n$.) $\phi(R^n;\beta)$ can also be thought of as a field imposed by the bulk solvent medium [@pratt_selfconsistent_2003; @merchant:jcp11b] on the solute-solvent cluster in the observation volume. Earlier, Pratt and Ashbaugh [@pratt_selfconsistent_2003] modeled this field using a self-consistent approach. Here we take a different approach. First note that without the field term, the cluster integral presents a simpler $n$-body problem (where $n$ is small, typically less than 20 for systems of interest here). The field is thus an interfacial term that couples the local cluster with the bulk medium. To make this explicit, we can rewrite Eq. \[eq:kn1\] as $$K_n = \frac{(e^{ \beta\mu^{\rm ex}_{p} })^n}{n!} \langle e^{-\beta \phi(R^n;\beta)}\rangle_0 \int\limits_vd{\vec r}_1\ldots\int\limits_v d{\vec r}_n e^{-\beta U_{SP_n}(R^n)} \, .$$ Here $\langle \ldots\rangle_0$ indicates averaging over the normalized probability density for cluster conformations $R^n$ in the absence of interactions with the rest of the medium, i.e. over the density $e^{-\beta U_{SP_n}(R^n)} / (n! K_n^{(0)})$, where $$n! K_n^{(0)} = \int\limits_vd{\vec r}_1\ldots\int\limits_v d{\vec r}_n e^{-\beta U_{SP_n}(R^n)} \, , \label{eq:kn0}$$ and the interfacial contribution is $\beta\Omega_n = -\ln \langle e^{-\beta \phi(R^n;\beta)}\rangle_0$. 
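Before specifying models for $\Omega_n$, note that Eqs. \[eq:pnp0\] and \[eq:kn1\] already determine the occupancy distribution once $K_n^{(0)}$, $\beta\mu^{\rm ex}_p$ and $\beta\Omega_n$ are supplied. A minimal sketch of this bookkeeping is shown below; the numerical inputs are placeholders standing in for actual cluster integrals and chemical potentials, not fitted values.

```python
import numpy as np

def occupancy_distribution(K0, beta_mu_ex, rho_p, beta_Omega):
    """Occupancy probabilities {p_n} following Eq. (pnp0).

    K0[n]         : isolated-cluster integrals K_n^(0), with K0[0] = 1
    beta_mu_ex    : solvent excess chemical potential, beta * mu_p^ex
    rho_p         : solvent number density
    beta_Omega[n] : interfacial term beta * Omega_n, with beta_Omega[0] = 0
    """
    n = np.arange(len(K0))
    # K_n = exp(n beta mu_ex) exp(-beta Omega_n) K_n^(0), and p_n is proportional to K_n rho_p^n
    w = np.exp(n * beta_mu_ex - beta_Omega) * K0 * rho_p ** n
    return w / w.sum()

# placeholder inputs, for illustration only
K0 = np.array([1.0, 0.30, 3.0e-2, 1.5e-3, 4.0e-5])
beta_Omega = np.array([0.0, 1.0, 1.9, 2.7, 3.4])
pn = occupancy_distribution(K0, beta_mu_ex=3.0, rho_p=0.8, beta_Omega=beta_Omega)
print(pn, np.dot(np.arange(len(pn)), pn))   # distribution and its mean occupancy
```

The two models developed below differ only in how $\beta\Omega_n$ is parameterized.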
From analysis of simulation data for different densities, we find that $\Omega_n$ can be described by a two parameter equation as $$\beta \Omega_n =-\zeta_1 \cdot n^2+\zeta_2\cdot n \, .$$ This model of the interfacial term was anticipated in our previous work [@bansal_structure_2016]. Thus we finally obtain $${p_n} = \frac{\exp(\beta \cdot n \cdot \mu^{ex}_{p})\rho_p^n [{\exp{(\zeta_1 \cdot n^2-\zeta_2\cdot n)}}]{K_n}^{(0)}}{{1 + \sum\limits_{j \ge 1} {\exp(\beta \cdot j \cdot \mu^{ex}_{p})\rho_p^j [{\exp{(\zeta_1 \cdot j^2-\zeta_2 \cdot j)}}]{K_j}^{(0)}} }} \ \label{eq:pn_2par}$$ The above equation can also be derived using a two moment maximum entropy approach, with the mean and variance of the occupancy as constraints and $K_n^{(0)}$ as the default (see appendix A). Drawing upon the work of Reiss and Merry [@reiss_upper_1981], we can model the interfacial term in terms of surface sites (of the cluster) that are available to interact with the bulk fluid. On the basis of such a mean field approach and guided by Monte Carlo (MC) simulation data for different densities of hard sphere systems, we find that $$\beta \Omega_n =(-0.0109 \cdot n^2+1.0109 \cdot n)\cdot \zeta \label{eq:surf_int}$$ $${p_n} = \frac{\exp(\beta \cdot n \cdot \mu^{ex}_{p})\rho_p^n [{\exp{(-(-0.0109 \cdot n^2+1.0109 \cdot n)\cdot \zeta)}}]{K_n}^{(0)}}{{1 + \sum\limits_{j \ge 1} {\exp(\beta \cdot j \cdot \mu^{ex}_{p})\rho_p^j [{\exp{(-(-0.0109 \cdot j^2+1.0109 \cdot j)\cdot \zeta)}}]{K_j}^{(0)}} }} \ \label{eq:pn_1par}$$ Eq. \[eq:pn\_2par\] and Eq. \[eq:pn\_1par\] are the 2-parameter and 1-parameter models, respectively, for $p_n$ on the basis of which we obtain $F^{(n)}$ (Eq. \[eq:Fn\]) to describe multi-body correlations in the reference fluid. The parameter values for different densities are given in Table \[table: param1\]. These parameters were obtained based on hard spheres mixtures with all particles of the same size. Since the information about size-asymmetry is already contained in the isolated cluster partition function, we will use these parameters to study asymmetric mixtures and will discuss limitations for cases with extreme size ratio. METHODS ======= To compare the theory results, we perform Monte-Carlo (MC) simulations for a range of systems. This section presents the details of the MC simulations for different associating and hard sphere systems.The Marshall-Chapman approximation (MCA) and the models developed for hard sphere distribution functions require isolated cluster probabilities; these were also computed for different size ratios. Monte Carlo Simulations ----------------------- MC simulations were carried out for reference hard sphere systems and associating systems to compare the results of Marshall-Chapman theory using MCA and the complete reference approach. The associating mixture contains the patchy solvent particles and the isotropically interacting solute defined by the potentials given by Eq. \[eq:potential1\] and Eq. \[eq:potential2\], respectively. The solute diameter is $\sigma_s$ and solvent diameter is $\sigma_p$. The observation volume is defined by a critical radius $r_c = 1.1\bar{\sigma}$, where $\bar{\sigma} = (\sigma_s + \sigma_p)/2$ is the closest distance of approach. For cases where $\sigma_s/\sigma_p \geq 1.5$, a cutoff of 1.1$\bar{\sigma}$ can include some of the second-shell solvent. To avoid this and focus attention to the first observation shell, we set $r_c = \bar{\sigma} + 0.1\sigma_p$ for these cases. In the associating system (Fig. \[fig:1\] and Eqs. 
\[eq:potential1\] and \[eq:potential2\]), the critical angles for interaction are $\theta_c^{(A)} = \theta_c^{(B)} = 27\degree$. For hard sphere mixtures 255 solvent particles and 1 solute particle were studied in a given simulation cell. Ensemble reweighting technique was used to reveal low probability states [@merchant_water_2011]. The system was equilibrated for 1 million steps with translational factors chosen to yield an acceptance rate of 0.3, and data was collected every 100 sweeps. Analysis was carried out for different densities and size ratios $\sigma_s/\sigma_p$. For associating mixtures bonding distributions and average bonding numbers were studied for mixtures with different sizes and different association strengths for solute-solvent and solvent-solvent interactions. The excess chemical potential of the coupling of the colloid with solvent was also calculated using thermodynamic integration of average binding energy of solute with solvent as a function of solute-solvent interactions, using the three-point Gauss Legendre quadrature technique [@Hummer:jcp96]. For a symmetric mixture with no solvent-solvent interactions, energetic and entropic contributions for solute chemical potential were also studied at constant volume and temperature. Concentration effects were also computed considering a total of 864 particles, with varied number of solute particles. Due to the difference in size of the solute and solvent, the computations were performed keeping the packing fraction constant. Hence, the density of the system changed with respect to the concentration. Also the maximum angle for which the patch can form single bond is computed using the law of cosines, and hence the critical angle needs to be altered when the solute size is smaller than the solvent. For a size ratio of $\sigma_s/\sigma_p=0.8$, critical angle $\theta_c^{(A)} = \theta_c^{(B)} = 20\degree$ was used to ensure single bonding condition for the $A$ patch on solvent molecules which can associate with solute molecules. Isolated cluster probabilities ------------------------------ For asymmetric mixtures, we also study isolated cluster probabilities for different size ratios of solute and solvent molecules. The observation volume around the isolated solute is the same as defined in the previous section. For different size ratios we use the spherical code (appendix B) to estimate the maximum number of solvent molecules which can be inserted in the observation volume of the isolated solute. Successive insertion probabilities are calculated as in previous works [@marshall_molecular_2013; @bansal_structure_2016], where $10^8$ to $10^9$ insertions were carried out in the observation volume around the solute and the cases with no overlap with remaining $n-1$ particles studied. The data for isolated cluster probabilities for different sizes is given in the appendix B. Results and discussions ======================= Hard sphere $\{p_n\}$ distribution {#sc:res_hs} ---------------------------------- Recall that Eq. \[eq:pn\_1par\] and Eq. \[eq:pn\_2par\] are the 1-parameter and 2-parameter models for the occupancy distribution $\{p_n\}$. To obtain the parameters for hard spheres all of the same size, for both the models we use the average occupancy ($n^{HS}_{avg}= \sum\limits_n n\cdot {p_n}$) as a fitting constraint. For the 2-parameter model we additionally use the exclusion probability ($p_0$) — the probability when no hard sphere solvent particle is present in the observation volume — as a constraint. 
For the 1-parameter model we study the surface interactions based on the mean field approach developed by Reiss and Merry [@reiss_upper_1981]. By analyzing the distribution functions $\{p_n\}$ for different densities, we obtain geometric effects (density independent) that describe the mutual interference of different surface sites (Eq. \[eq:surf\_int\]). The density (or packing fraction) dependent parameters for these two models are given in Table \[table: param1\].

  $\rho \sigma^3$   $\eta$   $\zeta_1$   $\zeta_2$   $\zeta$
  ----------------- -------- ----------- ----------- ---------
  0.2               0.105    0.0175      0.773       0.75
  0.3               0.157    0.015       1.267       1.251
  0.4               0.209    0.0256      1.979       1.947
  0.5               0.262    0.023       2.88        2.876
  0.6               0.314    0.0361      4.179       4.172
  0.7               0.367    0.0457      5.947       5.985
  0.8               0.419    0.0609      8.432       8.562
  0.9               0.471    0.0829      12.088      12.414

  : Parameters for Eq. \[eq:pn\_2par\] ($\zeta_1$, $\zeta_2$) and Eq. \[eq:pn\_1par\] ($\zeta$) for different densities $\rho \sigma^3$ and packing fractions ($\eta$) \[table: param1\]

Fig. \[fig:HS\_sym\] presents the results corresponding to the average occupancy ($n^{HS}_{avg}$) and exclusion probability ($\ln p_0$) based on the models. We compare these results with the MC simulation values and also include results from literature [@chang_real_1994; @torquato_microstructure_1985]. ![image](figure3) Fig. \[fig:HS\_sym\] makes it clear that the 2-parameter model can simultaneously capture both $n_{avg}$ and $\ln p_0$ in excellent agreement with simulation. Importantly, even the 1-parameter model is able to capture most of the details, affirming the physical ideas underlying the models (Eqs. \[eq:pn\_1par\] and \[eq:pn\_2par\]). Next, using the parameters obtained above for a fluid where both solute and solvent hard-spheres are the same size (a symmetric mixture), we describe the occupancy in a fluid where the solute and solvent are of different sizes (an asymmetric mixture). Our *ansatz* is that the information about size asymmetry is adequately captured by the isolated cluster partition function $K_n^{(0)}$ (Eq. \[eq:kn0\]). For an infinitely dilute system comprising one solute in a solvent bath, Fig. \[fig:pn1\_asym\] shows the predictions of $n_{avg}^{HS}$ and $\ln p_0$ based on the 1-parameter model (Eq. \[eq:pn\_1par\]) for different size ratios and different reduced densities. The level of agreement is encouraging, but perhaps not surprising since the size ratios are not much different from a symmetric case. Thus the geometric effects describing the mutual interference of different surface sites in the packing around the solute will be similar to what is observed in the symmetric mixture. For extreme size ratios this should break down, as we will discuss in the context of average bonding numbers in associating mixtures. ![image](figure4) For asymmetric mixtures, the packing fraction is a better measure of packing in the fluid. Fig. \[fig:pn1\_asym\_eta\] presents the variation of average occupancy for different packing fractions for three different size ratios for an infinitely dilute solution. ![image](figure5) The results show that the 1-parameter model is able to predict the average occupancy quite well. We note that as the concentration of solute is changed in asymmetric mixtures, the packing fraction changes (for a given density) and parameters corresponding to the resulting packing fraction should be used from Table \[table: param1\].

Associating mixture {#sc:res\_asso}
-------------------

We next consider associating fluids and investigate both size and concentration effects. 
### Infinite Dilution

We first study an infinitely dilute solution and vary the size of solute with respect to a fixed size of solvent particles. In our complete reference theory, the reference fluid $\{p_n\}$ distribution is either computed directly from simulations (‘$p_n$-Simulation’ in Fig. \[fig:n\_avg\]) or from the 1-parameter model discussed in Sec. A above (‘$p_n$-Model1’ in Fig. \[fig:n\_avg\]). Using $\{p_n\}$ we compute $F^{(n)}$ (Eq. \[eq:Fn\]), and on that basis, the average number of bonds in the associating mixture using Eq. \[eq:Cn\_new\], Eq. \[eq:301\] and Eq. \[eq:81\]. Fig. \[fig:n\_avg\] shows the variation of average bonding numbers with size ratio of solute and solvent molecules for a density of 0.8 and association strength of 7 $k_{\rm B}T$. ![image](figure6) As the size of the solute increases with respect to the size of the solvent, more solvent molecules can associate with the solute. With accurate information about the reference fluid $\{p_n\}$ (and hence $F^{(n)}$) from direct simulations, the complete reference theory is able to capture this increase in bonding numbers quite accurately. Interestingly, even $\{p_n\}$ obtained using the 1-parameter model (Eq. \[eq:pn\_1par\]) suffices. But note that the Marshall-Chapman approximation with only up to 3-body effects incorporated in the theory underestimates the average bonding numbers. These results emphasize the importance of multi-body interactions in describing the association correctly. As noted in Sec. \[sc:HStheory\], the amount of surface exposure (or surface sites) is an important factor in determining the packing effects in the models. For the size ratios considered in Fig. \[fig:n\_avg\], the maximum number of surface sites and hence the geometric effects in surface interactions are expected to be similar to the symmetric case, and not surprisingly, the agreement of bonding numbers between simulation and the 1-parameter model for $p_n$ is very good. For extreme size ratios (Table \[table:extreme\]), a larger error with the 1-parameter model is expected. This results from the disparity in surface sites for these ratios relative to the symmetric mixture within which the density-independent geometric effects were obtained (Eq. \[eq:surf\_int\]). However, with information for the reference hard sphere distribution functions from simulation, the complete reference theory is able to capture the average bonding numbers for these extreme size ratios quite well.

  $\sigma_s / \sigma_p$   MC      $p_n$-Simulation   $p_n$-Model1
  ----------------------- ------- ------------------ --------------
  0.5                     3.61    3.55               4.50
  2                       15.84   15.04              17.66

  : Comparison of average bonding numbers ($n_{avg}$) of solute for extreme size ratios. \[table:extreme\]

### Varying association strengths

To understand the effect of varying association strengths between solute-solvent and solvent-solvent particles, we studied a case with a fixed size ratio of $\sigma_s / \sigma_p = 0.8$ and varying association strengths (Fig. \[fig:Asso\_asym\]). ![image](figure7) As the strength of solute-solvent association is increased compared to solvent-solvent interactions, multi-body effects become important. Higher deviations are observed with the TPT2-based Marshall-Chapman approximation, especially for increasing strength of solute-solvent association. Importantly, excellent agreement with the complete reference theory is observed for all cases noted in the figure. 
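The workflow just described (occupancies $\{p_n\}$ from simulation or from Eq. \[eq:pn\_1par\], $F^{(n)}$ from Eq. \[eq:Fn\], coordination fractions from Eqs. \[eq:300\] and \[eq:301\], and the average from Eq. \[eq:81\]) reduces to a short calculation. The sketch below treats the combined factor $x^{(p)} X_A^{(p)} f_A^{(s,p)}\sqrt{\kappa_{AA}}$ as a single input number (in the full theory $X_A^{(p)}$ is solved self-consistently), and the occupancy values shown are placeholders rather than simulation data.

```python
import numpy as np
from math import comb

def F_n(pn):
    """F^(n) of Eq. (Fn): binomial-weighted sums over the occupancy distribution {p_m}."""
    nmax = len(pn) - 1
    return np.array([sum(comb(m, n) * pn[m] for m in range(n, nmax + 1))
                     for n in range(nmax + 1)])

def bonding_fractions(pn, lam):
    """Coordination fractions X_n (Eqs. 300-301) and average bonding number (Eq. 81).

    lam stands in for x^(p) * X_A^(p) * f_A^(s,p) * sqrt(kappa_AA), supplied as a number here.
    """
    F = F_n(pn)
    n = np.arange(len(F))
    w = lam ** n * F          # w[0] = F^(0) = 1, so normalizing reproduces Eqs. 300-301
    X = w / w.sum()
    return X, float(np.dot(n, X))

# placeholder occupancy distribution (would come from simulation or Eq. (pn_1par))
pn = np.array([0.02, 0.10, 0.25, 0.33, 0.22, 0.08])
X, n_avg = bonding_fractions(pn, lam=2.5)
print(X, n_avg)
```

Because all multi-body information enters only through $F^{(n)}$, swapping the simulated $\{p_n\}$ for the model of Eq. \[eq:pn\_1par\] changes nothing else in the calculation.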
### Chemical potential and energy-entropy contributions

Fig. \[fig:Asso\_mu\] shows the chemical potential of the solute for two limiting cases: one where the solvent-solvent association is present and one where it is absent, for two size ratios. ![image](figure8) The results indicate that despite the large deviations in $n_{avg}$ (Fig. \[fig:n\_avg\]) noted above, the deviations using the Marshall-Chapman approximation are not as large relative to the complete reference approach. To better understand this result, we consider a symmetric system where there is no association between solvent particles. For this case, the partial molar energy is the energy of the system with solute-solvent association and there is no contribution from changes in solvent-solvent association due to the inclusion of a solute at infinite dilution. We decompose the residual chemical potential of the solute ($\mu^{Asso}_{s}$) into its energetic and entropic contributions [@yu_thermodynamic_1988; @yu_solvation_1990]: $$\beta \mu_s^{Asso} = \beta E_s^{Asso}-T\cdot \beta S_{s,V}^{Asso}$$ where $E_s^{Asso} = \left( \frac{\partial (\mu^{Asso}_s/T)}{\partial (1/T)}\right)_{\rho _s,\rho _p}$ is the partial molar energy and $S_{s,V}^{Asso}= -\left( \frac{\partial \mu _s^{Asso}}{\partial T}\right)_{\rho _s,\rho _p}$ is the partial molar entropy contribution. The entropy and energy values can be obtained with MCA and the complete reference approach based on the corresponding temperature derivatives of the chemical potential of the solute. ![image](figure9) Fig. \[fig:en\_entr\] clearly shows that energy and entropy contributions are not captured accurately by MCA. But the apparently reasonable prediction of the chemical potential (using MCA) results from a cancellation of errors between the entropy and energy contributions. The above result shows that in comparing perturbation theories, it could be useful and prudent to study the chemical potential and also its energy and entropy contributions.

### Concentrated systems

We also study the variation of the average bonding number of the solute for different concentrations of the solute in the asymmetric mixtures. As the concentration of the solute increases, the system becomes limited in the number of solvent molecules that can bond to the solute molecules and hence the TPT2 approximation used in MCA becomes more accurate. For low concentrations, multi-body correlations are important and hence deviations are observed with MCA. Fig. \[fig:n\_avg\_conc\] shows the results for two different size ratios. The theory with the complete reference is able to capture the average bonding numbers for the whole concentration range. ![image](figure10)

Concluding discussions
======================

We have studied asymmetric mixtures having strong short-range association between differently sized solute and solvent molecules. The solute molecules have isotropic interactions and the solvent molecules have directional interactions. Such systems are archetypes of colloidal mixtures that are being actively studied for designing materials at the nanoscale. These systems can also describe the short range ion-solvation and ion-pairing effects in electrolyte systems, which is another focus of our research. The isotropic interactions of the solute can allow multiple solvent molecules to associate and hence multi-body effects become important for these systems. 
Previously [@bansal_structure_2016], we discussed how the development of an accurate perturbation theory for these systems is hindered by the difficulty in obtaining the multi-body correlations in the reference system (typically hard sphere). We discussed the limitation of an approach based on obtaining multi-body correlations for the hard sphere system in the gas phase and approximating the bulk solvent effect with a linear superposition of pair correlations (together with a term to account for three-body corrections). It was observed that this second order perturbation method fails at high association strengths and high densities. We introduced an approach to represent the multi-body clusters in terms of the occupancy distribution to accurately describe the packing in the hard sphere system. Excellent agreement with MC simulation for a range of conditions of association and concentrations was obtained with this *complete reference* approach for symmetric mixtures. Here, we have built upon the earlier work and studied systems with size asymmetry. Our study shows that the multi-body correlations for asymmetric hard sphere mixtures can be accurately described in terms of occupancy distributions. With these accurate packing effects, our approach gives excellent agreement with MC simulation for the asymmetric associating mixtures. These occupancy distributions were obtained by particle simulations. Based on ideas borrowed from quasichemical theory, we have also developed parametric models to describe occupancy distributions in hard sphere systems of different densities and different asymmetries. These distributions were obtained by describing the effects of clustering, medium and surface interactions simultaneously in the hard sphere packing around a solute and can be incorporated in perturbation theories (e.g., statistical associating fluid theory) without having to perform particle simulations for the reference fluid. We validate this complete reference theory (using parameterized models of the reference distribution) against several simulations. A critical test was in analyzing the energy and entropy contributions to the chemical potential of the solute. For a system where solvent-solvent association is not present, such a decomposition of the chemical potential into energy and entropy contributions with the *complete reference* theory showed excellent agreement with MC simulations. The apparently reasonable agreement of the second order perturbation approach is shown to arise from the balancing of errors in the energy and entropy contributions. This important finding suggests the need to study different properties while validating perturbative theories for fluids. The present framework can prove useful in modeling real solutions where the concentration of solute is low and its size is different from that of the solvent molecules.

Acknowledgment
==============

We acknowledge RPSEA / DOE 10121-4204-01 and the Robert A. Welch Foundation (C-1241) for financial support.

Appendix {#sc:appen}
========

Hard sphere distribution
------------------------

### Expression for $K_n$ {#sc:Kn}

From Eq. \[eq:pnp0\], we find that obtaining an expression for $K_n$ reduces to evaluating the ratio $p_n / p_0$. The total potential energy of the system when $n$ solvent particles are coordinated with the solute and the remaining $N-n$ solvent particles are outside the observation volume can be formally written as $U = U_{SP_n} + U_{N-n|SP_n} + U_{N-n}$. $U_{SP_n}$ is the potential energy of the solute-$n$-solvent cluster. 
$U_{N-n|SP_n}$ is the interaction energy of the cluster with the rest of the solvent; specifically $U_{N|SP_0}$ is the interaction energy of the solute with the fluid outside the observation volume. In the particular case of hard-spheres, $U_{N|SP_0} = 0$. Finally, $U_{N-n}$ is the potential energy of the solvent constituting the bulk. Since $p_n \propto Q(n,N-n,V,T)$, where $Q(n,N-n,V,T)$ is the canonical partition function of the system with $n$ solvent in the observation volume around the solute and $N-n$ in the bulk, we immediately have $$\frac{p_n}{p_0} = \frac{N!}{(N-n)!n!} \frac {\int\limits_v d{\vec r}_1\ldots \int\limits_v d{\vec r}_n e^{-\beta U_{SP_n}} \cdot \int\limits_{V-v} d{\vec r}_{n+1} \ldots \int\limits_{V-v} d{\vec r}_{N-n} e^{-\beta U_{N-n|SP_n}} e^{-\beta U_{N-n}}}{\int\limits_{V-v} d{\vec r}_1 \ldots \int\limits_{V-v} d{\vec r}_N e^{-\beta U_{N}}} \, \label{eq:pnporatio}$$ where we have implicitly moved the center of the coordinates to the center of the solute and thus canceled a common factor of $V$ from both the numerator and denominator. Further, since $U_{SP_0} = 0$ and $U_{N|SP_0} = 0$, the denominator simply depends on the potential energy of the solvent in the bulk. (Of course for a general solute, this restriction is easily removed [@merchant_thermodynamically_2009; @merchant:jcp11b].) Next consider the ratio $$\frac{Q(0,N,V-v)}{Q(0,N-n,V-v)} = \frac{Q(0,N-n+1,V-v)}{Q(0,N-n,V-v)}\cdot \frac{Q(0,N-n+2,V-v)}{Q(0,N-n+1,V-v)}\ldots \frac{Q(0,N,V-v)}{Q(0,N-1,V-v)} \label{eq:pdt}$$ where we suppress $T$ for conciseness and the 0 indicates that there is no solute in the system (or as is the case here, $U_{SP_0} = U_{N|SP_0} = 0$). In the thermodynamic limit of large $V >> v$ and $N >> n$, from the standard potential distribution relation [@lrp:book; @lrp:cpms; @widom:jpc82], each of the above factor on the right is simply $e^{-\beta \mu_p^{\rm ex}} / \Lambda_p^3 \rho_p$, where $\Lambda_p$ is the thermal de Broglie wavelength of the solvent sphere and $\mu^{\rm ex}_p$ is its excess chemical potential, and $\rho_p$ the density of solvent. Since, $$Q(0,N-n,V-v,T) = \frac{1}{\Lambda_p^{3(N-n)} (N-n)!} \int\limits_{V-v}d{\vec r}_1\ldots \int\limits_{V-v} d{\vec r}_{N-n} e^{-\beta U_{N-n}} \,$$ we multiply and divide Eq. \[eq:pnporatio\] by the factor $\int\limits_{V-v}d{\vec r}_1\ldots \int\limits_{V-v} d{\vec r}_{N-n} e^{-\beta U_{N-n}}$. Rearranging the resulting equation using Eq. \[eq:pdt\] in the large $V$ and large $N$ limit, and noting that the momentum partition functions (for both solute and solvent) cancel exactly, we obtain Eq. 20 (main text), where $$\begin{aligned} e^{-\beta \phi(R^n; \beta)} & = & \frac{ \int\limits_{V-v} d{\vec r}_{n+1} \ldots \int\limits_{V-v} d{\vec r}_{N-n} e^{-\beta U_{N-n|SP_n}} e^{-\beta U_{N-n}}}{\int\limits_{V-v}d{\vec r}_1\ldots \int\limits_{V-v} d{\vec r}_{N-n} e^{-\beta U_{N-n}}} \nonumber \\ & = & \langle e^{-\beta U_{N-n|SP_n}} \rangle_{N-n} \,\end{aligned}$$ where $ \langle \ldots \rangle_{N-n}$ denotes averaging over the configurations of the $N-n$ solvent particles in the volume outside the observation shell. ### MaxEnt model for $\{p_n\}$ {#sc:MaxEnt} Here we present an alternative derivation of the two parameter model (Eq. \[eq:pn\_2par\]) on the basis of information theoretic modeling of $\{p_n\}$ [@sivia; @lrp:jpcb98; @asthagiri:pre03]. 
On the basis of the isolated cluster partition function, we have the distribution of occupancy probabilities $\{p_n^{(0)}\}$ as $$p_n^{(0)} = \frac{K_n^{(0)} \rho_p^{n}}{1+\sum\limits_{m\ge 1} K_m^{(0)} \rho_p^{m}}$$ With $\{p_n^{(0)}\}$ as the default model and accepting the availability of the mean (first moment) and variance (second moment) of the distribution $\{p_n\}$ from simulation data, by standard maximum entropy arguments, we have $$\label{it04} \frac{p_n}{p^0_n} = e^{-C} e^{-\lambda_1 \cdot n} e^{-\lambda_2 \cdot n^2} \,$$ where the Lagrange multipliers $C$, $\lambda_1$, and $\lambda_2$ are, respectively, obtained from enforcing the following constraints $$\label{it05} \sum_n p_n = 1$$ $$\label{it06} \sum_n p_n \cdot n = n_{avg}$$ $$\label{it07} \sum_n p_n \cdot n^2= \overline{\sigma^2}$$ We thus obtain $$\label{it11} p_n = \frac{\big [ e^{-\lambda_1} e^{-\lambda_2 \cdot n} \big ]^{n} K_n^{(0)} \rho_p^{n}}{1+ \sum \limits_{m\ge 1} \big [ e^{-\lambda_1} e^{-\lambda_2 \cdot m} \big ]^{m} K_m^{(0)} \rho_p^{m}}$$ Eq. \[it11\] has the same form as obtained in section \[sc:HStheory\] for a two parameter correlation. By using $\lambda' = e^{-\lambda_1}$, we can also represent Eq. \[it11\] by $$\label{it13} p_n = \frac{\big [ \lambda' e^{-\lambda_2 \cdot n} \big ]^{n} K_n^{(0)} \rho_p^{n}}{1+ \sum\limits_{m\ge 1} \big [\lambda' e^{-\lambda_2 \cdot m} \big ]^{m} K_m^{(0)} \rho_p^{m}}$$ which is the same form introduced in our previous work [@bansal_structure_2016]. Based on this derivation, we can see that $\lambda_2$ is the term corresponding to surface interactions discussed in section \[sc:HStheory\] and constrains the variance of the $\{p_n\}$ distribution.

Isolated cluster probabilities
------------------------------

For asymmetric mixtures we study the isolated cluster probabilities; to find the maximum number of solvent molecules that can occupy the observation volume, we use the spherical code [@sphcode; @sph_code2].

### Spherical Code

The spherical code provides information to place $n$ points optimally on a sphere. It gives the optimal angle between the points, considering the center of the sphere as the origin. This is extended to place solvent particles around the solute surface, with the points of contact between the spheres serving as the optimized points. The angle between the two contact points, $\theta$, is determined as given in Fig. \[fig-sph\_pack\]. The number of solvent molecules that can be tightly packed is obtained from the data [@sphcode]. The sphere onto which the points are optimally placed is an imaginary sphere which includes the critical radius as shown by the dashed lines. It is to be noted that this is still a theoretical estimate for contact packing on the imaginary larger sphere, and due to higher freedom of packing in our case, coordination states can be marginally higher for very large size ratios, $\sigma_s/\sigma_p \geq 5$. The table in Fig. \[fig-sph\_pack\] also gives the maximum angle for which the single bonding condition holds for a given size ratio ($\theta_{c,max}$) and specified critical distance ($r_c$). ![image](figure11){width="55.00000%"} \[fig-sph\_pack\] Once $n^{max}$ is defined, we find $P^{(n)}$ as the probability that there is no hard sphere overlap for randomly generated solvent molecules in the observation volume (or inner shell) of the solute. 
As discussed previously [@marshall_molecular_2013], a hit-or-miss Monte Carlo [@hammersley; @pratt_quasichemical_2001] approach to calculate $P^{(n)}$ proves inaccurate for large values of $n$ ($n>8$). But since $${P^{( n)}} = P_{insert}^{( n )}{P^{( {n - 1} )}} \, , \label{eq:12}$$ where $P_{insert}^{(n)}$ is the probability of inserting a *single* particle given $n-1$ particles are already in the bonding volume, an iterative procedure can be used to build the higher-order partition function from the lower-order one [@marshall_molecular_2013]. The one-particle insertion probability $P_{insert}^{(n)}$ is easily evaluated using hit-or-miss Monte Carlo. Following is the table of isolated cluster probabilities for different size ratios. \[table:IC\]

[^1]: wgchap@rice.edu
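The successive-insertion estimate of Eq. \[eq:12\] can be sketched in a few lines. The fragment below is only an illustration of the procedure (shell geometry, overlap test, and the product $P^{(n)} = P_{insert}^{(n)} P^{(n-1)}$), with far fewer trials than the $10^8$ to $10^9$ insertions quoted earlier; the whole-configuration rejection used to generate the $(n-1)$-particle cluster is exact but only practical for small $n$.

```python
import numpy as np

rng = np.random.default_rng(1)

def point_in_shell(r_in, r_out):
    """Uniform random point in the spherical shell r_in <= |r| <= r_out (solute at origin)."""
    u = rng.normal(size=3)
    u /= np.linalg.norm(u)
    return u * rng.uniform(r_in ** 3, r_out ** 3) ** (1.0 / 3.0)

def valid_cluster(k, r_in, r_out, sigma_p):
    """k solvent centers, iid uniform in the shell, conditioned on no solvent-solvent overlap."""
    while True:
        pts = [point_in_shell(r_in, r_out) for _ in range(k)]
        if all(np.linalg.norm(pts[i] - pts[j]) >= sigma_p
               for i in range(k) for j in range(i + 1, k)):
            return pts

def insertion_probabilities(sigma_s, sigma_p, r_c, n_max, trials=2000):
    """Estimate P_insert^(n), n = 1..n_max, by hit-or-miss insertion into valid (n-1)-clusters."""
    r_in = 0.5 * (sigma_s + sigma_p)          # closest solute-solvent approach
    p_ins = []
    for n in range(1, n_max + 1):
        hits = 0
        for _ in range(trials):
            cluster = valid_cluster(n - 1, r_in, r_c, sigma_p)
            trial = point_in_shell(r_in, r_c)
            if all(np.linalg.norm(trial - q) >= sigma_p for q in cluster):
                hits += 1
        p_ins.append(hits / trials)
    return np.array(p_ins)

p_ins = insertion_probabilities(sigma_s=1.0, sigma_p=1.0, r_c=1.1, n_max=4)
P_n = np.cumprod(p_ins)                       # Eq. (12): P^(n) = P_insert^(n) P^(n-1)
print(p_ins, P_n)
```

The efficiency of Eq. \[eq:12\] at larger $n$ comes from building on previously accepted clusters rather than regenerating them from scratch as done in this sketch.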
---
abstract: 'Boron Carbide exhibits a broad composition range, implying a degree of intrinsic substitutional disorder. While the observed phase has rhombohedral symmetry (space group $R\bar{3}m$), the enthalpy minimizing structure has lower, monoclinic, symmetry (space group $Cm$). The crystallographic primitive cell consists of a 12-atom icosahedron placed at the vertex of a rhombohedral lattice, together with a 3-atom chain along the 3-fold axis. In the limit of high carbon content, approaching 20% carbon, the icosahedra are usually of type B$_{11}$C$^p$, where the $p$ indicates the carbon resides on a polar site, while the chains are of type C-B-C. We establish an atomic interaction model for this composition limit, fit to density functional theory total energies, that allows us to investigate the substitutional disorder using Monte Carlo simulations augmented by multiple histogram analysis. We find that the low temperature monoclinic $Cm$ structure disorders through a pair of phase transitions, first via a 3-state Potts-like transition to space group $R3m$, then via an Ising-like transition to the experimentally observed $R\bar{3}m$ symmetry. The $R3m$ and $Cm$ phases are electrically polarized, while the high temperature $R\bar{3}m$ phase is nonpolar.'
address:
- 'Department of Physics, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA, 15232, United States of America, 412-268-7645'
- 'Department of Mechanical Engineering and Materials Science, Duke University, Durham, NC, 27708, United States of America'
author:
- Sanxi Yao
- 'W. P. Huhn'
- 'M. Widom'
title: 'Phase Transitions of Boron Carbide: Pair Interaction Model of High Carbon Limit'
---

Boron Carbide, density functional theory, multi-histogram method, 3-state Potts-like transition, Ising-like transition

Introduction
============

The phase diagram of boron carbide is not precisely known, with both qualitative and quantitative discrepancies among the different research groups [@Samsonov58; @Ekbom81; @Beauvy83; @Schwetz91; @Okamoto92; @Domnich11; @Rogl14]. The most widely accepted diagram of Schwetz [@Schwetz91; @Okamoto92] displays a single boron carbide phase at temperatures above 1000 C, coexisting with elemental boron and graphite. The carbon concentration covers the range 9$\%$-19.2$\%$ carbon, falling notably short of the 20% carbon fraction at which the electron count is believed to be optimal [@Longuet-Higgins55; @Lipscomb81; @Balakrishnarajan07]. More interestingly, the nearly temperature independent behavior of the phase boundaries is thermodynamically improbable, and the broad composition range suggests substitutional disorder at 0K, in apparent violation of the 3rd law. Since so much remains unknown, and experiment can only be assured of reaching equilibrium at high temperature, theoretical calculation offers hope for resolving the behavior at lower temperatures, in addition to interpreting the disorder at high temperatures. As determined crystallographically [@GWill76; @GWill79; @Kwei96], boron carbide has a 15-atom primitive cell, consisting of an icosahedron and 3-atom chain, in a rhombohedral lattice with symmetry $R\bar{3}m$. At 20$\%$ carbon a proposed B$_4$C structure featured pure boron icosahedra B$_{12}$ with a C-C-C chain [@Clark43]. Although this structure exhibits $R\bar{3}m$ symmetry, later experimental work [@Kwei96; @Schmechel00] suggested that the icosahedron should be B$_{11}$C instead of B$_{12}$ and the chain should be C-B-C instead of C-C-C. 
For other compositions, the icosahedra can be B$_{12}$, B$_{11}$C, or even B$_{10}$C$_2$ (the bi-polar defect [@Mauri01]), and the chain can be C-B-C, C-B-B, B-B$_2$-B [@Yakel; @Shirai14] or B-V-B (V means vacancy). Fig. \[fig:structure\] illustrates the rhombohedral cell, the C-B-C chain and the icosahedron. The 12 icosahedral sites are categorized into 2 classes: equatorial and polar. We further classify the polar sites into north and south. Icosahedra are connected along edges of the rhombohedral lattice, which pass through the polar sites. ![Primitive cell of boron carbide showing C-B-C chain at center along the 3-fold axis. The icosahedron (not to scale) occupies the cell vertex. Equatorial sites of the icosahedron are shown in red, labeled “e”, the north polar sites are shown in green and labeled $p_0$, $p_1$ and $p_2$, while the south polar sites are shown in cyan and labeled as $p_0'$, $p_1'$ and $p_2'$.[]{data-label="fig:structure"}](structure.eps "fig:"){width="0.8\linewidth"} Density functional theory studies of a large number of possible arrangements of boron and carbon atoms [@Mauri01; @Widom12; @Bylander90] identified four stable phases: pure $\beta$-Boron, rhombohedral B$_{13}$C$_2$, monoclinic B$_4$C, and graphite. The stable rhombohedral phase consists of B$_{12}$ icosahedra with C-B-C chains, giving full rhombohedral symmetry $R\bar{3}m$, while the stable monoclinic phase has B$_{11}C^p$ icosahedra and C-B-C chains. Carbon occupies the same polar site (e.g. $p_0$) in every icosahedron, resulting in symmetry $Cm$. Introducing disorder in the occupation of polar sites (i.e. randomly choosing one polar site to occupy with carbon) can restore $R\bar{3}m$ symmetry. In general, we define site occupations $m_i$ ($i=0, 1, 2, 0', 1', 2'$) corresponding to the mean occupation of the sites $p_i$. The experimentally observed phase has all $m_i=1/6$. We call this orientational disorder. The stable monoclinic phase has one large order parameter (e.g. $m_0\sim 1$) and the remaining $m_i\sim 0$, which we call orientational order. Hence it was proposed [@Huhn12] that a temperature-driven order-disorder phase transition is responsible for the high symmetry seen in experiments that are likely in equilibrium only at high temperature. According to Landau’s theory of phase transitions [@Wooten08], the space groups of structures linked by continuous or at most weakly first order phase transitions should obey group-subgroup relationships. Additionally, the subgroup should be maximal, again provided the transition is continuous or at most weakly first order. Typically the high temperature phase possesses the higher symmetry, as this permits a higher entropy. In regard to boron carbide, the high temperature phase has space group $R\bar{3}m$ (group \#166) and the low temperature phase has group $Cm$ (group \#8). However, $Cm$ is not a maximal subgroup of $R\bar{3}m$, suggesting the possible existence of an intermediate phase. Two sequences of transitions obey the maximality requirement: $R\bar{3}m\rightarrow R3m\rightarrow Cm$ and $R\bar{3}m\rightarrow C2/m\rightarrow Cm$ [@Hahn84]. The two corresponding intermediate symmetries $R3m$ and $C2/m$ have space group numbers \#160 and \#12, respectively. In terms of the distribution of carbon atoms on icosahedra, $R3m$ breaks the inversion symmetry, and hence corresponds to occupying one pole (e.g. the north pole) more heavily than the other, so that $m_i\sim 1/3$ ($i=0, 1, 2$) with the remaining $m_i\sim 0$. 
In contrast, $C2/m$ breaks the 3-fold rotational symmetry but preserves inversion. Thus the carbon atoms preferentially occupy a pair of diametrically opposite polar sites, e.g. $m_0=m_0'\sim 1/2$ while the remaining $m_i\sim 0$. Since the carbon atom draws charge from surrounding borons, the phases that break inversion symmetry possess an electric dipole moment. Hence we name the $R3m$ state “polar”. The possibility of a polar phase was independently suggested recently [@Ektarawong14]. We name the $Cm$ state “tilted polar” because the broken 3-fold symmetry creates a component of polarization in the $xy$ plane. Although it lacks a net dipole moment, we name the $C2/m$ state “bipolar” because it is reminiscent of the bipolar defect [@Mauri01]. Finally, we name the high symmetry phase $R\bar{3}m$ “nonpolar”. To identify phase transitions, and to determine which symmetry-breaking sequence occurs, we perform Monte Carlo simulations. Strictly speaking, phase transitions occur only in the thermodynamic limit of large system size, which is beyond the reach of density functional theory calculations. Hence we construct a classical interatomic interaction model, with parameters fit to density functional theory energies. For simplicity we consider only the high carbon limit where every icosahedron contains a single polar carbon (i.e. we essentially project the composition range onto the $x_C=0.2$ line). We analyze our simulation results with the aid of the multiple histogram technique [@Swendsen88; @Swendsen89]. In the end we indeed discover a sequence of two phase transitions. One arises from the breaking of 3-fold symmetry linking $Cm$ to $R3m$ that is first order, similar to the 3-state Potts model [@Potts52] in three dimensions. The other corresponds to the breaking of inversion symmetry linking $R3m$ to $R\bar{3}m$ that is in the Ising universality class.

Methods
=======

Pair interaction model
----------------------

Given that every primitive cell contains a B$_{11}$C$^p$ icosahedron and a C-B-C chain, the configuration can be uniquely specified by assigning a 6-state variable $\sigma$ to each cell, corresponding to which of the six polar sites holds the carbon atom. The relaxed total energy of a specific configuration can be expressed through a cluster expansion in terms of pairwise, triplet and higher-order interactions [@Sanchez84; @Walle02] of these variables. As shown below, truncating at the level of pair interactions provides sufficient accuracy for present purposes. Further, we observe that symmetry-inequivalent pairs are in nearly one-to-one correspondence with the inter-carbon separation $R_{ij}=|\bR_i-\bR_j|$, where the $\bR_i$ are the initial positions prior to relaxation; the separations belong to a discrete set of fixed possible values $\{R_k\}$, arranged in order of increasing length. Note that we need not concern ourselves with interactions of polar carbons with chain carbons, as the number of such pairwise interactions is conserved across configurations. Thus our total energy can be expressed as $$\label{eq:bondmodel} E(N_1,\dots,N_m)=E_0+\sum_{k=1}^m a_k N_k$$ where $N_k$ is the number of intercarbon separations of length $R_k$, and $m$ is the number of such separations we choose to treat in our model. 
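As an illustration of how Eq. \[eq:bondmodel\] is evaluated, the sketch below counts the bond populations $N_k$ from ideal (unrelaxed) carbon positions under periodic boundary conditions and forms the model energy. The positions, reference separations and coefficients passed to these routines are placeholders for the fitted quantities discussed next, not the production values.

```python
import numpy as np
from itertools import product

def bond_counts(carbon_pos, lattice, R_ref, tol=0.05):
    """Count N_k, the number of carbon-carbon pairs at each reference separation R_k.

    carbon_pos : (M, 3) Cartesian positions of the polar carbons, one per primitive cell
    lattice    : (3, 3) supercell lattice vectors (rows)
    R_ref      : reference separations R_k in increasing order (Angstrom)
    tol        : matching tolerance; pairs farther than max(R_ref) + tol are ignored
    """
    images = [np.array(v) @ lattice for v in product((-1, 0, 1), repeat=3)]
    counts = np.zeros(len(R_ref), dtype=int)
    M = len(carbon_pos)
    for i in range(M):
        for j in range(i + 1, M):
            # minimum-image separation; self-image pairs are neglected, which is safe
            # for 2x2x2 and larger supercells given the ~5.5 Angstrom interaction range
            d = min(np.linalg.norm(carbon_pos[i] - carbon_pos[j] + im) for im in images)
            k = int(np.argmin(np.abs(np.array(R_ref) - d)))
            if abs(R_ref[k] - d) < tol:
                counts[k] += 1
    return counts

def model_energy(counts, E0, a):
    """Pair interaction model of Eq. (bondmodel): E = E_0 + sum_k a_k N_k."""
    return E0 + float(np.dot(a, counts))
```

In practice the $\{R_k\}$, $E_0$ and $\{a_k\}$ come from the fit to DFT energies described next, and the carbon positions from the ideal polar sites of each icosahedron.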
We use the density functional theory-based Vienna ab initio simulation package (VASP) [@Kresse93; @Kresse94; @Kresse961; @Kresse962] to calculate the total energies within the projector augmented wave (PAW) [@Blochl94; @Kresse99] method utilizing the PBE generalized gradient approximation [@Perdew96; @Perdew97] as the exchange-correlation functional. We construct a variety of 2x2x2 and 3x3x3 supercells, which we relax with increasing $k$-point meshes until convergence is reached at the level of 0.1 meV/atom, holding the plane-wave energy cutoff fixed at 400 eV. Using our DFT energies, we fit the $m+1$ parameters $E_0$ and $\{a_i\}$ in our bond interaction model Eq. (\[eq:bondmodel\]), increasing $m$ until we are satisfied with the quality of the fit, at $m=10$. The shortest bond included has length $R_1=1.732$ Å, corresponding to carbons at polar sites joined by an intericosahedral bond. For example, $p_0$ and $p_0'$ carbons on icosahedra joined by a bond in the $p_0$ direction. This bond has the largest strength, with $a_1=1.126$ eV. Bond strength rapidly diminishes with separation $R$. Our longest bonds included are $R_9=5.174$ Å  (the lattice constant separating neighboring rhombohedral vertices) and $R_{10}=5.465$ Å  (the second neighbor rhombohedral vertex separation). Our fitting procedure minimizes the mean-square deviation of model energy from calculated DFT energy, supplemented by a small contribution from the $L_1$ norm of the set of coefficients $\{a_k\}$. Including the $L_1$ norm regularizes the expansion in a manner similar to compressive sensing [@Hart13; @Wakin08] and improves transferability to larger cell sizes. For our fitted data set of 188 independent 2x2x2 supercells (see Fig. \[fig:fitting\]) we obtain an RMS error of 0.474meV/atom. Checking our fit on a different set of 47 $2\times2\times2$ supercells with energy below 5.7 meV/atom we obtain RMS error of 0.233meV/atom, while checking on 57 $3\times3\times3$ supercells yielded RMS error of 0.405meV/atom. Note that 5.7 meV/atom corresponds to $15\times 5.7=86$ meV/cell corresponding to $kT$ per degree of freedom at T=1000K. 1.0cm ![Fit of bond interaction model to DFT-calculated total energies in $2\times2\times2$ supercells. Inset shows cross validation check of $2\times2\times2$ supercells transferability to $3\times3\times3$ supercells.[]{data-label="fig:fitting"}](fitting.eps "fig:"){width="0.8\linewidth"} Symmetry and order parameters ----------------------------- Within Landau theory, each possible symmetry breaking is quantified by an order parameter whose transformation properties match irreducible representations of the parent symmetry group. Space group $R\bar{3}m$ contains point group $D_{3d}$, which is the symmetry group of the triangular antiprism formed by the six polar sites of the icosahedron. Important elements include 3-fold rotation about the $z$-axis, reflection in a vertical plane containing the $z$-axis, and inversion through the center. The longitudinal polarization $$P_z=m_0+m_1+m_2-m_0'-m_1'-m_2'$$ transforms as the one dimensional irreducible representation $A_{2u}$, which breaks inversion symmetry, while preserving rotation and reflection, and hence is suitable for characterizing the transition $R\bar{3}m\rightarrow R3m$. 
The pair of functions $$P_{xz}=(m_0-m_0')+\frac{1}{2}(m_1'+m_2'-m_1-m_2), ~~~ P_{yz}={\sqrt{3}\over 2}(m_1+m_2'-m_1'-m_2)$$ transform as the two dimensional irreducible representation $E_g$, which additionally breaks the 3-fold rotational symmetry, and hence characterizes the further transition $R3m\rightarrow Cm$. Since we will not care which specific orientation is selected at low temperature, we take the norm of the two dimensional representation, and define $P_{xy}=\sqrt{P_{xz}^2+P_{yz}^2}$. Although we shall not need it, we note that the functions $$P_x=(m_0+m_0')-\frac{1}{2}(m_1+m_1'+m_2+m_2'), ~~~ P_y={\sqrt{3}\over 2}(m_1+m_1'-m_2-m_2'),$$ which transform as the irrep $E_u$, characterize the transformation $R\bar{3}m\rightarrow C2/m$. As examples of the use of these order parameters, consider a fully disordered nonpolar state of symmetry $R\bar{3}m$ in which all $m_i=1/6$. All the above order parameters vanish in this state. Now let $m_i=1/3$ while $m_i'=0$, and note that $P_z=1$, while $P_{xz}=P_{yz}=P_x=P_y=0$, so that $P_z$ indeed characterizes the polar state $R3m$. Completing the symmetry breaking so that $m_0=1$ while all others vanish, we have both $P_z=1$ and $P_{xz}=1$, so the state is both polar and tilted, with symmetry $Cm$. Finally, take $m_0=m_0'=1/2$ and all other $m_i$ and $m_i'=0$ and note that $P_x\ne 0$, while $P_z=P_{xz}=P_{yz}=0$, as expected for the bipolar state of symmetry $C2/m$.

Monte Carlo simulation and multi-histogram method
-------------------------------------------------

We perform conventional Metropolis Monte Carlo simulations in $L\times L\times L$ supercells of the rhombohedral primitive cell, with $L$ ranging from 3 to 12. Our basic move is a “rotation” in which we randomly select an icosahedron, then randomly displace the carbon from its current polar site to a randomly chosen alternate polar site. The move is then accepted or rejected according to the Boltzmann factor for the energy change $\Delta E$. Following an equilibration period, we begin recording the total energy $E$ and the occupations $m_i$ ($i=0, 1, 2, 0', 1', 2'$) of the polar sites for each subsequent configuration. After a run at one temperature is completed, we take the final configuration as the initial configuration for another run at a nearby temperature. At a given simulation temperature $T_s$, a histogram of configuration energies $H_{T_s}(E)$ (see Fig. \[fig:histograms8\]) can be converted into a density of states $W(E)=H_{T_s}(E) \exp{(E/kT_s)}$, which is accurate over the energy range that has been well sampled at temperature $T_s$. This density of states can be used to calculate the partition function $$Z(T)=\sum_E W(E) e^{-E/kT}$$ which is accurate over a range of temperatures close to $T_s$ [@Swendsen88]. The logarithm of $Z(T)$ yields the free energy, and derivatives of the free energy yield other quantities such as internal energy, entropy and specific heat (see Fig. \[fig:cv\]). Alternatively, we may take moments of the energy distribution, $$\label{eq:avE} \avg{E^q} = \frac{1}{Z(T)}\sum_E W(E) E^q e^{-E/k_BT}.$$ The first moment ($q=1$) yields the thermodynamic internal energy, while the fluctuations $$\label{eq:Cv} c_v(T)=\frac{\avg{E^2}-\avg{E}^2}{k_BT^2}$$ give the specific heat. Moreover, by combining histograms taken at temperatures chosen so that the tails of the histograms overlap, the density of states can be self-consistently reconstructed [@Swendsen89] so that the free energy becomes accurate over all intervening temperatures.
Inspecting the histograms shown in Fig. \[fig:histograms8\], rapid evolution is apparent between temperatures 710 and 730K, which can be an indication of a phase transition. As supercell size increases the histograms narrow, requiring additional simulation temperatures to maintain the degree of overlap seen here. ![Multiple histograms for the 8x8x8 supercell. The ground state $Cm$ configuration is taken as the zero of energy.[]{data-label="fig:histograms8"}](totalhist.eps "fig:"){width="0.8\linewidth"} This notion can be extended to multidimensional histograms in which the density of states is further broken down according to values of order parameters of interest. For instance, average powers of the longitudinal polarization can be evaluated as $$\label{eq:avPz} \avg{|P_z|^q}(T) = \frac{1}{Z(T)}\sum_{E, P_z} W(E, P_z) |P_z|^q e^{-E/kT},$$ where $W(E, P_z)$ is the joint distribution of energy and longitudinal polarization, and we take the absolute value of $P_z$ because in a well equilibrated simulation both positive and negative values of $P_z$ occur with equal frequency. The first power gives the mean polarization, while from the first and second powers together we obtain the longitudinal susceptibility $$\label{eq:chiz} \chi_z(T)=N\frac{\avg{|P_z|^2} - \avg{|P_z|}^2}{ k_BT},$$ where $N$ is the number of atoms. The susceptibility $\chi_{xy}(T)$ is obtained in a similar fashion. The units of $\chi_z$ and $\chi_{xy}$ are $eV^{-1}/atom$.

Results and Discussion
======================

Order parameters
----------------

Plotting the order parameters vs. temperature provides a quick way to determine the sequence of phases and transitions. As Fig. \[fig:4pics\] shows, $\avg{|P_z|}$ passes through two regimes of anomalous behavior. As the supercell size grows, the average longitudinal polarization $\avg{|P_z|}$ vanishes for $T\gtrsim 790$K but approaches finite values for $T\lesssim 790$K. At $T=790$K the slope of $\avg{|P_z|}(T)$ diverges. An even stronger divergence of slope occurs at $T\approx 717$K. Meanwhile, $\avg{P_{xy}}$ decreases with increasing supercell size for $T\gtrsim 717$K but approaches finite values for $T\lesssim 717$K. The diverging slope at $T\approx 717$K is consistent with an emerging discontinuity in $\avg{P_{xy}}(T)$. On the basis of the order parameters, we judge there are three phases separated by two phase transitions. The high temperature phase has symmetry $R\bar{3}m$, in which both $P_z$ and $P_{xy}$ vanish. Below 790K a longitudinal polarization grows continuously, and we enter a phase of symmetry $R3m$, having lost inversion symmetry. Around 717K the polarization suddenly tilts off the $z$-axis and we enter the tilted polar phase of symmetry $Cm$.

Specific heat and susceptibility
--------------------------------

Having explored the order parameters, which can be considered as first derivatives of the free energy with respect to applied fields, we now consider the specific heat and susceptibilities. The specific heat corresponds to a second derivative of free energy with respect to temperature, while the susceptibilities are second derivatives with respect to conjugate fields. All are evaluated from Monte Carlo data via fluctuation formulas such as Eqs. (\[eq:Cv\]) and (\[eq:chiz\]). Specific heat for a series of increasing supercell sizes is shown in Fig. \[fig:cv\]. In addition to a strong peak around T=717K, a weak peak around T=790K can be seen growing for larger supercell sizes in the inset. The growing peaks converge to temperatures that roughly correspond to the order parameter anomalies seen above.
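A minimal single-histogram version of the reweighting behind Eqs. \[eq:avE\] and \[eq:Cv\] is sketched below; the self-consistent multi-histogram combination of Ref. [@Swendsen89], and the joint $(E,P_z)$ histograms used for the susceptibilities, follow the same pattern. The synthetic histogram here is a placeholder for recorded simulation data, and the estimates are reliable only for $T$ near the simulation temperature $T_s$.

```python
import numpy as np

def reweight(E_bins, H, T_s, T, k_B=8.617e-5):
    """Estimate <E>(T) and c_v(T) from an energy histogram H_{T_s}(E) recorded at T_s.

    E_bins : bin-center energies (eV); H : histogram counts at simulation temperature T_s (K).
    """
    mask = H > 0
    E = E_bins[mask]
    # W(E) = H(E) exp(E / k_B T_s); combine exponents and shift to avoid overflow
    logw = np.log(H[mask]) + E / (k_B * T_s) - E / (k_B * T)
    logw -= logw.max()
    w = np.exp(logw)
    Z = w.sum()
    E_avg = np.dot(E, w) / Z
    E2_avg = np.dot(E * E, w) / Z
    return E_avg, (E2_avg - E_avg ** 2) / (k_B * T ** 2)

# usage with synthetic data standing in for simulation output
E_bins = np.linspace(0.0, 40.0, 400)                     # eV above the Cm ground state
H = np.exp(-0.5 * ((E_bins - 20.0) / 3.0) ** 2) * 1e4    # placeholder histogram
E_avg, c_v = reweight(E_bins, H, T_s=720.0, T=715.0)
print(E_avg, c_v)
```

Scanning $T$ over a window around each $T_s$ and stitching overlapping windows together yields the smooth curves of the specific heat and susceptibility figures.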
\[fig:4pics\] shows the longitudinal and perpendicular (i.e. in-plane) susceptibilities, $\chi_z$ and $\chi_{xy}$ respectively. Evidently the high temperature specific heat peak coincides with the peak in $\chi_z$, and hence relates to the fluctuations associated with the onset of longitudinal polarization. Similarly, the low temperature specific heat peak coincides with the peak in $\chi_{xy}$, and hence relates to fluctuations associated with the tilt of polarization off the 3-fold axis. ![Specific heat for 3x3x3 to 12x12x12 supercells.[]{data-label="fig:cv"}](TCv1.eps "fig:"){width="80.00000%"} ### Ising-like transition As the high temperature transition from $R\bar{3}m$ to $R3m$ coincides with a breaking of inversion symmetry, we expect the transition to be in the universality class of the three-dimensional Ising model. Some associated critical exponents are $\alpha=0.110$ (specific heat), $\gamma=1.2372$ (susceptibility) and $\nu=0.6301$ (correlation length) [@Pelissetto02]. Applying finite size scaling theory [@Landau05], we note that the specific heat peak height should diverge with increasing supercell size as $L^{\alpha/\nu}$, where $\alpha/\nu=0.175$. The small value of this exponent explains the weak divergence seen around 790K in Fig. \[fig:cv\]. Similarly the susceptibility $\chi_z$ should diverge as $L^{\gamma/\nu}$, with $\gamma/\nu=1.963$. Validation of the size- and temperature-dependence of $\chi_z$ requires a finite-size scaling collapse, plotting the scaled susceptibility $\chi_z/L^{\gamma/\nu}$ as a function of an expanded temperature scale $\epsilon L^{1/\nu}$ where $\epsilon=(T-T_c)/T_c$. When plotted in this manner as seen in Fig. \[fig:Isingscale\] the finite size susceptibilities converge to a common scaling function $\chi_0$, supporting the proposed Ising universality class of this continuous phase transition as well as yielding an improved estimate for $T_c=793.7$K. ![Validation of universality classes. (left) Ising scaling function for $\chi_z$; (right) Lee-Kosterlitz histograms of $P_\perp$.[]{data-label="fig:Isingscale"}](Isingscaling.eps "fig:"){width="0.4\linewidth"} ![Validation of universality classes. (left) Ising scaling function for $\chi_z$; (right) Lee-Kosterlitz histograms of $P_\perp$.[]{data-label="fig:Isingscale"}](rwthistpxy.eps "fig:"){width="40.00000%"} ### 3-state Potts-like transition Once a direction for longitudinal polarization has been chosen at the high temperature Ising-like transition (e.g. north), the remaining orientational ordering requires selecting a particular in-plane direction (e.g. $i=0, 1$ or 2), resulting in a breaking of 3-fold rotational symmetry. Thus we expect the low temperature transition to be in the universality class of the 3-state Potts model. In three dimensions this transition is expected to be weakly first order [@Wu82]. Because the order parameter jumps discontinuously at a first order transition, the fluctuations per atom of energy and polarization should grow proportionally to the number of atoms, i.e. as $L^3$. When peak heights of $c_v$ and $\chi_{xy}$ are plotted on a log-log plot vs. $L$, we expect a straight line in the asymptotic limit of large $L$ whose slope should be 3. Unfortunately, our largest supercell size $L=12$ has not yet reached this limit, with slopes around 2 and 2.8 seen for $c_v$ and $\chi_{xy}$ respectively, though these clearly tend to increase with $L$. The Lee-Kosterlitz criterion [@Landau05; @Lee90] is an alternative method to confirm a first order transition.
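The histogram operations used above and in the following paragraph (evaluating Eqs. (\[eq:avPz\]) and (\[eq:chiz\]) at temperatures near a simulation temperature, and the Lee-Kosterlitz reweighting of the marginal distribution of the polarization) amount to a few array manipulations. The following sketch is purely illustrative: the array layout, the choice of eV/K units and the crude two-peak comparison are our own assumptions rather than details of the present simulations, and the normalization by the partition sum, which is left implicit in Eq. (\[eq:avPz\]), is written out explicitly.

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant in eV/K (assumed energy unit: eV)

def reweight_factors(E, T_s, T_e):
    """Per-energy-bin factors carrying a histogram sampled at T_s to T_e."""
    d_beta = 1.0 / (K_B * T_e) - 1.0 / (K_B * T_s)
    log_w = -d_beta * E
    return np.exp(log_w - log_w.max())    # subtract the max to avoid overflow

def polarization_moments(H, E, P, T_s, T_e, n_atoms):
    """<|P|>, <|P|^2> and the susceptibility chi at temperature T_e,
    from a joint histogram H[i, j] over energy bins E[i] and polarization bins P[j]."""
    w = reweight_factors(E, T_s, T_e)
    joint = H * w[:, None]                # reweighted joint counts
    Z = joint.sum()                       # explicit normalization (partition sum)
    absP = np.abs(P)[None, :]
    m1 = (joint * absP).sum() / Z
    m2 = (joint * absP**2).sum() / Z
    chi = n_atoms * (m2 - m1**2) / (K_B * T_e)
    return m1, m2, chi

def lee_kosterlitz_distribution(H, E, P, T_s, T_grid):
    """Reweight the marginal distribution of P and return it at the temperature
    in T_grid where its two peaks are closest to equal height."""
    best = None
    for T_e in T_grid:
        pP = (H * reweight_factors(E, T_s, T_e)[:, None]).sum(axis=0)
        pP /= pP.sum()
        half = len(P) // 2                # crude split: disordered vs. ordered peak
        mismatch = abs(pP[:half].max() - pP[half:].max())
        if best is None or mismatch < best[0]:
            best = (mismatch, T_e, pP)
    return best[1], best[2]
```

Subtracting the largest exponent before exponentiating merely rescales all weights by a common factor, which cancels in the normalized averages.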
Because the two coexisting phases exhibit finite differences in properties such as energy and polarization, probability distributions of such properties should be bimodal, with each peak sharpening as system size grows. Fig. \[fig:Isingscale\] illustrates this distribution for $P_{xy}$. This distribution is obtained by reweighting the joint energy and polarization histogram $H_{T_s}(E,P_{xy})$ with the factor $\exp(E/k_BT_s-E/k_BT_e)$ and then marginalizing over energy, where the temperature $T_e$ is chosen so as to make the heights of the two peaks equal. Clearly the distributions of polarization illustrate coexistence of a state with $P_{xy}=0$ and a state with $P_{xy}\sim 0.5$. Thus we conclude the transition is first order, as expected for symmetry-breaking of the 3-state Potts type in three dimensions. Electric dipole moments ----------------------- Because of the charge imbalance created by the polar carbons, the polar and tilted polar states must exhibit electric dipole moments, while the nonpolar state does not. We constructed specific representative structures of each of the three phases to calculate these dipole moments. Taking a single hexagonal unit cell containing three primitive cells, we constructed the tilted polar $Cm$ state by placing carbon at each of the three $p_0'$ sites. We then constructed a polar $R3m$ state by placing the polar carbon at $p_0'$ in one cell, $p_1'$ in another and $p_2'$ in the third. Finally we took a $2\times1\times1$ supercell of the hexagonal unit cell, and inverted the polarization of every second carbon (e.g. we replaced the $p_0'$ in the first hexagonal cell by $p_0$, and did the same for $p_1'$ and $p_2'$ in the second and third hexagonal cell), resulting in a nonpolar state that locally resembles $R\bar{3}m$. Electric dipole moments as calculated by VASP are given in Table \[tab:dipole\].

  Phase          symmetry group   $p_x$      $p_y$     $p_z$
  -------------- ---------------- ---------- --------- -------
  tilted polar   $Cm$             -0.63437   0.36626   1.13
  polar          $R3m$            0.00       0.00      1.11
  nonpolar       $R\bar{3}m$      0.00       0.00      0.00

  : Electric dipole moments of the 3 phases (units are $e$Å, where $e$ is the magnitude of the charge on an electron). The polar phase has p0', p1', p2' carbons, resulting in a dipole moment along the $+z$ direction. For the $Cm$ phase, the projection onto the xy-plane of the dipole moment is along the projection of the vector $\bP0'$ (1.894,-1.093,-2.666) from the center of the icosahedra to p0'.[]{data-label="tab:dipole"}

Conclusion ========== We construct an artificial model inspired by boron carbide by placing an orientational degree of freedom at the vertices of a rhombohedral lattice, mimicking the distribution of carbon sites among polar vertices of B$_{11}$C$^p$ icosahedra. Because this model is restricted to 20% carbon it cannot capture the broad composition range of true boron carbide, but it can reveal orientational order and disorder similar to what might be seen in experiment. A pairwise interaction model counting bonds of specific type between polar carbons fits well to density functional theory total energies, and is transferable between supercells of differing sizes. Monte Carlo simulations utilizing this model reveal three distinct phases separated by a pair of phase transitions. The high temperature phase has symmetry group $R\bar{3}m$, similar to what is observed experimentally in boron carbide.
As temperature falls to 790K, inversion symmetry is lost via a continuous phase transition, resulting in a polarized state of symmetry $R3m$, which is a maximal subgroup of $R\bar{3}m$. The possible existence of such a state was independently suggested [@Ektarawong14], although at a much higher temperature. Finally, as temperature falls below 717K, 3-fold rotational symmetry is broken via a first order transition, resulting in a tilted polar phase of monoclinic symmetry $Cm$, which is a maximal subgroup of $R3m$. When fully orientationally ordered, this state matches the previously known ground state of B$_4$C [@Mauri01; @Widom12; @Bylander90]. The universality class of each transition follows the expectations based on the type of symmetry breaking. The continuous 790K transition, which breaks inversion symmetry, is shown to fall in the Ising universality class because of the finite size scaling collapse of longitudinal susceptibility $\chi_z$ as shown in Fig. \[fig:Isingscale\]. The 717K transition, which breaks 3-fold rotation symmetry, is shown by the Lee-Kosterlitz criterion to be weakly first order, consistent with expectations for the 3-state Potts universality class. Acknowledgements ================ We thank Robert H. Swendsen, David P. Landau and James P. Sethna for helpful discussions. References ========== Samsonov G.V., The present state of the investigation of the boron-carbon diagram, Zh. Fiz. Khim., Vol. 32, 1958, p 2424-2429 (in Russian). Ekbom, Lars B., Amundin, Carl Olof, Microstructural evaluation of sintered boron carbides with different compositions, Science of Ceramics, 11, p 237-243. Beauvy M., Stoichiometric limits of carbon-rich boron carbide phases, J. Less-Common Met., Vol. 90, 1983, p 169-175. K. A. Schwetz, P. Karduck, Investigations of the boron-carbon system with the aid of electron probe microanalysis, J. Less Common Met. 175 (1991) 1–100. H. Okamoto, B-C (boron-carbon), J. Phase Equil. 13 (1992) 436. Domnich, V., Reynaud, S., Haber, R.A., Chhowalla, M. Boron carbide: Structure, properties, and stability under stress, Journal of the American Ceramic Society (2011), 94 (11), p 3605-3628. Peter F. Rogl, Jan Vřešťál, Takaho Tanaka and Satoshi Takenouchi, The B-rich side of the B-C phase diagram, Calphad 44 (2014) 3-9. H. C. Longuet-Higgins and M. de V. Roberts, The electronic structure of an icosahedron of boron atoms, Proc. R. Soc. Lond. A 1955 230, 110-119. W. N. Lipscomb, Borides and boranes, J. Less-Common Metals 82 (1981) 1-20. Musiri M. Balakrishnarajan, Pattath D. Pancharatna and Roald Hoffmann, Structure and bonding in boron carbide: The invincibility of imperfections, New J. Chem. 31 (2007) 473-85. G. Will, K. H. Kossobutzki, An x-ray structure analysis of boron carbide, b13c2, J. Less-Common Met. 44 (1976) 87. G. Will, A. Kirfel, A. Gupta, E. Amberger, Electron density and bonding in b13c2, J. Less-Common Met. 67 (1979) 19-29. G. H. Kwei, B. Morosin, Structures of the boron-rich boron carbides from neutron powder diffraction: Implications for the nature of the intericosahedral chains, J. Phys. Chem. 100 (1996) 8031-9. H. K. Clark, J. L. Hoard, The crystal structure of boron carbide, J. Am. Chem. Soc. 65 (1943) 2115-9. R. Schmechel, H. Werheit, Structural defects of some icosahedral boron-rich solids and their correlation with the electronic properties, J. Solid State Chem. 154 (2000) 61-7. F. Mauri, N. Vast, C. J.
Pickard, Atomic structure of icosahedral b4c boron carbide from a first principles analysis of nmr spectra, Phys. Rev. Lett. 87 (2001) 085506. Yakel, H. L., The crystal structure of a boron-rich boron carbide, Acta Crystallogr. B 31, 1797 (1975). Koun Shirai, Kyohei Sakuma and Naoki Uemura, Theoretical study of the structure of boron carbide B$_{13}$C$_2$, Phys. Rev. B 90 (2014). M. Widom and W. Huhn, Prediction of Orientational Phase Transition in Boron Carbide, Solid State Sciences 14, 1648 (2012), ISSN 1293-2558. D. M. Bylander, L. Kleinman, S. Lee, Self-consistent calculations of the energy bands and bonding properties of b12c3, Phys. Rev. B 42 (1990) 1394-1403. W.P. Huhn and M. Widom, A free energy model of boron carbide, J. Stat. Phys. 150 (2012) 432-41. El-Batanouny M, Wooten F. Symmetry and condensed matter physics: a computational approach. Cambridge University Press, 2008. Hahn, T., and Paufler, P., International tables for crystallography, Vol. A: Space-group symmetry, D. Reidel Publ., 1984. A. Ektarawong, S. I. Simak, L. Hultman, J. Birch, and B. Alling, First-principles study of configurational disorder in B4C using a superatom-special quasirandom structure method, Phys. Rev. B 90 (2014) 024204. A. M. Ferrenberg and R. H. Swendsen, Phys. Rev. Lett. 61, 2635 (1988). A. M. Ferrenberg and R. H. Swendsen, Phys. Rev. Lett. 63, 1195 (1989). R. B. Potts, Some generalized order-disorder transformations, Mathematical Proceedings of the Cambridge Philosophical Society 48 (1952) 106-109. J. M. Sanchez, F. Ducastelle and D. Gratias, Generalized cluster description of multicomponent systems, Physica 128A (1984) 334-50. A. van de Walle, M. Asta and G. Ceder, The alloy theoretic automated toolkit: A user guide, Calphad 26 (2002) 539-53. G. Kresse and J. Hafner. Ab initio molecular dynamics for liquid metals. Phys. Rev. B, 47:558, 1993. G. Kresse and J. Hafner. Ab initio molecular-dynamics simulation of the liquid-metal-amorphous-semiconductor transition in germanium. Phys. Rev. B, 49:14251, 1994. G. Kresse and J. Furthmüller. Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set. Comput. Mat. Sci., 6:15, 1996. G. Kresse and J. Furthmüller. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. Phys. Rev. B, 54:11169, 1996. P. E. Blöchl. Projector augmented-wave method. Phys. Rev. B, 50:17953, 1994. G. Kresse and D. Joubert. From ultrasoft pseudopotentials to the projector augmented-wave method. Phys. Rev. B, 59:1758, 1999. J. P. Perdew, K. Burke, and M. Ernzerhof. Generalized gradient approximation made simple. Phys. Rev. Lett., 77:3865, 1996. J.P. Perdew, J.A. Chevary, S.H. Vosko, K.A. Jackson, M.R. Pederson, D.J. Singh, and C. Fiolhais. Erratum: Atoms, molecules, solids, and surfaces: Applications of the generalized gradient approximation for exchange and correlation. Phys. Rev. B, 48:4978, 1993. Nelson L J, Hart G L W, Zhou F, et al. Compressive sensing as a paradigm for building physics models. Physical Review B, 2013, 87(3): 035125. Candès E J, Wakin M B. An introduction to compressive sampling. Signal Processing Magazine, IEEE, 2008, 25(2): 21-30. A. Pelissetto and E. Vicari, Critical phenomena and renormalization-group theory, Physics Reports 368, 549 (2002). D. Landau and K. Binder, A Guide to Monte Carlo Simulations in Statistical Physics, Cambridge University Press, New York, NY, USA, 2005, ISBN 0521842387. F. Y. Wu, The Potts model, Rev. Mod.
Phys. 54, 235 (1982). J. Lee and J. M. Kosterlitz, New numerical method to study phase transitions, Phys. Rev. Lett. 65, 137 (1990). ![image](4pics.eps){width="1.1\linewidth"}
--- abstract: 'Let $\Gamma$ be a finite rank subgroup of the linear torus or an elliptic curve defined over a number field with complex multiplication. We prove that the group of points which are rational over the field generated by all elements in the divisible hull of $\Gamma$, is free abelian modulo this divisible hull. This proves that a necessary condition for Rémond’s generalized Lehmer conjecture is satisfied.' address: 'Fakultät für Mathematik, Universität Duisburg-Essen, 45117 Germany' author: - Lukas Pottmeyer title: Fields Generated by Finite Rank Subgroups of Tori and Elliptic Curves --- Introduction ============ We fix once and for all an algebraic closure $\overline{\mathbb{Q}}$ of the rational numbers and assume that all algebraic extensions of $\mathbb{Q}$ are contained in this closure. The absolute logarithmic Weil-height $h$ on $\overline{\mathbb{Q}}$ can be defined as follows: For $\alpha_1\in\overline{\mathbb{Q}}$ let $f(x)=a_d (x-\alpha_1)\cdot\ldots\cdot(x-\alpha_d)\in\mathbb{Z}[x]$ be irreducible, then $$h(\alpha_1)=\frac{1}{d}\log\left( \vert a_d \vert \cdot \prod_{i=1}^d \max\{1,\vert \alpha_i\vert \}\right).$$ This function satisfies $h(\alpha^r)=\vert r \vert h(\alpha)$ for all $\alpha \in \overline{\mathbb{Q}}^*$ and all $r\in\mathbb{Q}$, and vanishes precisely at $0$ and roots of unity. It follows that for any $\alpha \in \overline{\mathbb{Q}}^*$ which is not a root of unity, the sequence $\alpha^{\nicefrac{1}{n}}$, $n\in\mathbb{N}=\{1,2,3,\ldots\}$, contains algebraic numbers of arbitrarily small positive height. Let us denote the set of roots of unity by $\mu$. We can ask the following question: Are there elements of arbitrarily small positive height in $$\mathbb{Q}(\mu,\alpha, \alpha^{\nicefrac{1}{2}}, \alpha^{\nicefrac{1}{3}},\ldots)^*$$ which are not of the form $\zeta \cdot \alpha^q$ for some $\zeta \in \mu$ and some $q\in\mathbb{Q}$? Conjecturally the answer is “no”. To formulate this conjecture in full generality, we need some further notation. Let ${\mathcal{G}}=A\times \mathbb{G}_m^N$ for some $N\in\mathbb{N}_0$ and an abelian variety $A$ defined over a number field $K$ equipped with an ample symmetric line bundle $\mathcal{L}$. The choice of this line bundle defines a Néron-Tate height $h_{\mathcal{L}}$ on $A(\overline{\mathbb{Q}})$. The canonical height $\widehat{h}_{{\mathcal{G}}}$ on ${\mathcal{G}}(\overline{\mathbb{Q}})$ is given as the sum of the Néron-Tate height and the Weil-height on each component. This means, for $(P,\alpha_1,\ldots,\alpha_N)\in{\mathcal{G}}(\overline{\mathbb{Q}})$ we set $$\widehat{h}_{{\mathcal{G}}}(P,\alpha_1,\ldots,\alpha_N)=h_{\mathcal{L}}(P)+\sum_{i=1}^N h(\alpha_i).$$ For definitions, properties and applications of these height functions we refer to [@BG]. If $G$ is any divisible group with a subgroup $\Gamma$, then we define the *divisible hull* of $\Gamma$ to be the group $$\Gamma_{\operatorname{div}}:=\{\gamma \in G \vert n\gamma \in \Gamma \text{ for some } n \in \mathbb{N}\}.$$ Let $\Gamma$ be a subgroup of ${\mathcal{G}}(\overline{\mathbb{Q}})$. Then we denote by $\operatorname{End}({\mathcal{G}})\cdot \Gamma$ the subgroup of ${\mathcal{G}}(\overline{\mathbb{Q}})$ generated by all elements of the form $\varphi(\gamma)$ with $\varphi\in\operatorname{End}({\mathcal{G}})=\operatorname{End}_{\overline{\mathbb{Q}}}({\mathcal{G}})$ and $\gamma\in\Gamma$. 
Moreover we define $$\Gamma_{\operatorname{sat}}:= (\operatorname{End}({\mathcal{G}})\cdot \Gamma)_{\operatorname{div}}.$$ Note that $\operatorname{End}({\mathcal{G}})\cdot \Gamma = \Gamma$ if $\operatorname{End}({\mathcal{G}})=\mathbb{Z}$. In this case $\Gamma_{\operatorname{div}} = \Gamma_{\operatorname{sat}}$. Note that in all cases $\Gamma_{\operatorname{sat}}$ is a divisible group of finite rank whenever the rank of $\Gamma$ is finite. By the *rank* of $\Gamma$, we mean the rational rank; this is the maximal number of linearly independent elements in $\Gamma$. Moreover, if $\Gamma$ has rank zero then $\Gamma_{\operatorname{sat}}=\Gamma_{\operatorname{div}}$ is precisely given by the torsion subgroup ${\mathcal{G}}_{\operatorname{tors}}$. Now we can formulate Rémond’s generalized Lehmer conjecture [@Re11], Conjecture 3.4. \[conj\] Let ${\mathcal{G}}$ be either a torus or an abelian variety and let $\Gamma$ be a finite rank subgroup of ${\mathcal{G}}(\overline{\mathbb{Q}})$. An element $\alpha \in {\mathcal{G}}(\overline{\mathbb{Q}})$ is called $\Gamma$-transversal if it is contained in some translate $\gamma + B$, where $\gamma\in\Gamma_{\operatorname{sat}}$ and $B$ is a connected proper algebraic subgroup of ${\mathcal{G}}$. - (a) There exists a positive constant $c$ such that $$\widehat{h}_{{\mathcal{G}}}(\alpha)\geq \frac{c}{[K(\Gamma_{\operatorname{sat}})(\alpha):K(\Gamma_{\operatorname{sat}})]^{\nicefrac{1}{\dim({\mathcal{G}})}}} \quad \forall ~ \alpha \in {\mathcal{G}}(\overline{\mathbb{Q}}) \text{ which are not } \Gamma\text{-transversal}.$$ - (b) For any $\varepsilon>0$ there is a positive constant $c_{\varepsilon}$ such that $$\widehat{h}_{{\mathcal{G}}}(\alpha)\geq \frac{c_{\varepsilon}}{[K(\Gamma_{\operatorname{sat}})(\alpha):K(\Gamma_{\operatorname{sat}})]^{\nicefrac{1}{\dim({\mathcal{G}})}+\varepsilon}} \quad \forall ~ \alpha \in {\mathcal{G}}(\overline{\mathbb{Q}}) \text{ which are not } \Gamma\text{-transversal}.$$ - (c) For all finite extensions $L/K(\Gamma_{\operatorname{sat}})$ there is a positive constant $c_L$ such that $$\widehat{h}_{{\mathcal{G}}}(\alpha)\geq c_L \quad \forall ~ \alpha \in {\mathcal{G}}(L)\setminus \Gamma_{\operatorname{sat}}.$$ Conjecture \[conj\] is weaker than Rémond’s original conjecture in two respects. Firstly, Conjecture 3.4 from [@Re11] predicts lower bounds for the height of subvarieties of ${\mathcal{G}}$, not just for the height of points. Secondly, the exponent on the right hand side of (a) and (b) is smaller than our exponents $\frac{1}{\dim({\mathcal{G}})}$, resp. $\frac{1}{\dim({\mathcal{G}})}+\varepsilon$. Since the focus of this paper lies solely on part (c), we did not introduce the necessary notation to present the exponents conjectured by Rémond. On the other hand, this conjecture could be generalized to also cover semi-abelian varieties of the form $A\times \mathbb{G}_m^N$. For part (c) this generalization of the conjecture can be found in [@Pl19], Conjecture 1.2. Obviously, part (a) of Conjecture \[conj\] implies part (b). It is also true that part (b) implies part (c). If $\dim({\mathcal{G}})=1$, this follows since in this case $\alpha$ is $\Gamma$-transversal if and only if $\alpha \in \Gamma_{\operatorname{sat}}$. For general ${\mathcal{G}}$ the implication (b) $\Rightarrow$ (c) follows as a very special case from the strong result [@Re11], Theorem 3.7. Part (a) of Conjecture \[conj\] is much stronger than the famous Lehmer conjecture which has its origin in [@Le33].
This conjecture predicts the existence of a positive constant $c$ such that $h(\alpha)\geq \frac{c}{[\mathbb{Q}(\alpha):\mathbb{Q}]}$ for all $\alpha \in \overline{\mathbb{Q}}^*\setminus \mu$. There are some results on Conjecture \[conj\] in the case that the rank of $\Gamma$ is zero. Delsinne [@Del09] proved Conjecture \[conj\] (b) in the case that ${\mathcal{G}}=\mathbb{G}_m^N$ and $\Gamma_{\operatorname{sat}}={\mathcal{G}}_{\operatorname{tors}}$. Also under the assumption $\Gamma_{\operatorname{sat}}={\mathcal{G}}_{\operatorname{tors}}$, Carrizosa [@Ca09] proved Conjecture \[conj\] (b) in the case that ${\mathcal{G}}$ is an abelian variety with complex multiplication. (Previously, Delsinne’s result for $N=1$ had been proven in [@AZ00], and part (c) for ${\mathcal{G}}$ an abelian variety with complex multiplication and $\Gamma_{\operatorname{sat}}={\mathcal{G}}_{\operatorname{tors}}$ had been proven in [@BS04].) Amoroso [@Am14] achieved the following result towards the seemingly easiest case of a group of positive rank: Let $\Gamma=\langle 2 \rangle$ be the subgroup of ${\mathcal{G}}=\mathbb{G}_m$ generated by $2$, and define $\Gamma_{3\operatorname{div}}=\{\alpha \in \mathbb{G}_m \vert 3^n \cdot \alpha \in \Gamma \text{ for some } n \in \mathbb{N}\}$. Then there is an effective constant $c>0$ such that $h(\alpha)\geq c$ for all $\alpha \in \mathbb{G}_m(\mathbb{Q}(\Gamma_{3\operatorname{div}}))\setminus \Gamma_{3\operatorname{div}}$. Under certain technical restrictions, a similar result for groups ${\mathcal{G}}=A\times \mathbb{G}_m$ and $\Gamma=\{0\} \times \langle b \rangle$, where $A$ is an elliptic curve and $b$ is an integer, has recently been announced in [@Pl19]. We will give some group theoretic support for the validity of Conjecture \[conj\] (c). In the next section we will see that Conjecture \[conj\] (c) implies that the group $\nicefrac{{\mathcal{G}}(L)}{\Gamma_{\operatorname{sat}}}$ is free abelian for all finite extensions $L/K(\Gamma_{\operatorname{sat}})$. Hence, the following theorem shows that a necessary condition for the truth of Conjecture \[conj\] (c) is fulfilled. \[thm:freeab\] Let ${\mathcal{G}}$ be either $\mathbb{G}_m$ or an elliptic curve with complex multiplication defined over a number field $K$, and let $\Gamma$ be a subgroup of ${\mathcal{G}}(\overline{\mathbb{Q}})$ of finite rank. Then the group $\nicefrac{{\mathcal{G}}(L)}{\Gamma_{\operatorname{sat}}}$ is free abelian for all finite extensions $L/K(\Gamma_{\operatorname{sat}})$. If the rank of $\Gamma$ is zero (i.e. if $\Gamma_{\operatorname{sat}}={\mathcal{G}}_{\operatorname{tors}}$), then the statement of Theorem \[thm:freeab\] is true for all semi-abelian varieties of the form $A\times \mathbb{G}_m^N$. This was proved by Bays, Hart and Pillay in the appendix of [@BHP]. Their result is used in our proof, as it provides a kind of *base case* (see Proposition \[prop:firstclaim\]). The paper is organized as follows. In Section 2 we will clarify the connection between Rémond’s conjecture and free abelian groups. In Section 3 we recall a criterion due to Pontryagin for a group to be free abelian.
We apply this criterion to show that the group $\nicefrac{{\mathcal{G}}(L)}{\Gamma_{\operatorname{sat}}}$ is free abelian if the torsion group of $$\label{eq:gr} \nicefrac{{\mathcal{G}}(L)}{{\mathcal{G}}(K({\mathcal{G}}_{\operatorname{tors}}))+\Gamma_{\operatorname{sat}}} \quad \text{ or } \quad \nicefrac{{\mathcal{G}}(L)}{{\mathcal{G}}(K)+\Gamma_{\operatorname{sat}}}$$ has finite exponent (this reduction step is true in the general case ${\mathcal{G}}=A\times \mathbb{G}_m^N$). Next we prove Theorem \[thm:freeab\] in the case that ${\mathcal{G}}$ is the linear torus, by showing that the first group from (\[eq:gr\]) is torsion free. This follows in an elementary way from basic facts on cyclic field extensions. We collect some facts on elliptic curves in Section 4. The proof that the exponent of the torsion part of the second group from (\[eq:gr\]) is finite if ${\mathcal{G}}$ is an elliptic curve with complex multiplication is given in Section 6. The final Section 7 provides a result towards a proof of Theorem \[thm:freeab\] for ${\mathcal{G}}$ an elliptic curve without complex multiplication. In this case we prove that for all but finitely many prime numbers $\ell$ the second group from (\[eq:gr\]) has trivial $\ell$-torsion. In the case where ${\mathcal{G}}$ is an elliptic curve, we use some Kummer theory. In particular, we need that the rank of the Galois group of $K({\mathcal{G}}_{\operatorname{tors}},\frac{1}{\ell^m}\Gamma)/K({\mathcal{G}}_{\operatorname{tors}})$ is as large as possible for all primes $\ell$ and some integer $m$ which is equal to $1$ for all but finitely many primes. For ${\mathcal{G}}_{\operatorname{tors}}$ replaced by the $\ell$-torsion of ${\mathcal{G}}$ this result is due to Bashmakov ([@Ba], Theorem 6). As we could not find a reference for the precise statement used in this paper, we will present a proof of this result following the outline of V.§5 of [@La]. Some group theory and Rémond’s Lemma ==================================== All abelian groups in this section will be written additively. A *norm* on an abelian group $G$ is a function ${\lVert\cdot\rVert}: G \longrightarrow \mathbb{R}_{\geq 0}$ satisfying (i) ${\lVert g\rVert} = 0 ~ \Longleftrightarrow ~ g = 0$ is the neutral element, (ii) ${\lVert g + f\rVert} \leq {\lVert g \rVert} + {\lVert f \rVert}$ for all $g,f \in G$, and (iii) ${\lVert n\cdot g \rVert} = \vert n \vert \cdot {\lVert g \rVert}$ for all $g\in G$ and all $n \in \mathbb{Z}$. By (i) and (iii), a group norm can only exist for torsion-free groups. If ${\lVert\cdot\rVert}$ only satisfies (ii) and (iii), then it is called a *semi-norm*. A norm ${\lVert\cdot\rVert}$ is called *discrete* on $G$ if and only if $0$ is not an accumulation point of the set $\{{\lVert g\rVert} \vert g\in G\}$. One of the main properties of $\widehat{h}_{{\mathcal{G}}}$ is that it is well-defined on $\nicefrac{{\mathcal{G}}(\overline{\mathbb{Q}})}{{\mathcal{G}}_{\operatorname{tors}}}$, where ${\mathcal{G}}_{\operatorname{tors}}$ is the torsion subgroup of ${\mathcal{G}}$. Moreover, the map $$(P,\alpha_1,\ldots,\alpha_N) \mapsto \sqrt{h_{\mathcal{L}}(P)}+\sum_{i=1}^N h(\alpha_i)$$ is a norm on $\nicefrac{{\mathcal{G}}(\overline{\mathbb{Q}})}{{\mathcal{G}}_{\operatorname{tors}}} = \nicefrac{A(\overline{\mathbb{Q}})\times\mathbb{G}_m^N(\overline{\mathbb{Q}})}{{\mathcal{G}}_{\operatorname{tors}}}$. We will denote this norm on $\nicefrac{{\mathcal{G}}(\overline{\mathbb{Q}})}{{\mathcal{G}}_{\operatorname{tors}}}$ by $\Vert \cdot \Vert_h$.
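For later use it may help to spell out why the square root appears in the abelian factor; this verification is our addition and uses only the standard quadraticity $h_{\mathcal{L}}(nP)=n^2\,h_{\mathcal{L}}(P)$ of the Néron-Tate height attached to the ample symmetric line bundle $\mathcal{L}$, together with $h(\alpha^n)=\vert n \vert h(\alpha)$. For every $n\in\mathbb{Z}$ we get $$\Vert n\cdot(P,\alpha_1,\ldots,\alpha_N)\Vert_h=\sqrt{h_{\mathcal{L}}(nP)}+\sum_{i=1}^N h(\alpha_i^{n})=\vert n \vert \left(\sqrt{h_{\mathcal{L}}(P)}+\sum_{i=1}^N h(\alpha_i)\right)=\vert n \vert\cdot\Vert (P,\alpha_1,\ldots,\alpha_N)\Vert_h,$$ so $\Vert \cdot \Vert_h$ is indeed homogeneous of degree one, and it vanishes exactly on the classes of torsion points because $h_{\mathcal{L}}$ and $h$ vanish precisely at torsion points and at roots of unity, respectively.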
Let ${\lVert.\rVert}$ be a norm on a divisible group $G$ and let $\Gamma$ be a subgroup of $G$. Then we define the function $${\lVert.\rVert}_{\Gamma}: \nicefrac{G}{\Gamma_{\operatorname{div}}} \longrightarrow \mathbb{R}_{\geq0} \quad ; \quad {\lVert[\alpha]\rVert}_{\Gamma}= \inf\{{\lVert\alpha+\gamma\rVert} \vert \gamma \in \Gamma_{\operatorname{div}}\} .$$ Since this map is well-defined we will simply write ${\lVert\alpha\rVert}_{\Gamma}$ for ${\lVert[\alpha]\rVert}_{\Gamma}$. The following lemma is along the lines of [@Re11], Lemma 3.5. The proof is also due to Gaël Rémond, who presented a special case of it at the workshop on “Heights in Diophantine geometry, group theory and additive combinatorics” at the ESI in Vienna in November 2013. See also [@Gr17] for a quantitative version in the $G=\mathbb{G}_m$ case. \[Bogomolov\] Let $G$ be a divisible group such that there is a norm ${\lVert.\rVert}$ on $\nicefrac{G}{G_{\operatorname{tors}}}$. Let $\Gamma$ be a subgroup of $G$ of rank $r <\infty$ with $\gamma_1,\dots,\gamma_r \in \Gamma$ linearly independent. Moreover, let $H$ be another subgroup of $G$ with $\gamma_1,\dots,\gamma_r \in H$ and such that ${\lVert g\rVert}\geq \kappa$ for all $g\in \nicefrac{H}{H_{\operatorname{tors}}}\setminus \nicefrac{\Gamma_{\operatorname{div}}\cap H}{H_{\operatorname{tors}}}$, for some constant $\kappa >0$. Then there exists a positive constant $c$ only depending on $H$ and $\Gamma$ such that ${\lVert g\rVert}_{\Gamma} \geq c$ for all $g\in H\setminus \Gamma_{\operatorname{div}}$. Let $g \in H\setminus \Gamma_{\operatorname{div}}$, and let $\underline{a}=(a_1,\dots,a_r)$ and $\underline{b}=(b_1,\dots,b_r)$ be elements in $\mathbb{Q}^r$. Then properties (ii) and (iii) of ${\lVert.\rVert}$ yield $$\begin{aligned} \label{lipschitz} \big\vert {\lVert g+a_1 \gamma_1+ \dots + a_r\gamma_r\rVert} - {\lVert g + b_1 \gamma_1 + \dots + b_r \gamma_r\rVert} \big\vert &\leq {\lVert(a_1 - b_1) \gamma_1 + \dots + (a_r - b_r)\gamma_r\rVert} \nonumber \\ &\leq \max_{1\leq i \leq r} \{{\lVert\gamma_i\rVert}\} \cdot \sum_{i=1}^r \vert a_i - b_i \vert.\end{aligned}$$ We have to bound ${\lVert g + b_1 \gamma_1 + \dots + b_r \gamma_r\rVert}$ from below, independently of $\underline{b}$. If $\underline{a}\in \frac{1}{m}\mathbb{Z}^{r}$, then $mg+ ma_1 \gamma_1 + \dots + ma_r \gamma_r\in H$ and ${\lVert g+ a_1 \gamma_1 + \dots + a_r \gamma_r\rVert} = \frac{1}{m}{\lVert mg + ma_1\gamma_1 + \dots + ma_r \gamma_r\rVert}$. Since $g$ is not an element of $\Gamma_{\operatorname{div}}$, we can apply our assumption on ${\lVert.\rVert}$ restricted to $\nicefrac{H}{H_{\operatorname{tors}}}\setminus\nicefrac{H \cap \Gamma_{\operatorname{div}}}{H_{\operatorname{tors}}}$ to conclude $$\label{bound} {\lVert g + a_1 \gamma_1 + \dots + a_r \gamma_r\rVert} \geq \frac{\kappa}{m}.$$ Now let $\underline{b}\in \mathbb{Q}^r$ again be arbitrary and let $Q\geq 2r\max_{1\leq i \leq r} {\lVert\gamma_i\rVert} \kappa^{-1}$ be an integer.
By Dirichlet’s approximation theorem, there is a positive integer $m$ and $\underline{a}\in\frac{1}{m}\mathbb{Z}^r$ such that $$\label{fin} m\leq Q^r \text{ and } \vert a_i - b_i \vert \leq \frac{1}{mQ} \text{ for all } i \in \{1,\ldots,r\}.$$ Combining (\[lipschitz\]) and (\[bound\]) yields that ${\lVert g + b_1 \gamma_1 + \dots + b_r \gamma_r\rVert}$ is bounded from below by $$\begin{aligned} \geq &{\lVert g + a_1 \gamma_1+ \dots + a_r \gamma_r\rVert} - \big\vert {\lVert g+a_1 \gamma_1+ \dots + a_r\gamma_r\rVert} - {\lVert g + b_1 \gamma_1 + \dots + b_r \gamma_r\rVert} \big\vert \\ \geq & \frac{\kappa}{m} - \max_{1\leq i \leq r} \{{\lVert\gamma_i\rVert}\} \cdot \sum_{i=1}^r \vert a_i - b_i \vert\overset{\eqref{fin}}{\geq} \frac{\kappa}{m} - \frac{\max_{1\leq i \leq r} \{{\lVert\gamma_i\rVert}\} \cdot r}{mQ}\\ = & \frac{\kappa Q - \max_{1\leq i \leq r} \{{\lVert\gamma_i\rVert}\} \cdot r}{mQ} \geq \frac{\max_{1\leq i \leq r} \{{\lVert\gamma_i\rVert}\} \cdot r}{Q^{r+1}}.\end{aligned}$$ The latter is the postulated positive constant which only depends on $H$ and $\Gamma$. As a corollary we state the result explicitly for $G={\mathcal{G}}(\overline{\mathbb{Q}})$, where ${\mathcal{G}}=A\times \mathbb{G}_m^N$ is a semi-abelian variety, and $H={\mathcal{G}}(F)$ for some field $F\subseteq\overline{\mathbb{Q}}$. \[cor:mcG\] Let $\Gamma \subseteq {\mathcal{G}}(\overline{\mathbb{Q}})$ be a subgroup of finite rank, and let $\Gamma_{\operatorname{sat}}=\langle \gamma_1,\ldots,\gamma_r\rangle_{\operatorname{div}}$. Let $F$ be a subfield of $\overline{\mathbb{Q}}$ satisfying (i) $\gamma_1,\ldots,\gamma_r \in {\mathcal{G}}(F)$, and (ii) there is a positive constant $\kappa$ such that $\widehat{h}_{{\mathcal{G}}}(\alpha) \geq \kappa$ for all $\alpha \in {\mathcal{G}}(F)\setminus \Gamma_{\operatorname{sat}}$. Then there is a positive constant $c$ only depending on $F$ and $\Gamma$ such that ${\lVert\alpha\rVert}_{h,\Gamma}\geq c$ for all $\alpha \in {\mathcal{G}}(F)\setminus \Gamma_{\operatorname{sat}}$. \[heightisnorm\] Let $\Gamma$ be a subgroup of ${\mathcal{G}}(\overline{\mathbb{Q}})$ and ${\lVert.\rVert}$ a norm on $\nicefrac{{\mathcal{G}}(\overline{\mathbb{Q}})}{{\mathcal{G}}_{\operatorname{tors}}}$. Then the function ${\lVert.\rVert}_{\Gamma}$ is a seminorm on $\nicefrac{{\mathcal{G}}(\overline{\mathbb{Q}})}{\Gamma_{\operatorname{sat}}}$. If $\Gamma$ has finite rank, then the particular function ${\lVert.\rVert}_{h,\Gamma}$ is a norm. First we will show that ${\lVert.\rVert}_{\Gamma}$ is a semi-norm, without additional assumptions on the group $\Gamma$. In order to do so, we have to check the properties (ii) and (iii) from the beginning of this section. These properties follow from the respective properties of the norm ${\lVert.\rVert}$.
The triangle inequality follows from $$\begin{aligned} {\lVert\alpha\rVert}_{\Gamma}+{\lVert\beta\rVert}_{\Gamma} &=\inf\{{\lVert\alpha+\gamma\rVert} \vert \gamma \in \Gamma_{\operatorname{div}}\}+\inf\{{\lVert\beta+\gamma'\rVert} \vert \gamma' \in \Gamma_{\operatorname{div}}\}\\ &=\inf\{{\lVert\alpha+\gamma\rVert} + {\lVert\beta+\gamma'\rVert} \vert \gamma,\gamma' \in \Gamma_{\operatorname{div}}\}\\ &\geq \inf\{{\lVert\alpha+\beta+\gamma+\gamma'\rVert} \vert \gamma,\gamma' \in \Gamma_{\operatorname{div}}\} \\ &=\inf\{{\lVert\alpha+\beta+\gamma''\rVert} \vert \gamma'' \in \Gamma_{\operatorname{div}}\} = {\lVert\alpha+\beta\rVert}_{\Gamma},\end{aligned}$$ and property (iii) follows similarly from the equation $$\begin{aligned} {\lVert n\alpha\rVert}_{\Gamma} &=\inf\{{\lVert n\alpha + \gamma\rVert} \vert \gamma \in \Gamma_{\operatorname{div}}\} =\inf\{{\lVert n(\alpha+\gamma')\rVert} \vert \gamma' \in \Gamma_{\operatorname{div}}\}\\ &=\inf\{n{\lVert\alpha+\gamma'\rVert} \vert \gamma' \in \Gamma_{\operatorname{div}}\} =n{\lVert\alpha\rVert}_{\Gamma}.\end{aligned}$$ From now on we assume that $\Gamma$ is of rank $r<\infty$ and that the norm ${\lVert.\rVert}={\lVert.\rVert}_h$ is induced by the canonical height $\widehat{h}_{{\mathcal{G}}}$ on ${\mathcal{G}}$. Let $\gamma_1,\dots,\gamma_r$ be linearly independent elements in $\Gamma$. It remains to prove property (i), i.e. ${\lVert\alpha\rVert}_{h,\Gamma}=0 \Longleftrightarrow \alpha\in \Gamma_{\operatorname{sat}}$. Obviously, ${\lVert\alpha\rVert}_{h,\Gamma}=0$ for all $\alpha \in \Gamma_{\operatorname{sat}}$. By Northcott’s theorem, ${\lVert.\rVert}_{h}$ is discrete on $\nicefrac{{\mathcal{G}}(F)}{{\mathcal{G}}_{\operatorname{tors}}(F)}$ for all number fields $F$. Hence, if $\alpha \in {\mathcal{G}}(\overline{\mathbb{Q}})\setminus \Gamma_{\operatorname{sat}}$ is arbitrary we set $F=\mathbb{Q}(\gamma_1,\dots,\gamma_r,\alpha)$ and apply Corollary \[cor:mcG\]. This yields ${\lVert\alpha\rVert}_{h,\Gamma} \neq 0$ and concludes the proof. We use the notation from Conjecture \[conj\]. Since $\Gamma$ and $\operatorname{End}({\mathcal{G}})$ are of finite rank, the same is true for $\Gamma_{\operatorname{sat}}$. Hence, Lemma \[heightisnorm\] tells us that ${\lVert\cdot\rVert}_{h,\Gamma_{\operatorname{sat}}}$ is a norm on $\nicefrac{{\mathcal{G}}(L)}{\Gamma_{\operatorname{sat}}}$. This norm is discrete if and only if the statement of Conjecture \[conj\] (c) is true. Therefore, Conjecture \[conj\] (c) is true if and only if ${\lVert\cdot\rVert}_{h,\Gamma_{\operatorname{sat}}}$ is a discrete norm on $\nicefrac{{\mathcal{G}}(L)}{\Gamma_{\operatorname{sat}}}$. Note that there exists a discrete norm on an abelian group if and only if this group is free abelian. This result was proved independently by Lawrence [@La84] and Zorzitto [@Zo85] for countable groups, and by Steprāns [@St85] in the general case. As this result is the bridge between Conjecture \[conj\] and our main theorem, we state it as a proposition. \[LSZ\] An abelian group $G$ is free if and only if there is a discrete norm on $G$.
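As a quick illustration of Proposition \[LSZ\] (an example added here for orientation, not taken from [@La84; @Zo85; @St85]): the group $\mathbb{Q}$ is divisible and hence not free abelian, and indeed it carries no discrete norm, since any norm on $\mathbb{Q}$ satisfies $$\Big\lVert \frac{1}{n} \Big\rVert = \frac{1}{n}\,\lVert 1 \rVert \longrightarrow 0 \quad (n\to\infty),$$ so $0$ is an accumulation point of the set of norm values.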
We conclude $$\begin{aligned} \text{Conjecture \ref{conj} (c) is true } &\Longleftrightarrow ~ {\lVert\cdot\rVert}_{h,\Gamma_{\operatorname{sat}}} \text{ is a discrete norm on } \nicefrac{{\mathcal{G}}(L)}{\Gamma_{\operatorname{sat}}}\\ &\Longrightarrow ~ \nicefrac{{\mathcal{G}}(L)}{\Gamma_{\operatorname{sat}}} \text{ is free abelian}\end{aligned}$$ Hence, Theorem \[thm:freeab\] tells us that at least there exists *some* discrete norm on $\nicefrac{{\mathcal{G}}(L)}{\Gamma_{\operatorname{sat}}}$ if ${\mathcal{G}}$ is either the linear torus or an elliptic curve with complex multiplication. Pontryagin’s criterion ====================== The results of this section are valid in a more general setting than needed for the proof of Theorem \[thm:freeab\]. Hence, in this section ${\mathcal{G}}= A\times \mathbb{G}_m^N$ is defined over a number field $K$, where $A$ is an abelian variety and $N \in \mathbb{N}_0$. \[freemodgamma\] Let $F$ be any subfield of $K(\Gamma_{\operatorname{sat}})$. The group $\nicefrac{{\mathcal{G}}(K(\Gamma_{\operatorname{sat}}))}{\Gamma_{\operatorname{sat}}}$ is free abelian if for every field $E$, with $F\subseteq E \subseteq K(\Gamma_{\operatorname{sat}})$ and $[E:F]< \infty$, we have (i) $\nicefrac{{\mathcal{G}}(E)}{(\Gamma_{\operatorname{sat}}\cap {\mathcal{G}}(E))}$ is free abelian, and (ii) the torsion group of $\nicefrac{{\mathcal{G}}(K(\Gamma_{\operatorname{sat}}))}{{\mathcal{G}}(E) + \Gamma_{\operatorname{sat}}}$ has finite exponent. This is mainly an application of a classification result of Pontryagin. The proof follows very closely the proofs of [@Ma72], Lemma 1, and [@GHP], Proposition 2.3. By a theorem of Pontryagin, cf. [@EM], Theorem VI.2.3, an abelian group $G$ is free abelian if every finite subset of $G$ is contained in a free abelian subgroup $H\subseteq G$ such that $\nicefrac{G}{H}$ is torsion free. Therefore, let $S=\{[\alpha_1],\ldots,[\alpha_s]\}\subseteq \nicefrac{{\mathcal{G}}(K(\Gamma_{\operatorname{sat}}))}{\Gamma_{\operatorname{sat}}}$ and set $E:=K({\mathcal{G}}_{\operatorname{tors}},\alpha_1,\ldots,\alpha_s)$. Obviously $E$ is a finite extension of $K({\mathcal{G}}_{\operatorname{tors}})$ and we have $$S \subseteq \nicefrac{{\mathcal{G}}(E)+\Gamma_{\operatorname{sat}}}{\Gamma_{\operatorname{sat}}}\cong \nicefrac{{\mathcal{G}}(E)}{{\mathcal{G}}(E)\cap \Gamma_{\operatorname{sat}}}.$$ Let $m\in\mathbb{N}$ be the exponent of the torsion subgroup of $$\nicefrac{\left( \nicefrac{{\mathcal{G}}(K(\Gamma_{\operatorname{sat}}))}{\Gamma_{\operatorname{sat}}} \right)}{\left( \nicefrac{{\mathcal{G}}(E)+\Gamma_{\operatorname{sat}}}{\Gamma_{\operatorname{sat}}} \right)}\cong \nicefrac{{\mathcal{G}}(K(\Gamma_{\operatorname{sat}}))}{{\mathcal{G}}(E)+\Gamma_{\operatorname{sat}}}.$$ Note that this exponent is indeed an element of $\mathbb{N}$ by assumption (ii) of the lemma. Now define $$H:=\left\{[\alpha]\in \nicefrac{{\mathcal{G}}(K(\Gamma_{\operatorname{sat}}))}{\Gamma_{\operatorname{sat}}} \vert m\cdot [\alpha] \in \nicefrac{{\mathcal{G}}(E)+\Gamma_{\operatorname{sat}}}{\Gamma_{\operatorname{sat}}} \right\}.$$ By assumption (i) the group $\nicefrac{{\mathcal{G}}(E)+\Gamma_{\operatorname{sat}}}{\Gamma_{\operatorname{sat}}}\cong \nicefrac{{\mathcal{G}}(E)}{{\mathcal{G}}(E)\cap \Gamma_{\operatorname{sat}}}$ is free abelian, and hence $H$ is free abelian. Moreover, by construction the quotient of $\nicefrac{{\mathcal{G}}(K(\Gamma_{\operatorname{sat}}))}{\Gamma_{\operatorname{sat}}}$ by $H$ is torsion free.
Hence, $S \subseteq H$ and $H$ satisfies the hypothesis of Pontryagin's theorem. It follows that, under the assumptions (i) and (ii), $\nicefrac{{\mathcal{G}}(K(\Gamma_{\operatorname{sat}}))}{\Gamma_{\operatorname{sat}}}$ is free abelian. As stated in the introduction, we will use the result from Bays, Hart, and Pillay that $\nicefrac{{\mathcal{G}}(K({\mathcal{G}}_{\operatorname{tors}}))}{{\mathcal{G}}_{\operatorname{tors}}}$ is free abelian. The case ${\mathcal{G}}=\mathbb{G}_m$ is originally due to Iwasawa [@Iw53], and the case ${\mathcal{G}}=A$ is originally due to Larsen [@La05]. We will sketch a proof of this result. \[thm:BHP\] Let ${\mathcal{G}}$ and $K$ be as above and let $K'/K$ be finite. Then (i) $\nicefrac{{\mathcal{G}}(K')}{{\mathcal{G}}(K')\cap{\mathcal{G}}_{\operatorname{tors}}}$ is free abelian, and (ii) the exponent of the torsion group of $\nicefrac{{\mathcal{G}}(K'({\mathcal{G}}_{\operatorname{tors}}))}{{\mathcal{G}}(K')+{\mathcal{G}}_{\operatorname{tors}}}$ is finite. In particular, $\nicefrac{{\mathcal{G}}(K({\mathcal{G}}_{\operatorname{tors}}))}{{\mathcal{G}}_{\operatorname{tors}}}$ is a free abelian group. In this proof we use the language of continuous group cohomology. The field $K'$ is a number field. Hence the norm ${\lVert\cdot\rVert}_h$ induced by the canonical height of ${\mathcal{G}}$ is discrete on $\nicefrac{{\mathcal{G}}(K')}{{\mathcal{G}}(K')\cap{\mathcal{G}}_{\operatorname{tors}}}$. Statement (i) follows from Proposition \[LSZ\]. In order to prove (ii), let $\alpha \in {\mathcal{G}}(K'({\mathcal{G}}_{\operatorname{tors}}))$ be such that $n\cdot \alpha \in {\mathcal{G}}(K')+{\mathcal{G}}_{\operatorname{tors}}$ for some $n \in \mathbb{N}$. Then, the map $\tau \mapsto \tau(\alpha)-\alpha$ represents an element in $H^1(\operatorname{Gal}(K'({\mathcal{G}}_{\operatorname{tors}})/K'),{\mathcal{G}}[n])$. By Lemma A.3 from [@BHP], there is a constant $c$ only depending on ${\mathcal{G}}$ and $K'$ such that $\tau \mapsto c\cdot(\tau(\alpha)-\alpha)$ is equivalent to the zero map in $H^1(\operatorname{Gal}(K'({\mathcal{G}}_{\operatorname{tors}})/K'),{\mathcal{G}}[n])$. Hence, there is an element $P\in{\mathcal{G}}[n]$ such that $$c\cdot(\tau(\alpha)-\alpha)=\tau(P)-P \quad \forall ~ \tau \in \operatorname{Gal}(K'({\mathcal{G}}_{\operatorname{tors}})/K').$$ It follows that $$\tau(c\cdot \alpha - P) = c\cdot \alpha - P \quad \forall ~ \tau \in \operatorname{Gal}(K'({\mathcal{G}}_{\operatorname{tors}})/K'),$$ and hence $c\cdot \alpha -P \in {\mathcal{G}}(K')$, i.e. $c\cdot \alpha \in {\mathcal{G}}(K') + {\mathcal{G}}_{\operatorname{tors}}$. This means that the order of the residue class of $\alpha$ in $\nicefrac{{\mathcal{G}}(K'({\mathcal{G}}_{\operatorname{tors}}))}{{\mathcal{G}}(K')+{\mathcal{G}}_{\operatorname{tors}}}$ divides $c$, which proves statement (ii). If we apply Lemma \[freemodgamma\] with $\Gamma=\{0\}$ and $F=K$, it follows from (i) and (ii) that $\nicefrac{{\mathcal{G}}(K({\mathcal{G}}_{\operatorname{tors}}))}{{\mathcal{G}}_{\operatorname{tors}}}$ is free abelian. \[prop:firstclaim\] Let $K'/K$ be finite and let $\Gamma$ be a finite rank subgroup of ${\mathcal{G}}(\overline{\mathbb{Q}})$. For any field $K \subseteq E \subseteq K'({\mathcal{G}}_{\operatorname{tors}})$, the group $\nicefrac{{\mathcal{G}}(E)}{{\mathcal{G}}(E)\cap \Gamma_{\operatorname{sat}}}$ is free abelian. We have just seen that $\nicefrac{{\mathcal{G}}(K'({\mathcal{G}}_{\operatorname{tors}}))}{{\mathcal{G}}_{\operatorname{tors}}}$ is free abelian.
Therefore $\nicefrac{{\mathcal{G}}(E)}{{\mathcal{G}}(E)_{\operatorname{tors}}}$ is, as a subgroup, free abelian. Hence by Proposition \[LSZ\] there is a discrete norm ${\lVert.\rVert}$ on $\nicefrac{{\mathcal{G}}(E)}{{\mathcal{G}}(E)_{\operatorname{tors}}}$. Set $\tilde{\Gamma}=\Gamma_{\operatorname{sat}}\cap {\mathcal{G}}(E)$, which is a finite rank subgroup since $\operatorname{End}({\mathcal{G}})$ and $\Gamma$ are of finite rank. The discrete norm ${\lVert.\rVert}$ extends uniquely to a norm on $\nicefrac{{\mathcal{G}}(E)_{\operatorname{div}}}{{\mathcal{G}}_{\operatorname{tors}}}$. Hence we can apply Lemma \[Bogomolov\] to deduce the existence of a constant $c > 0$ such that $$\label{eq:firstpart} {\lVert\alpha\rVert}_{\tilde{\Gamma}} = \inf \{{\lVert\alpha+\gamma\rVert} \vert \gamma\in \tilde{\Gamma}_{\operatorname{div}} \} \geq c \text{ for all } \alpha \in {\mathcal{G}}(E)\setminus \tilde{\Gamma}_{\operatorname{div}}.$$ By Lemma \[heightisnorm\] we already know that ${\lVert.\rVert}_{\tilde{\Gamma}}$ is a seminorm on $\nicefrac{{\mathcal{G}}(E)}{\tilde{\Gamma}}$. Therefore (\[eq:firstpart\]) tells us that ${\lVert.\rVert}_{\tilde{\Gamma}}$ is actually a discrete norm. If we apply Proposition \[LSZ\] once more, we conclude that $\nicefrac{{\mathcal{G}}(E)}{\tilde{\Gamma}}$ is free abelian. Proof of Theorem \[thm:freeab\] for the linear torus ==================================================== In this section we work with ${\mathcal{G}}=\mathbb{G}_m$; i.e. we work in the multiplicative group of a field. Recall that we denote by $\mu$ the set of all roots of unity. \[prop:gmtorsionfree\] Let $\Gamma=\langle \gamma_1,\ldots,\gamma_r \rangle$ be a subgroup of $\overline{\mathbb{Q}}^*$, and let $K$ be a number field. We set $L=K(\Gamma_{\operatorname{sat}})$ and let $E\subseteq L$ be a finite extension of $K(\mu,\gamma_1,\ldots,\gamma_r)$. Then the group $\nicefrac{L^*}{E^* \Gamma_{\operatorname{sat}}}$ is torsion free. Let $[\alpha]\in \nicefrac{L^*}{E^* \Gamma_{\operatorname{sat}}}$ be a torsion element. Then there exists a natural number $n$ with $\alpha^n \in E$. Since $\alpha$ is in $L=K(\Gamma_{\operatorname{sat}})$, we have $E(\alpha)\subseteq E(\gamma_1^{\nicefrac{1}{m_1}},\dots,\gamma_r^{\nicefrac{1}{m_r}})$ for some $m_1,\dots,m_r \in \mathbb{N}$. We set $E_0 = E$ and define for every $i\in\{1,\dots,r\}$ the field $$E_i = E(\gamma_1^{\nicefrac{1}{m_1}},\dots,\gamma_i^{\nicefrac{1}{m_i}}).$$ Every extension $E_i / E_{i-1}$ in the chain $$E=E_0 \subseteq E_1 \subseteq E_2 \subseteq \cdots \subseteq E_r$$ is cyclic of some order $k_i \mid m_i$. Note that $\gamma_i \in E$ for all $i\in \{1,\dots,r\}$. By the classical theory of cyclic extensions (cf. [@La02], Chapter VI), every intermediate field of $E_{i}=E_{i-1}(\gamma_i^{\nicefrac{1}{m_i}})/E_{i-1}$ is given by $E_{i-1}(\gamma_i^{\nicefrac{d}{m_i}})$ for some $d\in\mathbb{N}$. Hence, $E_{r-1}(\alpha)=E_{r-1}(\gamma_r^{\nicefrac{d_r}{m_r}})$ for some $d_r\in\mathbb{N}$. Assume that the degree of $\alpha$ over $E_{r-1}$ is $n_r$ and that $\sigma$ is a generator of $\operatorname{Gal}(E_{r-1}(\alpha)/E_{r-1})$. Then, since $\alpha^n \in E \subseteq E_{r-1}$, we have $$\sigma(\gamma_r^{\nicefrac{d_r}{m_r}})=\zeta_{n_r}\gamma_r^{\nicefrac{d_r}{m_r}} \quad \text{ and } \quad \sigma(\alpha)=\zeta_{n_r}^{l_r} \alpha,$$ where $\zeta_{n_r}$ is a primitive $n_r$-th root of unity and $l_r \in \mathbb{N}$. We can conclude that $\sigma$, and hence $\operatorname{Gal}(E_{r-1}(\alpha)/E_{r-1})$, acts trivially on the element $\nicefrac{\alpha}{\gamma_r^{\nicefrac{d_r l_r}{m_r}}}$.
It follows that $$\frac{\alpha}{\gamma_r^{\nicefrac{d_r l_r}{m_r}}} \in E_{r-1} \quad \text{ and } \quad \left(\frac{\alpha}{\gamma_r^{\nicefrac{d_r l_r}{m_r}}}\right)^{n m_r} \in E.$$ Thus we can repeat this argument with $n$ replaced by $n m_r$, and $\alpha$ replaced by $\nicefrac{\alpha}{\gamma_r^{\nicefrac{d_r l_r}{m_r}}}$. Induction yields $$\frac{\alpha}{\prod_{i=1}^{r}\gamma_i^{\nicefrac{d_i l_i}{m_i}}} \in E_0^*$$ which is equivalent to $\alpha \in E^* \Gamma_{\operatorname{sat}}$. Hence the residue class of $\alpha$ in the group $\nicefrac{L^*}{E^* \Gamma_{\operatorname{sat}}}$ is trivial, meaning that the group is torsion free. Now let $L$ be a finite extension of $K(\Gamma_{\operatorname{sat}})$; say $L=K'(\Gamma_{\operatorname{sat}})$ for a finite extension $K'/K$. Set $F=K'(\gamma_{1},\ldots,\gamma_{r},\mu)$ and let $E/F$ be any finite extension such that $E\subseteq L$. By Proposition \[prop:firstclaim\] we know that $\nicefrac{E^*}{E^*\cap \Gamma_{\operatorname{div}}}$ is free abelian and we have just seen that $\nicefrac{L^*}{E^* \Gamma_{\operatorname{div}}}$ is torsion free. Hence, the assumptions from Lemma \[freemodgamma\] are met, which proves Theorem \[thm:freeab\] for ${\mathcal{G}}=\mathbb{G}_m$. Preliminaries on elliptic curves ================================ In this section $K$ denotes a number field and we fix an elliptic curve $A$ defined over $K$. We also fix a finite rank subgroup $\Gamma \subseteq A(\overline{\mathbb{Q}})$. There are $\mathbb{Z}$-linearly independent elements $\gamma_1,\ldots,\gamma_s$ such that $\Gamma_{\operatorname{div}} = \langle \gamma_1,\ldots,\gamma_s\rangle_{\operatorname{div}}$. If $\gamma_1,\ldots,\gamma_s$ are $\operatorname{End}(A)$-linearly dependent (which can only occur if $A$ has complex multiplication), then there are $\phi_1,\ldots,\phi_s \in \operatorname{End}(A)$ not all zero such that $\phi_1(\gamma_1)+\ldots+\phi_s(\gamma_s) = 0$. We may assume that $\phi_s\neq 0$; let $\hat{\phi_s}$ denote the dual isogeny of $\phi_s$. Then $$-(\hat{\phi_s}\circ\phi_1(\gamma_1)+\ldots+\hat{\phi_s}\circ\phi_{s-1}(\gamma_{s-1})) = \hat{\phi_s}\circ\phi_s(\gamma_s)=\deg(\phi_s)\gamma_s.$$ We find that $\gamma_s \in (\operatorname{End}(A)\cdot \gamma_1+\ldots+\operatorname{End}(A)\cdot \gamma_{s-1})_{\operatorname{div}}$, and hence $\Gamma_{\operatorname{sat}}=(\operatorname{End}(A)\cdot \gamma_1+\ldots+\operatorname{End}(A)\cdot \gamma_{s-1})_{\operatorname{div}}$. Therefore, after possibly shrinking the set of generators $\gamma_1,\ldots,\gamma_s$, we may assume $$\begin{aligned} \label{eq:Endlinin} \Gamma_{\operatorname{sat}} &=(\operatorname{End}(A)\cdot \Gamma)_{\operatorname{div}}=(\operatorname{End}(A)\cdot \gamma_1 + \ldots + \operatorname{End}(A)\cdot \gamma_r)_{\operatorname{div}} \\ &\text{ with } \gamma_1,\ldots,\gamma_r ~ \operatorname{End}(A)\text{-linearly independent}. \nonumber\end{aligned}$$ The group $\nicefrac{A(K(A_{\operatorname{tors}}))}{A_{\operatorname{tors}}}$ is free abelian (see Theorem \[thm:BHP\]). Hence, after replacing $\gamma_i$ by some division point of $\gamma_i$, we may assume that for all prime numbers $\ell$ we have $$\label{eq:divisionprop} \gamma_i \in A(K(A_{\operatorname{tors}})) \text{ and } \frac{1}{\ell}\gamma_i \notin A(K(A_{\operatorname{tors}})) \quad \forall ~ i\in\{1,\ldots,r\}.$$ Moreover we assume that $\gamma_1,\ldots,\gamma_r\in A(K)_{\operatorname{div}}$ and that $\operatorname{End}(A)$ is defined over $K$, which is always possible after replacing $K$ by a finite extension.
In particular, we assume $$\label{eq:alloverK} \Gamma':=(\operatorname{End}(A)\cdot \gamma_1 + \ldots + \operatorname{End}(A)\cdot \gamma_r) \subseteq A(K)_{\operatorname{div}}.$$ For $n\in\mathbb{N}$ denote the group of $n$-torsion points of $A$ by $A[n]$. We start by collecting some basic facts:

- For any $n\in\mathbb{N}$ we have $A[n]\cong \left(\nicefrac{\mathbb{Z}}{n\mathbb{Z}}\right)^{2}$.
- Let $F$ be a subfield of $\overline{\mathbb{Q}}$, with $A_{\operatorname{tors}}\subseteq A(F)$, and let $\gamma \in A(F)$. Then, $F(\frac{1}{n}\gamma)/F$ is a Galois extension and independent of the choice of the $n$-th division point of $\gamma$. Moreover, the map $$\operatorname{Gal}(F(\frac{1}{n}\gamma)/F)\longrightarrow A[n]\quad ; \quad \sigma \mapsto \sigma(\frac{1}{n}\gamma)-\frac{1}{n}\gamma$$ is an injective group homomorphism.
- In the situation above, let $n=k\cdot m$ with $k$ and $m$ coprime. Then $F(\frac{1}{n}\gamma)=F(\frac{1}{k}\gamma,\frac{1}{m}\gamma)$.

We will use these facts freely for the remainder of this paper. If $\ell$ is a prime, $n\in\mathbb{N}$ and $G$ is any subgroup of $A(\overline{\mathbb{Q}})$, then we set $$\frac{1}{\ell^n} G=\{\gamma \in A(\overline{\mathbb{Q}}) \vert \ell^n \gamma \in G\} \quad \text{ and } \quad \frac{1}{\ell^{\infty}} G= \bigcup_{n\in\mathbb{N}} \frac{1}{\ell^n} G.$$ Moreover, we define $$G_{\operatorname{tors}}=\operatorname{Gal}(K(A_{\operatorname{tors}})/K).$$ Throughout this section we use the notation from above. In particular, we assume that the assumptions (\[eq:Endlinin\]), (\[eq:divisionprop\]) and (\[eq:alloverK\]) are met. \[lem:intersection\] For any prime number $\ell$ and any $n\in\mathbb{N}$ we have $$\Gamma' + A_{\operatorname{tors}} = \left( \frac{1}{\ell^{n}}\Gamma' \cap A(K(A_{\operatorname{tors}}))\right) + A_{\operatorname{tors}}.$$ We set $\widetilde{\Gamma_n}= \left( \frac{1}{\ell^{n}}\Gamma' \cap A(K(A_{\operatorname{tors}}))\right)$. By (\[eq:Endlinin\]), $\nicefrac{\widetilde{\Gamma_n}+A_{\operatorname{tors}}}{A_{\operatorname{tors}}}$ is a free $\operatorname{End}(A)$-module of rank $r$. Moreover, we have $$\nicefrac{\Gamma'+A_{\operatorname{tors}}}{A_{\operatorname{tors}}}\subseteq \nicefrac{\widetilde{\Gamma_n}+A_{\operatorname{tors}}}{A_{\operatorname{tors}}} \subseteq \nicefrac{\frac{1}{\ell^n}\Gamma'+A_{\operatorname{tors}}}{A_{\operatorname{tors}}}.$$ Now, (\[eq:divisionprop\]) implies $\nicefrac{\Gamma'+A_{\operatorname{tors}}}{A_{\operatorname{tors}}} = \nicefrac{\widetilde{\Gamma_n}+A_{\operatorname{tors}}}{A_{\operatorname{tors}}}$. \[lem:fellinjective\] For any prime number $\ell$ and any $n\in\mathbb{N}$ the map $$f_{\ell^n} : \nicefrac{\Gamma'}{\ell^n \Gamma'}\longrightarrow \operatorname{Hom}(\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^n}\Gamma')/K(A_{\operatorname{tors}})),A[\ell^n]) \quad ; \quad [\gamma] \mapsto \left[ \varphi_\gamma : \sigma \mapsto \sigma(\frac{1}{\ell^n}\gamma) - \frac{1}{\ell^n}\gamma\right]$$ is an injective group homomorphism. Let $\gamma \in \Gamma'$ be arbitrary. From the facts collected above, it is clear that $\varphi_\gamma$ is indeed a well-defined group homomorphism. Moreover, for any $\gamma' \in \Gamma'$, we have $$\varphi_{\gamma+\ell^n\gamma'}(\sigma) = \sigma(\frac{1}{\ell^n}\gamma + \gamma')-(\frac{1}{\ell^n}\gamma + \gamma') =\sigma(\frac{1}{\ell^n}\gamma) + \sigma(\gamma') -\gamma' -\frac{1}{\ell^n}\gamma =\sigma(\frac{1}{\ell^n}\gamma)-\frac{1}{\ell^n}\gamma =\varphi_{\gamma}(\sigma)$$ for all $\sigma \in \operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^n}\Gamma')/K(A_{\operatorname{tors}}))$.
Hence, the map $f_{\ell^n}$ is well-defined and visibly a homomorphism. Moreover, $$\begin{aligned} [\gamma] \in \ker(f_{\ell^n}) ~ & \Longrightarrow ~ \sigma(\frac{1}{\ell^n}\gamma)=\frac{1}{\ell^n}\gamma \quad \forall ~\sigma \in \operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^n}\Gamma')/K(A_{\operatorname{tors}})) \\ & \Longrightarrow ~ \frac{1}{\ell^n}\gamma \in \frac{1}{\ell^{n}}\Gamma'\cap A(K(A_{\operatorname{tors}})) \overset{\ref{lem:intersection}}{\subseteq} \Gamma' + A_{\operatorname{tors}}.\end{aligned}$$ By (\[eq:Endlinin\]), the group $\Gamma'$ is torsion-free. Hence, if $\frac{1}{\ell^n}\gamma=\gamma'+T$ for some $\gamma'\in\Gamma'$ and some $T\in A_{\operatorname{tors}}$, then $T\in \frac{1}{\ell^n}\Gamma' \cap A_{\operatorname{tors}}=A[\ell^n]$. This implies $\gamma = \ell^n \gamma' + \ell^n T=\ell^n\gamma' \in\ell^n \Gamma'$; i.e. $[\gamma]=[0]$. Hence $f_{\ell^n}$ is injective, which proves the lemma. \[lem:imageisGstable\] For any $\gamma \in \Gamma'$, the map $\varphi_{\gamma}$ from Lemma \[lem:fellinjective\] is a $G_{\operatorname{tors}}$-module homomorphism. Let $\tau\in G_{\operatorname{tors}}$ and $\sigma \in \operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^n}\Gamma')/K(A_{\operatorname{tors}}))$ be arbitrary, and choose an extension of $\tau$ to an element of $\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^n}\Gamma')/K)$, which we will again denote by $\tau$. Using the canonical isomorphism $$\nicefrac{\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^n}\Gamma')/K)}{\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^n}\Gamma')/K(A_{\operatorname{tors}}))}\cong G_{\operatorname{tors}},$$ there is a well-defined element $\tau\circ\sigma\circ\tau^{-1}\in\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^n}\Gamma')/K(A_{\operatorname{tors}}))$. Indeed, the group $\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^n}\Gamma')/K(A_{\operatorname{tors}}))$ is abelian and hence for any $\tau'\in \operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^n}\Gamma')/K(A_{\operatorname{tors}}))$ we have $$(\tau\circ\tau')\circ\sigma\circ(\tau\circ\tau')^{-1}=\tau\circ\sigma\circ\tau^{-1}.$$ This conjugation gives $\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^n}\Gamma')/K(A_{\operatorname{tors}}))$ the structure of a $G_{\operatorname{tors}}$-module. Since $\frac{1}{\ell^n}\gamma\in\Gamma_{\operatorname{sat}}$, assumptions (\[eq:Endlinin\]) and (\[eq:alloverK\]) imply that some multiple of $\frac{1}{\ell^n}\gamma$ is defined over $K$. In particular, $\tau^{-1}(\frac{1}{\ell^n}\gamma)-\frac{1}{\ell^n}\gamma \in A_{\operatorname{tors}}$, and hence $$\sigma\circ\tau^{-1}(\frac{1}{\ell^n}\gamma) - \sigma(\frac{1}{\ell^n}\gamma)=\sigma(\tau^{-1}(\frac{1}{\ell^n}\gamma)-\frac{1}{\ell^n}\gamma)=\tau^{-1}(\frac{1}{\ell^n}\gamma)-\frac{1}{\ell^n}\gamma.$$ Applying $\tau$ to both sides yields $$\tau(\varphi_{\gamma}(\sigma))=\tau(\sigma(\frac{1}{\ell^n}\gamma)-\frac{1}{\ell^n}\gamma)=\tau\circ\sigma\circ\tau^{-1}(\frac{1}{\ell^n}\gamma)-\frac{1}{\ell^n}\gamma = \varphi_{\gamma}(\tau\sigma\tau^{-1}),$$ which proves the lemma.
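For orientation (this remark is our addition and is not needed for the argument), the maps $\varphi_{\gamma}$ are the elliptic analogue of the classical Kummer maps for $\mathbb{G}_m$: if $F\subseteq\overline{\mathbb{Q}}$ contains the $\ell^n$-th roots of unity and $\gamma\in F^*$, then $$\operatorname{Gal}(F(\gamma^{\nicefrac{1}{\ell^n}})/F)\longrightarrow \mu_{\ell^n} \quad ; \quad \sigma \mapsto \frac{\sigma(\gamma^{\nicefrac{1}{\ell^n}})}{\gamma^{\nicefrac{1}{\ell^n}}}$$ is an injective group homomorphism, with $\mu_{\ell^n}$ playing the role of $A[\ell^n]$.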
\[cor:Galois\] The following properties hold true: (i) $\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^n}\Gamma')/K(A_{\operatorname{tors}}))$ is isomorphic to $$\{(\varphi_{\gamma_1}(\tau),\ldots,\varphi_{\gamma_r}(\tau)) \vert \tau \in \operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^n}\Gamma')/K(A_{\operatorname{tors}}))\} \subseteq A[\ell^n]^r$$ as $G_{\operatorname{tors}}$-modules, and (ii) the maps $\varphi_{\gamma_1},\ldots,\varphi_{\gamma_r}$ are $\nicefrac{\operatorname{End}(A)}{\ell^n\operatorname{End}(A)}$-linearly independent. The map $\tau \mapsto (\varphi_{\gamma_1}(\tau),\ldots,\varphi_{\gamma_r}(\tau))$ is obviously surjective, and it is a $G_{\operatorname{tors}}$-module homomorphism by Lemma \[lem:imageisGstable\]. An element $\tau$ in the kernel must fix all of the points $\frac{1}{\ell^n}\gamma_1,\ldots,\frac{1}{\ell^n}\gamma_r$. Since $\operatorname{End}(A)$ is defined over $K$, this element fixes all elements in $\frac{1}{\ell^n} \Gamma'$. Hence, such $\tau$ is the identity on $K(A_{\operatorname{tors}},\frac{1}{\ell^n}\Gamma')$. Therefore, the kernel is trivial, which concludes the proof of part (i). Let $a_1\cdot \varphi_{\gamma_1} + \ldots + a_r\cdot \varphi_{\gamma_r}=0$, with $a_1,\ldots,a_r\in\operatorname{End}(A)$. Then $\gamma=a_1\cdot\gamma_1 + \ldots + a_r\cdot \gamma_r$ satisfies $$\begin{gathered} \varphi_{\gamma}(\sigma) =\sigma(\frac{1}{\ell^n}\gamma)-\frac{1}{\ell^n}\gamma = a_1\cdot \varphi_{\gamma_1}(\sigma) + \ldots + a_r\cdot \varphi_{\gamma_r}(\sigma) =0 \nonumber \\ \text{ for all } \sigma \in \operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^n}\Gamma')/K(A_{\operatorname{tors}})).\end{gathered}$$ By Lemma \[lem:fellinjective\], it follows $\gamma\in\ell^n\Gamma'$. Since $\gamma_1,\ldots,\gamma_r$ are $\operatorname{End}(A)$-linearly independent, it is $a_i\in \ell^n \cdot \operatorname{End}(A)$ for all $i\in\{1,\ldots,r\}$, proving part (ii). If $M$ and $N$ are two $G_{\operatorname{tors}}$-modules, then as usual $\operatorname{Hom}_{G_{\operatorname{tors}}}(M,N)$ is the set of all $G_{\operatorname{tors}}$-module homomorphisms from $M$ to $N$. Hence, the combination of Lemmas \[lem:fellinjective\] and \[lem:imageisGstable\] gives $$\label{eq:lowerboundhom} \ell^{n\cdot r} = \left\vert \nicefrac{\Gamma'}{\ell^n \Gamma'}\right\vert \leq \vert\operatorname{Hom}_{G_{\operatorname{tors}}}(\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^n}\Gamma')/K(A_{\operatorname{tors}})),A[\ell^n])\vert$$ \[lem:restriction\] Assume that $\alpha\in A(K(A_{\operatorname{tors}},\Gamma_{\operatorname{sat}}))$ satisfies $\ell^n \alpha \in A(K(A_{\operatorname{tors}}))$. Then $\alpha \in A(K(A_{\operatorname{tors}},\frac{1}{\ell^{\infty}}\Gamma'))$. 
For any $k\in\mathbb{N}$ it is (since $\operatorname{End}(A)$ is defined over $K$) $$\label{eq:generator} K(A_{\operatorname{tors}},\frac{1}{\ell^k}\Gamma')=K(A_{\operatorname{tors}},\frac{1}{\ell^k}\gamma_1,\ldots,\frac{1}{\ell^k}\gamma_k).$$ Let $n_1,\ldots,n_r \in \mathbb{N}$ be such that $$\alpha\in A(K(A_{\operatorname{tors}},\frac{1}{n_1}\gamma_1 ,\ldots ,\frac{1}{n_r}\gamma_r)).$$ For all $i \in \{1,\ldots,r\}$ we write $n_i=\ell^{e_i} m_i$ with $\ell\nmid m_i$, and set $$K_i:=K(A_{\operatorname{tors}},\frac{1}{n_1}\gamma_1,\ldots,\frac{1}{n_{i-1}}\gamma_{i-1},\frac{1}{\ell^{e_i}}\gamma_i,\frac{1}{n_{i+1}}\gamma_{i+1},\ldots,\frac{1}{n_r}\gamma_r).$$ Then - $G_i:=\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{n_1}\gamma_1,\ldots ,\frac{1}{n_r}\gamma_r)/ K_i) \hookrightarrow A[m_i]$, and - $G_{i,\alpha}:= \operatorname{Gal}(K_i(\alpha)/ K_i) \hookrightarrow A[\ell^{n}]$, and - $G_{i,\alpha}$ is isomorphic to a quotient of $G_i$. Since $\gcd(\vert A[\ell^{e_i}] \vert , \vert A[m_i]\vert)=1$, this is only possible if $G_{i,\alpha}$ is trivial and hence $\alpha \in A(K_i)$. Therefore, we can assume $m_i=1$. This is true for all $i\in\{1,\ldots,r\}$ and hence, we have $$\alpha\in A(K(A_{\operatorname{tors}},\frac{1}{\ell^{e_1}}\gamma_1,\ldots,\frac{1}{\ell^{e_r}}\gamma_r))\subseteq A(K(A_{\operatorname{tors}},\frac{1}{\ell^{\infty}}\Gamma')).$$ Proof of Theorem \[thm:freeab\] for elliptic curves with complex multiplication =============================================================================== From now on let $A$ be an elliptic curve defined over a number field $K$ with complex multiplication. Let $\mathcal{O}$ be the order in an imaginary quadratic field $\kappa$ such that $\operatorname{End}(A)\cong \mathcal{O}$. By enlarging $K$ we may assume $\kappa\subseteq K$. For any prime number $\ell$ and any $m\in\mathbb{N}$ it is $A[\ell^m]\cong \nicefrac{\mathcal{O}}{\ell^m \mathcal{O}}$ and $\operatorname{Gal}(K(A[\ell^m])/K)$ acts on $A[\ell^m]\cong \nicefrac{\mathcal{O}}{\ell^m \mathcal{O}}$ as multiplication by units in $\nicefrac{\mathcal{O}}{\ell^m \mathcal{O}}$. Therefore, we can regard $\operatorname{Gal}(K(A[\ell^m])/K)$ as a subgroup of $\left(\nicefrac{\mathcal{O}}{\ell^m \mathcal{O}}\right)^*$. Up to this restriction, the fields generated by torsion points are almost as large as possible. Namely, $$\label{eq:serreCM} (\left(\nicefrac{\mathcal{O}}{\ell^m \mathcal{O}}\right)^*:\operatorname{Gal}(K(A[\ell^m])/K))\leq M,$$ for some absolute constant $M$ only depending on $A$ and $K$ (cf. [@Se], Section 4.5). Let $f$ be the conductor of $\mathcal{O}$ and define for all primes $\ell$ the positive integer $$m_{\ell} \text{ minimal such that } \ell^{m_{\ell}} > 4 \ell^{\operatorname{ord}_\ell (f)} M.$$ It is obvious that $m_\ell=1$ for all but finitely many primes $\ell$. \[lem:indexCM\] We use the notation from above. Then it is $$(\nicefrac{\mathcal{O}}{\ell^m \mathcal{O}}:\nicefrac{\mathbb{Z}}{\ell^m \mathbb{Z}}[\operatorname{Gal}(K(A[\ell^m])/K)]) \leq 4 \ell^{\operatorname{ord}_\ell (f)} M < \ell^{m_l}.$$ Here we identify $\operatorname{Gal}(K(A[\ell^m])/K)$ with a subgroup of $\left(\nicefrac{\mathcal{O}}{\ell^m \mathcal{O}}\right)^*$. We denote by $\mathcal{O}_{\kappa}$ the ring of integers of $\kappa$. 
Then the map $$\nicefrac{\mathcal{O}}{\ell^m \mathcal{O}} \longrightarrow \nicefrac{\mathcal{O}_{\kappa}}{\ell^m \mathcal{O}_{\kappa}} \quad ; \quad a+\ell^m \mathcal{O} \mapsto a +\ell^m\mathcal{O}_{\kappa}$$ is a $\ell^{\operatorname{ord}_\ell (f)}$-to-$1$ ring-homomorphism. In particular we have $$\label{eq:CM1} \vert \left(\nicefrac{\mathcal{O}}{\ell^m \mathcal{O}}\right)^* \vert \geq \frac{\vert \left(\nicefrac{\mathcal{O}_{\kappa}}{\ell^m \mathcal{O}_{\kappa}}\right)^* \vert}{\ell^{\operatorname{ord}_\ell (f)}}.$$ This last quantity can be explicitly calculated and it is $$\label{eq:CM2} \vert \left(\nicefrac{\mathcal{O}_{\kappa}}{\ell^m \mathcal{O}_{\kappa}}\right)^* \vert = \ell^{2m} (1-\ell^{-1}) (1-\left(\frac{\operatorname{disc}_{\kappa}}{\ell}\right) \ell^{-1}),$$ where $\operatorname{disc}_{\kappa}$ is the discriminant of $\kappa/\mathbb{Q}$ and $\left(\frac{\operatorname{disc}_{\kappa}}{\ell}\right)$ is the Legendre symbol (this follows from solving exercise 7.29 from [@Cox]). Using , and the definition of $M$ we get $$\vert \nicefrac{\mathbb{Z}}{\ell^m \mathbb{Z}}[\operatorname{Gal}(K(A[\ell^m])/K)] \vert \geq \vert \operatorname{Gal}(K(A[\ell^m])/K) \vert \geq \frac{\vert\left(\nicefrac{\mathcal{O}}{\ell^m \mathcal{O}}\right)^*\vert}{M} \geq \frac{\ell^{2m} (1-\ell^{-1})^2}{M \ell^{\operatorname{ord}_\ell (f)}}.$$ It immediately follows $$(\nicefrac{\mathcal{O}}{\ell^m \mathcal{O}}:\nicefrac{\mathbb{Z}}{\ell^m \mathbb{Z}}[\operatorname{Gal}(K(A[\ell^m])/K)]) \leq \frac{\ell^{2m} \ell^{\operatorname{ord}_\ell (f)} M}{\ell^{2m} (1-\ell^{-1})^2} \leq 4 \ell^{\operatorname{ord}_\ell (f)} M.$$ In order to prove Theorem \[thm:freeab\] for elliptic curves with complex multiplication, let $\Gamma \subseteq A(\overline{\mathbb{Q}})$ be a subgroup of finite rank. We have to prove that $\nicefrac{A(L)}{\Gamma_{\operatorname{sat}}}$ is free abelian for any finite extension $L$ of $K(\Gamma_{\operatorname{sat}})$. Since the property of being free abelian is inherited by subgroups, we may enlarge the field $L$ as we please. In particular, we may assume $\gamma_1,\ldots,\gamma_r\in A(L)$. After replacing our base field $K$ by a finite extension, we can also assume $L=K(\Gamma_{\operatorname{sat}})$. Thanks to Lemma \[freemodgamma\] and Proposition \[prop:firstclaim\] it is enough to prove that the torsion group of $\nicefrac{A(L)}{A(K')+\Gamma_{\operatorname{sat}}}$ has finite exponent for all finite extensions $K'/K$. This is surely the case if the statement is true for $\nicefrac{A(K'L)}{A(K')+\Gamma_{\operatorname{sat}}}$. Hence we have to prove that the torsion group of $$\nicefrac{A(K(\Gamma_{\operatorname{sat}}))}{A(K)+\Gamma_{\operatorname{sat}}}$$ has finite exponent for all sufficiently large number fields $K$ over which $A$ is defined. Hence, we fix a number field $K$, with $A$ is defined over $K$, and such that there are $\gamma_1,\ldots,\gamma_r$ satisfying properties , and (in particular it is $\kappa\subseteq K$). Let $c$ be the exponent of the torsion group of $\nicefrac{A(K(A_{\operatorname{tors}}))}{A(K)+A_{\operatorname{tors}}}$ (cf. Theorem \[thm:BHP\]). Let $[\alpha] \in \nicefrac{A(K(\Gamma_{\operatorname{sat}}))}{A(K)+\Gamma_{\operatorname{sat}}}$ be of order $\ell^{m_\ell+ \operatorname{ord}_{\ell}(c)}$, where $\ell$ is a prime number. By possible changing the representative of $[\alpha]$, we can assume that $\ell^{m_\ell+ \operatorname{ord}_{\ell}(c)} \alpha \in A(K)$. 
By definition of $c$, it is $\ell^{m_{\ell}-1}\alpha \notin A(K(A_{\operatorname{tors}}))$, since otherwise $\ell^{m_{\ell}-1}\alpha$ would be an element of order $\ell^{\operatorname{ord}_{\ell}(c)+1}\nmid c$ in $\nicefrac{A(K(A_{\operatorname{tors}}))}{A(K)+A_{\operatorname{tors}}}$. Hence, $$\ell^m \alpha \in A(K(A_{\operatorname{tors}})) \quad \text{ and } \quad \ell^{m-1}\alpha \notin A(K(A_{\operatorname{tors}})) \text{ for some } m\geq m_{\ell}.$$ If $\gamma_1,\ldots,\gamma_r,\alpha$ would be $\mathcal{O}$-linearly dependent, then – as seen in Section 3 – $\alpha \in \Gamma_{\operatorname{sat}}$ which is not the case. Hence $\gamma_1,\ldots,\gamma_r,\ell^m \alpha$ are $\mathcal{O}$-linearly independent and satisfy , and with $\Gamma$ replaced by $\Gamma_{\alpha}=\langle\Gamma,\alpha\rangle$ and $\Gamma'$ replaced by $\Gamma_{\alpha}'=\mathcal{O}\cdot \langle\gamma_1,\ldots,\gamma_r,\ell^m \alpha\rangle$. Therefore, Corollary \[cor:Galois\] tells us that $\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^m}\Gamma_{\alpha}')/K(A_{\operatorname{tors}}))$ is a $\nicefrac{\mathbb{Z}}{\ell^m\mathbb{Z}}[\operatorname{Gal}(K(A[\ell^m])/K)]$-submodule of $A[\ell^{m}]^{r+1} \cong \left(\nicefrac{\mathcal{O}}{\ell^{m} \mathcal{O}}\right)^{r+1}$. Let $W\subseteq \left(\nicefrac{\mathcal{O}}{\ell^{m} \mathcal{O}}\right)^{r+1}$ be the $\mathcal{O}$-module generated by $\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^{m}}\Gamma_{\alpha}')/K(A_{\operatorname{tors}}))$. Hence, any element in $W$ is the sum of elements of the form $a \sigma$ with $\sigma \in \operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^{m}}\Gamma_{\alpha}')/K(A_{\operatorname{tors}}))$ and $a \in \mathcal{O}$. By Lemma \[lem:indexCM\] we find that $$\label{eq:Wexponent} \text{the exponent of } \nicefrac{W}{\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^{m}}\Gamma_{\alpha}')/K(A_{\operatorname{tors}}))} \text{ is at most } 4M\ell^{ord_{\ell}(f)}.$$ Assume that $W\neq \left(\nicefrac{\mathcal{O}}{\ell^{m} \mathcal{O}}\right)^{r+1}$, then there are $a_1,\ldots,a_{r+1} \in \mathcal{O}$, not all congruent to zero modulo $\ell^{m}\mathcal{O}$, such that $$a_1 w_1 + \ldots + a_{r+1} w_{r+1} \equiv 0 \mod{\ell^{m}\mathcal{O}} \text{ for all } (w_1,\ldots,w_{r+1})\in W$$ (see Lemma 3 in V§5 of [@La]). In particular, for all $\sigma \in \operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^{m}}\Gamma_{\alpha}')/K(A_{\operatorname{tors}}))$ it is $$a_1 \varphi_{\gamma_1}(\sigma) + \ldots + a_r \varphi_{\gamma_r}(\sigma) + a_{r+1} \varphi_{\ell^{m}\alpha}(\sigma) \equiv 0 \mod{\ell^{m}\mathcal{O}},$$ where $\varphi_{\gamma}(\sigma)=\sigma(\frac{1}{\ell^{m}}\gamma)-\frac{1}{\ell^{m}}\gamma$. This is a contradiction, since the maps $\varphi_{\gamma_1},\ldots,\varphi_{\gamma_r},\varphi_{\ell^{m}\alpha}$ are $\nicefrac{\mathcal{O}}{\ell^{m}\mathcal{O}}$-linearly independent by Corollary \[cor:Galois\]. We conclude, that $W=\left(\nicefrac{\mathcal{O}}{\ell^{m} \mathcal{O}}\right)^{r+1}$. 
If it were not possible to embed $A[\ell]^{r+1}\cong\left(\nicefrac{\mathcal{O}}{\ell\mathcal{O}}\right)^{r+1}$ into $\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^{m}}\Gamma_{\alpha}')/K(A_{\operatorname{tors}}))$, then the exponent of $$\nicefrac{A[\ell^{m}]^{r+1}}{\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^{m}}\Gamma_{\alpha}')/K(A_{\operatorname{tors}}))} \cong \nicefrac{W}{\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^{m}}\Gamma_{\alpha}')/K(A_{\operatorname{tors}}))}$$ would be equal to $\ell^{m}$. Then implies that $\ell^m \leq 4M\ell^{ord_{\ell}(f)} \overset{\eqref{eq:serreCM}}{<} \ell^{m_\ell}$ in contradiction to the fact $m\geq m_{\ell}$. Therefore, $A[\ell]^{r+1}$ is isomorphic to a subgroup of $\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^{m_\ell}}\Gamma_{\alpha}')/K(A_{\operatorname{tors}}))$. By the classification of finite abelian groups, there is a surjective group homomorphism $$\label{eq:surjectionm} \operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^{m}}\Gamma_{\alpha}')/K(A_{\operatorname{tors}})) \twoheadrightarrow A[\ell]^{r+1}.$$ On the other hand, $\alpha$ is in $A(K(\Gamma_{\operatorname{sat}}))=A(K(\mathcal{O}\cdot \langle \gamma_1,\ldots,\gamma_r\rangle)_{\operatorname{div}})$ and hence there is an $N\in\mathbb{N}$ such that $\alpha \in A(K(A_{\operatorname{tors}},\frac{1}{\ell^N} (\mathcal{O}\cdot \langle \gamma_1,\ldots,\gamma_r\rangle))$ by Lemma \[lem:restriction\]. Therefore, there is a surjective group homomorphism $$\begin{aligned} \operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^N} (\mathcal{O}\cdot \langle \gamma_1,\ldots,\gamma_r\rangle))/K(A_{\operatorname{tors}})) &\twoheadrightarrow \operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^m} (\mathcal{O}\cdot \langle \gamma_1,\ldots,\gamma_r,\ell^m\alpha\rangle))/K(A_{\operatorname{tors}})) \\ &\overset{\eqref{eq:surjectionm}}{\twoheadrightarrow} A[\ell]^{r+1}.\end{aligned}$$ This is a contradiction to the fact that $\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^N} (\mathcal{O}\cdot \langle \gamma_1,\ldots,\gamma_r\rangle))/K(A_{\operatorname{tors}}))$ is isomorphic to a subgroup of $A[\ell^N]^r$. We conclude that there is no point of order $\ell^{m_{\ell}+\operatorname{ord}_{\ell}(c)}$ in $\nicefrac{A(K(\Gamma_{\operatorname{sat}}))}{A(K)+\Gamma_{\operatorname{sat}}}$. It follows that the exponent of the torsion subgroup of this group is a divisor of $$c\cdot \prod_{\ell \text{ prime}} \ell^{m_{\ell}-1}$$ and hence finite, since $m_{\ell}=1$ for all but finitely many primes $\ell$. Fields generated by finite rank subgroups of elliptic curves without complex multiplication =========================================================================================== From now on let $A/K$ be an elliptic curve without complex multiplication and let $\ell$ be a prime number. We will use Serre’s famous open image theorem [@Se], Section 4.4. This implies that there exists an absolute constant $M$ only depending on $A$ and $K$, such that $$\label{eq:serre} (GL_2(\nicefrac{\mathbb{Z}}{\ell^n\mathbb{Z}}):\operatorname{Gal}(K(A[\ell^n])/K))\leq M.$$ With this constant we define $$\label{eq:ml} m_\ell\in \mathbb{N} \text{ minimal such that } M! < \ell^{m_\ell}.$$ Obviously, for all but finitely many prime numbers it is $m_\ell =1$. \[lem:fullorbit\] Let $m\geq m_\ell$ and $P\in A[\ell^{m}]$ be of exact order $\ell^{m}$. 
Then $\nicefrac{A[\ell^m]}{\mathbb{Z}[G_{\operatorname{tors}}]\cdot P}$ is cyclic of order dividing $\ell^{m_\ell-1}$. In particular, $A[\ell]$ is a simple $\mathbb{Z}[G_{\operatorname{tors}}]$-module for all but finitely many prime numbers $\ell$. Let $P$ be of order $\ell^m$ and let $P' \in A[\ell^m]$ be such that $\{P,P'\}$ is a $\nicefrac{\mathbb{Z}}{\ell^m \mathbb{Z}}$-basis of $A[\ell^m]$. We represent the elements of $GL_2(\nicefrac{\mathbb{Z}}{\ell^m \mathbb{Z}})$ in this basis and regard $\operatorname{Gal}(K(A[\ell^m])/K)$ as a subset of $GL_2(\nicefrac{\mathbb{Z}}{\ell^m \mathbb{Z}})$. Let $N\leq M!$ be the index of the normal core of $\operatorname{Gal}(K(A[\ell^m])/K)$ in $GL_2(\nicefrac{\mathbb{Z}}{\ell^m \mathbb{Z}})$. Then $$\sigma=\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}^N = \begin{pmatrix} 1 & 0 \\ N & 1 \end{pmatrix} \in \operatorname{Gal}(K(A[\ell^m])/K).$$ Now $\sigma(P)-P=N\cdot P' \in \mathbb{Z}[G_{\operatorname{tors}}]\cdot P$. Let $\ell^n$ be the largest $\ell$-power dividing $N$, then $\nicefrac{N}{\ell^n}\in\left(\nicefrac{\mathbb{Z}}{\ell^m\mathbb{Z}}\right)^*$ and hence $\ell^n \cdot P' \in \mathbb{Z}[G_{\operatorname{tors}}]\cdot P$. Therefore, $\nicefrac{A[\ell^m]}{\mathbb{Z}[G_{\operatorname{tors}}]\cdot P}$ is a cyclic group of order dividing $\ell^n \leq \ell^{m_\ell -1}$. This proves the first statement of the lemma. The second statement follows, since for all but finitely many primes $\ell$ it is $m_{\ell}=1$. \[cor:homsmall\] We use the notation from the previous lemma. Then $\vert \operatorname{Hom}_{G_{\operatorname{tors}}}(A[\ell^m],A[\ell^m])\vert \leq \ell^m \cdot \ell^{3(m_{\ell} -1)}$. In particular, $\operatorname{Hom}_{G_{\operatorname{tors}}}(A[\ell],A[\ell]) \cong \nicefrac{\mathbb{Z}}{\ell \mathbb{Z}}$ for all but finitely many prime numbers $\ell$. Let $P\in A[\ell^m]$ be of exact order $\ell^m$ and let $\varphi \in \operatorname{Hom}_{G_{\operatorname{tors}}}(\mathbb{Z}[G_{\operatorname{tors}}]\cdot P,A[\ell^m])$. We take any $P'\in A[\ell^m]$ such that $A[\ell^m] = \langle P , P' \rangle$. As in the proof of Lemma \[lem:fullorbit\] there is an element $\sigma \in G_{\operatorname{tors}}$ such that $\sigma(P)=P$ and $\sigma(P')=N\cdot P+ P'$ for some $0< N < \ell^{m_\ell}$. Let $\varphi(P)=a\cdot P + b \cdot P'$ for some $a,b \in \mathbb{Z}$. Since $\varphi$ is a $G_{\operatorname{tors}}$-module homomorphism, it is $\varphi(\sigma(P))=\sigma(\varphi(P))$. This means that the following equality must hold $$a\cdot P + b \cdot P' = (a+bN)\cdot P + b \cdot P'.$$ Hence, $\ell^m \mid bN$ implying $\ell^{m-m_{\ell}+1} \mid b$. Therefore, there are at most $\ell^m \cdot \ell^{m_{\ell} -1}$ possible choices for $\varphi(P)$, and hence $\vert \operatorname{Hom}_{G_{\operatorname{tors}}}(\mathbb{Z}[G_{\operatorname{tors}}]\cdot P,A[\ell^m]) \vert \leq \ell^m \cdot \ell^{m_{\ell} -1}$. By Lemma \[lem:fullorbit\], there are at most $\ell^{2(m_{\ell} -1)}$ possibilities to extend an element from $\operatorname{Hom}_{G_{\operatorname{tors}}}(\mathbb{Z}[G_{\operatorname{tors}}]\cdot P,A[\ell^m])$ to an element in $\operatorname{Hom}_{G_{\operatorname{tors}}}(A[\ell^m],A[\ell^m])$. This proves the first statement of the Corollary. Again the the second statement follows by noting that $m_{\ell}=1$ for almost all primes $\ell$. Let $\Gamma \subseteq A(\overline{\mathbb{Q}})$ be a subgroup of rank $r$, with $r$ linearly independent elements $\gamma_1,\ldots,\gamma_r$. 
In sight of Theorem \[thm:freeab\] the goal is to prove that $\nicefrac{A(L)}{\Gamma_{\operatorname{div}}}$ is free abelian for any finite extension $L$ of $K(\Gamma_{\operatorname{div}})$. Note, that $\Gamma_{\operatorname{sat}} =\Gamma_{\operatorname{div}}$ in the present situation. Since the property of being free abelian is inherited by subgroups, we may enlarge the field $L$ as we please. In particular, we may assume $\gamma_1,\ldots,\gamma_r\in A(L)$. After replacing our base field $K$ by a finite extension, we can also assume $L=K(\Gamma_{\operatorname{div}})$. Thanks to Lemma \[freemodgamma\] and Proposition \[prop:firstclaim\] it would be enough to prove that the torsion group of $\nicefrac{A(L)}{A(K')+\Gamma_{\operatorname{div}}}$ has finite exponent for all finite extensions $K'/K$. This is surely the case if the statement is true for $\nicefrac{A(K'L)}{A(K')+\Gamma_{\operatorname{div}}}$. Hence we would have to prove that the torsion group of $$\label{eq:group} \nicefrac{A(K(\Gamma_{\operatorname{div}}))}{A(K)+\Gamma_{\operatorname{div}}}$$ has finite exponent for all sufficiently large number fields $K$ over which $A$ is defined. Hence, we fix an arbitrary number field $K$ such that $A$ is defined over $K$, and such that assumptions , and are satisfied. \[prop:aaone\] For all but finitely many prime numbers $\ell$ the $\ell$-torsion part of the group from is trivial. For all but finitely many prime numbers it is $A[\ell]$ a simple $G_{\operatorname{tors}}$-module with $$\operatorname{Hom}_{G_{\operatorname{tors}}}(A[\ell],A[\ell]) \cong \nicefrac{\mathbb{Z}}{\ell\mathbb{Z}}$$ (cf. Lemma \[lem:fullorbit\] and Corollary \[cor:homsmall\]). Let $\ell$ be such a prime number with the additional assumption that $\ell$ does not divide the exponent of the torsion group of $\nicefrac{A(K(A_{\operatorname{tors}}))}{A(K)+A_{\operatorname{tors}}}$. We denote this exponent (which is an integer by Theorem \[thm:BHP\]) by $c$. For the sake of contradiction we assume that there is some $[\alpha]\in \nicefrac{A(K(\Gamma_{\operatorname{div}}))}{A(K)+\Gamma_{\operatorname{div}}}$ of order $\ell$. In particular, $\alpha \notin \Gamma_{\operatorname{div}}$. After replacing $\alpha$ by some element in $\alpha+ \Gamma_{\operatorname{div}}$, we may assume $\ell \cdot \alpha \in A(K)$. Since we assume $\ell \nmid c$, it is $\alpha \notin A(K(A_{\operatorname{tors}}))$. Hence $\gamma_1,\ldots,\gamma_r,\ell\cdot\alpha$ satisfy the properties , and , with $\Gamma$ replaced by $\langle\Gamma,\ell\cdot\alpha\rangle$. Hence, by Corollary \[cor:Galois\] it is $\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell}\langle \gamma_1,\ldots,\gamma_r,\ell\cdot\alpha \rangle)/K(A_{\operatorname{tors}}))$ isomorphic to a $G_{\operatorname{tors}}$-submodule of $A[\ell]^{r+1}$. Since $A[\ell]$ is simple by assumption, it follows $$\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell}\langle \gamma_1,\ldots,\gamma_r,\ell\cdot\alpha \rangle)/K(A_{\operatorname{tors}}))\cong A[\ell]^{r'}$$ with $r'\leq r+1$. 
Now Corollary \[cor:homsmall\] implies $$\vert \operatorname{Hom}_{G_{\operatorname{tors}}}(\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell}\langle \gamma_1,\ldots,\gamma_r,\ell\cdot\alpha\rangle)/K(A_{\operatorname{tors}})),A[\ell]) \vert = \ell^{r'}.$$ Since we also have $\ell^{r+1} \leq \vert \operatorname{Hom}_{G_{\operatorname{tors}}}(\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell}\langle \gamma_1,\ldots,\gamma_r,\ell\cdot\alpha\rangle)/K(A_{\operatorname{tors}})),A[\ell]) \vert$ by , it is $r'=r+1$. On the other hand, $\alpha$ is in $A(K(\Gamma_{\operatorname{div}}))$ and hence there is an $N\in\mathbb{N}$ such that $\alpha \in A(K(A_{\operatorname{tors}},\frac{1}{\ell^N} \langle\gamma_1,\ldots,\gamma_r\rangle))$ by Lemma \[lem:restriction\]. In particular, $\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell}\langle \gamma_1,\ldots,\gamma_r,\ell\cdot\alpha \rangle)/K(A_{\operatorname{tors}}))\cong A[\ell]^{r+1}$ is isomorphic to a quotient of $\operatorname{Gal}(K(A_{\operatorname{tors}},\frac{1}{\ell^N}\langle \gamma_1,\ldots,\gamma_r\rangle)/K(A_{\operatorname{tors}}))\subseteq A[\ell^N]^{r}$. This is a contradiction, and therefore there is no element of order $\ell$ in $\nicefrac{A(K(\Gamma_{\operatorname{div}}))}{A(K)+\Gamma_{\operatorname{div}}}$. In order to give a full proof of Theorem \[thm:freeab\] for elliptic curves without complex multiplication, one needs the additional result that for all prime numbers $\ell$ the $\ell$-torsion part of the group from has finite exponent. This result is left open in this paper. [*Acknowledgements:*]{} I would like to thank Gaël Rémond for providing the proof of Lemma \[Bogomolov\], which initiated this project, and for invaluable comments on an early version of this manuscript. Moreover, special thanks go to Arno Fehm for many interesting discussions on this topic, for pointing out the relevance of Bashmakov's theorem, and for his patience reading first drafts of this manuscript.
--- address: | Mathematics Department\ CSULB\ Long Beach, CA 90840-1001 author: - Scott Crass title: | New light on solving the sextic by iteration:\ An algorithm using reliable dynamics ---
--- abstract: 'We show that if $\mathfrak{A}$ is a commutative complex non-unital Banach Algebra with norm $\|\cdot\|$, then $\|\cdot\|$ is regular on $\mathfrak{A}$ if and only if $\|\cdot\|_{op}$ is a norm on $\mathfrak{A}\oplus \mathbb{C}$ and $\mathfrak{A}\oplus\mathbb{C}$ is a commutative complex Banach Algebra with respect to $\|\cdot\|_{op}$.' address: 'Department of Mathematics, University at Buffalo, Buffalo, NY 14260, USA' author: - Adam Orenstein title: 'Regular norm and the operator seminorm on a non-unital complex commutative Banach Algebra' --- Background {#regBack} ========== Let $\mathfrak{A}$ be a complex non-unital Banach Algebra with norm $\|\cdot\|$ and let $\mathfrak{A}_1^+$ be the unitization of $\mathfrak{A}$ with norm $\|\cdot\|_1$ defined by $\|(a,\lambda)\|_1 = \|a\|+|\lambda|$ for all $a\in\mathfrak{A}$ and $\lambda\in\mathbb{C}$. More about the unitization of a non-unital Banach Algebra can be found in [@zhuAl]. Let $\|\cdot\|_{op}:\mathfrak{A}\oplus\mathbb{C}\rightarrow [0,\infty)$ be defined by $$\label{opSemi}\|(a,\lambda)\|_{op}=\sup\{\|ax+\lambda x\|, \|xa+\lambda x\|:x\in \mathfrak{A}, \|x\|\leq1\}.$$ Straightforward calculations show that $\|\cdot\|_{op}$ is a seminorm on $\mathfrak{A}\oplus\mathbb{C}$. As in [@GaurKov], we call $\|\cdot\|_{op}$ the operator seminorm on $\mathfrak{A}\oplus\mathbb{C}$. If $\|\cdot\|_{op}$ is a norm on $\mathfrak{A}\oplus\mathbb{C}$, then we let $\mathfrak{A}_{op}^+$ denote the normed algebra $\mathfrak{A}\oplus\mathbb{C}$ with addition, scalar multiplication and multiplication defined as in $\mathfrak{A}_1^+$ and with norm $\|\cdot\|_{op}$. The norm $\|\cdot\|$ on $\mathfrak{A}$ is by definition regular if for all $a\in \mathfrak{A}$, $$\label{regDef}\|a\|=\sup\{\|ax\|,\|xa\|:x\in \mathfrak{A}, \|x\|\leq1\}$$ [@Tak]. For any $a\in \mathfrak{A}$, $\sup\{\|ax\|,\|xa\|:x\in \mathfrak{A}, \|x\|\leq1\}\leq\|a\|$. This means $\|\cdot\| \text{ is regular on }\mathfrak{A} \text{ if and only if }\|a\|\leq \sup\{\|ax\|,\|xa\|:x\in \mathfrak{A}, \|x\|\leq1\}$ for all $a\in \mathfrak{A}$. Moreover for any $a\in \mathfrak{A}$, $\|(a,0)\|_1=\|a\|$ and $\|(a,0)\|_{op}=\sup\{\|ax\|,\|xa\|:x\in \mathfrak{A}, \|x\|\leq1\}$. So $$\label{regCond}\|\cdot\| \text{ is regular on }\mathfrak{A} \text{ if and only if }\|(a,0)\|_1\leq\|(a,0)\|_{op}$$ for all $a\in \mathfrak{A}$. Clearly if $\mathfrak{A}$ has a unit 1 with $\|1\|=1$, then $\|\cdot\|$ is regular on $\mathfrak{A}$. But if $\mathfrak{A}$ is non-unital, then $\|\cdot\|$ may not be regular. For example, consider $\ell^1$ with componentwise multiplication and let $\|\cdot\|_{\ell^1}$ be the $\ell^1$ norm. Let $x=\left\{\frac{1}{n^2}\right\}_{n=1}^\infty$. Then for any $y=\{y_j\}_{j=1}^\infty\in\ell^1$ with $\|y\|_{\ell^1}\leq 1$, $\|yx\|_{\ell^1}=\|xy\|_{\ell^1}=\sum_{j=1}^\infty \frac{|y_j|}{j^2}\leq \sum_{j=1}^\infty |y_j|\leq1$. It follows that $\sup\{\|xy\|_{\ell^1},\|yx\|_{\ell^1}:\|y\|_{\ell^1}\leq 1\}\leq1$. But $\|x\|_{\ell^1}=\frac{\pi^2}{6}$. The notion of a regular norm and its relation with $\|\cdot\|_{op}$ has been studied in [@Arh; @GaurKov; @Tak]. In [@GaurKov] the following question was asked: “is the regularity of $\|\cdot\|$ equivalent to $\mathfrak{A}_{op}^+$ being a Banach Algebra?” We will prove here that the answer is yes if $\mathfrak{A}$ is commutative. More specifically, we will prove the following theorem. \[mainThmReg\] Let $\mathfrak{A}$ be a complex commutative Banach Algebra with no unit and with norm $\|\cdot\|$.
Let $\mathfrak{A}_1^+$ and $\mathfrak{A}_{op}^+$ be as above. Then $\|\cdot\|$ is regular on $\mathfrak{A}$ if and only if $\mathfrak{A}_{op}^+$ is a complex Banach Algebra with respect to $\|\cdot\|_{op}$. Numerical Range =============== In order to prove , we will need the notion of the numerical range of an element in a complex unital Banach Algebra. Let $\mathfrak{B}$ be any unital complex Banach Algebra with unit $1_\mathfrak{B}$, norm $\|\cdot\|_\mathfrak{B}$ and dual space $\mathfrak{B}^{'}$. Let $S(\mathfrak{B})=\{y\in\mathfrak{B}:\|y\|_\mathfrak{B}=1\}$. For any $y\in S(\mathfrak{B})$ let $D(\mathfrak{B},y)=\{f\in \mathfrak{B}^{'}:\|f\|=f(y)=1\}$. The elements of $D(\mathfrak{B},1_\mathfrak{B})$ are called the normalized states (on $\mathfrak{B}$) [@Bon]. For any $b\in \mathfrak{B}$ and $y\in S(\mathfrak{B})$, the sets $V(\mathfrak{B},b)$ and $V(\mathfrak{B},b,y)$ are defined by $$\label{numRangeDef} \begin{split}&V(\mathfrak{B},b,y)=\{f(by):f\in D(\mathfrak{B},y)\}\\& \text{ and } V(\mathfrak{B}, b)=\bigcup_{y\in S(\mathfrak{B})}V(\mathfrak{B},b,y).\end{split}$$ $V(\mathfrak{B},b)$ is called the numerical range of $b$. Also let $\sigma_\mathfrak{B}(b)$ be the spectrum of $b\in\mathfrak{B}$ and let $\text{co}(\sigma_\mathfrak{B}(b))$ be the convex hull of $\sigma_\mathfrak{B}(b)$. We will need the following results in order to prove . The first two are proved in [@Bon]. \[numRangProp\] Let $\mathfrak{B}$ be a unital complex Banach Algebra with unit $1_\mathfrak{B}$. Then for any $b\in\mathfrak{B}$, $V(\mathfrak{B},b)=V(\mathfrak{B},b,1_\mathfrak{B})$. \[specNum\] Let $\mathfrak{B}$ be a unital complex Banach Algebra with unit $1_\mathfrak{B}$ and norm $\|\cdot\|_\mathfrak{B}$. Let $\mathcal{N}_\mathfrak{B}$ be the set of all algebra norms $p$ on $\mathfrak{B}$ equivalent to $\|\cdot\|_\mathfrak{B}$ such that $p(1_\mathfrak{B})=1$ and $p(bc)\leq p(c)p(b)$ for all $c,b\in\mathfrak{B}$. For each $p\in\mathcal{N}_\mathfrak{B}$, let $V_p(\mathfrak{B},b)$ be the numerical range of $b\in \mathfrak{B}$ with $p$ replacing $\|\cdot\|_\mathfrak{B}$. Then for all $b\in \mathfrak{B}$ $$\text{co}(\sigma_\mathfrak{B}(b))=\bigcap_{p\in\mathcal{N}_\mathfrak{B}}V_p(\mathfrak{B},b)$$ The next theorem is proved in [@Gol]. \[normStates\] Let $\mathfrak{B}$ be a unital complex commutative Banach Algebra and let $f\in \mathfrak{B}^{'}$. Then $f\in D(\mathfrak{B},1_\mathfrak{B}) \text{ if and only if } f(b)\in \text{co}(\sigma_\mathfrak{B}(b))$ for all $b\in \mathfrak{B}$. The next lemma is easy to prove. \[numDisk\] If $\mathfrak{B}$ is a non-unital complex Banach Algebra with norm $\|\cdot\|_\mathfrak{B}$, then for all $b\in \mathfrak{B}$ and $\lambda\in\mathbb{C}$, $$V(\mathfrak{B}_1^+, (b,\lambda))=\|b\|_\mathfrak{B}\overline{\mathbb{D}}\times \{\lambda\}$$ where $\|b\|_\mathfrak{B}\overline{\mathbb{D}}\times \{\lambda\}=\{(\|b\|_\mathfrak{B}w,\lambda):w\in\overline{\mathbb{D}}\}$. Proof of ========= $(\Leftarrow)$ Assume $\|\cdot\|_{op}$ is a norm and $\mathfrak{A}_{op}^+$ is a complex Banach algebra with respect to $\|\cdot\|_{op}$. Note that $\mathfrak{A}_{op}^+$ is commutative as $\mathfrak{A}$ is. Let $\Psi:\mathfrak{A}_1^+\rightarrow \mathfrak{A}_{op}^+$ be defined by $\Psi((a,\lambda))=(a,\lambda)$. Clearly $\Psi$ is bijective, linear and since $\|(a,\lambda)\|_{op}\leq \|(a,\lambda)\|_1$ for all $(a,\lambda)\in \mathfrak{A}\oplus\mathbb{C}$, $\|\Psi\|\leq 1$. Then by the Open Mapping Theorem [@Rud], $\|\Psi^{-1}\|<\infty$. 
Hence there exists $\delta>0$ so that for all $(a,\lambda)\in \mathfrak{A}\oplus\mathbb{C}$, $$\label{normIneq} \|(a,\lambda)\|_{op}\leq \|(a,\lambda)\|_1\leq \delta\|(a,\lambda)\|_{op}.$$ Thus $\|\cdot\|_{op}$ and $\|\cdot\|_1$ are equivalent. Let $\mathcal{N}$ be the set of all algebra norms $p$ on $\mathfrak{A}\oplus\mathbb{C}$ equivalent to $\|\cdot\|_1$ such that $p((0,1))=1$, $p((a,\lambda)(b,\gamma))\leq p((a,\lambda))p((b,\gamma))$ for all $a,b\in\mathfrak{A}$ and $\lambda,\gamma\in\mathbb{C}$. By and the fact that $\|(0,1)\|_{op}=1$, $\|\cdot\|_{op}\in \mathcal{N}$. Then by , $$\label{conHullInter}\text{co}(\sigma_{{\mathfrak{A}_1}^+}(a,\lambda))\subseteq V(\mathfrak{A}_{op}^+,(a,\lambda))$$ for all $(a,\lambda)\in \mathfrak{A}_{op}^+$. Moreover by and , $$\label{conHullSpec}V(\mathfrak{A}_1^+, (a, \lambda))= \text{co}(\sigma_{{\mathfrak{A}_1}^+}((a,\lambda)))$$ for all $(a,\lambda)\in \mathfrak{A}_{op}^+$. Thus by $$V(\mathfrak{A}_1^+, (a,\lambda))\subseteq V(\mathfrak{A}_{op}^+, (a,\lambda))$$ for all $(a,\lambda)\in \mathfrak{A}_{op}^+$. Interchanging $\mathfrak{A}_1^+$ with $\mathfrak{A}_{op}^+$ and $\|\cdot\|_1$ with $\|\cdot\|_{op}$ in the above argument yields $$V(\mathfrak{A}_{op}^+, (a, \lambda))\subseteq V(\mathfrak{A}_1^+, (a,\lambda))$$ for all $(a,\lambda)\in \mathfrak{A}_{op}^+$. Hence $$\label{numRangeEqu} V(\mathfrak{A}_1^+, (a,\lambda))=V(\mathfrak{A}_{op}^+, (a,\lambda))$$ for all $(a,\lambda)\in \mathfrak{A}_{op}^+$. By $$\label{coordZero}V(\mathfrak{A}_1^+, (a,0))=\|a\|\overline{\mathbb{D}}$$ for every $a\in \mathfrak{A}$. Also for any $F((a,0))\in V(\mathfrak{A}_{op}^+, (a,0))$, $|F(a,0)|\leq \|(a,0)\|_{op}$. It follows from this and that $\|a\|\leq \|(a,0)\|_{op}$ for all $a\in \mathfrak{A}$. That is $\|(a,0)\|_1\leq \|(a,0)\|_{op}$ for all $a\in \mathfrak{A}$. Therefore by , $\|\cdot\|$ is regular on $\mathfrak{A}$. $(\Rightarrow)$ Assume $\|\cdot\|$ is regular on $\mathfrak{A}$. As stated in , $\|\cdot\|_{op}$ is a seminorm. Let $(a,\lambda)\in \mathfrak{A}\oplus\mathbb{C}$ and assume $\|(a,\lambda)\|_{op}=0$. Then for all $x\in \mathfrak{A}$ with $\|x\|\leq1$, $\|ax+\lambda x\|=0$. Hence $$\label{normOP}ax+\lambda x=0 \text{ and } ax=-\lambda x$$ for all $x\in \mathfrak{A}$. Suppose $\lambda\neq0$. Then implies $\left(\frac{a}{-\lambda}\right)x=x$ for all $x\in \mathfrak{A}$. So $\mathfrak{A}$ has a unit. This is a contradiction since by assumption $\mathfrak{A}$ is not unital. Thus $\lambda=0$ and $ax=0$ for all $x\in \mathfrak{A}$. Then since $\|\cdot\|$ is regular, $\|a\|=0$ and $a=0$. Therefore $(a,\lambda)=(0,0)$ and $\|\cdot\|_{op}$ is a norm on $\mathfrak{A}\oplus \mathbb{C}$. Now we will prove $\mathfrak{A}_{op}^+$ is complete. This part is based on some calculations used in the proof of Theorem 2.3 from [@GaurHus]. Let $\mathfrak{L}(\mathfrak{A})$ be the space of all bounded linear operators on $\mathfrak{A}$ with operator norm $\|\cdot\|$. For each $(a,\lambda)\in \mathfrak{A}\oplus\mathbb{C}$, let $L_{(a,\lambda)}:\mathfrak{A}\rightarrow \mathfrak{A}$ be defined by $$L_{(a,\lambda)}(x)=ax+\lambda x.$$ It is easy to see that $L_{(a,\lambda)}$ is linear for all $(a,\lambda)\in \mathfrak{A}\oplus\mathbb{C}$ and $$\label{mapNorm}\|L_{(a,\lambda)}\|=\|(a,\lambda)\|_{op}.$$ Thus $L_{(a,\lambda)}\in\mathfrak{L}(\mathfrak{A})$ for all $(a,\lambda)\in \mathfrak{A}_{op}^+$. Let $\mathcal{B}=\{L_{(a,\lambda)}:(a,\lambda)\in \mathfrak{A}_{op}^+\}$ and $\Gamma:\mathcal{B}\rightarrow \mathbb{C}$ by $\Gamma(L_{(a,\lambda)})=\lambda$. 
Now $\mathcal{B}$ is a subalgebra of $\mathfrak{L}(\mathfrak{A})$ and $\Gamma$ is a well-defined linear map with $\ker(\Gamma)=\{L_{(a,0)}:a\in \mathfrak{A}\}$. Since $\|\cdot\|$ is regular on $\mathfrak{A}$, tells us $\|L_{(a,0)}\|=\|a\|$. This implies the mapping $a\mapsto L_{(a,0)}$ is an surjective isometry on $\mathfrak{A}$ and hence $\ker(\Gamma)$ is a closed subalgebra of $\mathcal{B}$. It follows that $\Gamma$ is a bounded linear transformation. Now let $\{(a_n, \lambda_n)\}_n$ be a Cauchy sequence in $\mathfrak{A}_{op}^+$. Let $\epsilon>0$. Then there exist $N_1>0$ so that $$\label{cauchy1}n,m\geq N_1 \Rightarrow\|(a_n,\lambda_n)-(a_m,\lambda_m)\|_{op}<\epsilon.$$ Also by , $\{L_{(a_n, \lambda_n)}\}_n$ is Cauchy in $\mathcal{B}$. Thus by the above work $\{\Gamma(L_{(a_n, \lambda_n)})\}_n=\{\lambda_n\}_n$ is a Cauchy sequence in $\mathbb{C}$. So there exist $N_2>0$ such that $$\label{cauchy2}m,n\geq N_2 \Rightarrow|\lambda_n-\lambda_m|<\epsilon.$$ Now for any $x\in \mathfrak{A}$ with $\|x\|\leq1$, $$\label{cauchy3}\begin{split}\|a_nx-a_mx\|&\leq \|a_nx+\lambda_nx-(a_mx+\lambda_m x)\|+\|\lambda_n x-\lambda_m x\|\\&\leq \|a_nx+\lambda_nx-(a_mx+\lambda_m x)\|+|\lambda_n -\lambda_m|.\end{split}$$ It follows that $$\label{cauchy4}n,m\geq \max\{N_1, N_2\} \Rightarrow \|a_nx-a_mx\|<2\epsilon$$ for all $x\in \mathfrak{A}$ with $\|x\|\leq1$. Thus since $\|\cdot\|$ is regular, $\{a_n\}_n$ is Cauchy in $\mathfrak{A}$. So $\{a_n\}_n$ converges in $\mathfrak{A}$ to some $a\in \mathfrak{A}$. Moreover $\{\lambda_n\}_n$ converges to $\lambda\in\mathbb{C}$ as $\{\lambda_n\}_n$ is Cauchy. Hence $\lim_{n\rightarrow\infty}\|(a_n,\lambda_n)-(a,\lambda)\|_1=0$. Therefore since $\|\cdot\|_{op}\leq \|\cdot\|_1$, $\{(a_n, \lambda_n)\}_n$ converges to $(a,\lambda)$ with respect to $\|\cdot\|_{op}$ and $\mathfrak{A}_{op}^+$ is a Banach Algebra. Corollary ========= The following proposition is proved in [@GaurKov]. Among all the unital norms on $\mathfrak{A}\oplus\mathbb{C}$ which extend the norm $\|\cdot\|$ on $\mathfrak{A}$, $\|\cdot\|_1$ is maximal and if $\|\cdot\|$ is regular, then $\|\cdot\|_{op}$ is minimal. Combining this proposition with and yields the following Corollary. If $\mathfrak{A}$ is a commutative complex non-unital Banach Algebra, then all unital norms on $\mathfrak{A}\oplus\mathbb{C}$ are equivalent $\Leftrightarrow \mathfrak{A}_{op}^+$ is a Banach Algebra $\Leftrightarrow \|\cdot\|$ is regular on $\mathfrak{A}$. [10]{} Arhippainen, J.. and M$\ddot{\text{u}}$ller, V. Norms on Unitizations of Banach Algebras Revisited. *Acta Mathematica Hungarica*. 114 no.13(2007): 201-204. Bonsall, F.F. and Duncan, J. *Numerical Ranges of Operators on Normed Spaces and of Elements of Normed Algebras*. Volume 1. New York: Cambridge University Press 1971. Gaur, A.K. and Husain, T. Spatial Numerical Ranges Of Elements Of Banach Algebras. *International Journal of Mathematics and Mathematical Sciences*. 12 no.4 (1989): 633-640. Gaur, A. K. and A.V. Kov$\acute{\text{a}}\breve{\text{r}}\acute{\text{i}}$k. Norms on Unitizations of Banach Algebras. *Proceedings of the American mathematical Society*. 117 no.1 (1993): 111-113. Gaur, Abhay K. and Kov$\acute{\text{a}}\breve{\text{r}}\acute{\text{i}}$k, Zdislav V. Norms, States And Numerical Ranges On Direct Sums. *Analysis*. 11 no.2-3 (1991): 155-164. Golfarshchi, Fatemeh. and Khalilzadeh, Ali Asghar. Numerical Radius Preserving Linear Maps On Banach Algebras. *International Journal of Pure and Applied Mathematics*. 88 no.2 (2013): 233-238. Rudin, Walter. 
*Real and Complex Analysis*. 3rd edition. McGraw-Hill Science/Engineering/Math, 1987. Takahasi, Sin-Ei. Takeshi, Miura. and Hayata, Takahiro. An Equality Condition of Arhippainen-M$\ddot{\text{u}}$ller’s Estimate and its Related Problem. *Taiwanese Journal of Mathematics*. 15 no.1 (2011): 165-169. Zhu, Kehe. *An Introduction to Operator Algebras*. Boca Raton: CRC Press Inc 2000.
--- abstract: 'In this letter, we present a closed-form approximation of the outage probability for multi-hop amplify-and-forward (AF) relaying systems with fixed gain in Rayleigh fading channels. The approximation is derived from the outage event for each hop. The simulation results show the tightness of the proposed approximation in the low and high signal-to-noise ratio (SNR) regions.' author: - 'Jun Kyoung Lee, *Student Member*, Janghoon Yang, *Member*, and Dong Ku Kim, *Member* [^1][^2][^3]' nocite: '[@*]' title: 'An Approximation of the Outage Probability for Multi-hop AF Fixed Gain Relay' --- Wireless relay channel, multi-hop relay, outage probability Introduction ============ Outage probability is an important measure of system performance in fading channels, as it reflects the reliability of the link quality. Although multihop AF fixed gain relays are attractive for their simple implementation, the noise accumulation over multiple hops complicates their analysis, and there is little existing theoretical research on them. In \[1\] and \[2\], the outage probabilities of multihop decode-and-forward (DF) relay systems were calculated and optimum power allocation schemes were proposed. In \[1\], it is argued that the outage probability of multihop DF relay systems can serve as a lower bound for multihop AF relay systems. In \[3\], Hasna et al. found the outage probability for multihop AF variable gain relay systems. In \[4\], Karagiannidis provided bounds on the outage probability for multihop AF fixed gain relay systems by using harmonic and geometric means, which are not tight in the high SNR region. In this letter, we derive a closed-form approximation of the outage probability for multihop AF fixed gain relay systems by a novel approach that considers the outage event space, and which is tight in all SNR regions. The remainder of this letter is organized as follows. The system and channel model for the multihop AF fixed gain relay system is presented, and the received SNR at the destination is derived in Section II. In Section III, the outage probability of the relaying system is derived for an arbitrary number of hops. In Section IV, we discuss the theoretical results and the simulation results for the system. Finally, we conclude this letter in Section V. System model ============ Considering the general $N$-hop relay network, there are $(N-1)$ relays between the source and the destination. It is assumed that the relaying network operates with time division multiplexing (TDM) so that the transmissions at the different nodes occur in different time slots on the same carrier frequency. Assuming that a signal at the source is transmitted with an average power ${E_1 }$ and the fixed gain relays are serially placed from the source to the destination, the instantaneous end-to-end SNR at the destination can be written as $$\begin{array}{l} \gamma _N = \frac{{A_{N - 1}^2 A_{N - 2}^2 \, \cdots A_1^2 \left| {h_N } \right|^2 \left| {h_{N - 1} } \right|^2 \,\, \cdots \,\,\left| {h_1 } \right|^2 E_1 }} {{\sum\limits_{j = 1}^{N - 1} {\left( {\prod\limits_{i = j}^{N - 1} {A_i^2 \left| {h_{i + 1} } \right|^2 } } \right)} \sigma _j^2 + \sigma _N^2 }} \end{array}$$ where $h_k $ is the fading amplitude of the channel at the $k$-th hop with unit variance, i.e., $E\{ {\left| {h_k } \right|^2 } \} = 1$ for $k = 1,\,\, \cdots \,\,,\,\,N$, where $E\{ \cdot \}$ is the expectation operator, and $\sigma _k^2$ is the variance of the additive white Gaussian noise (AWGN) with mean zero at the $k$-th hop.
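As a rough numerical illustration of (1) (this sketch is ours and not part of the original analysis), the end-to-end SNR can be evaluated directly for given channel power gains, fixed relay gains and noise variances; the squared gains $A_l^2$ are treated as inputs here (their definition in terms of the powers follows in (2)), and all numbers in the example call are hypothetical.

```python
# Minimal sketch (illustrative values only): numerical evaluation of the
# end-to-end SNR in Eq. (1) for an N-hop AF fixed-gain relay chain.
def end_to_end_snr(h2, A2, E1, sigma2):
    """h2[k-1]     : |h_k|^2, channel power gain of hop k (k = 1..N)
       A2[l-1]     : A_l^2, squared fixed gain of relay l (l = 1..N-1)
       E1          : average transmit power at the source
       sigma2[k-1] : noise variance sigma_k^2 at hop k."""
    N = len(h2)
    num = E1
    for a in A2:
        num *= a
    for h in h2:
        num *= h
    den = sigma2[-1]                      # sigma_N^2
    for j in range(1, N):                 # j = 1..N-1
        term = 1.0
        for i in range(j, N):             # i = j..N-1
            term *= A2[i - 1] * h2[i]     # A_i^2 |h_{i+1}|^2
        den += term * sigma2[j - 1]       # ... times sigma_j^2
    return num / den

# Hypothetical 3-hop example with unit gains:
print(end_to_end_snr(h2=[1.0, 0.8, 1.2], A2=[1.0, 1.0], E1=10.0,
                     sigma2=[0.1, 0.1, 0.1]))
```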
In (1), the amplification factor of the $l$-th relay with fixed gain is defined \[5\] as $$A_l = \sqrt {\frac{{E_{l + 1} }} {{E_l + \sigma _l^2 }}} \,\,\,\,, \,\,l = 1,\,\, \cdots \,\,,\,\,N-1$$ Outage probability for multi-hop relaying systems with fixed gain ================================================================= The outage probability is defined as the probability that the end-to-end SNR, $\gamma _N$, falls below a threshold level of SNR. For the N-hop AF fixed gain relay system, the outage probability can be written as $$P_{out} (\gamma _N < \gamma _{Th} )$$ where $\gamma _{Th}$ is the threshold level of SNR. Unfortunately, it is very difficult to obtain a closed-form expression for the outage probability of the multi-hop AF fixed gain relay system due to the noise accumulation, as shown in the denominator of (1). Alternatively, the outage probabilities of the DF relay in \[1\] and of the AF variable gain relay in \[3\] have been proposed as lower bounds for the AF fixed gain relay. However, while these are indeed lower bounds, they are not tight. \[4\] found a lower bound for the AF fixed gain relay by using the well-known inequality between the harmonic and geometric means, but it loses tightness in the high SNR region. In this letter, rather than finding a bound, we focus on finding an accurate closed-form approximation of the outage probability for the multihop AF fixed gain relay. To this end, the following theorem is introduced first. The outage probability for the N-hop AF relaying system with fixed gain is lower bounded by $G_{1,N + 1}^{N,1} \left[ {\bar \gamma _{Th} \left| {\begin{array}{*{20}c} 1 \\ {1,\,\,1,\,\, \cdots \,\,,\,\,1,0} \\ \end{array}} \right.} \right]$, where $\bar \gamma _{Th} = \frac{{\sigma _N^2 }}{{A_{N - 1}^2 A_{N - 2}^2 \, \cdots A_1^2 E_1 }}\gamma _{Th} $. The end-to-end SNR of the N-hop AF relay is upper bounded by $\tilde \gamma _N $ under the assumption that the relays operate at asymptotically high SNR, i.e., $\sigma _1^2 = \,\, \cdots \,\, = \sigma _{N - 1}^2 = 0$. $$\gamma _N \le \frac{{A_{N - 1}^2 A_{N - 2}^2 \, \cdots A_1^2 \left| {h_N } \right|^2 \left| {h_{N - 1} } \right|^2 \,\, \cdots \,\,\left| {h_1 } \right|^2 E_1 }}{{\sigma _N^2 }} \buildrel \Delta \over = \tilde \gamma _N$$ From (4), the outage probability is obviously lower bounded as $$P_{out} (\tilde \gamma _N < \gamma _{Th} ) \le P_{out} (\gamma _N < \gamma _{Th} )$$ The left term of (5) can be equivalently expressed as $$P_{out} (\gamma '_N < \bar \gamma _{Th,N} ) \le P_{out} (\gamma _N < \gamma _{Th} )$$ where $\gamma '_N = \left| {h_1 } \right|^2 \left| {h_2 } \right|^2 \cdots \left| {h_N } \right|^2 $ and $\bar \gamma _{Th,N} = \frac{{\sigma _N^2 }}{{A_{N - 1}^2 A_{N - 2}^2 \, \cdots A_1^2 E_1 }}\gamma _{Th} $. By using the Weibull distribution, which is the general form of the exponential distribution family, the PDF of the cascaded exponential random variables, $\gamma =\left| {h_1 } \right|^2 \left| {h_2 } \right|^2 \,\, \cdots \,\,\left| {h_N } \right|^2$, can be expressed by using \[6, eq. (3)\] as $$f_{\gamma} (\gamma ) = \frac{1} {\gamma }G_{N,0}^{0,N} \left[ {\frac{1} {\gamma }} \bigg\vert{ {\begin{array}{*{20}c}{0,\,\,0,\,\, \cdots \,\,,\,\,0} \\ {-} \end{array}} } \right]$$ where $G\left[ \cdot \right]$ is the Meijer G function in \[7, eq. (9.301)\], which is a built-in function in well-known software packages such as MAPLE and MATHEMATICA.
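The Meijer G function is also available in open-source software. As a sketch (ours, with purely illustrative parameters), the bound stated in the theorem above is simply the CDF of a product of $N$ unit-mean exponential variables, so it can be evaluated with Python's mpmath package and checked against a direct Monte Carlo estimate of $P(\gamma'_N < \bar\gamma_{Th})$:

```python
# Minimal sketch (illustrative only): the lower bound of Theorem 1 as the CDF of a
# product of N unit-mean exponential variables, via the Meijer G function.
import random
from mpmath import meijerg

def lower_bound(N, gbar):
    # G^{N,1}_{1,N+1}[ gbar | 1 ; 1,...,1,0 ]
    return float(meijerg([[1], []], [[1] * N, [0]], gbar).real)

def monte_carlo(N, gbar, trials=100000):
    count = 0
    for _ in range(trials):
        prod = 1.0
        for _ in range(N):
            prod *= random.expovariate(1.0)   # |h_k|^2 ~ Exp(1) for Rayleigh fading
        if prod < gbar:
            count += 1
    return count / trials

N, gbar = 3, 0.5   # hypothetical normalised threshold bar{gamma}_{Th}
print(lower_bound(N, gbar), monte_carlo(N, gbar))
```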
![The bound in Theorem 1 of outage probability for multihop AF relay system with fixed gain ($A_1^2 = \cdots = A_{N - 1}^2 = 1$).[]{data-label="Fig 3"}](Outage_Bound_Theorem.eps){width="3.4in"} The lower bound of the outage probability for the N-hop AF fixed gain relay can be calculated with the help of \[8, eq. (07.34.16.0002.01)\] and \[8, eq. (07.34.21.0001.01)\] as $$\begin{array}{l} P_{out} (\tilde \gamma _N < \gamma _{Th} ) = P_{out} (\gamma '_N < \bar \gamma _{Th,N})\\ = \int\limits_0^{\bar \gamma _{Th,N} } {\frac{1}{\gamma }G_{N,0}^{0,N} \left[ {\left. {\frac{1}{\gamma }} \right|\begin{array}{*{20}c} {0,\,\,0,\,\, \cdots \,\,,\,\,0} \\ - \\ \end{array}} \right]} \,d\gamma \\ = G_{1,N + 1}^{N,1} \left[ {\bar \gamma _{Th,N} \left| {\begin{array}{*{20}c} 1 \\ {1,\,\,1,\,\, \cdots \,\,,\,\,1,0} \\ \end{array}} \right.} \right] \\ \end{array}$$ As shown in Fig. 1, Theorem 1 is confirmed as a lower bound of outage probability for multihop AF fixed gain relay by the simulation, assuming that all amplification factors of the relays are 1. However, (8) loses tightness in low SNR region, since the closed-form lower bound is derived under the high SNR assumption. Therefore, the different method should be considered to increase accuracy in low SNR region. In N-hop AF fixed gain relay network, as the number of hops is increased, the end-to-end SNR is decreased, $\gamma _N \le \gamma _{N - 1} \le \,\, \cdots \,\, \le \gamma _2 \le \gamma _1 $, while outage probability is increased, $P_{out} (\gamma _1 < \gamma _{Th} ) \le P_{out} (\gamma _2 < \gamma _{Th} ) \le \cdots \le P_{out} (\gamma _N < \gamma _{Th} )$. It can be easily proved. ![The outage event space for multihop AF relaying system with fixed gain.[]{data-label="Fig 3"}](Outage_Space.eps){width="3.2in"} For notational simplicity, let $P_{out,n}$ and $P_{out,n}^*$ for $n = 1,2, \cdots N$ be denoted by $P_{out} (\gamma _n < \gamma _{Th} )$ and $P_{out} (\tilde \gamma _n < \gamma _{Th} )$, respectively. From Theorem 1 and Lemma 1, the outage event space for multihop AF relaying system with fixed gain can be drawn as Fig. 2. As shown in Fig. 2, the outage probability can be expressed as the sum of the probabilities for each hop. The outage probability can be evaluated for each number of hops in the following way. \(1) 1-hop case: $$P_{out,1} = P(\gamma _1 < \gamma _{Th} ) = P_{out} (\gamma '_1 < \bar \gamma _{Th,1} ) = 1 - e^{ - \bar \gamma _{Th,1} }$$ \(2) 2-hop case: $$\begin{array}{l} P_{out,2} = P(\gamma _2 < \gamma _{Th} ) \\ = P(\gamma _1 < \gamma _{Th} ,\;\gamma _2 < \gamma _{Th} ) + P(\gamma _1 > \gamma _{Th} ,\;\gamma _2 < \gamma _{Th} ) \\ \end{array}$$ where the outage probability cannot be directly calculated since $\gamma _1$ and $\gamma _2$ are not independent. However, it can be approximated with the help of Fig. 2. The first term in (10) is obviously rewritten as $P(\gamma _1 < \gamma _{Th} )$ and the second term is approximately calculated as $P(\gamma _1 > \gamma _{Th} )P(\gamma _2 < \gamma _{Th} )$ for simplicity, as if the two events are independent. Since it is hard to have a closed form expression for $P(\gamma _2 < \gamma _{Th} )$, it can approximated by using the lower bound in Lemma 1. 
Therefore, the approximation of the outage probability for the 2-hop case can be obtained as $$\begin{array}{l} P_{out,2} \approx P(\gamma '_1 < \bar \gamma _{Th,1} ) + P(\gamma '_1 > \bar \gamma _{Th,1} )P(\gamma _2 < \gamma _{Th} ) \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \ge P_{out,1} + P(\gamma '_1 > \bar \gamma _{Th,1} )P(\gamma '_2 < \bar \gamma _{Th,2} ) \buildrel \Delta \over = \tilde P_{out,2} \\ \end{array}$$ \(3) 3-hop case: $$\begin{array}{l} P_{out,3} = P(\gamma _3 < \gamma _{Th} ) \\ = P(\gamma _1 < \gamma _{Th} ,\;\gamma _2 < \gamma _{Th} ,\gamma _3 < \gamma _{Th} ) \\ \,\,\,\,\,+ P(\gamma _1 > \gamma _{Th} ,\;\gamma _2 < \gamma _{Th} ,\gamma _3 < \gamma _{Th} ) \\ \,\,\,\,\,+ P(\gamma _1 > \gamma _{Th} ,\;\gamma _2 > \gamma _{Th} ,\gamma _3 < \gamma _{Th} ) \\ \approx \tilde P_{out,2} \, + P(\gamma _3 < \gamma _{Th} )P(\gamma _1 > \gamma _{Th} ,\;\gamma _2 > \gamma _{Th} ) \\ \approx \tilde P_{out,2} + P(\gamma '_3 < \bar \gamma _{Th,3} )P(\gamma _3 > \gamma _{Th} ) \\ = \tilde P_{out,2} + P(\gamma '_3 < \bar \gamma _{Th,3} )(1 - P(\gamma _3 < \gamma _{Th} )) \\ \approx \tilde P_{out,2} + P(\gamma '_3 < \bar \gamma _{Th,3} )(1 - P(\gamma '_3 < \bar \gamma _{Th,3} )) \buildrel \Delta \over = \tilde P_{out,3} \\ \end{array}$$ Similarly, for general N-hop AF relaying system with fixed gain, its outage probability can be approximated as \(4) N-hop case: $$\begin{array}{l} P_{out,N} = P(\gamma _N < \gamma _{Th} ) \\ \approx \tilde P_{out,N - 1} + P(\gamma '_N < \bar \gamma _{Th,N} )(1 - P(\gamma '_N < \bar \gamma _{Th,N} )) \\ \,\,\,\,\,\,\, \buildrel \Delta \over = \tilde P_{out,N} \\ \end{array}$$ To increase the accuracy of the approximation, we perform averaging out the noise power in (1) and applying it to the right term in (13). If $\sigma _1^2 = \,\, \cdots \,\, = \sigma _{N }^2$, then the approximation of the outage probability can be written as $$\begin{array}{l} P_{out,N} \approx \tilde P_{out,N - 1} + P\left( {\gamma '_N < \bar \gamma _{Th,N} } \right)\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \cdot \left( {1 - P\left( {\gamma '_N < \left[ {\sum\limits_{j = 1}^{N - 1} {\left( {\prod\limits_{i = j}^{N - 1} {A_i^2 } } \right)} + 1} \right]\bar \gamma _{Th,N} } \right)} \right) \end{array}$$ Without loss of generality, for $N \ge 3$, the outage probability in (14) can be rewritten as $$\begin{array}{l} P_{out,N} \\ \approx \left( {1 - e^{ - \bar \gamma _{Th,1} } } \right) + e^{ - \bar \gamma _{Th,1} } G_{1,3}^{2,1} \left[ {\bar \gamma _{Th,2} \left| {\begin{array}{*{20}c} 1 \\ {1,\,\,1,\,\,0} \\ \end{array}} \right.} \right] \\ + \sum\limits_{n = 3}^N {G_{1,n + 1}^{n,1} \left[ {\bar \gamma _{Th,n} \left| {\begin{array}{*{20}c} 1 \\ {1,\,\,1,\,\, \cdots \,\,,\,\,1,0} \\ \end{array}} \right.} \right]} \left( {1 - \begin{array}{*{20}c} {} \\ {} \\ \end{array}} \right. \\ \left. {G_{1,n + 1}^{n,1} \left[ {\left( {\sum\limits_{j = 1}^{n - 1} {\left( {\prod\limits_{i = j}^{n - 1} {A_i^2 } } \right)} + 1} \right)\bar \gamma _{Th,n} \left| {\begin{array}{*{20}c} 1 \\ {1,1, \cdots ,1,0} \\ \end{array}} \right.} \right]} \right) \\ \end{array}$$ Simulation results ================== The proposed approximation and the simulation results of the outage probability for multihop AF relaying system with fixed gain are shown in Fig. 3. For the simulation, the threshold level of SNR is 0dB. 
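The proposed approximation itself is straightforward to evaluate numerically. As a rough sketch (ours, with hypothetical gains and threshold, assuming equal noise variances as in (14)), the recursion (9)–(14) can be coded as follows, writing $F_n(x)$ for the CDF of a product of $n$ unit-mean exponentials, i.e. the Meijer G expression of Theorem 1:

```python
# Minimal sketch (illustrative only) of the recursive approximation (13)-(14).
from math import prod            # Python 3.8+
from mpmath import meijerg

def F(n, x):
    """CDF of a product of n independent unit-mean exponential variables."""
    return float(meijerg([[1], []], [[1] * n, [0]], x).real)

def outage_approx(N, A2, E1, sigma2, gamma_th):
    """A2[l-1] = A_l^2 (squared fixed gains), common noise variance sigma2."""
    gbar = lambda n: sigma2 * gamma_th / (E1 * prod(A2[:n - 1]))   # bar{gamma}_{Th,n}
    p = F(1, gbar(1))                                              # 1-hop term, Eq. (9)
    if N >= 2:
        p += (1.0 - F(1, gbar(1))) * F(2, gbar(2))                 # 2-hop term, Eq. (11)
    for n in range(3, N + 1):                                      # n-hop terms, Eq. (14)
        scale = sum(prod(A2[j:n - 1]) for j in range(n - 1)) + 1.0
        p += F(n, gbar(n)) * (1.0 - F(n, scale * gbar(n)))
    return p

print(outage_approx(N=3, A2=[2.0, 2.0], E1=10.0, sigma2=1.0, gamma_th=1.0))
```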
From the results, it is noted that the proposed approximation of the outage probability is very close to the simulation result in low and high SNR region. ![The proposed approximation of outage probability for multihop AF relay system with fixed gain ($A_1^2 = \cdots = A_{N - 1}^2 = 2$).[]{data-label="Fig 3"}](multihop_AF_Outage.eps){width="3.6in"} Conclusion ========== In this letter, an accurate approximation of the outage probability for the multihop AF relaying systems with fixed gain in Rayleigh fading channel is derived by using the outage event space. The numerical results show that it provides the very accurate approximation in all SNR region. [1]{} Mazen O. Hasna and Mohamed-Slim Alouini, “Optimal power allocation for relayed transmissions over Rayleigh-fading channels,” IEEE Trans. Wireless Commun., vol. 3, no. 6, pp. 1999-2004, Nov. 2004. Jing Han, Hanfeng Zhang, and Weiling Wu, “End-to-end joint power allocation strategy in multihop wireless netwroks,” WiCom 2007 International Conference, pp. 877-880, Sep. 2007. Mazen O. Hasna and Mohamed-Slim Alouini, “Outage probability of multihop transmission over Nakagami fading channels,” IEEE Commun. Letter, vol. 7, no. 5, pp. 216-218, May 2003. George K. Karagiannidis, “Performance bounds of multihop wireless communications with blind relays over generalized fading channels,” IEEE Trans. Wireless Commun., vol. 5, no. 3, pp. 498-503, Mar. 2006. C. S. Patel and G. L. Stuber, “Channel estimation for amplify and forward relay based cooperation diversity systems,” IEEE Trans. Wireless Commun., vol. 6, no. 6, pp. 2348-2356, Jun. 2007. N. C. Sagias and G. S. Tombras, “On the cascaded Weibull fading channel model,” Journal of the Franklin Institute, vol. 344, issue 1, pp. 1-11, Jan. 2007. I. S. Gradshteyn and I. M. Ryzhik, [*Table of Integrals, Series, and Products*]{}, 6 ed. New York: Academic, 2000. Wolfram, The Wolfram functions site, Internet. URL http://functions.wolfram.com/. [^1]: EDICS : CL.1.2.0, CL.1.2.1, CL.1.2.2 [^2]: The authors are with the Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea. (email: [player72, jhyang00, dkkim]{}@yonsei.ac.kr) [^3]: Tel: +82-10-9530-5436, Fax:+82-2-365-4504
--- abstract: 'We repeat the directional spherical  wavelet analysis, used to detect non-Gaussianity in the  () 1-year and 3-year data [@mcewen:2005:ng; @mcewen:2006:ng], on the 5-year data. The non-Gaussian signal [detected]{} previously is present in the 5-year data at a slightly increased statistical significance of approximately 99%. Localised regions that contribute most strongly to the non-Gaussian signal are found to be very similar to those detected in the previous releases of the  data. When the localised regions detected in the 5-year data are excluded from the analysis the non-Gaussian signal is eliminated.' bibliography: - 'bib.bib' date: 'Accepted 29 April 2008. Received 29 April 2008; in original form 14 March 2008' title: 'A high-significance detection of non-Gaussianity in the  5-year data using directional spherical wavelets' --- cosmic microwave background – methods: data analysis – methods: numerical Introduction {#sec:intro} ============ The statistics of the primordial fluctuations provide a useful mechanism for distinguishing between various scenarios of the early Universe, such as various models of inflation. Furthermore, the primordial fluctuations give rise to the anisotropies of the (), which may be observed directly. In the simplest inflationary scenarios, primordial perturbations seed Gaussian temperature fluctuations in the  that are statistically isotropic over the sky. However, this is not the case for non-standard inflationary models or alternative models to inflation. Evidence of primordial non-Gaussianity in the  temperature anisotropies would therefore have profound implications for the standard cosmological model. Initial analyses of the  () 1-year [@bennett:2003a], three-year [@hinshaw:2006] and five-year [@hinshaw:2008] observations of the  (hereafter referred to as 1, 3 and 5), performed by @komatsu:2003, @spergel:2006 and @komatsu:2008 respectively, find no evidence for deviations from Gaussianity. However, no one statistic is sensitive to all possible forms of non-Gaussianity that may exist in the  data due to either foreground contamination, systematics or of primordial origin. It is therefore important to test the data for deviations from Gaussianity using a range of different methods and, indeed, many additional studies have been performed on the 1 and 3 data: @bernui:2007 [@cabella:2005; @cayon:2005; @chen:2005; @chiang:2003; @chiang:2004; @chiang:2006; @coles:2004; @cg:2003; @creminelli:2007; @cruz:2005; @cruz:2006a; @cruz:2006b; @dineen:2005; @eriksen:2004; @eriksen:2005; @eriksen:2007; @gw:2003; @gott:2007; @hansen:2004; @hikage:2008; @jeong:2007; @larson:2004; @larson:2005; @lm:2004; @lew:2008; @mcewen:2005:ng; @mcewen:2006:ng; @mcewen:2006:bianchi; @mm:2004; @medeiros:2006; @monteserin:2007; @mw:2004; @naselsky:2007; @raeth:2007; @sadegh:2006; @tojeiro:2005; @vielva:2003; @wiaux:2006; @wiaux:2008; @yadav:2007]. Deviations from Gaussianity have been detected in many of these works. Although the 5 data are consistent with previous releases, the modelling of beams is improved considerably, new masks are defined and a further two-years of observations mean that the 5 data can provide reliable confirmation of previous non-Gaussianity analyses. In this article we focus on the detection of non-Gaussianity that we made previously in the 1 and 3 data [@mcewen:2005:ng; @mcewen:2006:ng]. 
The Kp0 mask provided for previous  releases and used in our Gaussianity analyses was constructed from the K-band  observations, which contain  and foreground emission. Consequently, the application of this mask may introduce negative skewness in the distribution of the  [@komatsu:2008]. Since our previous detections of non-Gaussianity were observations of negative skewness in wavelet coefficients computed from  data masked in this manner, it is prudent to readdress our analysis in light of the new  data and masks. The remainder of this letter is organised as follows. In we discuss the 5 map considered and present the results of the non-Gaussianity analysis. Concluding remarks are made in . Non-Gaussianity analysis and results {#sec:analysis} ==================================== We repeat our non-Gaussianity analysis performed previously on the 1 and 3 data [@mcewen:2005:ng; @mcewen:2006:ng], focusing on the most significant detection of non-Gaussianity made in the skewness of real Morlet wavelet coefficients. A detailed description of the analysis procedure is presented in @mcewen:2005:ng and a brief overview is also given in @mcewen:2006:ng. Consequently, we do not review the method in detail here but merely comment that it involves a Monte Carlo analysis of real Morlet wavelet coefficients of the data. Twelve scales $a_i$ spaced equally between $50\arcmin$ and $600\arcmin$ are considered. Furthermore, the real Morlet wavelet analysis probes directional structure in the data and we examine five wavelet azimuthal orientations spaced equally in the domain $[0,\pi)$. The directional analysis is facilitated by our fast directional continuous spherical wavelet transform code [@mcewen:2006:fcswt], which is based on the fast spherical convolution developed by @wandelt:2001. We consider the signal-to-noise ratio enhanced co-added map constructed from the 5 data (see @komatsu:2003, @mcewen:2005:ng for descriptions of the co-added map construction procedure). Each simulated map used in the Monte Carlo simulations is constructed in an analogous manner to the co-added map constructed from the data. A Gaussian  realisation is simulated from the theoretical  () power spectrum fitted by the  team [@dunkley:2008]. Measurements made by the various receivers are then simulated by convolving with realistic beams and adding anisotropic noise for each receiver, where the beams and noise properties used correspond to 5 observations. The simulated observations for each receiver are then combined to give a co-added map. In this analysis we use the new KQ75 and KQ85 masks, rather than the Kp0 mask used in our previous analyses. The construction of these new masks is discussed by @gold:2008. The KQ75 mask is the more conservative of the two new masks and is recommended for Gaussianity analyses [@komatsu:2008]. Nevertheless, we consider both masks since these are the changes in the 5 data that are most likely to affect the results of our analysis. The skewness of the real Morlet wavelet coefficients of the co-added 5 map are displayed in , with confidence intervals constructed from 1000 Monte Carlo simulations consistent with the 5 observations also shown. Only the plot corresponding to the orientation of the maximum deviation from Gaussianity is shown. The non-Gaussian signal present in previous releases of the data is clearly present in the 5 data for both choices of mask. In particular, the large deviation on scale $a_{11}=550\arcmin$ and orientation $\gamma=72^\circ$ remains. 
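The bookkeeping behind these statistics is straightforward once the wavelet coefficients are in hand. The following sketch is purely illustrative and is not the pipeline used in this work: it assumes the masked real Morlet wavelet coefficients of the co-added map and of each simulated map have already been computed, and the array names, shapes and mask are hypothetical stand-ins (filled here with random numbers so that the snippet runs).

```python
import numpy as np
from scipy.stats import skew

def skewness_grid(coeffs, mask):
    # coeffs: wavelet coefficients, shape (n_scales, n_orientations, n_pix)
    # mask:   boolean array of admitted pixels, shape (n_pix,)
    return skew(coeffs[..., mask], axis=-1)

# toy stand-ins: 12 scales, 5 orientations, 100 Gaussian simulations
rng = np.random.default_rng(1)
n_sim, n_scales, n_orient, n_pix = 100, 12, 5, 1024
mask = rng.random(n_pix) > 0.3
data = rng.standard_normal((n_scales, n_orient, n_pix))
sims = rng.standard_normal((n_sim, n_scales, n_orient, n_pix))

data_stat = skewness_grid(data, mask)                              # shape (12, 5)
sim_stats = np.array([skewness_grid(s, mask) for s in sims])       # shape (100, 12, 5)

# per-statistic confidence band and deviation in units of the simulation scatter
band_lo, band_hi = np.percentile(sim_stats, [0.5, 99.5], axis=0)
outside = (data_stat < band_lo) | (data_stat > band_hi)
n_sigma = (data_stat - sim_stats.mean(axis=0)) / sim_stats.std(axis=0)

# a rough analogue of a conservative significance: compare the most deviant
# observed statistic with the most deviant statistic of every simulation
sim_sigma = (sim_stats - sim_stats.mean(axis=0)) / sim_stats.std(axis=0)
significance = np.mean(np.abs(sim_sigma).max(axis=(1, 2)) < np.abs(n_sigma).max())
```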
Next we consider in more detail the most significant deviation from Gaussianity on scale $a_{11}=550\arcmin$ and orientation $\gamma=72^\circ$. shows histograms of this particular statistic constructed from the 5 Monte Carlo simulations for both masks. The skewness value measured from the data is also shown on the plot, with the number of standard deviations each observation deviates from the mean of the appropriate set of simulations. The distribution of this skewness statistic is not significantly altered between simulations analyses with the KQ75 or KQ85 masks. The observed statistics for the data are also similar for both masks. ![Histograms of real Morlet wavelet coefficient skewness obtained from 1000 5 Monte Carlo simulations. Histograms are plotted for statistics computed from the simulations using the KQ75 (green) and KQ85 (blue) masks. The observed statistics for the 5 data with the KQ75 and KQ85 masks maps are shown by the green and blue lines respectively. The number of standard deviations these observations deviate from the mean of the appropriate set of simulations is also displayed.[]{data-label="fig:hist"}](figures/hist2_skewness_morlet_ia11_ig02_maskboth){width="75mm"} To quantify the statistical significance of the detected deviation from Gaussianity we consider two techniques. The first technique involves comparing the deviation of the skewness statistic computed from the 5 data on scale $a_{11}=550\arcmin$ and orientation $\gamma=72^\circ$ to all statistics computed from the simulations. This is a very conservative means of constructing significance levels. The second technique involves performing a $\chi^2$ test. The $\chi^2$ value computed from the 5 data is compared to $\chi^2$ statistics computed from the simulations. In both of these tests we relate the observation to all test statistics computed originally,  to both skewness and kurtosis statistics. For a more thorough description of these techniques see @mcewen:2005:ng. Using the first technique, the significance of the detection of non-Gaussianity in the 5 is made at $99.2\pm0.3$% and $99.1\pm0.3$% using the KQ75 and KQ85 masks respectively. The distribution of $\chi^2$ values obtained from the Monte Carlo simulations is shown in . The $\chi^2$ value obtained for the data is also shown on the plot. Again, the distribution and value observed in the data is not altered significantly when using the different masks. Computing the significance of the detection of non-Gaussianity directly from the $\chi^2$ distributions and observations, the significance of the detection in the 5 data is made at $99.3\pm0.3$% and $99.2\pm0.3$% using the KQ75 and KQ85 masks respectively. Using both of the techniques outlined above the detection of non-Gaussianity made in the 5 data is made at a slightly higher significance than in previous releases of the data. Nevertheless, the same non-Gaussian signal appears to be present. ![Histograms of normalised $\chi^2$ test statistics computed from real Morlet wavelet coefficient statistics obtained from 1000 5 Monte Carlo simulations. Histograms are plotted for statistics computed from the simulations using the KQ75 (green) and KQ85 (blue) masks. The $\chi^2$ value for the 5 data with the KQ75 and KQ85 masked maps are shown by the green and blue lines respectively. 
The significance of these observations, computed from the appropriate set of simulations, is also displayed.[]{data-label="fig:chi2"}](figures/histchi2_wmap5_maskboth_morlet_sTkT){width="75mm"} The wavelet analysis allows one to localise those regions on the sky that contribute most significantly to deviations from Gaussianity [@mcewen:2005:ng]. In we plot the thresholded wavelet coefficients corresponding to the most significant detection of non-Gaussianity made on scale $a_{11}=550\arcmin$ and orientation $\gamma=72^\circ$.[^1] These localised regions match the localised regions detected in the 1 and 3 data closely. When excluding localised regions from the initial analysis, the highly significant non-Gaussian signals present previously are eliminated (see ). In our previous non-Gaussianity analyses we concluded that noise was not atypical in the localised regions detected [@mcewen:2005:ng]. Moreover, we also concluded that foregrounds and systematics were not the likely source of the detected non-Gaussianity [@mcewen:2006:bianchi]. The localised regions detected in the 5 data have not changed markedly to those detected previously and foregrounds and systematics are treated more thoroughly, hence we do not expect these findings to alter in the 5 data. Conclusions {#sec:conclusions} =========== In this work we have repeated our non-Gaussianity analysis on the 5 data. The non-Gaussian signal detected previously remains present in the 5 data. The possible introduction of negative skewness in the  data by the application of the Kp0 mask [@komatsu:2008] appears not to be responsible for our non-Gaussian signal. Non-Gaussianity is detected at significance levels of $99.2\pm0.3$% and $99.1\pm0.3$% using the KQ75 and KQ85 masks respectively, when using our conservative method for constructing significance measures. Using our second method, which is based on a $\chi^2$ analysis, the significance of the detection is made at $99.3\pm0.3$% and $99.2\pm0.3$% using the KQ75 and KQ85 masks respectively. These detections of deviations from Gaussianity are made at a slightly higher significance in the 5 data than in previous releases. We have no intuitive explanation for this marginal rise in significance. The most likely sources of non-Gaussianity that were localised on the sky in the 5 data match those regions detected from previous releases of the data reasonable closely (and are made available publicly). It is interesting to note that the highly significant detection of primordial non-Gaussianity made with the bispectrum by @yadav:2007 is sensitive to skewness, which is also the type of non-Gaussianity detected with our real Morlet wavelet analysis. To test whether these two analyses detect the same source of non-Gaussianity, one could remove the localised regions that we detect and repeat the analysis performed by @yadav:2007 to see if their detection of non-Gaussianity remains. This analysis is currently being performed by Yadav & Wandelt (private communication). Acknowledgements {#acknowledgements .unnumbered} ================ Some of the results in this paper have been derived using the [^2] package [@gorski:2005]. We acknowledge the use of the [^3] (). Support for  is provided by the NASA Office of Space Science. \[lastpage\] [^1]: We make corresponding localised region masks available publicly from <http://www.mrao.cam.ac.uk/~jdm57/> so that other researchers may determine whether these regions are responsible for detections of non-Gaussianity made with other analysis techniques. 
[^2]: <http://healpix.jpl.nasa.gov/> [^3]: <http://lambda.gsfc.nasa.gov/>
{ "pile_set_name": "ArXiv" }
--- abstract: 'An $\epsilon$-distance-uniform graph is one in which from every vertex, all but an $\epsilon$-fraction of the remaining vertices are at some fixed distance $d$, called the critical distance. We consider the maximum possible value of $d$ in an $\epsilon$-distance-uniform graph with $n$ vertices. We show that for $\frac1n \le \epsilon \le \frac1{\log n}$, there exist $\epsilon$-distance-uniform graphs with critical distance $2^{\Omega(\frac{\log n}{\log \epsilon^{-1}})}$, disproving a conjecture of Alon et al. that $d$ can be at most logarithmic in $n$. We also show that our construction is best possible, in the sense that an upper bound on $d$ of the form $2^{O(\frac{\log n}{\log \epsilon^{-1}})}$ holds for all $\epsilon$ and $n$.' author: - 'Mikhail Lavrov[^1]' - 'Po-Shen Loh[^2]' - 'Arnau Messegué[^3]' title: 'Distance-Uniform Graphs with Large Diameter' --- Introduction ============ We say that an $n$-vertex graph is *$\epsilon$-distance-uniform* for some parameter $\epsilon>0$ if there is a value $d$, called the *critical distance*, such that, for every vertex $v$, all but at most $\epsilon n$ of the other vertices are at distance exactly $d$ from $v$. Distance-uniform graphs exist for some, but not all, possible triplets $(n, \epsilon, d)$; a trivial example is the complete graph $K_n$, which is distance-uniform with $\epsilon = \frac1n$ and $d=1$. So it is natural to try to characterize which triplets $(n,\epsilon, d)$ are realizable as distance-uniform graphs. The notion of distance uniformity was introduced by Alon, Demaine, Hajiaghayi, and Leighton in [@alon13], motivated by the analysis of network creation games. It turns out that equilibria in a certain network creation game can be used to construct distance-uniform graphs. As a result, understanding distance-uniform graphs tells us which equilibria are possible. From network creation games to distance uniformity -------------------------------------------------- The use of the Internet has been growing significantly in the last few decades. This fact has motivated theoretical studies that try to capture properties of Internet-like networks in models. Fabrikant et al. [@fabrikant03] proposed one of the first such models, the so-called *sum classic network creation game* (abbreviated sum classic), from which variations (like [@bilo15], [@ehsani15]) and extensions (like [@bilo15max], [@brandes08]) have been considered in subsequent years. Although all these models try to capture different aspects of the Internet, all of them can be identified as *strategic games*: every agent or node (every player in the game) buys some links (every player picks a strategy) in order to be connected in the network formed by all the players (the strategic configurations formed as a combination of the strategies of every player) and tries to minimize a cost function modeling its needs and interests. All these models together with their results constitute a whole subject inside game theory and computer science that stands on its own: the field of *network creation games*. Some of the most relevant concepts discussed in network creation games are *optimal network*, *Nash equilibria* and the *price of anarchy*, among others. An optimal network is the outcome of a configuration having minimum overall cost, that is, one in which the sum of the costs of all players has the minimum possible value.
A Nash equilibrium is a configuration where each player cannot strictly decrease his cost function given that the strategies of the other players are fixed. The price of anarchy quantifies the loss in terms of efficiency between the worst Nash equilibrium (anyone having maximum overall cost) and any optimal network (anyone having minimal overall cost). The sum classic is specified with a set of players $N = \left\{ 1,...,n\right\}$ and a parameter $\alpha > 0$ representing the cost of establishing a link. Every player $i\in N$ wishes to be connected in the resulting network, then the strategy $s_i \in \mathcal{P}(N \setminus \left\{i \right\})$ represents the subset of players to which $i$ establishes links. Then considering the tuple of the strategies for every player $s=(s_1,...,s_n)$ (called a *strategy profile*) the *communication network* associated to $s$, noted as $G[s]$, is defined as the undirected graph having $N$ as the set of vertices and the edges $(i,j)$ iff $i \in s_j $ or $j \in s_i$. The communication network represents the resulting network obtained after considering the links bought for every node. Then the cost function for a strategy profile $s = (s_1,...,s_n)$ has two components: the *link cost* and the *usage cost*. The link cost for a player $i \in N$ is $\alpha |s_i|$ and it quantifies the cost of buying $|s_i|$ links. In contrast, the usage cost for a player $i$ is $\sum_{j \neq i} d_{G[s]}(i,j)$. Therefore, the total cost incurred for player $i$ is $c_i(s)=\alpha |s_i|+\sum_{j \neq i} d_{G[s]}(i,j)$. On the other hand, a given undirected graph $G$ in the *sum basic network creation game* (or abbreviated sum basic) is said to be in equilibrium iff, for every edge $(i,j) \in E(G)$ and every other player $k$, the player $i$ does not strictly decrease the sum of distances to the other players by swapping the edge $(i,j)$ for the edge $(i,k)$. At first glance, the sum basic could be seen as the model obtained from the sum classic when considering only deviations that consists in swapping individual edges. However, in any Nash equilibrium for the sum classic, only one of the endpoints of any edge has bought that specific edge so that just one of the endpoints of the edge can perform a swap of that specific edge. Therefore, one must be careful when trying to translate a property or result from the sum basic to the sum classic. In the sum classic game it has been conjectured that the price of anarchy is constant (asymptotically) for any value of $\alpha$. Until now this conjecture has been proved true for $\alpha = O(n^{1-\epsilon})$ with $\epsilon \geq 1/\log n $ ([@demaine12]) and for $\alpha >9n$ ([@alvarez17]). In [@demaine12] it is proved that the price of anarchy is upper bounded by the diameter of any Nash equilibrium. This is why the diameter of equilibria in the sum basic is studied. In [@alon13], the authors show that sufficiently large graph powers of an equilibrium graph in the sum basic model will result in distance-uniform graphs; if the critical distance is large, then the original equilibrium graph in the sum basic model imposed a high total cost on its nodes. In particular, it follows that if $\epsilon$-distance-uniform graphs had diameter $O(\log n)$, the diameter of equilibria for the sum basic would be at most $O(\log^3 n)$. 
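To make the cost structure concrete, the sketch below computes the sum classic cost $c_i(s)=\alpha |s_i|+\sum_{j\neq i} d_{G[s]}(i,j)$ for a toy strategy profile. It is only an illustration: the dictionary encoding of a strategy profile and the function name are our own choices, it uses the `networkx` package, and it assumes the communication network $G[s]$ is connected.

```python
import networkx as nx

def player_cost(strategies, i, alpha):
    # strategies: dict mapping each player j to the set s_j of players j buys links to
    # build the (undirected) communication network G[s]
    G = nx.Graph()
    G.add_nodes_from(strategies)
    for j, bought in strategies.items():
        G.add_edges_from((j, k) for k in bought)
    # usage cost: sum of graph distances from i to every other player
    dist = nx.single_source_shortest_path_length(G, i)
    usage = sum(d for j, d in dist.items() if j != i)
    # link cost: alpha per link bought by player i
    return alpha * len(strategies[i]) + usage

# four players on a path 0-1-2-3; player 0 buys the edge to 1, and so on
s = {0: {1}, 1: {2}, 2: {3}, 3: set()}
print(player_cost(s, 0, alpha=2.0))   # 2*1 + (1+2+3) = 8.0
```

A single-edge swap in the sum basic sense can then be tested in the same way, by rebuilding the graph with one edge exchanged and comparing the resulting usage costs.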
Previous results on distance uniformity --------------------------------------- This application motivates the already natural question: in an $\epsilon$-distance-uniform graph with $n$ vertices and critical distance $d$, what is the relationship between the parameters $\epsilon$, $n$, and $d$? Specifically, can we derive an upper bound on $d$ in terms of $\epsilon$ and $n$? Up to a constant factor, this is equivalent to finding an upper bound on the diameter of the graph, which must be between $d$ and $2d$ as long as $\epsilon < \frac12$. Random graphs provide one example of distance-uniform graphs. In [@bollobas81], Bollobás shows that for sufficiently large $p = p(n)$, the diameter of the random Erdős–Rényi random graph $\mathcal G_{n,p}$ is asymptotically almost surely concentrated on one of two values. In fact, from every vertex $v$ in $\mathcal G_{n,p}$, the breadth-first search tree expands by a factor of $O(np)$ at every layer, reaching all or almost all vertices after about $\log_r n$ steps. Such a graph is also expected to be distance-uniform: the biggest layer of the breadth-first search tree will be much bigger than all previous layers. More precisely, suppose that we choose $p(n)$ so that the average degree $r = (n-1)p$ satisfies two criteria: that $r \gg (\log n)^3$, and that for some $d$, $r^d/n - 2 \log n$ approaches a constant $C$ as $n \to \infty$. Then it follows from Lemma 3 in [@bollobas81] that (with probability $1-o(1)$) for every vertex $v$ in $\mathcal G_{n,p}$, the number of vertices at each distance $k < d$ from $v$ is $O(r^k)$. It follows from Theorem 6 in [@bollobas81] that the number of vertex pairs in $\mathcal G_{n,p}$ at distance $d+1$ from each other is Poisson with mean $\frac12 e^{-C}$, so there are only $O(1)$ such pairs with probability $1-o(1)$. As a result, such a random graph is $\epsilon$-distance-uniform with $\epsilon = O(\frac{\log n}{r})$, and critical distance $d = \log_r n + O(1)$. This example provides a compelling image of what distance-uniform graphs look like: if the breadth-first search tree from each vertex grows at the same constant rate, then most other vertices will be reached in the same step. In any graph that is distance-uniform for a similar reason, the critical distance $d$ will be at most logarithmic in $n$. In fact, Alon et al. conjecture that all distance-uniform graphs have diameter $O(\log n)$. Alon et al. prove an upper bound of $O(\frac{\log n}{\log \epsilon^{-1}})$ in a special case: for $\epsilon$-distance-uniform graphs with $\epsilon<\frac14$ that are Cayley graphs of Abelian groups. In this case, if $G$ is the Cayley graph of an Abelian group $A$ with respect to a generating set $S$, one form of Plünnecke’s inequality (see, e.g., [@tao06]) says that the sequence $$|\underbrace{S + S + \dots + S}_k|^{1/k}$$ is decreasing in $k$. Since $S, S+S, S+S+S, \dots$ are precisely the sets of vertices which can be reached by $1, 2, 3, \dots$ steps from 0, this inequality quantifies the idea of constant-rate growth in the breadth-first search tree; Theorem 15 in [@alon13] makes this argument formal. Our results ----------- In this paper, we disprove Alon et al.’s conjecture by constructing distance-uniform graphs that do not share this behavior, and whose diameter is exponentially larger than these examples. We also prove an upper bound on the critical distance (and diameter) showing our construction to be best possible in one asymptotic sense. 
Specifically, we show the following two results: \[thm:intro-upper\] In any $\epsilon$-distance-uniform graph with $n$ vertices, the critical distance $d$ satisfies $$d = 2^{O\left(\frac{\log n}{\log \epsilon^{-1}}\right)}.$$ \[thm:intro-lower\] For any $\epsilon$ and $n$ with $\frac1n \le \epsilon \le \frac1{\log n}$, there exists an $\epsilon$-distance-uniform graph on $n$ vertices with critical distance $$d = 2^{\Omega\left(\frac{\log n}{\log \epsilon^{-1}}\right)}.$$ Note that, since a $\frac1{\log n}$-distance-uniform graph is also $\frac12$-distance-uniform, Theorem \[thm:intro-lower\] also provides a lower bound of $d = 2^{\Omega(\frac{\log n}{\log \log n})}$ for any $\epsilon > \frac1{\log n}$. Combined, these results prove that the maximum critical distance is $2^{\Theta(\frac{\log n}{\log \epsilon^{-1}})}$ whenever they both apply. A small gap remains for sufficiently large $\epsilon$: for example when $\epsilon$ is constant as $n \to \infty$. In this case, Theorem \[thm:intro-upper\] gives an upper bound on $d$ which is polynomial in $n$, while the lower bound of Theorem \[thm:intro-lower\] grows slower than any polynomial. The family of graphs used to prove Theorem \[thm:intro-lower\] is interesting in its own right. We give two different interpretations of the underlying structure of these graphs. First, we describe a combinatorial game, generalizing the well-known Tower of Hanoi puzzle, whose transition graph is $\epsilon$-distance-uniform and has large diameter. Second, we give a geometric interpretation, under which each graph in the family is the skeleton of the convex hull of an arrangement of points on a high-dimensional sphere. Upper bound =========== For a vertex $v$ of a graph $G$, let ${\Gamma_{r}(v)}$ denote the set $\{w \in V(G) \mid d(v,w) = r\}$: the vertices at distance exactly $r$ from $v$. In particular, ${\Gamma_{0}(v)} = \{v\}$ and ${\Gamma_{1}(v)}$ is the set of all vertices adjacent to $v$. Let $${N_{r}(v)} = \bigcup_{i=0}^r {\Gamma_{i}(v)}$$ denote the set of vertices within distance at most $r$ from $v$. Before proceeding to the proof of Theorem \[thm:intro-upper\], we begin with a simple argument that is effective for an $\epsilon$ which is very small: \[lemma:min-degree\] The minimum degree $\delta(G)$ of an $\epsilon$-distance-uniform graph $G$ satisfies $\delta(G) \ge \epsilon^{-1} - 1$. Suppose that $G$ is $\epsilon$-distance-uniform, $n$ is the number of vertices of $G$, and $d$ is the critical distance: for any vertex $v$, at least $(1-\epsilon)n$ vertices of $G$ are at distance exactly $d$ from $v$. Let $v$ be an arbitrary vertex of $G$, and fix an arbitrary breadth-first search tree $T$, rooted at $v$. We define the *score* of a vertex $w$ (relative to $T$) to be the number of vertices at distance $d$ from $v$ which are descendants of $w$ in the tree $T$. There are at least $(1-\epsilon)n$ vertices at distance $d$ from $v$, and all of them are descendants of some vertex in the neighborhood ${\Gamma_{1}(v)}$. Therefore the total score of all vertices in ${\Gamma_{1}(v)}$ is at least $(1-\epsilon)n$. On the other hand, if $w \in {\Gamma_{1}(v)}$, each vertex counted by the score of $w$ is at distance $d-1$ from $w$. Since at least $(1-\epsilon)n$ vertices are at distance $d$ from $w$, at most $\epsilon n$ vertices are at distance $d-1$, and therefore the score of $w$ is at most $\epsilon n$. 
In order for $|{\Gamma_{1}(v)}|$ scores of at most $\epsilon n$ to sum to at least $(1-\epsilon)n$, $|{\Gamma_{1}(v)}|$ must be at least $\frac{(1-\epsilon)n}{\epsilon n} = \epsilon^{-1} - 1$. This lemma is enough to show that in a $\frac1{\sqrt n}$-distance-uniform graph, the critical distance is at most $2$. Choose a vertex $v$: all but $\sqrt n$ of the vertices of $G$ are at the critical distance $d$ from $v$, and $\sqrt n - 1$ of the vertices are at distance $1$ from $v$ by Lemma \[lemma:min-degree\]. The remaining uncounted vertex is $v$ itself. It is impossible to have $d \ge 3$, as that would leave no vertices at distance $2$ from $v$. For larger $\epsilon$, the bound of Lemma \[lemma:min-degree\] becomes ineffective, but we can improve it by a more general argument of which Lemma \[lemma:min-degree\] is just a special case. \[lemma:arnau\] Let $G$ be an $\epsilon$-distance-uniform graph with critical distance $d$. Suppose that for some $r$ with $2r+1 \le d$, we have $|{N_{r}(v)}| \ge N$ for each $v \in V(G)$. Then we have $|{N_{3r+1}(v)}| \ge N\epsilon^{-1}$ for each $v \in V(G)$. Let $v$ be any vertex of $G$, and let $\{w_1, w_2, \dots, w_t\}$ be a maximal collection of vertices in ${\Gamma_{2r+1}(v)}$ such that $d(w_i, w_j) \ge 2r+1$ for each $i \ne j$ with $1 \le i,j \le t$. We claim that for each vertex $u \in {\Gamma_{d}(v)}$—for each vertex $u$ at the critical distance from $v$—there is some $i$ with $1 \le i \le t$ such that $u \in {N_{d-1}(w_i)}$. To see this, consider any shortest path from $v$ to $u$, and let $u_\pi \in {\Gamma_{2r+1}(v)}$ be the $(2r+1)$^th^ vertex along this path. (Here we use the assumption that $2r+1 \le d$.) From the maximality of $\{w_1, w_2, \dots, w_t\}$, it follows that $d(w_i, u_\pi) \le 2r$ for some $i$ with $1 \le i \le t$. But then, $$d(w_i, u) \le d(w_i, u_\pi) + d(u_\pi, u) \le 2r + (d - 2r-1) = d-1.$$ So $u \in {N_{d-1}(w_i)}$. To state this claim differently, the sets ${N_{d-1}(w_1)}, \dots, {N_{d-1}(w_t)}$ together cover ${\Gamma_{d}(v)}$. These sets are all small while the set they cover is large, so there must be many of them: $$(1-\epsilon)n \le |{\Gamma_{d}(v)}| \le \sum_{i=1}^t |{N_{d-1}(w_i)}| \le \sum_{i=1}^t \epsilon n = t \epsilon n,$$ which implies that $t \ge \frac{(1-\epsilon)n}{\epsilon n} = \epsilon^{-1} - 1$. The vertices $v, w_1, w_2, \dots, w_t$ are each at distance at least $2r+1$ from each other, so the sets ${N_{r}(v)}, {N_{r}(w_1)}, \dots, {N_{r}(w_t)}$ are disjoint. By the hypothesis of this lemma, each of these sets has size at least $N$, and we have shown that there are at least $\epsilon^{-1}$ sets. So their union has size at least $N\epsilon^{-1}$. Their union is contained in ${N_{3r+1}(v)}$, so we have $|{N_{3r+1}(v)}| \ge N\epsilon^{-1}$, as desired. We are now ready to prove Theorem \[thm:intro-upper\]. The strategy is to realize that the lower bounds on $|{N_{r}(v)}|$, which we get from Lemma \[lemma:arnau\], are also lower bounds on $n$, the number of vertices in the graph. By applying Lemma \[lemma:arnau\] iteratively for as long as we can, we can get a lower bound on $n$ in terms of $\epsilon$ and $d$, which translates into an upper bound on $d$ in terms of $\epsilon$ and $n$. More precisely, set $r_1 = 1$ and $r_k = 3r_{k-1} + 1$, a recurrence which has closed-form solution $r_k = \frac{3^k - 1}{2}$. Lemma \[lemma:min-degree\] tells us that in an $\epsilon$-distance-uniform graph $G$ with critical distance $d$, ${N_{r_1}(v)} \ge \epsilon^{-1}$. 
Lemma \[lemma:arnau\] is the inductive step: if, for all $v$, ${N_{r_k}(v)} \ge \epsilon^{-k}$, then ${N_{r_{k+1}}(v)} \ge \epsilon^{-(k+1)}$, as long as $2r_k + 1 \le d$. The largest $k$ for which $2r_k + 1 \le d$ is $k = {{\left\lfloor{\log_3 d}\right\rfloor}}$. So we can inductively prove that $$n \ge {N_{r_{k+1}}(v)} \ge \epsilon^{-({{\left\lfloor{\log_3 d}\right\rfloor}} + 1)}$$ which can be rearranged to get $$\frac{\log n}{\log \epsilon^{-1}} -1 \ge {{\left\lfloor{\log_3 d}\right\rfloor}}.$$ This implies that $$d \le 3^{\frac{\log n}{\log \epsilon^{-1}}} = 2^{O\left(\frac{\log n}{\log \epsilon^{-1}}\right)},$$ proving Theorem \[thm:intro-upper\]. Lower bound =========== To show that this bound on $d$ is tight, we need to construct an $\epsilon$-distance-uniform graph with a large critical distance $d$. We do this by defining a puzzle game whose state graph has this property. The Hanoi game -------------- We define a *Hanoi state* to be a finite sequence of nonnegative integers $\vec x = (x_1, x_2, \dots, x_k)$ such that, for all $i > 1$, $x_i \ne x_{i-1}$. Let $${{\mathcal H}}_{r,k} = \big\{ \vec x \in \{0,1,\dots, r\}^k : \vec x \mbox{ is a Hanoi state}\big\}.$$ For convenience, we also define a *proper Hanoi state* to be a Hanoi state $\vec x$ with $x_1 \ne 0$, and ${{\mathcal H}}_{r,k}^* \subset {{\mathcal H}}_{r,k}$ to be the set of all proper Hanoi states. While everything we prove will be equally true for Hanoi states and proper Hanoi states, it is more convenient to work with ${{\mathcal H}}_{r,k}^*$, because $|{{\mathcal H}}_{r,k}^*| = r^k$. In the *Hanoi game on ${{\mathcal H}}_{r,k}$*, an initial state $\vec a \in {{\mathcal H}}_{r,k}$ and a final state $\vec b \in {{\mathcal H}}_{r,k}$ are chosen. The state $\vec a$ must be transformed into $\vec b$ via a sequence of moves of two types: 1. An *adjustment* of $\vec x \in {{\mathcal H}}_{r,k}$ changes $x_k$ to any value in $\{0,1,\dots, r\}$ other than $x_{k-1}$. For example, $(1,2,3,4)$ can be changed to $(1,2,3,0)$ or $(1,2,3,5)$, but not $(1,2,3,3)$. 2. An *involution* of $\vec x \in {{\mathcal H}}_{r,k}$ finds the longest tail segment of $\vec x$ on which the values $x_k$ and $x_{k-1}$ alternate, and swaps $x_k$ with $x_{k-1}$ in that segment. For example, $(1,2,3,4)$ can be changed to $(1,2,4,3)$, or $(1,2,1,2)$ to $(2,1,2,1)$. We define the Hanoi game on ${{\mathcal H}}_{r,k}^*$ in the same way, but with the added requirement that all states involved should be proper Hanoi states. This means that involutions (or, in the case of $k=1$, adjustments) that would change $x_1$ to $0$ are forbidden. The name “Hanoi game” is justified because its structure is similar to the structure of the classical Tower of Hanoi puzzle. In fact, though we have no need to prove this, the Hanoi game on ${{\mathcal H}}_{3,k}^*$ is isomorphic to a Tower of Hanoi puzzle with $k$ disks. It is well-known that the $k$-disk Tower of Hanoi puzzle can be solved in $2^k-1$ moves, moving a stack of $k$ disks from one peg to another. In [@hinz92], a stronger statement is shown: only $2^k-1$ moves are required to go from any initial state to any final state. A similar result holds for the Hanoi game on ${{\mathcal H}}_{r,k}$: \[lemma:hanoi-diameter\] The Hanoi game on ${{\mathcal H}}_{r,k}$ (or ${{\mathcal H}}_{r,k}^*$) can be solved in at most $2^k-1$ moves for any initial state $\vec a$ and final state $\vec b$. 
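Before turning to the proofs, the move rules are easy to make concrete. The Python sketch below is our own illustration and not part of the original analysis: it implements adjustments and involutions for states in ${{\mathcal H}}_{r,k}$ and uses breadth-first search to check, for a small example, that every state with support disjoint from the starting state sits at distance exactly $2^k-1$ (this is the content of Lemma \[lemma:hanoi-game\] below).

```python
from collections import deque
from itertools import product

def hanoi_states(r, k):
    # all Hanoi states of length k with entries in {0, ..., r}
    return [x for x in product(range(r + 1), repeat=k)
            if all(x[i] != x[i - 1] for i in range(1, k))]

def involution(x):
    # swap x[-1] and x[-2] on the longest tail segment where the two values alternate
    a, b = x[-1], x[-2]
    j = len(x) - 1
    while j > 0 and x[j - 1] == (b if (len(x) - 1 - j) % 2 == 0 else a):
        j -= 1
    return x[:j] + tuple(b if v == a else a for v in x[j:])

def neighbours(x, r):
    # adjustments: change the last entry to anything other than itself and x[-2]
    adj = [x[:-1] + (v,) for v in range(r + 1)
           if v != x[-1] and (len(x) == 1 or v != x[-2])]
    return adj + ([involution(x)] if len(x) > 1 else [])

def distances_from(a, r):
    # breadth-first search in the Hanoi graph
    dist, queue = {a: 0}, deque([a])
    while queue:
        x = queue.popleft()
        for y in neighbours(x, r):
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist

r, k = 4, 3
a = (1, 2, 1)
dist = distances_from(a, r)
disjoint = [b for b in hanoi_states(r, k) if set(a).isdisjoint(b)]
print({dist[b] for b in disjoint})   # expected: {7}, i.e. 2**k - 1
```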
We induct on $k$ to show the following stronger statement: for any initial state $\vec a$ and final state $\vec b$, a solution of length at most $2^k-1$ exists for which any intermediate state $\vec x$ has $x_1 = a_1$ or $x_1 = b_1$. This auxiliary condition also means that if $\vec a, \vec b \in {{\mathcal H}}_{r,k}^*$, all intermediate states will also stay in ${{\mathcal H}}_{r,k}^*$. When $k=1$, a single adjustment suffices to change $\vec a$ to $\vec b$, which satisfies the auxiliary condition. For $k>1$, there are two possibilities when changing $\vec a $ to $\vec b$: - If $a_1 = b_1$, then consider the Hanoi game on ${{\mathcal H}}_{r,k-1}$ with initial state $(a_2, a_3, \dots, a_k)$ and final state $(b_2, b_3, \dots, b_k)$. By the inductive hypothesis, a solution using at most $2^{k-1} - 1$ moves exists. Apply the same sequence of adjustments and involutions in ${{\mathcal H}}_{r,k}$ to the initial state $\vec a$. This has the effect of changing the last $k-1$ entries of $\vec a$ to $(b_2, b_3, \dots, b_k)$. To check that we have obtained $\vec b$, we need to verify that the first entry is left unchanged. The auxiliary condition of the inductive hypothesis tells us that all intermediate states have $x_2 = a_2$ or $x_2 = b_2$. Any move that leaves $x_2$ unchanged also leaves $x_1$ unchanged. A move that changes $x_2$ must be an involution swapping the values $a_2$ and $b_2$; however, $x_1 = a_1 \ne a_2$, and $x_1 = b_1 \ne b_2$, so such an involution also leaves $x_1$ unchanged. Finally, the new auxiliary condition is satisfied, since we have $x_1 = a_1 = b_1$ for all intermediate states. - If $a_1 \ne b_1$, begin by taking $2^{k-1}-1$ moves to change $\vec a$ to $(a_1, b_1, a_1, b_1, \dots)$ while satisfying the auxiliary condition, as in the first case. An involution takes this state to $(b_1, a_1, b_1, a_1, \dots)$; this continues to satisfy the auxiliary condition. Finally, $2^{k-1}-1$ more moves change this state to $\vec b$, as in the first case, for a total of $2^k-1$ moves. If we obtain the same results as in the standard Tower of Hanoi puzzle, why use the more complicated game in the first place? The reason is that in the classical problem, we cannot guarantee that any starting state would have a final state $2^k-1$ moves away. With the rules we define, as long as the parameters are chosen judiciously, each state $\vec a \in {{\mathcal H}}_{r,k}$ is part of many pairs $(\vec a, \vec b)$ for which the Hanoi game requires $2^k-1$ moves to solve. The following lemma almost certainly does not characterize such pairs, but provides a simple sufficient condition that is strong enough for our purposes. \[lemma:hanoi-game\] The Hanoi game on ${{\mathcal H}}_{r,k}$ (or ${{\mathcal H}}_{r,k}^*$) requires exactly $2^k-1$ moves to solve if $\vec a$ and $\vec b$ are chosen with disjoint support: that is, $a_i \ne b_j$ for all $i$ and $j$. Since Lemma \[lemma:hanoi-diameter\] proved an upper bound of $2^k-1$ for all pairs $(\vec a, \vec b)$, we only need to prove a lower bound in this case. Once again, we induct on $k$. When $k=1$, a single move is necessary to change $\vec a$ to $\vec b$ if $\vec a \ne \vec b$, verifying the base case. Consider a pair $\vec a, \vec b \in {{\mathcal H}}_{r,k}$ with disjoint support, for $k > 1$. Moreover, assume that $\vec a$ and $\vec b$ are chosen so that, of all pairs with disjoint support, $\vec a$ and $\vec b$ require the least number of moves to solve the Hanoi game. 
(Since we are proving a lower bound on the number of moves necessary, this assumption is made without loss of generality.) In a shortest path from $\vec a$ to $\vec b$, every other move is an adjustment: if there were two consecutive adjustments, the first adjustment could be skipped, and if there were two consecutive involutions, they would cancel out and both could be omitted. Moreover, the first move is an adjustment: if we began with an involution, then the involution of $\vec a$ would be a state closer to $\vec b$ yet still with disjoint support to $\vec b$, contrary to our initial assumption. By the same argument, the last move must be an adjustment. Given a state $\vec x \in {{\mathcal H}}_{r,k}$, let its *abbreviation* be $\vec x' = (x_1, x_2, \dots, x_{k-1}) \in {{\mathcal H}}_{r,k-1}$. An adjustment of $\vec x$ has no effect on $\vec x'$, since only $x_k$ is changed. If $x_k \ne x_{k-2}$, then an involution of $\vec x$ is an adjustment of $\vec x'$, changing its last entry $x_{k-1}$ to $x_k$. Finally, if $x_k = x_{k-2}$, then an involution of $\vec x$ is also an involution of $\vec x'$. Therefore, if we take a shortest path from $\vec a$ to $\vec b$, omit all adjustments, and then abbreviate all states, we obtain a solution to the Hanoi game on ${{\mathcal H}}_{r,k-1}$ that takes $\vec a'$ to $\vec b'$. By the inductive hypothesis, this solution contains at least $2^{k-1} - 1$ moves, since $\vec a'$ and $\vec b'$ have disjoint support. Therefore the shortest path from $\vec a$ to $\vec b$ contains at least $2^{k-1}-1$ involutions. Since the first, last, and every other move is an adjustment, there must be $2^{k-1}$ adjustments as well, for a total of $2^k-1$ moves. Now let the *Hanoi graph $G_{r,k}^*$* be the graph with vertex set ${{\mathcal H}}_{r,k}^*$ and edges joining each state to all the states that can be obtained from it by a single move. Since an adjustment can be reversed by another adjustment, and an involution is its own inverse, $G_{r,k}^*$ is an undirected graph. For any state $\vec a \in {{\mathcal H}}_{r,k}^*$, there are at least $(r-k)^k$ other states with disjoint support to $\vec a$, out of $|{{\mathcal H}}_{r,k}^*| = r^k$ other states, forming a $\left(1 - \frac{k}{r}\right)^k > 1 - \frac{k^2}{r}$ fraction of all the states. By Lemma \[lemma:hanoi-game\], each such state $\vec b$ is at distance $2^k-1$ from $\vec a$ in the graph $G_{r,k}^*$, so $G_{r,k}^*$ is $\epsilon$-distance uniform with $\epsilon = \frac{k^2}{r}$, $n = r^k$ vertices, and critical distance $d = 2^k-1$. Having established the graph-theoretic properties of $G_{r,k}^*$, we now prove Theorem \[thm:intro-lower\] by analyzing the asymptotic relationship between these parameters. Begin by assuming that $n = 2^{2^m}$ for some $m$. Choose $a$ and $b$ such that $a+b=m$ and $$\frac{2^{2b}}{2^{2^a}} \le \epsilon < \frac{2^{2(b+1)}}{2^{2^{a-1}}},$$ which is certainly possible since $\frac{2^0}{2^{2^m}} = \frac1n \le \epsilon$ and $\frac{2^{2m}}{2^{2^0}} > 1 \ge \epsilon$. Setting $r = 2^{2^a}$ and $k = 2^b$, the Hanoi graph $G_{r,k}^*$ has $n$ vertices and is $\epsilon$-distance uniform, since $\frac{k^2}{r} \le \epsilon$. Moreover, our choice of $a$ and $b$ guarantees that $\epsilon < \frac{4k^2}{\sqrt{r}}$, or $\log \epsilon^{-1} \ge \frac12 \log r - 2 \log 2k$. Since $n = r^k$, $\log n = k \log r$, so $$\log \epsilon^{-1} \ge \frac{1}{2k} \log n - 2 \log 2k.$$ We show that $k \ge \frac{\log n}{6 \log \epsilon^{-1}}$. 
Since $\epsilon \le \frac1{\log n}$, this is automatically true if $k \ge \frac{\log n}{6 \log \log n}$, so assume that $k < \frac{\log n}{6 \log \log n}$. Then $$\frac{1}{3k} \log n > 2 \log \log n > 2 \log 2k,$$ so $$\log \epsilon^{-1} \ge \frac1{2k} \log n - 2 \log 2k > \frac{1}{2k} \log n - \frac1{3k} \log n = \frac1{6k} \log n,$$ which gives us the desired inequality $k \ge \frac{\log n}{6 \log \epsilon^{-1}}$. The Hanoi graph $G_{r,k}^*$ has critical distance $d = 2^k - 1 = 2^{\Omega(\frac{\log n}{\log \epsilon^{-1}})}$, so the proof is finished in the case that $n$ has the form $2^{2^m}$ for some $m$. For a general $n$, we can choose $m$ such that $2^{2^m} \le n < 2^{2^{m+1}} = \left(2^{2^m}\right)^2$, which means in particular that $2^{2^m} \ge \sqrt n$. If $\epsilon < \frac{2}{\sqrt n}$, then the requirement of a critical distance of $2^{\Omega(\frac{\log n}{\log \epsilon^{-1}})}$ is only a constant lower bound, and we may take the graph $K_n$. Otherwise, by the preceding argument, there is an $\frac{\epsilon}{2}$-distance-uniform Hanoi graph with $2^{2^m}$ vertices; its critical distance $d$ satisfies $$d \ge 2^{\Omega\left(\frac{\log \sqrt{n}}{\log (\epsilon/2)^{-1}}\right)} = 2^{\Omega\left(\frac{\log n}{\log \epsilon^{-1}}\right)}.$$ To extend this to an $n$-vertex graph, take the blow-up of the $2^{2^m}$-vertex Hanoi graph, replacing every vertex by either $\lfloor n/2^{2^m} \rfloor$ or $\lceil n/2^{2^m} \rceil$ copies. Whenever $v$ and $w$ were at distance $d$ in the original graph, the copies of $v$ and $w$ will be at distance $d$ in the blow-up. The difference between floor and ceiling may slightly ruin distance uniformity, but the graph started out $\frac{\epsilon}{2}$-distance-uniform, and $\lceil n/2^{2^m} \rceil$ differs from $\lfloor n/2^{2^m} \rfloor$ at most by a factor of 2. Even in the worst case, where for some vertex $v$ the $\frac{\epsilon}{2}$-fraction of vertices not at distance $d$ from $v$ all receive the larger number of copies, the resulting $n$-vertex graph will be $\epsilon$-distance-uniform. Points on a sphere ------------------ In this section, we identify $G_{r,k}$, the graph of the Hanoi game on ${{\mathcal H}}_{r,k}$, with a graph that arises from a geometric construction. Fix a dimension $r$. We begin by placing $r+1$ points on the $r$-dimensional unit sphere arbitrarily in general position (though, for the sake of symmetry, we may place them at the vertices of an equilateral $r$-simplex). We identify these points with a graph by taking the 1-skeleton of their convex hull. In this starting configuration, we simply get $K_{r+1}$. Next, we define a truncation operation on a set of points on the $r$-sphere. Let $\delta>0$ be sufficiently small that a sphere of radius $1-\delta$, concentric with the unit sphere, intersects each edge of the 1-skeleton in two points. The set of these intersection points is the new arrangement of points obtained by the truncation; they all lie on the smaller sphere, and for convenience, we may scale them so that they are once again on the unit sphere. An example of this is shown in Figure \[fig:truncation\].
[0.3]{} ![An example of truncation[]{data-label="fig:truncation"}](tetrahedron "fig:"){width="\textwidth"} [0.3]{} ![An example of truncation[]{data-label="fig:truncation"}](truncated-tetrahedron "fig:"){width="\textwidth"} Starting with a set of $r+1$ points on the $r$-dimensional sphere and applying $k$ truncations produces a set of points such that the 1-skeleton of their convex hull is isomorphic to the graph $G_{r,k}$. We induct on $k$. When $k=1$, the graph we get is $K_{r+1}$, which is isomorphic to $G_{r,1}$. From the geometric side, we add an auxiliary statement to the induction hypothesis: given points $p, q_1, q_2$ such that, in the associated graph, $p$ is adjacent to both $q_1$ and $q_2$, there is a 2-dimensional face of the convex hull containing all three points. This is easily verified for $k=1$. Assuming that the induction hypotheses are true for $k-1$, fix an isomorphism of $G_{r,k-1}$ with the set of points after $k-1$ truncations, and label the points with the corresponding vertices of $G_{r,k-1}$. We claim that the graph produced after one more truncation has the following structure: 1. A vertex that we may label $(\vec x, \vec y)$ for every ordered pair of adjacent vertices of $G_{r,k-1}$. 2. An edge between $(\vec x, \vec y)$ and $(\vec y, \vec x)$. 3. An edge between $(\vec x, \vec y)$ and $(\vec x, \vec z)$ whenever both are vertices of the new graph. The first claim is immediate from the definition of truncation: we obtain two vertices from the edge between $\vec x$ and $\vec y$. We choose to give the name $(\vec x, \vec y)$ to the vertex closer to $\vec x$. The edge between $\vec x$ and $\vec y$ remains an edge, and now joins the vertices $(\vec x, \vec y)$ and $(\vec y, \vec x)$, verifying the second claim. By the auxiliary condition of the induction hypothesis, the vertices labeled $\vec x$, $\vec y$, and $\vec z$ lie on a common 2-face whenever $\vec x$ is adjacent to both $\vec y$ and $\vec z$. After truncation, $(\vec x, \vec y)$ and $(\vec x, \vec z)$ will also be on this 2-face; since they are adjacent along the boundary of that face, and extreme points of the convex hull, they are joined by an edge, verifying the third claim. To finish the geometric part of the proof, we verify that the auxiliary condition remains true. There are two cases to check. For a vertex labeled $(\vec x, \vec y)$, if we choose the neighbors $(\vec x, \vec z)$ and $(\vec x, \vec w)$, then any two of them are joined by an edge, and therefore they must lie on a common 2-dimensional face. If we choose the neighbors $(\vec x, \vec z)$ and $(\vec y, \vec x)$, then the points continue to lie on the 2-dimensional face inherited from the face through $\vec x$, $\vec y$, and $\vec z$ of the previous convex hull. Now it remains to construct an isomorphism between the 1-skeleton graph of the truncation, which we will call $T$, and $G_{r,k}$. We identify the vertex $(\vec x, \vec y)$ of $T$ with the vertex $(x_1, x_2, \dots, x_{k-1}, y_{k-1})$ of $G_{r,k}$. Since $x_{k-1} \ne y_{k-1}$ after any move in the Hanoi game, this $k$-tuple really is a Hanoi state. Conversely, any Hanoi state $\vec z \in {{\mathcal H}}_{r,k}$ corresponds to a vertex of $T$: let $\vec x = (z_1, z_2, \dots, z_{k-1})$, and let $\vec y$ be the state obtained from $\vec x$ by either an adjustment of $z_{k-1}$ to $z_k$, if $z_k \ne z_{k-2}$, or else an involution, if $z_k = z_{k-2}$. Therefore the map we define is a bijection between the vertex sets. 
Both $T$ and $G_{r,k}$ are $r$-regular graphs, so it suffices to show that each edge of $T$ corresponds to an edge in $G_{r,k}$. Consider an edge joining $(\vec x, \vec y)$ with $(\vec x, \vec z)$ in $T$. This corresponds to vertices $(x_1, x_2, \dots, x_{k-1}, y_{k-1})$ and $(x_1, x_2, \dots, x_{k-1}, z_{k-1})$ in $G_{r,k}$; these are adjacent, since we can obtain one from the other by an adjustment. Next, consider an edge joining $(\vec x, \vec y)$ to $(\vec y, \vec x)$. If $\vec x$ and $\vec y$ are related by an adjustment in $G_{r,k-1}$, then they have the form $(x_1, \dots, x_{k-2}, x_{k-1})$ and $(x_1, \dots, x_{k-2}, y_{k-1})$. The vertices corresponding to $(\vec x, \vec y)$ and $(\vec y, \vec x)$ in $G_{r,k}$ are $(x_1, \dots, x_{k-2}, x_{k-1}, y_{k-1})$ and $(x_1, \dots, x_{k-2}, y_{k-1}, x_{k-1})$, and one can be obtained from the other by an involution. Finally, if $\vec x$ and $\vec y$ are related by an involution in $G_{r,k-1}$, then that involution swaps $x_{k-1}$ and $y_{k-1}$. Therefore such an involution in $G_{r,k}$ will take $(x_1, \dots, x_{k-1}, y_{k-1})$ to $(y_1, \dots, y_{k-1}, x_{k-1})$, and the vertices corresponding to $(\vec x, \vec y)$ and $(\vec y, \vec x)$ are adjacent in $G_{r,k}$. [10]{} Noga Alon, Erik D. Demaine, Mohammad T. Hajiaghayi, and Tom Leighton. Basic network creation games. , 27(2):656–668, 2013. C. Alvarez and A. Messegué. Network creation games: Structure vs anarchy. , 2017. Davide Bilò, Luciano Gualà, Stefano Leucci, and Guido Proietti. The max-distance network creation game on general host graphs. , 573:43–53, 2015. Davide Bilò, Luciano Gualà, and Guido Proietti. Bounded-distance network creation games. , 3(3):Art. 16, 20, 2015. Béla Bollobás. The diameter of random graphs. , 267(1):41–52, 1981. Ulrik Brandes, Martin Hoefer, and Bobo Nick. Network creation games with disconnected equilibria. In [*International Workshop on Internet and Network Economics*]{}, pages 394–401. Springer, 2008. Erik D. Demaine, Mohammadtaghi Hajiaghayi, Hamid Mahini, and Morteza Zadimoghaddam. The price of anarchy in network creation games. , 8(2):Art. 13, 13, 2012. Shayan Ehsani, Saber Shokat Fadaee, Mohammadamin Fazli, Abbas Mehrabian, Sina Sadeghian Sadeghabad, Mohammadali Safari, and Morteza Saghafian. A bounded budget network creation game. , 11(4):Art. 34, 25, 2015. Alex Fabrikant, Ankur Luthra, Elitza Maneva, Christos H. Papadimitriou, and Scott Shenker. On a network creation game. In [*Proceedings of the twenty-second annual symposium on Principles of distributed computing*]{}, pages 347–351. ACM, 2003. Andreas M. Hinz. Shortest paths between regular states of the Tower of Hanoi. , 63(1-2):173–181, 1992. Terence Tao and Van H. Vu. , volume 105 of [*Cambridge studies in advanced mathematics*]{}. Cambridge University Press, 2006. [^1]: University of Illinois at Urbana-Champaign, Department of Mathematics. E-mail: `mlavrov@illinois.edu`. [^2]: Carnegie Mellon University, Department of Mathematical Sciences. E-mail: `ploh@cmu.edu`. [^3]: Polytechnic University of Catalonia, Computer Science Department. E-mail: `messegue@cs.upc.edu`.
{ "pile_set_name": "ArXiv" }
--- abstract: | Based directly on the microscopic lattice dynamics, a simple high temperature expansion can be devised for non-equilibrium steady states. We apply this technique to investigate the disordered phase and the phase diagram for a driven bilayer lattice gas at half filling. Our approximation captures the phases first observed in simulations, provides estimates for the transition lines, and allows us to compute signature observables of non-equilibrium dynamics, namely, particle and energy currents. Its focus on non-universal quantities offers a useful analytic complement to field-theoretic approaches.\ **[KEY WORDS]{}: Non-equilibrium steady states; driven lattice gases; high temperature series expansion.** address: | $^1$Department of Physics and Engineering,\ Washington and Lee University, Lexington, VA 24450;\ $^2$Center for Stochastic Processes in Science and Engineering,\ Physics Department, Virginia Tech, Blacksburg, VA 24061-0435, USA. author: - 'I. Mazilu$^1$ and B. Schmittmann$^2$' date: 'January 22, 2003 ' title: High temperature expansion for a driven bilayer system --- epsf.sty Introduction ============ Many-particle systems in a state of thermal equilibrium are the exception, rather than the rule. Physical reality is overwhelmingly in a far-from-equilibrium state. Examples range from living cells and weather patterns to ripples on water and sand. As we leave the framework of standard Gibbs ensemble theory for equilibrium systems, we have to search for new avenues and tools, seeking to understand and classify non-equilibrium behavior. As a first step along this road, the study of the simplest generalizations of equilibrium systems, i.e., [*non-equilibrium steady states*]{} (NESS), has been particularly fruitful [@SZ-rev; @other-revs]. Progress has relied predominantly on simulations, mean-field theory and renormalization group analyses for simple model systems. A class of models which exhibit especially interesting behavior are driven diffusive systems. Microscopically, these are lattice gases, consisting of one or more species of particles and holes, whose densities are conserved. An external driving force, combined with suitable boundary conditions, maintains a NESS. In the simplest case [@KLS], a uniform bias, or drive, $E$, is imposed on an Ising lattice gas such that a nonzero steady-state mass current is induced. This model differs significantly from the usual Ising model: it displays generic long-range correlations [@KLS; @ZWLV; @GLMS], and belongs to a non-equilibrium universality class [@crit] with upper critical dimension $d_{c}=5$. The ordered phase is phase-separated into two strips of high vs low density aligned with the bias. In contrast to equilibrium, bulk and interfacial properties are inextricably intertwined here [@lowT]. To avoid the complications due to the presence of interfaces, a bilayer structure was suggested [@KKM]: in the two-dimensional case, a second lattice was introduced, allowing for particle-hole exchanges between each site and its mirror image. This bilayer system is half filled with particles, and both layers are driven in the same direction. In the absence of any energetic couplings between the two layers, it was hoped that typical ordered configurations would show [*homogeneous*]{} densities on each layer, one almost full and the other nearly empty. 
Remarkably, however, this expectation proved too naive: Monte Carlo simulations [@2l-early] showed a sequence of [*two*]{} phase transitions, as the temperature is lowered: the first transition takes the system from a disordered (D) phase to a strip-like (S) structure showing phase-separation [*within each layer*]{}, with interfaces parallel to the drive and ‘on top of’ one another. The anticipated “full-empty” (FE) phase, with uniform densities on both layers, only emerges after a second transition which occurs at a lower temperature. Once an interaction $J$, of either sign, between nearest neighbors on different layers is introduced, the full phase diagram in ($J,E$) space can be mapped out [@HZS; @CW], using Monte Carlo simulations. As one might expect, the S (FE) phase dominates for attractive (repulsive) cross-layer coupling $J$. Remarkably, however, there is a small but finite region where the S-phase prevails even though the cross-layer coupling is weakly repulsive (cf. Fig. 1). The presence of this domain puts the two transitions, observed for $J=0$, into perspective. We note for completeness that universal properties along the lines of continuous transitions have been analyzed in [@TSZ] with the help of renormalized field theory. To provide additional motivation for the study of layered structures, we note that multilayer models have a long history in equilibrium statistical mechanics [@ballentine; @binder; @hansen]. On the theoretical side, they allow for the study of dimensional crossover [@dim-cross]; on the more applied side, they provide natural models for the analysis of intercalated systems [@intercalation], interacting solid surfaces or thin films [@ferrenberg]. Since intercalated systems are often driven by chemical gradients or electric fields, to speed the diffusion of foreign atoms into the host material, it is quite natural to study driven layered structures. Simulations rely, of course, on discrete lattice models. In contrast, field theories operate in the continuum, and thus, all discrete degrees of freedom have to be coarse-grained before these powerful techniques can be applied. In the process, non-universal information is lost, such as, e.g., the location of transition lines in the phase diagram. It is therefore desirable to identify a second analytic approach which is based directly on the microscopic model and thus complements both simulations and continuum theories. Fortunately, high temperature expansion techniques [@HTS-revs] can be generalized to interacting driven lattice gases [@ZWLV; @SZ; @LZS]. For the single-layer case, two-point correlation functions can be computed approximately [@ZWLV; @SZ] and display the expected power law decays in the steady state. With some care, the approximate location of order-disorder transitions can be extracted and compared to simulation results [@SZ]. Given the nature of the approximation, [*quantitative*]{} accuracy cannot be expected, but the [*qualitative*]{} agreement of data and approximation is remarkably good. While the high temperature expansion is quite successful for the usual driven lattice gas, it is not clear to what extent it is capable of capturing the main features of other driven systems. This motivates the work presented in this paper, namely, the analysis of the bilayer system with this technique. Within a first-order approximation, we compute the two-point correlation functions and several related quantities, such as the particle current and the energy flux through the system.
We extract the approximate location of the continuous transition lines and compare our results to the Monte Carlo data. As in the single-layer case, the qualitative features of the transition lines are reproduced as well as can be expected. Some limitations of the method will be discussed. This paper is organized as follows. We first introduce the bilayer model. After a brief summary of the high temperature expansion, we derive the closed set of equations satisfied by the two-point functions. We then obtain the solutions and extract the transition lines. Next, we show how the mass and energy currents through the system can be expressed in terms of pair correlations. We conclude with some comments and open questions. The bilayer model ================= A variant of the driven Ising model [@KLS], our model consists of two square lattices, one stacked above the other, resulting in a bilayer structure of size $L^{2}\times 2$. Each lattice site $\vec{r}\equiv (x,y,z)$, with $x,y=1,2,...,L$ and $z=0,1$, carries a spin variable $s(\vec{r})=\pm 1 $. Often, we also use lattice gas language, mapping spins into particles or holes, via $s(\vec{r})\equiv 2n(\vec{r})-1$. The local occupation variable $n(\vec{r})$ takes the values $1$ or $0$, indicating whether a particle is present or not. The total magnetization, $\sum_{\vec{r}}s(\vec{r})$, is fixed at zero so that the Ising critical point can be accessed. Within each layer, nearest-neighbor spins interact through a ferromagnetic exchange coupling $J_{0}>0$; in contrast, the cross-layer interaction $J$, which couples spins $s(x,y,0)$ and $s(x,y,1)$, can take both signs. These choices are motivated by the physics of intercalated systems [@intercalation]. Thus, the Hamiltonian of the system can be written in the form $$H=-J_{0}\sum_{z}\sum_{nn}s(x,y,z)s(x^{\prime },y^{\prime },z)-2J\sum_{x,y}s(x,y,0)s(x,y,1) \label{H}$$ where $\sum_{nn}$ denotes the sum over all nearest-neighbor pairs $(x,y,z)$ and $(x^{\prime },y^{\prime },z)$ within the same plane. A heat bath at temperature $T$ is coupled to the system, in order to model thermal fluctuations. We use fully periodic boundary conditions in all directions; hence the factor of $2$ in front of the cross-layer coupling $J$. In the absence of the drive, particles hop to empty nearest-neighbor sites according to the usual Metropolis [@Metropolis] rates, $\min \left\{ 1,\exp \left( -\beta \Delta H\right) \right\} $, where $\Delta H$ is the energy difference due to the jump. Respecting the conservation of density, the phase diagram of this system is easily found. At high temperatures, a disordered phase persists, characterized by correlations which fall off exponentially. At a critical temperature $T_{c}(J)$, a continuous transition occurs into the S (FE) phase for $J>0$ ($J<0$). At $J=0$, the critical temperature takes the Onsager value [@Onsager] $T_{c}(0)=2.269\ldots J_{0}/k_{B}$. For finite $J$, $T_{c}(J)$ is even in $J$, due to a simple gauge symmetry, and increases monotonically with $\left| J\right| $. For $J\rightarrow \pm \infty $, nearest-neighbor spin pairs, with the partners located on different layers, combine into dimers which couple to neighboring dimers with strength $2J_{0}$. As a result, the critical temperature approaches the limit $T_{c}(\pm \infty )=2T_{c}(0)$. The line $J=0$, $T<T_{c}(0)$ is a line of first-order transitions between the S and FE phases. It ends in a bicritical point at $J=0$, $T=T_{c}(0)$.
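As a concrete illustration of the unbiased dynamics described above, here is a minimal Python sketch of a single particle-hole exchange attempt with the Metropolis rate $\min\{1,\exp(-\beta\Delta H)\}$. It is only a schematic stand-in for the actual Monte Carlo simulations, and the array layout and function names are our own.

```python
import numpy as np

def local_energy(s, x, y, z, J0, J):
    # energy of all bonds touching site (x, y, z) for the bilayer Hamiltonian;
    # s has shape (L, L, 2) with entries +-1, periodic in x and y
    L = s.shape[0]
    nn = (s[(x + 1) % L, y, z] + s[(x - 1) % L, y, z]
          + s[x, (y + 1) % L, z] + s[x, (y - 1) % L, z])
    return -J0 * s[x, y, z] * nn - 2 * J * s[x, y, z] * s[x, y, 1 - z]

def metropolis_exchange(s, site_a, site_b, J0, J, beta, rng):
    # attempt to exchange the contents of the nearest-neighbor sites site_a, site_b
    if s[site_a] == s[site_b]:
        return False                                   # particle-particle or hole-hole: no move
    before = local_energy(s, *site_a, J0, J) + local_energy(s, *site_b, J0, J)
    s[site_a], s[site_b] = s[site_b], s[site_a]
    after = local_energy(s, *site_a, J0, J) + local_energy(s, *site_b, J0, J)
    if rng.random() < min(1.0, np.exp(-beta * (after - before))):
        return True                                    # accept
    s[site_a], s[site_b] = s[site_b], s[site_a]        # reject: undo the exchange
    return False

# half-filled L x L x 2 configuration, one attempted in-plane exchange
rng = np.random.default_rng(0)
L = 16
s = rng.permutation(np.repeat([1, -1], L * L)).reshape(L, L, 2)
metropolis_exchange(s, (3, 4, 0), (3, 5, 0), J0=1.0, J=0.5, beta=0.4, rng=rng)
```

Since only the bonds attached to the two sites can change, $\Delta H$ is evaluated locally; the bond shared by the pair appears in both the “before” and “after” sums but is unchanged by the exchange, so it drops out of the difference.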
To drive the system out of equilibrium, we apply a bias (an “electric” field) $\vec{E}$ along the positive $x$-axis. The contents of two sites, $\vec{r}$ and $\vec{r}+\hat{a}$, separated by a (unit) lattice vector $\hat{a}$, are exchanged according to the rate $$c(\vec{r},\vec{r}+\hat{a};\left\{ s\right\} )=\min \left\{ 1,\exp \left[ -\beta \Delta H+\beta \,\hat{a} \cdot \vec{E}\,(n(\vec{r})-n(\vec{r}+\hat{a}))\right] \right\} \label{rates}$$ The argument $\left\{ s\right\}$ reminds us that the rate depends on a local neighborhood of the central pair. Due to $E$, particle hops against the drive become unfavorable. In conjunction with periodic boundary conditions in the $x$- and $y$-directions, the system settles into a non-equilibrium steady state with a net particle current. The phase diagram, resulting from Monte Carlo simulations at $J_{0}=1$ and infinite $E$, is shown in Fig. 1. The same phases and transitions are found, but the bicritical point and its attached first-order line are shifted to higher $T$ and into the $J<0$ region. Thus, the S phase is observed to be stable in a finite window of negative interlayer coupling, so that two transitions must occur along the $J=0$ axis. This discovery represents the most unexpected new characteristic of this driven diffusive system. We also note the decrease of the critical temperatures for very large $\left| J\right|$. In a recent paper [@CW], this phase diagram was extended to include [*unequal intra-layer*]{} attractive couplings. In this case, the bicritical point is shifted even further into the negative region of $J$ as the coupling transverse to the bias increases. We now turn to the analysis of this model in terms of a high temperature expansion.

High temperature expansion
==========================

The dynamics underlying the Monte Carlo simulations is easily expressed via a master equation. The latter provides a convenient starting point for a high temperature expansion. For simplicity, we take the thermodynamic limit within each plane, i.e., $L\rightarrow \infty$. Following [@ZWLV], we first derive the equations of motion for the two-point functions. By virtue of the familiar hierarchy, they are coupled to the three-point functions; however, we will argue that these are negligible (while non-zero, they are numerically rather small), so as to arrive at a closed system of equations for the two-point correlations. Temperature appears in these equations through the rates, via the combinations $\beta J$, $\beta J_{0}$, and $\beta E$. To preserve the non-equilibrium nature of our dynamics, we expand in $\beta J$ and $\beta J_{0}$, keeping $\beta E$ finite. Technically, this requires that $E$ always dominates the energetic contribution, i.e., $E>\Delta H$ for all jumps along $E$. To first order, a linear, [*inhomogeneous*]{} system of equations results, which can be solved exactly [@SZ] and forms the basis of our analysis.

The equations of motion and their solution.
-------------------------------------------

Before turning to any detailed calculations, let us introduce the key quantities. The [*two-point correlation function*]{} is defined as: $$G(\vec{r}-\vec{r}\,^{\prime })=\left\langle s(\vec{r})s(\vec{r}\,^{\prime })\right\rangle \label{CF}$$ where $\left\langle \cdot \right\rangle$ denotes the configurational average. Due to translation invariance, $G$ depends only on the difference of the two vectors.
Moreover, $G$ is invariant under reflection of one or several lattice directions; e.g., $G(x,y,z)=G(-x,y,z)$, etc. The correlation function at the origin is obviously unity, $G(\vec{0})=\left\langle s^{2}(% \vec{r})\right\rangle =1$. We also introduce the Fourier transform of $G$, i.e., the[** **]{}[*structure factor*]{}: $$S(k,p,q)\equiv \sum_{z=0,1}\sum_{x,y=-\infty }^{\infty }G(x,y,z)e^{-i(kx+py+qz)} \label{SF}$$ Since we take the thermodynamic limit $L\rightarrow \infty $, the wave vectors $k$ and $p$ are continuous, but restricted to the first Brillouin zone $[-\pi ,\pi ]$, while $q$ is discrete, taking only the two values $0$ and $\pi $. For completeness, we also give the inverse transform, $$\begin{aligned} G(x,y,z) &=&\frac{1}{2(2\pi )^{2}}\sum_{q=0,\pi }\int_{-\pi }^{+\pi }dk\int_{-\pi }^{+\pi }dpS(k,p,q)e^{i(kx+py+qz)} \nonumber \\ &\equiv &\int S(k,p,q)e^{i(kx+py+qz)} \label{FT}\end{aligned}$$ where the second line just defines some simplified notation. To set up the high temperature expansion, we first define the actual expansion parameters of our theory, namely $$\begin{aligned} K_{0} &\equiv &\beta J_{0} \nonumber \\ K &\equiv &\beta J \label{Ks}\end{aligned}$$For $K=K_{0}=0$, the steady-state distribution is exactly known [spitzer]{} to be uniform for all $E$: $P^{\ast }\propto 1$, so that we are expanding about a well-defined zeroth order solution. The correlation functions and structure factors for this limit are trivial, namely, $G(\vec{r% })=\delta _{\vec{r},\vec{0}}$ where $\delta $ denotes the Kronecker symbol, and $S(k,p,q)=1$. Returning to the interacting case, we note that $G(\vec{r}) $, for $\vec{r}\neq \vec{0}$, is already of first order in the small parameter. Similarly, we can write the structure factor as a sum of two terms. The first term is just the zeroth order solution, while the second, $% \tilde{S}$, carries the information about the interactions, $$S(k,p,q)=1+\tilde{S}(k,p,q) \label{S}$$so that we can recast $G(\vec{r})$, for $\vec{r}\neq \vec{0}$, in the form $$G(x,y,z)=\int \widetilde{S}(k,p,q)e^{i(kx+py+qz)}\text{\ for }x,y,z\neq 0 \label{G}$$ The exact equations of motion for $G$ are easily derived from the master equation [@ZWLV]: $$%TCIMACRO{\dfrac{d}{dt}}% %BeginExpansion {\displaystyle{d \over dt}}% %EndExpansion \left\langle s(\vec{r})s(\vec{r}\text{ }^{\prime })\right\rangle =\sum_{\vec{% x},\vec{x}^{\prime }}\left\langle s(\vec{r})s(\vec{r}\text{ }^{\prime })% \left[ s(\vec{x})s(\vec{x}\text{ }^{\prime })-1\right] c\left( \vec{x},\vec{x% }\text{ }^{\prime };\left\{ s\right\} \right) \right\rangle \label{eom}$$ Here, the sum runs over[* nearest-neighbor*]{} pairs ($\vec{x},\vec{x}$ $% ^{\prime }$) such that $\vec{x}\in \left\{ \vec{r},\vec{r}\text{ }^{\prime }\right\} $ but $\vec{x}$ $^{\prime }\notin \left\{ \vec{r},\vec{r}\text{ }% ^{\prime }\right\} $. Stationary correlations are obtained by setting the left hand side to zero. Clearly, jumps along and against all three lattice directions will contribute to the right hand side of Eq. (\[eom\]). To proceed, let us write the jump rates in a form which makes their dependence on the spin configuration $\left\{ s\right\} $ explicit, so that the configurational averages in Eq. (\[eom\]) can be performed. 
For [*infinite*]{} drive, a particle jumps along the field with rate unity, but never against it, so that the transition rates [*parallel to the field*]{} can be written as: $$c_{\Vert }^{\infty }\left( \vec{r},\vec{r}+\hat{x};\left\{ s\right\} \right) =% %TCIMACRO{\dfrac{1}{4}}% %BeginExpansion {\displaystyle{1 \over 4}}% %EndExpansion \left[ s(\vec{r})-s(\vec{r}+\hat{x})+2\right] \label{c_par_Einf}$$ Here, $\hat{x}$ is a unit vector in the positive $x$-direction. In the case of [*finite*]{} drive, our restriction $E>\Delta H$ ensures that jumps along $E$ still occur with unit rate, while those against $E$ are suppressed by a factor of $\exp \left[ -\beta \left( \Delta H+E\right) % \right] $. Defining $$\varepsilon \equiv e^{-\beta E} \label{eps}$$ Eq. (\[c\_par\_Einf\]) must be amended to $$c_{\Vert }\left( \vec{r},\vec{r}+\hat{x};\left\{ s\right\} \right) =% %TCIMACRO{\dfrac{1}{4}}% %BeginExpansion {\displaystyle{1 \over 4}}% %EndExpansion \left[ s(\vec{r})-s(\vec{r}+\hat{x})+2\right] +% %TCIMACRO{\dfrac{\varepsilon }{4}}% %BeginExpansion {\displaystyle{\varepsilon \over 4}}% %EndExpansion \left[ s(\vec{r}+\hat{x})-s(\vec{r})+2\right] \exp (-\beta \Delta H) \label{c_par}$$ Transverse to the field we have two jump rates, corresponding to the two transverse directions ($y$ and $z$). Both of these are regulated by the energy difference due to a jump: $$c_{\bot }\left( \vec{r},\vec{r}+\hat{a};\left\{ s\right\} \right) =\min \left\{ 1,\exp \left( -\beta \Delta H\right) \right\} \label{c_perp}$$ We are now ready to expand the rates in powers of $K$ and $K_{0}$ while keeping $\varepsilon $ finite: $$\begin{aligned} c_{\Vert }\left( \vec{r},\vec{r}+\hat{x};\left\{ s\right\} \right) &=&% %TCIMACRO{\dfrac{1}{4}}% %BeginExpansion {\displaystyle{1 \over 4}}% %EndExpansion \left[ s(\vec{r})-s(\vec{r}+\hat{x})+2\right] +% %TCIMACRO{\dfrac{\varepsilon }{4}}% %BeginExpansion {\displaystyle{\varepsilon \over 4}}% %EndExpansion \left[ s(\vec{r}+\hat{x})-s(\vec{r})+2\right] \left( 1-\beta \Delta H\right) +O(\beta ^{2}) \label{c_par_exp} \\ c_{\bot }\left( \vec{r},\vec{r}+\hat{a};\left\{ s\right\} \right) &=&1+\beta c_{2}\left( \vec{r},\vec{r}+\hat{a};\left\{ s\right\} \right) +O(\beta ^{2}) \label{c_perp_exp}\end{aligned}$$ with $$c_{2}\left( \vec{r},\vec{r}+\hat{a};\left\{ s\right\} \right) =-\frac{1}{2}% (\Delta H+\left| \Delta H\right| )$$ Given these simple forms for the rates, we can now derive the equations of motion satisfied by the pair correlations directly from Eq. (\[eom\]), following [@ZWLV]. A few details are outlined in the Appendix. Keeping only corrections to first order in $K$, $K_{0}$ and neglecting three-point correlations, we obtain a [*closed set of linear equations*]{} for $G(x,y,z)$.[** **]{}Since the dynamics is restricted to nearest-neighbor processes, it is not surprising that the equations involve an anisotropic lattice Laplacian acting on $G(x,y,z)$. For $x,y,z$ near the origin, the Laplacian may include the origin and will thus generate inhomogeneities in the system of equations. The detailed form depends on the chosen boundary conditions, and, of course, on the three parameters $K$, $K_{0}$, and $\varepsilon $. Below, we show the set of equations for fully periodic boundary conditions. 
The first three equations result from nearest neighbors of the origin, $\vec{% r}=(1,0,0)$, $(0,1,0)$, and $(0,0,1)$: $$\begin{aligned} \partial _{t}G(1,0,0) &=&(1+\varepsilon )[G(2,0,0)-G(1,0,0)]+4[G(1,1,0)-G(1,0,0)] \nonumber \\ &&+4[G(1,0,1)-G(1,0,0)]+2\varepsilon K_{0}+8K_{0} \nonumber \\ \partial _{t}G(0,1,0) &=&2(1+\varepsilon )[G(1,1,0)-G(0,1,0)]+2[G(0,2,0)-G(0,1,0)] \nonumber \\ &&+4[G(0,1,1)-G(0,1,0)]+4\varepsilon K_{0}+6K_{0} \label{G-10} \\ \partial _{t}G(0,0,1) &=&2(1+\varepsilon )[G(1,0,1)-G(0,0,1)]+4[G(0,1,1)-G(0,0,1)] \nonumber \\ &&+8K+8\varepsilon K \nonumber\end{aligned}$$    By virtue of invariance under reflections, these equations also hold for the other nearest neighbors $\vec{r}=(-1,0,0)$, $(0,-1,0)$, and $% (0,0,-1) $. The following three equations arise from the next-nearest neighbor sites, $\vec{r}=(1,1,0)$, $(0,1,1)$, and $(1,0,1)$, and their reflections: $$\begin{aligned} \partial _{t}G(1,1,0) &=&(1+\varepsilon )[G(2,1,0)+G(0,1,0)-2G(1,1,0)]+2[G(1,2,0)+G(1,0,0) \nonumber \\ &&-2G(1,1,0)]+\newline 4[G(1,1,1)-G(1,1,0)]-2K_{0}-2\varepsilon K_{0}\newline \nonumber \\ \partial _{t}G(0,1,1) &=&2(1+\varepsilon )[G(1,1,1)-G(0,1,1)]+2[G(0,2,1)+G(0,0,1)-2G(0,1,1)] \nonumber \\ &&+\newline 4[G(0,1,0)-G(0,1,1)]-4[K_{0}+K]\newline \label{G-11} \\ \partial _{t}G(1,0,1) &=&(1+\varepsilon )[G(2,0,1)+G(0,0,1)-2G(1,0,1)]+4[G(1,1,1)-G(1,0,1)] \nonumber \\ &&+\text{\newline }4[G(1,0,0)-G(1,0,1)]-4\varepsilon K-4K_{0} \nonumber\end{aligned}$$ Increasing the separation of the participating sites further, to $\vec{r}% =(2,0,0)$ and $(0,2,0)$, we obtain: $$\begin{aligned} \partial _{t}G(2,0,0) &=&(1+\varepsilon )[G(3,0,0)+G(1,0,0)-2G(2,0,0)]+4[G(2,1,0)-G(2,0,0)] \nonumber \\ &&+4[G(2,0,1)-G(2,0,0)]-2\varepsilon K_{0} \nonumber \\ \partial _{t}G(0,2,0) &=&2(1+\varepsilon )[G(1,2,0)-G(0,2,0)]+2[G(0,3,0)+G(0,1,0)-2G(0,2,0)] \label{G-20} \\ &&+4[G(0,2,1)-G(0,2,0)]\newline -2K_{0} \nonumber\end{aligned}$$ And finally, all $G$’s with $\left| x\right| +\left| y\right| +\left| z\right| >2$ satisfy homogeneous equations: $$\begin{aligned} \partial _{t}G(i,j,k) &=&(1+\varepsilon )[G(i+1,j,k)+G(i-1,j,k)-2G(i,j,k)] \nonumber \\ &&+2[G(i,j+1,k)+G(i,j-1,k)-2G(i,j,k)]+\newline 4[G(i,j,k-1)-G(i,j,k)] \label{G-rest}\end{aligned}$$ The last equation contains the full anisotropic lattice Laplacian, acting on $G(i,j,k)$, without any inhomogeneities being generated. We note, for further reference, that the right hand sides of Eqns (\[G-10\]-\[G-rest\]) contain contributions from exchanges along and against the three lattice directions. Starting from Eq. (\[eom\]), it is of course easy to keep track of terms originating in transverse vs parallel jumps. Below, this distinction will become important when we turn to energy currents. To solve this system, we closely follow the method presented in [@SZ]. Returning to Eq. (\[S\]), we need to focus only on $\tilde{S}$, since this quantity carries the information about the interactions. Recalling Eq. ([G]{}), we first express $G$ through its Fourier transform $\tilde{S}$, exploiting translation invariance and linearity. Then, we invoke the completeness of complex exponentials to project out an equivalent set of (algebraic) equations for $\tilde{S}$. 
To follow through with this program, we first define the anisotropic lattice Laplacian in Fourier space, $$\delta (k,p,q)\equiv 2(1+\varepsilon )(1-\cos k)+4(1-\cos p)+4(1-\cos q) \label{Lap}$$and second, introduce the three (as yet unknown) quantities: $$\begin{aligned} I_{1} &\equiv &\int \tilde{S}(1-\cos k) \nonumber \\ I_{2} &\equiv &\int \tilde{S}(1-\cos p) \label{Is} \\ I_{3} &\equiv &\int \tilde{S}(1-\cos q) \nonumber\end{aligned}$$With these definitions, the system can be expressed in terms of $\tilde{S}$, resulting in: $$\begin{aligned} 2\varepsilon K_{0}+8K_{0} &=&\int \tilde{S}\delta \exp (ik)+(1+\varepsilon )I_{1} \nonumber \\ 4\varepsilon K_{0}+6K_{0} &=&\int \tilde{S}\delta \exp (ip)+2I_{2} \nonumber \\ 8\varepsilon K\newline +8K &=&\int \tilde{S}\delta \exp (iq)+4I_{3} \nonumber \\ -2\varepsilon K_{0}-2K_{0} &=&\int \tilde{S}\delta \exp (i(k+p)) \nonumber \\ -4(K_{0}+K) &=&\int \tilde{S}\delta \exp (i(p+q)) \\ -4\varepsilon K-4K_{0} &=&\int \tilde{S}\delta \exp (i(k+q)) \nonumber \\ -2\varepsilon K_{0} &=&\int \tilde{S}\delta \exp (2ik) \nonumber \\ -2K_{0} &=&\int \tilde{S}\delta \exp (2ip) \nonumber \\ 0 &=&\int \tilde{S}\delta \exp (i(kx+py+qz))\text{ for }\left| x\right| +\left| y\right| +\left| z\right| >2 \nonumber\end{aligned}$$To proceed, we treat $I_{1},I_{2},I_{3}$ for the time being as simple coefficients and move them to the left-hand side. Finally, we need one additional equation for $x=y=z=0$, which is easily obtained: $$\int \tilde{S}\delta =\int \tilde{S}[2(1+\varepsilon )(1-\cos k)+4(1-\cos p)+4(1-\cos q)]=2(1+\varepsilon )I_{1}+4I_{2}+4I_{3}$$Now, we are ready to invoke the completeness relation for complex exponentials, namely, $$\sum_{x,y,z}\exp [i(kx+py+qz)]=2(2\pi )^{2}\delta (k)\delta (p)\delta _{q,0}$$which allows us to solve for $\tilde{S}$: $$\tilde{S}(k,p,q)=% %TCIMACRO{\dfrac{L(k,p,q)}{\delta (k,p,q)} }% %BeginExpansion {\displaystyle{L(k,p,q) \over \delta (k,p,q)}} %EndExpansion \label{S-tilde}$$where $$\begin{aligned} L(k,p,q) &\equiv &2(1+\varepsilon )\left( 1-\cos k\right) I_{1}+4\left( 1-\cos p\right) I_{2}+4\left( 1-\cos q\right) I_{3} \nonumber \\ &&+\left( 2\varepsilon K_{0}+8K_{0}\right) 2\cos k+\left( 4\varepsilon K_{0}+6K_{0}\right) 2\cos p \nonumber \\ &&+\left( 8\varepsilon K\newline +8K\right) \cos q-\left( 2\varepsilon K_{0}+2K_{0}\right) 4\cos k\cos p \label{L} \\ &&-\left( 4\varepsilon K+4K_{0}\right) 2\cos k\cos q-4(K_{0}+K)2\cos p\cos q \nonumber \\ &&-4\varepsilon K_{0}\cos 2k-4K_{0}\cos 2p \nonumber\end{aligned}$$However, Eq. (\[S-tilde\]) is not yet a fully explicit solution for $% \tilde{S}$, due to the appearance of the three integrals $I_{1}$, $I_{2}$, and $I_{3}$ in $L$. To determine these three coefficients, we need three linearly independent equations. One of these equations is given by the value of $G$ at the origin, $1=G(0,0,0)=\int (1+\tilde{S})$, and the remaining two can be obtained directly from the definitions of $I_{1}$ and $I_{3}$ in Eq. (\[Is\]): $$\begin{aligned} 0 &=&\int %TCIMACRO{\dfrac{L(k,p,q)}{\delta (k,p,q)} }% %BeginExpansion {\displaystyle{L(k,p,q) \over \delta (k,p,q)}} %EndExpansion \nonumber \\ 0 &=&-I_{1}+\int %TCIMACRO{\dfrac{L(k,p,q)}{\delta (k,p,q)}}% %BeginExpansion {\displaystyle{L(k,p,q) \over \delta (k,p,q)}}% %EndExpansion (1-\cos k) \label{matrix} \\ 0 &=&-I_{3}+\int %TCIMACRO{\dfrac{L(k,p,q)}{\delta (k,p,q)}}% %BeginExpansion {\displaystyle{L(k,p,q) \over \delta (k,p,q)}}% %EndExpansion (1-\cos q) \nonumber\end{aligned}$$After inserting Eq. 
(\[L\]) for $L$, this leads to a set of three inhomogeneous, linear equations for the three unknowns $I_{1}$, $I_{2}$, and $I_{3}$, which are easily solved. Since the details of the associated matrix inversion are straightforward but tedious, we relegate a few details to the Appendix. We just note the following overall features: ([*i*]{}) All three coefficients are functions of $K$, $K_{0}$, and $\varepsilon $; ([*ii*]{}) for the whole range of fields $\varepsilon $ and for $K_{0}=1$ and $K=\pm 1$ (attractive and repulsive inter-layer interactions), $I_{1}$ and $I_{2}$ are negative, while $I_{3}$ is positive for $K=-1$ and negative for $K=+1$. This concludes the calculation of the structure factor. To summarize, we find $$S(k,p,q)=1+% %TCIMACRO{\dfrac{L(k,p,q)}{\delta (k,p,q)}}% %BeginExpansion {\displaystyle{L(k,p,q) \over \delta (k,p,q)}}% %EndExpansion +O(K^{2},K_{0}^{2},KK_{0}) \label{S-final}$$ Even at the lowest nontrivial order, this solution carries a significant amount of information about the phase diagram of our system. In particular, we can extract an approximate shape of the critical lines, as we will show in the following. The critical lines. ------------------- The location of a continuous phase transition is marked by the divergence of a suitably chosen structure factor, as a function of the external control parameters. For example, we can locate the order-disorder phase transition of the usual, two-dimensional Ising model by seeking those values of temperature (at zero magnetic field) for which the structure factor, $S(\vec{% k})$, diverges. In the absence of a conservation law, the only singularity occurs at the Onsager temperature if $\vec{k}=0$, indicating that the system orders into a spatially homogeneous state. For a lattice gas, however, $S(% \vec{0})$ is fixed by the conservation law, and we need to seek the onset of phase [*separation*]{}, i.e., the emergence of macroscopic spatial inhomogeneities in the system. In this case, singular behavior occurs in $% \lim_{\vec{k}\rightarrow 0}S(\vec{k})$, provided the system is half-filled and tuned to the Onsager temperature. For the bilayer system, we need to locate, and distinguish, [*two types*]{} of continuous transitions, namely, from disorder (D) into the strip (S) and the full-empty (FE) phases, respectively. Since the D-S transition is marked by the appearance of phase-separated strips in each layer, aligned with the driving force, it can be located by seeking singularities in $% \lim_{p\rightarrow 0}S(0,p,0)$. In contrast, the D-FE transition exhibits homogeneous, but opposite magnetizations in the two planes, so that it can be found by considering $S(0,0,\pi )$. In fact, these two structure factors were precisely the order parameters chosen in the MC studies [@HZS]. Yet, another subtlety must be considered: in a typical high temperature expansion such as ours, only a finite number of terms can be computed. Hence, any perturbative result for the structure factor must be finite, and instead, the radius of convergence of the expansion must be estimated. Even this is not practical here, since we have only two terms of the series. To circumvent these restrictions [@SZ], we extract the singularity by looking for [*zeros*]{} of $S^{-1}$, to first order in $K$ and $K_{0}$. Starting from Eq. 
(\[S-final\]), we obtain $$S^{-1}(k,p,q)=1-% %TCIMACRO{\dfrac{L(k,p,q)}{\delta (k,p,q)}}% %BeginExpansion {\displaystyle{L(k,p,q) \over \delta (k,p,q)}}% %EndExpansion +O(K^{2},K_{0}^{2},KK_{0}) \label{S-1}$$and seek to locate the zeros of $\lim_{p\rightarrow 0}S^{-1}(0,p,0)$ for the D-S transition, and of $S^{-1}(0,0,\pi )$ for the D-FE transition. Of course, we should ensure that these are the first zeros which are encountered upon lowering the temperature. Therefore, we consider, more generally, the behavior of $S^{-1}(k,p,q)$ at small $k,p$ and fixed $q$. The denominator of Eq. (\[S-1\]), being the lattice Laplacian, is positive definite: $$\lim_{k,p\rightarrow 0}\delta (k,p,q)=\ (1+\varepsilon )k^{2}+2p^{2}+4(1-\cos q)+O(k^{4},p^{4},k^{2}p^{2})$$and vanishes only at the origin. Similarly, we obtain $$\begin{aligned} \lim_{k,p\rightarrow 0}L(k,p,q) &=&16K_{0}\left( 1-\cos q\right) +4\left( 1-\cos q\right) I_{3} \\ &&+k^{2}\left[ (1+\varepsilon )I_{1}+10\varepsilon K_{0}-4K_{0}+\left( 4\varepsilon K+4K_{0}\right) \cos q\right] \nonumber \\ &&+p^{2}\left[ 2I_{2}+6K_{0}+(4K_{0}+4K)\cos q\right] +O(k^{4},p^{4},k^{2}p^{2}) \nonumber\end{aligned}$$We note, briefly, that the anisotropic momentum dependence of numerator and denominator leads to power law correlations in the $x$- and $y$-directions [@ZWLV; @SZ; @AGMA]. Combining our results so far, it is apparent that the zeros of $S^{-1}$ are identical to those of $\delta -L$ in Eq. (\[S-1\]). To simplify notation, we write $$\lim_{k,p\rightarrow 0}\left[ \delta (k,p,q)-L(k,p,q)\right] \equiv \tau _{\Vert }(q)k^{2}+2\tau _{\bot }(q)p^{2}+4\tau _{z}(1-\cos q)+O(k^{4},p^{4},k^{2}p^{2})$$and read off $$\begin{aligned} \tau _{\Vert }(q) &=&(1+\varepsilon )\left( 1-I_{1}\right) -10\varepsilon K_{0}+4K_{0}-\left( 4\varepsilon K+4K_{0}\right) \cos q \nonumber \\ \tau _{\bot }(q) &=&1-I_{2}-3K_{0}-(2K_{0}+2K)\cos q \label{tau} \\ \tau _{z} &=&1-I_{3}-4K_{0} \nonumber\end{aligned}$$In a field-theoretic context [@TSZ], these quantities play the role of diffusion coefficients: $\tau _{\Vert }$ and $\tau _{\bot }$ control the in-plane diffusion in the parallel and transverse directions, respectively, while $\tau _{z}$ controls the cross-plane hopping. For high temperatures, i.e., small values of $K_{0}=\beta J_{0}$ and $% K=\beta J$, all three $\tau $-coefficients are positive. Seeking zeros of these expressions, as $K_{0}$ and $K$ increase, we need to consider the two cases $q=0$ and $q=\pi $ separately. For $q=0$, we find that $\tau _{\bot }(0)$ has a single zero at a critical $\beta _{c}^{S}$, for given $J_{0}$, $% J $ and $\varepsilon $. At these parameter values, $\tau _{\Vert }(0)$ and $% \tau _{z}$ remain positive. Similarly, for $q=\pi $, the coefficient $\tau _{z}$ is the one which vanishes first as $\beta $ increases, reaching zero at a critical $\beta _{c}^{FE}$. Converting into temperatures, we obtain two functions, $T_{c}^{S}(J_{0},J,\varepsilon )$ and $T_{c}^{FE}(J_{0},J,% \varepsilon )$, and we need to identify the larger of the two: If $\max % \left[ T_{c}^{S}(J_{0},J,\varepsilon ),T_{c}^{FE}(J_{0},J,\varepsilon )% \right] =T_{c}^{S}(J_{0},J,\varepsilon )$, the order-disorder transition is of the D-S type. Otherwise, if $\max \left[ T_{c}^{S}(J_{0},J,\varepsilon ),T_{c}^{FE}(J_{0},J,\varepsilon )\right] =T_{c}^{FE}(J_{0},J,\varepsilon )$, the FE phase is selected upon crossing criticality. 
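In practice, the two zeros can also be located numerically. The sketch below is our own illustration (function and variable names are not from the paper): it evaluates the Brillouin-zone integrals behind $I_{1}$, $I_{2}$, $I_{3}$ with a simple midpoint rule, solves the linear system of Eq. (\[matrix\]) separately for the parts of $I_{i}$ proportional to $K_{0}$ and to $K$, and reads off the critical lines $k_{B}T_{c}=aJ_{0}+bJ$ from the zeros of $\tau _{\bot }(0)$ and $\tau _{z}$ in Eq. (\[tau\]). If the transcription and the quadrature are accurate, the output should be close to the coefficients quoted in the equations below.

```python
import numpy as np

def critical_lines(eps, n=400):
    """First-order critical lines k_B T_c = a*J0 + b*J at fixed eps = exp(-beta*E).

    Minimal numerical sketch: midpoint rule for the Brillouin-zone integrals of
    Eq. (Is), linear solve of Eq. (matrix) for the parts of I_i proportional to
    K0 and to K, and zeros of tau_perp(0) and tau_z from Eq. (tau).
    """
    # midpoint grid avoids (k, p) = (0, 0), where delta vanishes at q = 0
    k = -np.pi + (np.arange(n) + 0.5) * 2.0 * np.pi / n
    kk, pp = np.meshgrid(k, k, indexing="ij")

    def bz(f):
        # \int ... = 1/(2(2 pi)^2) sum_{q=0,pi} \int dk dp ...  ->  mean over grid and q
        return 0.5 * (np.mean(f(kk, pp, 0.0)) + np.mean(f(kk, pp, np.pi)))

    def delta(k, p, q):
        return 2*(1+eps)*(1-np.cos(k)) + 4*(1-np.cos(p)) + 4*(1-np.cos(q))

    # L = A1*I1 + A2*I2 + A3*I3 + B, cf. Eq. (L)
    A = (lambda k, p, q: 2*(1+eps)*(1-np.cos(k)),
         lambda k, p, q: 4*(1-np.cos(p)),
         lambda k, p, q: 4*(1-np.cos(q)))

    def B(k, p, q, K0, K):
        return ((2*eps*K0+8*K0)*2*np.cos(k) + (4*eps*K0+6*K0)*2*np.cos(p)
                + (8*eps*K+8*K)*np.cos(q) - (2*eps*K0+2*K0)*4*np.cos(k)*np.cos(p)
                - (4*eps*K+4*K0)*2*np.cos(k)*np.cos(q) - 4*(K0+K)*2*np.cos(p)*np.cos(q)
                - 4*eps*K0*np.cos(2*k) - 4*K0*np.cos(2*p))

    # weights of the three equations in Eq. (matrix): 1, (1 - cos k), (1 - cos q)
    W = (lambda k, p, q: np.ones_like(k),
         lambda k, p, q: 1 - np.cos(k),
         lambda k, p, q: 1 - np.cos(q))

    M = np.array([[bz(lambda k, p, q, a=a, w=w: a(k, p, q)*w(k, p, q)/delta(k, p, q))
                   for a in A] for w in W])
    M[1, 0] -= 1.0      # the -I1 term in the second equation
    M[2, 2] -= 1.0      # the -I3 term in the third equation

    def I_parts(K0, K):
        rhs = -np.array([bz(lambda k, p, q, w=w: B(k, p, q, K0, K)*w(k, p, q)/delta(k, p, q))
                         for w in W])
        return np.linalg.solve(M, rhs)           # (I1, I2, I3) per unit K0 or per unit K

    i_K0, i_K = I_parts(1.0, 0.0), I_parts(0.0, 1.0)
    # tau_perp(0) = 1 - I2 - 5 K0 - 2 K = 0  ->  k_B T_c^S  = (5 + i2/K0) J0 + (2 + i2/K) J
    Tc_S = (5.0 + i_K0[1], 2.0 + i_K[1])
    # tau_z       = 1 - I3 - 4 K0       = 0  ->  k_B T_c^FE = (4 + i3/K0) J0 + (i3/K) J
    Tc_FE = (4.0 + i_K0[2], i_K[2])
    return Tc_S, Tc_FE

print(critical_lines(0.0))   # should be close to (4.39, 2.11) and (4.14, -1.36) quoted below
print(critical_lines(0.5))   # should be close to (4.15, 2.03) and (4.05, -1.70) quoted below
```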
While the two critical lines, $T_{c}^{S}(J_{0},J,\varepsilon )$ and $% T_{c}^{FE}(J_{0},J,\varepsilon )$, can in principle be found analytically, the details are not particularly illuminating. Instead, we present a range of numerical results below. For example, for infinite $E$ ($\varepsilon =0$), we obtain $$\begin{aligned} k_{B}T_{c}^{S}(J_{0},J,0) &=&4.39J_{0}+2.11J \label{eps=0} \\ k_{B}T_{c}^{FE}(J_{0},J,0) &=&4.14J_{0}\ -1.36J \nonumber\end{aligned}$$For finite $E$ with $\varepsilon =\exp (-\beta E)=0.5$, all coefficients decrease and we find $$\begin{aligned} k_{B}T_{c}^{S}(J_{0},J,0.5) &=&4.15J_{0}+2.03J \label{eps=0.5} \\ k_{B}T_{c}^{FE}(J_{0},J,0.5) &=&4.05J_{0}\ -1.70J \nonumber\end{aligned}$$In each case, the bicritical point is defined through the solution of $% T_{c}^{S}(J_{0},J,\varepsilon )=T_{c}^{FE}(J_{0},J,\varepsilon )$. For comparison, we also quote the equilibrium ($E=0$) results (see the Appendix for details): $$\begin{aligned} k_{B}T_{c}^{S}(J_{0},J,1) &=&4J_{0}+2J \label{equ} \\ k_{B}T_{c}^{FE}(J_{0},J,1) &=&4J_{0}-2J \nonumber\end{aligned}$$which exhibit the expected $J\rightarrow -J$ symmetry. Recalling that the MC simulations were performed at fixed, positive in-plane coupling $J_{0}$, we need to consider only the dependence on the cross-plane coupling $J$ which may take either sign. All of our results show that, for positive $J$, the D-S transition dominates while, for [*sufficiently negative*]{} $J$, a D-FE transition is found. In the following, we discuss the non-equilibrium ($E\neq 0$) results in more detail. Fig. 2 shows the critical lines for two typical values of the parameters, $% \varepsilon =0.5$ and $J_{0}=1$. Being the result of a first order approximation, the critical lines must of course be linear in $J$. Therefore, quantitative agreement with the simulation data cannot be expected; nevertheless, several important features are reproduced: [*the existence of two ordered phases* ]{}and the[* shift of the bicritical point*]{} to higher values of $T$ and negative $J$. As a result, the S phase survives for small, negative $J$, despite being energetically unfavorable. This phenomenon can be explained qualitatively [@HZS] by noting that [*long-range negative*]{} correlations transverse to $E$ dominate the ordering process for positive $J$, and this mechanism continues to be effective for a small region of negative $J$. For large and negative $J$, the disordered state orders into the full-empty (FE) phase, characterized by the planes being mainly full or empty. Finally, we comment on the dependence of the critical lines, specifically $% T_{c}^{S}(1,1,\varepsilon )$ and $T_{c}^{FE}(1,-1,\varepsilon )$, on the field parameter $\varepsilon =\exp \left( -\beta E\right) $, shown in Fig. 3. For $E=0$, both temperatures are equal, by virtue of the $J\rightarrow -J$ symmetry of the equilibrium system. As the field becomes stronger, the critical temperature of the D-S transition increases, in contrast to the critical temperature of the D-FE  transition which decreases. This behavior agrees qualitatively with the trend observed in the simulations [@HZS; @CH]. = 0.7= 0.7 There are several other quantities of physical interest which are immediately related to the two-point correlations, such as the steady-state particle and energy currents. To extract these, we first discuss the inverse Fourier transform of the structure factor, focusing specifically on the nearest-neighbor correlations. = 0.7= 0.7 Related physical observables. 
-----------------------------

[*Nearest-neighbor correlations.*]{} These are easily found from our solution for the structure factor, Eq. (\[S-final\]). For example, the nearest-neighbor correlation in the field direction is given by: $$G(1,0,0)=\int \tilde{S}(k,p,q)\cos k+O(K^{2},K_{0}^{2},KK_{0})$$ Since $\int \tilde{S}=0$ by virtue of $G(0,0,0)=1$, we obtain $$G(1,0,0)=-\int \tilde{S}(k,p,q)\left( 1-\cos k\right) +O(K^{2},K_{0}^{2},KK_{0})=-I_{1}$$ and similarly, $$\begin{aligned} G(0,1,0) &=&-I_{2} \\ G(0,0,1) &=&-I_{3} \nonumber\end{aligned}$$ These three integrals are already known since they were required for the discussion of the critical lines. Specifically, for $\varepsilon =0.5$ we find, neglecting corrections of $O(K^{2},K_{0}^{2},KK_{0})$: $$\begin{aligned} G(1,0,0) &=&0.949K_{0}+0.030K \nonumber \\ G(0,1,0) &=&0.849K_{0}-0.034K \\ G(0,0,1) &=&-0.055K_{0}+1.702K \nonumber\end{aligned}$$ For reference, we also quote the first order results for the equilibrium ($E=0$) correlations: $$\begin{aligned} G^{eq}(1,0,0) &=&G^{eq}(0,1,0)=K_{0} \\ G^{eq}(0,0,1) &=&2K \nonumber\end{aligned}$$ In the following graphs (Fig. 4a-c), we show the drive dependence of the three [*nearest-neighbor*]{} correlation functions, at $K_{0}=1$ and $K=\pm 1$, to illustrate their behavior in two typical domains (attractive and repulsive cross-layer coupling). Of course, these values of $K$ and $K_{0}$ are not “small”, but in a linear approximation they just serve to fix a scale. Consistent with the interpretation of the drive as an additional noise which tends to break bonds, all correlations are reduced compared to their equilibrium value. Further, as the field is switched on, the $J\rightarrow -J$ symmetry of the equilibrium system is broken, and the correlations for repulsive and attractive cross-layer coupling differ from one another. The details of how they differ provide some insight into the ordered phases which will eventually emerge. The first plot (Fig. 4a) shows the correlation function for a nearest-neighbor bond within a given plane, aligned with the drive direction. It is interesting to note that the correlations for repulsive cross-layer coupling are more strongly suppressed than their counterparts for attractive $J$. This feature becomes more transparent when we consider nearest-neighbor correlations transverse to the drive, but still within the same plane (Fig. 4b). For attractive cross-layer coupling, we note that $G(1,0,0)$ is considerably enhanced over $G(0,1,0)$, while the two correlations are roughly equal in the repulsive case. This indicates a tendency to form droplets of correlated spins which are elongated in the field direction for $J=+1$ while remaining approximately isotropic for $J=-1$, hinting at the nature of the associated ordered phases (strip-like vs uniform within each layer). This picture is completed when we consider the cross-plane correlations $G(0,0,1)$ (Fig. 4c): These are positive in the attractive, and negative in the repulsive case, demonstrating the tendency towards equal vs opposite local magnetizations on the two layers. Given that we have performed only a first order calculation, the results really carry a remarkable amount of information about the system. Encouraged by these observations, we now consider two other quantities, namely, the particle and energy currents.
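As an aside, the three nearest-neighbor correlations discussed here are also the quantities most directly measured in a simulation. A minimal sketch (ours, using the array layout of the earlier snippet) is:

```python
import numpy as np

def nn_correlations(s):
    """Estimate G(1,0,0), G(0,1,0) and G(0,0,1) from one configuration s of shape (L, L, 2).

    In a Monte Carlo run these estimates would, of course, also be averaged
    over many configurations of the steady state.
    """
    g_100 = np.mean(s * np.roll(s, 1, axis=0))      # along the drive (x)
    g_010 = np.mean(s * np.roll(s, 1, axis=1))      # transverse, in plane (y)
    g_001 = np.mean(s[:, :, 0] * s[:, :, 1])        # cross-plane (z)
    return g_100, g_010, g_001
```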
[*The particle current.*]{} Due to the bias in conjunction with periodic boundary conditions, the bilayer system carries a net particle current, $\left\langle j\right\rangle$. Since only nearest-neighbor exchanges are possible, this current is proportional to the density (number per site) of available particle-hole pairs in the field direction. The transition rate $c_{\Vert }$ along this direction, given in Eq. (\[c\_par\]), then counts the fraction of these pairs which will actually exchange per unit time. Specifically, in configuration $\{s\}$, the particle current can be written as $$j(\{s\})=\frac{1}{2L^{2}}\sum_{\vec{r}}\frac{s(\vec{r})-s(\vec{r}+\hat{x})}{2}c_{\Vert }(\vec{r},\vec{r}+\hat{x};\{s\})$$ For infinite $E$, this expression simplifies considerably, since jumps against the field will be completely suppressed. After a few straightforward algebraic manipulations, the [*average current*]{} can be expressed through the pair correlations along the field direction. To [*first order*]{} in $K$ and $K_{0}$, we obtain $$\left\langle j\right\rangle =\frac{1}{4}(1-\varepsilon )[1-G(1,0,0)]+O(K^{2},K_{0}^{2},K_{0}K)$$ which shows that it is non-zero only if $E\neq 0$. Further, it takes its maximum value at infinite temperature and is reduced by (attractive) nearest-neighbor interactions. The graph (Fig. 5) shows the field-dependence of this current, for $K_{0}=1$ and $K=\pm 1$. Since nearest-neighbor correlations along the field are much larger for positive $J$, indicating a predominance of particle-particle or hole-hole pairs, the current is reduced relative to the repulsive case.

[*Energy currents.*]{} Another interesting quantity associated with driven dynamics is the change in configurational [*energy*]{} during one Monte Carlo step. In the steady state, by definition, the average configurational energy is of course constant. However, particle-hole exchanges [*parallel*]{} to the field direction tend to increase the energy, since the drive can easily break bonds. In contrast, exchanges [*transverse*]{} to $E$ are purely energetically driven and hence prefer to satisfy bonds, so that the energy decreases [@SZ-rev]. In summary, we have $$\left\langle \frac{dH}{dt}\right\rangle _{\Vert }=-\left\langle \frac{dH}{dt}\right\rangle _{\perp }>0$$ Even if a particle current were absent, the presence of energy currents would signal the [*non-equilibrium*]{} steady state. Since the configurational energy involves only nearest-neighbor bonds, it is obvious that only the time evolution of nearest-neighbor correlations plays a role in these two fluxes. Specifically, we have $$L^{-2}\left\langle \frac{dH}{dt}\right\rangle _{\Vert }=-J_{0}\left( \frac{\partial }{\partial t}\right) _{\Vert }\,\left[ G(1,0,0)+G(0,1,0)\right] -2J\left( \frac{\partial }{\partial t}\right) _{\Vert }\,G(0,0,1)$$ where the subscript on the time derivatives reminds us to select only those processes which are due to parallel exchanges alone. These can be easily identified from the terms contributing to Eq. (\[eom\]) or (\[G-10\]). Of course, there is an analogous equation for $\left\langle dH/dt\right\rangle _{\perp }$.
Collecting the relevant contributions and multiplying both sides by the inverse temperature $\beta$ to express everything in terms of $K_{0}$ and $K$, we find: $$\begin{aligned} L^{-2}\left\langle \frac{d\beta H}{dt}\right\rangle _{\Vert } &=&-K_{0}\left\{ (1+\varepsilon )\left[ G(2,0,0)-G(1,0,0)\right] +2(1+\varepsilon )\left[ G(1,1,0)-G(0,1,0)\right] +6\varepsilon K_{0}\right\} \nonumber \\ &&-2K\left\{ 2(1+\varepsilon )\left[ G(1,0,1)-G(0,0,1)\right] +8\varepsilon K\right\}\end{aligned}$$ The correlation functions spanning next- and next-next nearest neighbors which appear here can again be determined from our solution for the structure factors (see Appendix). The result, at $K_{0}=1$ and $K=\pm 1$, is shown in Fig. 6 as a function of $\varepsilon$. As expected, this flux is always non-negative and monotonically increasing as a function of $E$. We note that the current for $K=-1$ is slightly larger than its counterpart for $K=+1$. Since it is a complicated function of the couplings and several correlations, we cannot offer a simple intuitive explanation of this property.

Concluding remarks
==================

Based directly on the microscopic lattice dynamics, the high temperature series provides us with a simple analytic tool which complements field theoretic approaches. Even in a first order approximation, it is remarkable how many features of the MC results are – at least qualitatively – reproduced. To summarize our results briefly, we derive, and solve, a set of equations for the stationary pair correlation functions and their Fourier transforms, the equal-time structure factors. By matching the series expansion of the latter with the expected critical singularity, we find two critical lines, separating the disordered phase from the strip phase (S) and the full-empty phase (FE), respectively. We also observe the shift of the bicritical point which marks the juncture of these two lines, in very good qualitative agreement with the simulations. To illustrate the non-equilibrium character of the steady state, we compute the particle current and the energy flux through the system. The particle current is determined by the nearest-neighbor correlations in the field direction, and takes its maximum value in the absence of interactions. Our findings for the energy current confirm intuitive expectations: parallel exchanges tend to increase, while transverse exchanges tend to lower, the configurational energy. A brief comment on boundary conditions is in order. Even though it is quite natural to use periodic boundary conditions in all lattice directions, it is not unreasonable to consider other choices, especially in the $z$-direction. To recall, periodicity in $z$ implies that the site ($x,y,0$) is connected to the site ($x,y,1$) via [*two*]{} bonds which enter into [*both*]{} the energetics [*and*]{} the dynamics (i.e., there are two channels for a particle to move from one layer to the other). Alternatively, we can choose open boundary conditions in $z$ and consider only a single energetic bond and a single dynamical channel between these two sites. Mixtures of these two cases can also be constructed: i.e., imposing periodic boundary conditions on the energetics, but allowing only a single channel for particle moves, or vice versa. The first (second) “mixed” case is reducible to the case of open (periodic) boundary conditions, with $J$ replaced by $2J$ ($J/2$).
Even though details are not presented here, we did, in fact, compute the critical lines for different cross-plane boundary conditions. The main conclusions of our study, namely, the existence of the two continuous phase transitions and the shift of the bicritical point, hold for all of these variations. The high temperature expansion presented here has two shortcomings. First, our results provide no insight into the first-order transitions between the FE and S phases which were observed in the simulations. As in all high temperature series, the first singularity which is encountered as $T$ is lowered determines the radius of convergence. A low-temperature approach would be necessary to capture transitions between ordered phases. Second, our series is currently limited to just one nontrivial term. In order to compute the second order correction to the pair correlations, we would need to know the full stationary distribution, $P^{\ast }$, to first order. Writing the stationary master equation in the form $0=LP^{\ast }$ where $L$ is the linear operator (“Liouvillean”) defined by the transition rates, this requires the full inverse of $L$, to zeroth order. Finding this inverse is a highly nontrivial (and as yet unsolved) problem. In spite of these drawbacks, the high temperature expansion is one of the few analytic tools which provide insight into non-equilibrium steady states. It is conceptually and mathematically straightforward, and – at least at the qualitative level – surprisingly reliable. Since it is based directly on the microscopic lattice dynamics, it still carries information about nonuniversal properties which would be lost upon taking a continuum limit. It is therefore a valuable complement to both simulations and field theoretic methods.

[**Acknowledgements.**]{} We thank U.C. Täuber and R.K.P. Zia for fruitful discussions. Financial support from the NSF through the Division of Materials Research is gratefully acknowledged.

Equations for the two-point correlations.
-----------------------------------------

To illustrate the general procedure, we provide a few details here [@ZWLV]. As an example, we choose the two-point correlation $G(1,1,0)$. We start with the equation of motion for the pair correlations: $${\displaystyle{d \over dt}}\left\langle s(\vec{r})s(\vec{r}\,^{\prime })\right\rangle =\sum_{\vec{x},\vec{x}^{\prime }}\left\langle s(\vec{r})s(\vec{r}\,^{\prime })\left[ s(\vec{x})s(\vec{x}\,^{\prime })-1\right] \,c\left( \vec{x},\vec{x}\,^{\prime };\left\{ s\right\} \right) \right\rangle$$ where the sum runs over nearest-neighbor pairs ($\vec{x},\vec{x}\,^{\prime }$) such that $\vec{x}\in \{\vec{r},\vec{r}\,^{\prime }\}$ but $\vec{x}\,^{\prime }\notin \{\vec{r},\vec{r}\,^{\prime }\}$. To obtain the equation satisfied by $G(1,1,0)$, we choose, e.g., $\vec{r}\equiv (0,0,0)$ and $\vec{r}\,^{\prime }\equiv (1,1,0)$. The two participating spins have a total of $8$ distinct nearest neighbors: $6$ of these lie in the $z=0$ plane, and $2$ are found on the $z=1$ plane. One possible ($\vec{x},\vec{x}\,^{\prime }$) pair is, for example, the pair $\vec{x}\equiv \vec{r}$ and $\vec{x}\,^{\prime }\equiv (1,0,0)$. The corresponding exchange occurs along the field direction, and hence, we must use the rate $c_{\Vert }$ from Eq. (\[c\_par\]). Considering [*only*]{} the contribution due to this particular pair of sites, we obtain $$\begin{aligned} \partial _{t}G(1,1,0) &=&{\displaystyle{1 \over 4}}\left\langle s(0,0,0)s(1,1,0)\left[ s(0,0,0)s(1,0,0)-1\right] \right. \\ &&\times \left.
\left\{ \left[ s(0,0,0)-s(1,0,0)+2\right] +\varepsilon \left[ s(1,0,0)-s(0,0,0)+2\right] \exp (-\beta \Delta H)\right\} \right\rangle +...\end{aligned}$$Here, $\ ...$ stands for the contributions due to all other possible ($\vec{x},\vec{x}\,^{\prime }$) pairs which can be handled in an analogous manner. After multiplying out a few terms and neglecting $3$-point functions, the expression above simplifies to $$\begin{aligned} \partial _{t}G(1,1,0) &=&{\displaystyle{1 \over 2}}\left[ G(0,1,0)-G(1,1,0)\right] \\ &&+{\displaystyle{\varepsilon \over 4}}\left\langle \left[ s(1,0,0)s(1,1,0)-s(0,0,0)s(1,1,0)\right] \left[ s(1,0,0)-s(0,0,0)+2\right] \exp (-\beta \Delta H)\right\rangle +...\end{aligned}$$Note that we have replaced $\left\langle s(1,0,0)s(1,1,0)\right\rangle $ by the corresponding correlation function, $G(0,1,0)$. Next, we expand $\exp (-\beta \Delta H)$ in powers of $K$ and $K_{0}$, according to Eq. ([c\_par\_exp]{}). The zeroth order contribution is easily accounted for, leaving us with the $O(\beta )$ correction: $$\begin{aligned} \partial _{t}G(1,1,0) &=&{\displaystyle{1 \over 2}}\left( 1+\varepsilon \right) \left[ G(0,1,0)-G(1,1,0)\right] \\ &&-\beta {\displaystyle{\varepsilon \over 4}}\left\langle \left[ s(1,0,0)s(1,1,0)-s(0,0,0)s(1,1,0)\right] \left[ s(1,0,0)-s(0,0,0)+2\right] \left( \Delta H\right) \right\rangle +...\end{aligned}$$The change in energy, $\Delta H$, involves the nearest-neighbor spins of the selected pair. Since these terms are already explicitly first order in $\beta $, they are averaged using the zeroth order approximation to the steady state solution. The latter is uniform so that the averages are trivial. Collecting, we find that the contribution of this particular exchange to $G(1,1,0)$ is $$\partial _{t}G(1,1,0)={\displaystyle{1 \over 2}}\left( 1+\varepsilon \right) \left[ G(0,1,0)-G(1,1,0)\right] -\varepsilon K_{0}+...$$Including all the other ($\vec{x},\vec{x}\,^{\prime }$) pairs, we arrive at the complete equation: $$\begin{aligned} \partial _{t}G(1,1,0) &=&(1+\varepsilon )[G(2,1,0)+G(0,1,0)-2G(1,1,0)]+2[G(1,2,0)+G(1,0,0) \nonumber \\ &&-2G(1,1,0)]+\newline 4[G(1,1,1)-G(1,1,0)]-2K_{0}-2\varepsilon K_{0}\newline \nonumber\end{aligned}$$Of course, this procedure is easily extended to any other two-point function. Moreover, it is straightforward to track which terms in the equations arise from parallel, and which from transverse, exchanges. This distinction is crucial for the computation of the energy fluxes. Matrix inversion. -----------------  We seek to invert the system of equations (\[matrix\]) for the three unknown expressions $I_{1}$, $I_{2}$, and $I_{3}$. We follow the method first outlined in [@SZ]. With a little algebra, it becomes apparent that the coefficients and inhomogeneities in these equations involve integrals of the form $$\begin{aligned} Q_{lmn}(\varepsilon )\equiv \int \frac{(1-\cos k)^{l}(1-\cos p)^{m}(1-\cos q)^{n}}{\delta } \nonumber\end{aligned}$$ with non-negative integer $l,m,n$ and $l+m+n\leq 3$. 
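A direct numerical estimate of these integrals is straightforward. The sketch below is our own illustration, using the same midpoint rule and the normalization of Eq. (\[FT\]); it evaluates $Q_{lmn}(\varepsilon )$ for $l+m+n\geq 1$ and can be checked against the first of the relations listed next.

```python
import numpy as np

def Q(l, m, n_q, eps, grid=400):
    r"""Midpoint-rule estimate of Q_{lmn}(eps) = \int (1-cos k)^l (1-cos p)^m (1-cos q)^n / delta.

    The integral sign carries the normalization 1/(2(2 pi)^2) sum_{q=0,pi} int dk dp,
    so numerically it is just an average over the (k, p) grid and over q = 0, pi.
    Intended for l + m + n >= 1; Q_000 itself would diverge logarithmically.
    """
    k = -np.pi + (np.arange(grid) + 0.5) * 2.0 * np.pi / grid
    kk, pp = np.meshgrid(k, k, indexing="ij")
    val = 0.0
    for q in (0.0, np.pi):
        delta = 2*(1+eps)*(1-np.cos(kk)) + 4*(1-np.cos(pp)) + 4*(1-np.cos(q))
        val += np.mean((1-np.cos(kk))**l * (1-np.cos(pp))**m * (1-np.cos(q))**n_q / delta)
    return 0.5 * val

eps = 0.5
# first sum rule below: 2(1+eps) Q_100 + 4 Q_010 + 4 Q_001 = 1
print(2*(1+eps)*Q(1, 0, 0, eps) + 4*Q(0, 1, 0, eps) + 4*Q(0, 0, 1, eps))
```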
The task of determining these integrals is simplified by a series of relations, namely, $$\begin{aligned} 1 &=&\int {\displaystyle{\delta \over \delta }}=2(1+\varepsilon )Q_{100}+4Q_{010}+4Q_{001} \\ 1 &=&\int (1-\cos k){\displaystyle{\delta \over \delta }}=2(1+\varepsilon )Q_{200}+4Q_{110}+4Q_{101} \\ 1 &=&\int (1-\cos p){\displaystyle{\delta \over \delta }}=2(1+\varepsilon )Q_{110}+4Q_{020}+4Q_{011} \\ {{\frac{3}{2}}} &=&\int (1-\cos k)^{2}{\displaystyle{\delta \over \delta }}=2(1+\varepsilon )Q_{300}+4Q_{210}+4Q_{201} \\ {{\frac{3}{2}}} &=&\int (1-\cos p)^{2}{\displaystyle{\delta \over \delta }}=2(1+\varepsilon )Q_{120}+4Q_{030}+4Q_{021} \\ 1 &=&\int (1-\cos k)(1-\cos p){\displaystyle{\delta \over \delta }}=2(1+\varepsilon )Q_{210}+4Q_{120}+4Q_{111} \\ 1 &=&\int (1-\cos q){\displaystyle{\delta \over \delta }}=2(1+\varepsilon )Q_{101}+4Q_{011}+4Q_{002}\end{aligned}$$The computation of the remaining integrals, while tedious, is completely straightforward. Once these coefficients are known, Eqns (\[matrix\]) can be inverted. The equilibrium solution ------------------------ Since our expansion presumed $E>\Delta H$, we may not simply set $\varepsilon =1$ in our equations of motion for the two-point correlations. Instead, one should rederive the whole set carefully, noting the absence of the driving field. Of course, this is trivial, since only the first order term in the expansion of the equilibrium (Boltzmann) distribution is required here. In this order, only nearest-neighbor correlations can be nonzero, so that the only non-vanishing $G$’s are $$\begin{aligned} G^{eq}(0,0,0) &=&1 \\ G^{eq}(\pm 1,0,0) &=&G^{eq}(0,\pm 1,0)=K_{0} \\ G^{eq}(0,0,1) &=&2K\end{aligned}$$Performing the Fourier transform to structure factors and exploiting the boundary conditions in the $z$-direction, we find immediately that $$\begin{aligned} S(k,p,q) &=&\sum_{z=0,1}\sum_{x,y=-\infty }^{\infty }G(x,y,z)e^{-i(kx+py+qz)} \\ &=&1+2K_{0}\left( \cos k+\cos p\right) +2K\cos q\end{aligned}$$resulting in $$\begin{aligned} \lim_{k,p\rightarrow 0}S^{-1}(k,p,q) &=&1-2K_{0}\left( 2-\frac{1}{2}k^{2}-\frac{1}{2}p^{2}\right) -2K\cos q+O(k^{4},p^{4}) \\ &\equiv &\tau (q)+O(k^{2},p^{2})\end{aligned}$$with $$\tau (q)=1-4K_{0}-2K\cos q$$For $q=0$, this vanishes at $$\ k_{B}T_{c}^{S}=4J_{0}+2J$$and for $q=\pi $, the zero shifts to $$\ k_{B}T_{c}^{FE}=4J_{0}-2J$$Thus, D-S transitions are observed for $J>0$, and D-FE transitions dominate for $J<0$. The bicritical point is located at $k_{B}T_{c}(J=0)=4J_{0}$. Other correlations near the origin. ----------------------------------- These are required to compute the energy fluxes along the parallel and transverse directions.  Specifically, we need the following correlation functions: $G(1,1,0)$, $G(1,0,1)$, $G(0,1,1)$, $G(2,0,0)$, and $G(0,2,0)$. We want to write these correlation functions in terms of the already calculated integrals $I_{1},I_{2},I_{3}$ and also in terms of the set of $Q_{lmn}$ integrals defined earlier. We start with the definition for $G(2,0,0)$ and substitute the expression for the structure factor: $$\begin{aligned} \ G(2,0,0) &=&\int \tilde{S}\exp (2ik)=2\int \tilde{S}(1-\cos k)^{2}-4I_{1}\newline \\ &=&2\left\{ 2(1+\varepsilon )I_{1}Q_{200}+4\varepsilon (5K_{0}+2K)Q_{300} \right. \\ && + \left. 4(I_{2}+5K_{0}+2K)Q_{210}+4\left( I_{3}+4K_{0}\right) Q_{201} \right. \\ && - \left. 8K_{0}(1+\varepsilon )Q_{310}-8\left( K+K_{0}\right) Q_{211} \right. \\ && - \left. 
8\left( \varepsilon K+K_{0}\right) Q_{301}-8\varepsilon K_{0}Q_{400}-8K_{0}Q_{220}-2I_{1}\right\}\end{aligned}$$and similarly, $$\begin{aligned} G(0,2,0) &=&2\int \tilde{S}(1-\cos p)^{2}-4I_{2}\newline \\ &=&2\left\{ 2(1+\varepsilon )I_{1}Q_{020}+4\varepsilon (5K_{0}+2K)Q_{120} \right. \\ && + \left. 4(I_{2}+5K_{0}+2K)Q_{030}+4\left( I_{3}+4K_{0}\right) Q_{021} \right. \\ && - \left. 8K_{0}(1+\varepsilon )Q_{130}-8\left( K+K_{0}\right) Q_{031} \right. \\ && - \left. 8\left( \varepsilon K+K_{0}\right) Q_{121}-8\varepsilon K_{0}Q_{220}-8K_{0}Q_{040}-2I_{2}\right\}\end{aligned}$$The remaining correlation functions follow in the same way: $$\begin{aligned} G(1,1,0) &=&\int \tilde{S}(1-\cos k)(1-\cos p)-I_{1}-I_{2}\newline \ \\ &=&\left\{ 2(1+\varepsilon )I_{1}Q_{110}+4\varepsilon (5K_{0}+2K)Q_{210}\right. \\ &&\left. +4(I_{2}+5K_{0}+2K)Q_{120}+4\left( I_{3}+4K_{0}\right) Q_{111}\right. \\ &&\left. -8K_{0}(1+\varepsilon )Q_{220}-8\left( K+K_{0}\right) Q_{121}\right. \\ &&\left. -8\left( \varepsilon K+K_{0}\right) Q_{211}-8\varepsilon K_{0}Q_{310}-8K_{0}Q_{130}-I_{1}-I_{2}\right\}\end{aligned}$$$$\begin{aligned} G(1,0,1) &=&\int \tilde{S}(1-\cos k)(1-\cos q)-I_{1}-I_{3}\newline \ \\ &=&\left\{ 2(1+\varepsilon )I_{1}Q_{101}+4\varepsilon (5K_{0}+2K)Q_{201}\right. \\ &&\left. +4(I_{2}+5K_{0}+2K)Q_{111}+4\left( I_{3}+4K_{0}\right) Q_{102}\right. \\ &&\left. -8K_{0}(1+\varepsilon )Q_{211}-8\left( K+K_{0}\right) Q_{112}\right. \\ &&\left. -8\left( \varepsilon K+K_{0}\right) Q_{202}-8\varepsilon K_{0}Q_{301}\right. \\ &&\left. -8K_{0}Q_{121}-I_{1}-I_{3}\right\}\end{aligned}$$$$\begin{aligned} \ G(0,1,1) &=&\int \tilde{S}(1-\cos p)(1-\cos q)-I_{2}-I_{3} \\ &=&\left\{ 2(1+\varepsilon )I_{1}Q_{011}+4\varepsilon (5K_{0}+2K)Q_{111}\right. \\ &&\left. +4(I_{2}+5K_{0}+2K)Q_{021}+4\left( I_{3}+4K_{0}\right) Q_{012}\right. \\ &&\left. -8K_{0}(1+\varepsilon )Q_{121}-8\left( K+K_{0}\right) Q_{022}\right. \\ &&\left. -8\left( \varepsilon K+K_{0}\right) Q_{112}-8\varepsilon K_{0}Q_{211}\right. \\ &&\left. -8K_{0}Q_{031}-I_{2}-I_{3}\right\}\end{aligned}$$After the additional $Q$-integrals have been determined, the energy currents are easily found. B. Schmittmann and R.K.P Zia, in [*Phase Transitions and Critical Phenomena*]{},  Vol 17, eds. C. Domb and J.L. Lebowitz (Academic Press, London, 1995). D. Mukamel, in [*Soft and Fragile Matter:Nonequilibrium Dynamics, Metastability and Flow*]{}, eds. M.E. Cates and M.R. Evans (Institute of Physics Publishing, Bristol, 2000); J. Marro and R. Dickman, [*Nonequilibrium Phase Transitions in Lattice Models*]{} (Cambridge University Press, Cambridge, 1999). S. Katz, J.L. Lebowitz, and H. Spohn, [*Phys. Rev. B*]{} [**28**]{}:1655 (1983) and and[* J. Stat. Phys.*]{} [**34**]{}:497 (1984). M.Q. Zhang, J.-S. Wang, J.-L. Lebowitz, and J.L. Vallès, [*J. Stat. Phys.*]{} [**52**]{}:1461 (1988). P.L. Garrido, J.L. Lebowitz, C. Maes, and H. Spohn, [*Phys. Rev. A*]{} [**42**]{}:1954 (1990). H.-K. Janssen and B. Schmittmann, [*Z. Phys. B*]{} [**64**]{}:503 (1986); K.-t. Leung and J.L. Cardy, [*J. Stat. Phys.*]{} [**44**]{}:567 (1986) and [**45**]{}:1087 (1986) (erratum). J.L. Vallès, K.-t. Leung, and R.K.P. Zia, [*J. Stat. Phys.*]{} [**56**]{}:43 (1989). K.K. Mon, private communication (1991). A. Achahbar, P.L Garrido and J. Marro, [*Phys Lett. A* ]{}[**172**]{}:29 (1992); A. Achahbar and J. Marro,[* *]{}[*J.  Stat.  Phys.*]{} [**78**]{}:1493 (1995). C.C. Hill, R.K.P. Zia and B.Schmittmann, [*Phys. Rev. Lett.* ]{}[**77**]{}:514 (1996). See also B. Schmittmann, C.C. Hill, and R.K.P. 
Zia, [*Physica A*]{} [**239**]{}:382 (1997). C.-P. Chng and J.-S. Wang, [*Phys. Rev. E*]{} [**61**]{}:4962 (2000). U.C. Täuber, B. Schmittmann, and R.K.P. Zia [*J. Phys. A*]{}[** 34:**]{}L583  (2001). L. E. Ballentine [*Physica* ]{}[**30:**]{}1231(1964). K. Binder [*Thin Solid Films*]{}[* *]{}[**20:**]{}367(1974). P.L. Hansen, J. Lemmich, J.H. Ipsen, and O.G. Mouritsen, [*J. Stat. Phys.*]{} [**73**]{}:723 (1993). This article also gives a brief history and further references. T.W. Capehart and M.E. Fisher, [*Phys. Rev. B*]{} [**13**]{}:5021 (1976). M. S. Dresselhaus and G. Dresselhaus, [*Adv. Phys.*]{} [**30**]{}:139 (1981); G. R. Carlow and R. F. Frindt, [*Phys. Rev. B*]{} [**50**]{}:11107 (1994). See also G. R. Carlow, [*Intercalation Channels in Staged Ag Intercalated TiS*]{}$_{2}.$ Ph.D Thesis, Simon Frasier University (1992). A. Ferrenberg and D.P. Landau, [*J. Appl. Phys.* ]{} [**70**]{}:6215 (1991). See, especially, C. Domb, and D.S. Gaunt and A.J. Guttmann, in [*Phase Transitions and Critical Phenomena*]{},  Vol 3, eds C. Domb and M.S. Green (Academic, London, 1974); and A.J.Guttmann, in [*Phase Transitions and Critical Phenomena*]{},  Vol 13, eds. C. Domb and J.L. Lebowitz (Academic Press, London, 1989). B. Schmittmann and R.K.P. Zia, [*J. Stat.Phys.*]{} [**91**]{}:525(1998). L.B. Shaw, B. Schmittmann and R. K. P. Zia, [*J. Stat. Phys.* ]{}[**95**]{}:981 (1999). N. Metropolis, A.W. Rosenbluth, M.M. Rosenbluth, A.H. Teller and E. Teller, [*J. Chem. Phys.*]{} [**21**]{}:1097 (1953). L. Onsager [*Phys. Rev.*]{} [**65**]{}:117 (1944); B.M. McCoy and T.T. Wu, [*The Two-dimensional Ising Model*]{} (Harvard University Press, Cambridge, MA, 1973). F. Spitzer, [*Adv. Math.* ]{}[**5**]{}:246 (1970). J.J. Alonso, P.L. Garrido, J. Marro, and A. Achahbar, [*J. Phys. A*]{} [**28**]{}:4669 (1995). C.C. Hill, [*Phase Transitions in Driven Bi-layer Systems*]{}. Honors Thesis, Virginia Tech (1996).
{ "pile_set_name": "ArXiv" }
--- abstract: 'We construct a catalog, of snowflake type metric circles, that describes all metric quasicircles up to [bi-Lipschitz]{} equivalence. This is a metric space analog of a result due to Rohde. Our construction also works for all bounded turning metric circles; these need not be doubling. As a byproduct, we show that a metric quasicircle with Assouad dimension strictly less than two is bi-Lipschitz equivalent to a planar quasicircle.' address: - 'Department of Mathematics, University of Cincinnati, OH 45221' - 'Department of Mathematics and Statistics, University of Helsinki, P.O. Box 68 (Gustaf Hällströmin katu 2b) FI-00014, Helsinki, Finland' author: - David A Herron - Daniel Meyer bibliography: - 'mrabbrev.bib' - 'bib.bib' title: 'Quasicircles and Bounded Turning Circles Modulo bi-Lipschitz Maps' --- [^1] [^1]: The first author was partially supported by the Charles Phelps Taft Research Center. The second author was supported by the Academy of Finland, projects SA-134757 and SA-118634.
{ "pile_set_name": "ArXiv" }
Non-normal and Stochastic Amplification of Magnetic Energy in the Turbulent Dynamo: Subcritical Case

Sergei Fedotov$^{1}$

$^1$ Department of Mathematics, UMIST - University of Manchester Institute of Science and Technology, Manchester, M60 1QD UK, e-mail: Sergei.Fedotov@umist.ac.uk Web-page: http://www.ma.umist.ac.uk/sf/index.html

Submitted to Phys. Rev. Lett.

**Abstract**

Our attention focuses on the stochastic dynamo equation with non-normal operator that gives an insight into the role of stochastics and non-normality in galactic magnetic field generation. The main point of this Letter is a discussion of the generation of a large-scale magnetic field that cannot be explained by traditional linear eigenvalue analysis. We present a simple stochastic model for the thin-disk axisymmetric $\alpha \Omega$ dynamo involving three factors: (a) non-normality generated by differential rotation, (b) nonlinearity reflecting how the magnetic field affects the turbulent dynamo coefficients, and (c) stochastic perturbations. We show that even for the *subcritical case*, there are three possible mechanisms for the generation of a magnetic field. The first mechanism is a deterministic one that describes an interplay between transient growth and nonlinear saturation of the turbulent $\alpha -$effect and diffusivity. It turns out that the trivial state is nonlinearly unstable to small but finite initial perturbations. The second and third are stochastic mechanisms that account for the interaction of the non-normal effect generated by differential rotation with random additive and multiplicative fluctuations. In particular, we show that in the *subcritical case* the average magnetic energy can grow exponentially with time due to the multiplicative noise associated with the $\alpha -$effect.

The generation and maintenance of large scale magnetic fields in stars and galaxies have attracted enormous attention in past years [@Mof]-[@RShS] (see also a recent review [@Widrow]). The main candidate to explain the process of conversion of the kinetic energy of turbulent flow into magnetic energy is the mean field dynamo theory [@KrR]. The standard dynamo equation for the large scale magnetic field $\mathbf{B}(t,\mathbf{x})$ reads $\partial \mathbf{B}/\partial t=$ curl$(\alpha \mathbf{B})+\beta \Delta \mathbf{B}+$ curl$(\mathbf{u}\times \mathbf{B})$, where $\mathbf{u}$ is the mean velocity field, $\alpha$ is the coefficient of the $\alpha$-effect and $\beta$ is the turbulent magnetic diffusivity. This equation has been widely used for analyzing the generation of the large-scale magnetic field. Traditionally the mathematical procedure consists of looking for exponentially growing solutions of the dynamo equation with appropriate boundary conditions. While this approach has been quite successful in the prediction of large scale magnetic field generation, it fails to predict the *subcritical* onset of a large-scale magnetic field for some turbulent flows. Although the trivial solution $\mathbf{B}=0$ is linearly stable for the *subcritical* case, the non-normality of the linear operator in the dynamo equation for some turbulent flow configurations leads to the transient growth of initial perturbations [@nonnormal]. It turns out that the non-linear interactions and random fluctuations might amplify this transient growth further.
Thus, instead of the generation of the large scale magnetic field being a consequence of the linear instability of the trivial state $\mathbf{B}=0$, it results from the interaction of transient amplifications due to the non-normality with nonlinearities and stochastic perturbations. The importance of the transient growth of the magnetic field for the induction equation has been discussed recently in [@FI1; @Proctor]. Comprehensive reviews of *subcritical* transition in hydrodynamics due to the non-normality of the linearized Navier-Stokes equation, and the resulting onset of shear flow turbulence, can be found in [@Grossmann; @SH].

The main purpose of this Letter is to study the non-normal and stochastic amplification of the magnetic field in galaxies. Our intention is to discuss the generation of the large-scale magnetic field that cannot be explained by traditional linear eigenvalue analysis. It is known that non-normal dynamical systems have an extraordinary sensitivity to stochastic perturbations that leads to great amplifications of the average energy of the dynamical system [@F0]. Although the literature discussing the mean field dynamo equation is massive, the effects of non-normality and random fluctuations are relatively unexplored. Several attempts have been made to understand the role of random fluctuations in magnetic field generation. The motivation was the observation of the rich variability of large scale magnetic fields in stars and galaxies. Small scale fluctuations parameterized by stochastic forcing were the subject of recent research by Farrell and Ioannou [@FI1]. They examined the mechanism of stochastic field generation due to transient growth in the induction equation. They did not use the standard closure involving the $\alpha $ and $\beta $ parameterization. Hoyng and his colleagues have studied the effect of random $\alpha$-fluctuations on the solution of the kinematic mean-field dynamo [@Hoyng]. However, they did not discuss the non-normality of the dynamo equation and the possibility of stochastic transient growth of the magnetic energy. Both attempts involved only linear stochastic theory. Numerical simulations of magnetoconvection equations with noise and non-normal transient growth have been performed in [@Proctor].

It is the purpose of this Letter to present a simple stochastic dynamo model for the thin-disk axisymmetric $\alpha \Omega $ dynamo involving three factors: non-normality, non-linearity and stochastic perturbations. Recently it has been found [@Fedotov] that the interaction of these factors leads to noise-induced phase transitions in a “toy” model mimicking a laminar-to-turbulent transition. In this Letter we discuss three possible mechanisms for the generation of a magnetic field that are not based on standard linear eigenvalue analysis of the dynamo equation. The first mechanism is a deterministic one that describes an interplay between linear transient growth and nonlinear saturation of both turbulent parameters $\alpha $ and $\beta $. The second and third are stochastic mechanisms that account for the interaction of the non-normal effect generated by differential rotation with random additive and multiplicative fluctuations. Here we study the effects of non-normality and stochastic perturbations on the growth of the galactic magnetic field by using Moss’s “no-z” model for galaxies [@Beck]. Despite its simplicity, the “no-z” model proves to be very robust and gives reasonable results when compared with real observations.
We consider a thin turbulent disk of conducting fluid of uniform thickness $2h$ and radius $R$ ($R\gg h$), which rotates with angular velocity $\Omega (r)$ [@ZRS; @RShS]. We consider the case of the $\alpha \Omega$-dynamo, for which the differential rotation dominates over the $\alpha $-effect. Neglecting the radial derivatives, one can write the stochastic equations for the azimuthal, $B_{\varphi }(t)$, and radial, $B_{r}(t)$, components of the axisymmetric magnetic field $$\frac{dB_{r}}{dt}=-\frac{\alpha (|\mathbf{B}|,\xi _{\alpha }(t))}{h}B_{\varphi }-\frac{\pi ^{2}\beta (|\mathbf{B}|)}{4h^{2}}B_{r}+\xi _{f}(t),$$ $$\frac{dB_{\varphi }}{dt}=gB_{r}-\frac{\pi ^{2}\beta (|\mathbf{B}|)}{4h^{2}}B_{\varphi }, \label{governing}$$ where $\alpha (|\mathbf{B}|,\xi _{\alpha }(t))$ is the random non-linear function describing the $\alpha -$effect, $\beta (|\mathbf{B}|)$ is the turbulent magnetic diffusivity, and $g=r\,d\Omega /dr$ is the measure of differential rotation (usually $r\,d\Omega /dr<0$).

The nonlinearity of the functions $\alpha (|\mathbf{B}|,\xi _{\alpha }(t))$ and $\beta (|\mathbf{B}|)$ reflects how the growing magnetic field $\mathbf{B}$ affects the turbulent dynamo coefficients. This nonlinear stage of dynamo theory is a topic of great current interest, and numerical simulations of the non-linear magneto-hydrodynamic equations are necessary to understand it. There is uncertainty about how the dynamo coefficients are suppressed by the mean field, and current theories seem to disagree about the exact form of this suppression [@backreaction]. Here we describe the dynamo saturation by using the simplified forms [@Widrow] $$\alpha (|\mathbf{B}|,\xi _{\alpha }(t))=(\alpha _{0}+\xi _{\alpha }(t))\varphi _{\alpha }(|\mathbf{B}|),\;\;\;\beta (|\mathbf{B}|)=\beta _{0}\varphi _{\beta }(|\mathbf{B}|), \label{nonlinear}$$ where $\varphi _{\alpha ,\beta }(|\mathbf{B}|)$ are decaying functions such that $\varphi _{\alpha ,\beta }(0)=1.$ In what follows we use [@Widrow] $$\varphi _{\alpha }(|\mathbf{B}|)=\left( 1+k_{\alpha }(B_{\varphi }/B_{eq})^{2}\right) ^{-1},\;\;\;\varphi _{\beta }(|\mathbf{B}|)=\left( 1+\frac{k_{\beta }}{1+(B_{eq}/B_{\varphi })^{2}}\right) ^{-1}, \label{backreaction}$$ where $k_{\alpha }$ and $k_{\beta }$ are constants of order one, and $B_{eq}$ is the equipartition field strength. It should be noted that for the $\alpha \Omega$-dynamo the azimuthal component $B_{\varphi }(t)$ is much larger than the radial field $B_{r}(t)$; therefore, $\mathbf{B}^{2}\simeq B_{\varphi }^{2}.$ We do not include the strong dependence of $\alpha $ and $\beta $ on the magnetic Reynolds number $R_{m}$. The back reaction of the magnetic field on the differential rotation is also ignored.

The multiplicative noise $\xi _{\alpha }(t)$ describes the effect of rapid random fluctuations of $\alpha .$ We assume that they are more important than the random fluctuations of the turbulent magnetic diffusivity $\beta $ [@Hoyng]. The additive noise $\xi _{f}(t)$ represents the stochastic forcing of unresolved scales [@FI1]. Both noises are independent Gaussian random processes with zero means $<\xi _{\alpha }(t)>=0,$ $<\xi _{f}(t)>=0$ and correlations: $$<\xi _{\alpha }(t)\xi _{\alpha }(s)>=2D_{\alpha }\delta (t-s),\;\;\;<\xi _{f}(t)\xi _{f}(s)>=2D_{f}\delta (t-s). \label{noise}$$ The intensity of the noises is measured by the parameters $D_{\alpha }$ and $D_{f}$.
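For readers who wish to experiment with the model, the following minimal sketch (not taken from the Letter) shows how the saturation functions (\[backreaction\]) and the $\delta$-correlated noises (\[noise\]) translate into a discrete-time update of (\[governing\]). The Itô convention is assumed for the multiplicative noise, all parameter values are placeholders, and $\varphi _{\beta }$ is rewritten in the algebraically equivalent form $1+k_{\beta }B_{\varphi }^{2}/(B_{\varphi }^{2}+B_{eq}^{2})$ in the denominator to avoid the division by zero at $B_{\varphi }=0$.

```python
# Minimal, illustrative discretisation of Eqs. (governing)-(noise).
# Not from the Letter: parameter values are placeholders and the Ito
# convention is assumed for the multiplicative noise xi_alpha.
import numpy as np

def phi_alpha(B_phi, B_eq, k_alpha):
    # alpha-quenching of Eq. (backreaction): equals 1 at B_phi = 0 and decays
    return 1.0 / (1.0 + k_alpha * (B_phi / B_eq) ** 2)

def phi_beta(B_phi, B_eq, k_beta):
    # diffusivity quenching, rewritten as k_beta*B_phi^2/(B_phi^2+B_eq^2)
    return 1.0 / (1.0 + k_beta * B_phi ** 2 / (B_phi ** 2 + B_eq ** 2))

def em_step(B_r, B_phi, dt, rng, alpha0, beta0, g, h, B_eq,
            D_alpha, D_f, k_alpha=1.0, k_beta=1.0):
    """One Euler-Maruyama step for Eqs. (governing).

    The delta-correlated noises <xi(t) xi(s)> = 2 D delta(t-s) become
    Gaussian increments of standard deviation sqrt(2 D dt) over a step dt.
    """
    pa = phi_alpha(B_phi, B_eq, k_alpha)
    pb = phi_beta(B_phi, B_eq, k_beta)
    decay = np.pi ** 2 * beta0 * pb / (4.0 * h ** 2)   # turbulent dissipation
    drift_r = -(alpha0 * pa / h) * B_phi - decay * B_r
    drift_phi = g * B_r - decay * B_phi                # g = r dOmega/dr (< 0)
    dW_alpha = np.sqrt(2.0 * D_alpha * dt) * rng.standard_normal()
    dW_f = np.sqrt(2.0 * D_f * dt) * rng.standard_normal()
    new_B_r = B_r + drift_r * dt - (pa * B_phi / h) * dW_alpha + dW_f
    new_B_phi = B_phi + drift_phi * dt
    return new_B_r, new_B_phi
```

Repeated calls to `em_step`, starting from `rng = np.random.default_rng(0)` and a small seed field, trace one realisation of (\[governing\]).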
One can show [@Fedotov] that the additive noise in the second equation in (\[governing\]) is less important. The governing equations (\[governing\]) can be nondimensionalized by using the equipartition field strength $B_{eq},$ the length $h$, and the time $\Omega _{0}^{-1},$ where $\Omega _{0}$ is the typical value of the angular velocity. By using the dimensionless parameters $$g\rightarrow -\Omega _{0}|g|,\;\;\;\delta =\frac{R_{\alpha }}{R_{\omega }},\;\;\;\varepsilon =\frac{\pi ^{2}}{4R_{\omega }},\;\;\;R_{\alpha }=\frac{\alpha _{0}h}{\beta },\;\;\;R_{\omega }=\frac{\Omega _{0}h^{2}}{\beta },$$ we can write the stochastic dynamo equations in the form of SDEs $$dB_{r}=-(\delta \varphi _{\alpha }(B_{\varphi })B_{\varphi }+\varepsilon \varphi _{\beta }(B_{\varphi })B_{r})dt-\sqrt{2\sigma _{1}}\varphi _{\alpha }(B_{\varphi })B_{\varphi }dW_{1}+\sqrt{2\sigma _{2}}dW_{2},$$ $$dB_{\varphi }=-(|g|B_{r}+\varepsilon \varphi _{\beta }(B_{\varphi })B_{\varphi })dt, \label{basic}$$ where $W_{1}$ and $W_{2}$ are independent standard Wiener processes. The dynamical system (\[basic\]) is subjected to the multiplicative and additive noises with the corresponding intensities: $$\sigma _{1}=\frac{D_{\alpha }}{h^{2}\Omega _{0}},\;\;\;\sigma _{2}=\frac{D_{f}}{B_{eq}^{2}\Omega _{0}}. \label{intensity}$$

It is well-known that the presence of noise can dramatically change the properties of a dynamical system [@LH]. Since the differential rotation dominates over the $\alpha $-effect ($R_{\alpha }\ll |R_{\omega }|$), the system (\[basic\]) involves two small parameters, $\delta =R_{\alpha }/R_{\omega }$ and $\varepsilon \sim 1/R_{\omega }$, whose typical values are $0.01-0.1$ ($R_{\omega }=10-100,$ $R_{\alpha }=0.1-1$). These parameters play very important roles in what follows. For small values of $\delta $ and $\varepsilon $, the linear operator in (\[basic\]) is a highly non-normal one ($|g|\sim 1$). This can lead to a large transient growth of the azimuthal component $B_{\varphi }(t)$ in the *subcritical case*. We then expect a high sensitivity to stochastic perturbations. Similar deterministic low-dimensional models have been proposed to explain the *subcritical* transition in the Navier-Stokes equations (see, for example, [@Trefethen; @GS]). The main difference is that the nonlinear terms in (\[governing\]) are not energy conserving.

The probability density function $p(t,B_{r},B_{\varphi })$ obeys the Fokker-Planck equation associated with (\[basic\]) [@Gardiner] $$\frac{\partial p}{\partial t}=-\frac{\partial }{\partial B_{r}}\left[ \left( \delta \varphi (B_{\varphi })B_{\varphi }+\varepsilon \varphi (B_{\varphi })B_{r}\right) p\right] -\frac{\partial }{\partial B_{\varphi }}\left[ \left( |g|B_{r}+\varepsilon \varphi (B_{\varphi })B_{\varphi }\right) p\right] +$$ $$(\sigma _{1}\varphi ^{2}(B_{\varphi })B_{\varphi }^{2}+\sigma _{2})\frac{\partial ^{2}p}{\partial B_{r}^{2}}.$$ Using this equation in the linear case one can find a closed system of ordinary differential equations for the moments $<B_{r}^{2}>,$ $<B_{r}B_{\varphi }>,$ and $<B_{\varphi }^{2}>$ $$\frac{d}{dt}\left( \begin{array}{c} <B_{r}^{2}> \\ <B_{r}B_{\varphi }> \\ <B_{\varphi }^{2}> \end{array} \right) =\left( \begin{array}{ccc} -2\varepsilon & -2\delta & \sigma _{1} \\ -|g| & -2\varepsilon & -\delta \\ 0 & -2|g| & -2\varepsilon \end{array} \right) \left( \begin{array}{c} <B_{r}^{2}> \\ <B_{r}B_{\varphi }> \\ <B_{\varphi }^{2}> \end{array} \right) +\left( \begin{array}{c} \sigma _{2} \\ 0 \\ 0 \end{array} \right) .
\label{moments}$$ The linear system of equations (\[moments\]) allows us to determine the initial evolution of the average magnetic energy $E(t)=<B_{r}^{2}>+<B_{r}B_{\varphi }>+<B_{\varphi }^{2}>.$ Similar equations emerge in a variety of physical situations, such as models of stochastic parametric instability, which explain why a linear oscillator subjected to multiplicative noise can be unstable [@BF]. Now we are in a position to discuss three possible scenarios for the *subcritical* generation of the galactic magnetic field.

**Deterministic subcritical generation.** Let us examine the deterministic transient growth of the magnetic field in the *subcritical case*. To illustrate the non-normality effect, consider first the linear case without noise terms. The dynamical system (\[basic\]) takes the form $$\frac{d}{dt}\left( \begin{array}{c} B_{r} \\ B_{\varphi } \end{array} \right) =\left( \begin{array}{cc} -\varepsilon & -\delta \\ -|g| & -\varepsilon \end{array} \right) \left( \begin{array}{c} B_{r} \\ B_{\varphi } \end{array} \right) . \label{linear}$$ Since $\delta \ll 1$, $\varepsilon \ll 1$ and $|g|\sim 1$, this system involves a highly non-normal matrix. Even in the *subcritical case* ($0<\delta <\varepsilon ^{2}/|g|$, see below), when all eigenvalues are negative, $B_{\varphi }$ exhibits a large degree of transient growth before the exponential decay. Assuming that $B_{r}(t)=e^{\gamma t}$ and $B_{\varphi }(t)=b\,e^{\gamma t}$, we find two eigenvalues $\gamma _{1,2}=-\varepsilon \pm \sqrt{\delta |g|}$ (the corresponding eigenvectors are almost parallel). The *supercritical* excitation condition $\gamma _{1}>0$ can be written as $\sqrt{\delta |g|}>\varepsilon $ or $\sqrt{R_{\alpha }R_{\omega }|g|}>\pi ^{2}/4$ [@RShS].

Consider the *subcritical* case when $0<\delta <\varepsilon ^{2}/|g|.$ The solution of the system (\[linear\]) with the initial conditions $B_{r}(0)=-2c\sqrt{\delta /|g|},$ $B_{\varphi }(0)=0$ is $$B_{r}(t)=-c\sqrt{\frac{\delta }{|g|}}(e^{\gamma _{1}t}+e^{\gamma _{2}t}),\;\;\;B_{\varphi }(t)=c(e^{\gamma _{1}t}-e^{\gamma _{2}t}).$$ Thus $B_{\varphi }(t)$ exhibits large transient growth over a timescale of order $1/\varepsilon $ before decaying exponentially. In Fig. 1 we plot the azimuthal component $B_{\varphi }$ as a function of time for $|g|=1,$ $\delta =10^{-4}$ and $\varepsilon =2\cdot 10^{-2}$ and different initial values of $B_{r}$ ($B_{\varphi }(0)=0$).

Of course, without nonlinear terms any initial perturbation decays. However, if we take into account the back reaction suppressing the effective dissipation ($\varphi _{\beta }(|\mathbf{B}|)$ is a decaying function), one can expect an entirely different global behaviour. In the deterministic case there can be three stationary solutions to (\[basic\]). In Fig. 2 we illustrate the role of transient growth and nonlinearity in the transition to a non-trivial state using (\[backreaction\]) with $k_{\alpha }=0.5$ and $k_{\beta }=3$. We plot the azimuthal component $B_{\varphi }$ as a function of time with the initial condition $B_{\varphi }(0)=0.$ We use the same values of the parameters $|g|,$ $\delta $ and $\varepsilon $ and three initial values of $B_{r}(0)$ as in Fig. 1. One can see from Fig. 2 that the trivial solution $B_{\varphi }=B_{r}=0$ is nonlinearly unstable to small but finite initial perturbations of $B_{r},$ such as $B_{r}(0)=-0.03$ (a minimal numerical sketch of this behaviour is given below).
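The following sketch is not part of the Letter; it is a minimal Python illustration of the two deterministic statements above. Part (i) evaluates the explicit solution of the linear system (\[linear\]) for the parameters of Fig. 1; part (ii) integrates the noise-free nonlinear system (\[basic\]) with the quenching (\[backreaction\]) ($k_{\alpha }=0.5$, $k_{\beta }=3$) by a plain explicit Euler scheme. The step size, time horizon and sample initial amplitudes are arbitrary illustrative choices.

```python
# Illustrative sketch (not from the Letter) of the deterministic behaviour:
# (i) transient growth of the linear system (linear); (ii) explicit Euler
# integration of the noise-free nonlinear system (basic) with quenching.
import numpy as np

g, delta, eps = 1.0, 1.0e-4, 2.0e-2          # parameters of Figs. 1-2

# (i) explicit solution of Eq. (linear): B_phi(t) = c (exp(g1 t) - exp(g2 t))
gam1, gam2 = -eps + np.sqrt(delta * g), -eps - np.sqrt(delta * g)
c = 1.0
t = np.linspace(0.0, 400.0, 4001)
B_phi_lin = c * (np.exp(gam1 * t) - np.exp(gam2 * t))
B_r0_lin = -2.0 * c * np.sqrt(delta / g)
print("transient amplification:", np.max(np.abs(B_phi_lin)) / abs(B_r0_lin))

# (ii) noise-free nonlinear system (basic) with quenching (backreaction)
def rhs(B_r, B_phi, k_alpha=0.5, k_beta=3.0):
    pa = 1.0 / (1.0 + k_alpha * B_phi ** 2)                   # B_eq = 1 units
    pb = 1.0 / (1.0 + k_beta * B_phi ** 2 / (1.0 + B_phi ** 2))
    dB_r = -(delta * pa * B_phi + eps * pb * B_r)
    dB_phi = -(g * B_r + eps * pb * B_phi)
    return dB_r, dB_phi

def integrate(B_r_init, B_phi_init=0.0, dt=0.05, T=2000.0):
    B_r, B_phi = B_r_init, B_phi_init
    for _ in range(int(T / dt)):                 # plain explicit Euler
        f_r, f_phi = rhs(B_r, B_phi)
        B_r, B_phi = B_r + dt * f_r, B_phi + dt * f_phi
    return B_r, B_phi

# sample initial amplitudes; the text reports that B_r(0) = -0.03 already
# triggers the transition to a non-trivial state
for a in (-0.01, -0.03):
    print("B_r(0) =", a, "-> B_phi(T) =", integrate(a)[1])
```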
For fixed values of the parameters in the nonlinear system (\[basic\]), there exists a threshold amplitude for the initial perturbation, above which $B_{\varphi }(t)$ grows and below which it eventually decays.

**Stochastic subcritical generation due to additive noise.** This scenario has already been discussed in the literature [@FI1] (see also [@F0] for hydrodynamics). The physical idea is that the average magnetic energy is maintained by additive Gaussian random forcing representing unresolved scales. It is clear that the non-zero additive noise ($\sigma _{2}\neq 0$) ensures the existence of a stationary solution to (\[moments\]). If we assume for simplicity $\sigma _{1}=0$ and $\delta =0$, then the dominant stationary moment is $$<B_{\varphi }^{2}>_{st}=\frac{g^{2}\sigma _{2}}{4\varepsilon ^{3}}. \label{stationary}$$ We can see that due to the non-normality of the system (\[linear\]) the average stationary magnetic energy $E_{st}\sim <B_{\varphi }^{2}>_{st}$ exhibits a high degree of sensitivity with respect to the small parameter $\varepsilon $: $E_{st}\sim \varepsilon ^{-3}$ [@F0; @Fedotov].

**Stochastic subcritical generation due to multiplicative noise.** Here we discuss the divergence of the average magnetic energy $E(t)=<B_{r}^{2}>+<B_{r}B_{\varphi }>+<B_{\varphi }^{2}>$ with time $t$ due to the random fluctuations of the $\alpha -$parameter. Although the first moments tend to zero in the *subcritical case*, the average energy $E(t)$ grows as $e^{\lambda t}$ when the noise intensity $\sigma _{1}$ exceeds a critical value. The growth rate $\lambda $ is the positive real root of the characteristic equation for the system (\[moments\]) $$(\lambda +2\varepsilon )^{3}-4\delta |g|(\lambda +2\varepsilon )-2\sigma _{1}|g|=0. \label{ch}$$ For $\delta =0,$ the growth rate is $\lambda _{0}=-2\varepsilon +(2\sigma _{1}|g|)^{1/3}$ as long as it is positive, and the excitation condition can be written as $\sigma _{1}>\sigma _{cr}=4\varepsilon ^{3}/|g|.$ This means that the generation of average magnetic energy occurs even for $\alpha _{0}=0$! It is interesting to compare this criterion with the classical *supercritical* excitation condition $\delta |g|>\varepsilon ^{2}$ [@RShS].

To assess the significance of this parametric instability it is useful to estimate the magnitude of the critical noise intensity $\sigma _{cr}.$ First let us estimate the parameter $\varepsilon =\pi ^{2}\beta /(4\Omega _{0}h^{2}).$ The turbulent magnetic diffusivity is given by $\beta \simeq lv/3,$ where $v$ is the typical velocity of a turbulent eddy, $v\simeq 10$ km s$^{-1}$, and $l$ is the turbulent scale, $l\simeq 100$ pc. For spiral galaxies, the typical values of the thickness, $h,$ and the angular velocity, $\Omega _{0},$ are $h\simeq 800$ pc and $\Omega _{0}\simeq 10^{-15}$ s$^{-1}$; $|g|\simeq 1$ [@RShS]. This gives the estimate $\varepsilon \simeq 3.2\times 10^{-2}$, that is, $\sigma _{cr}\simeq 1.3\times 10^{-4}$ (a short numerical check is sketched below). In general, $\lambda (\delta )=\lambda _{0}+\tfrac{4}{3}|g|\,(2\sigma _{1}|g|)^{-1/3}\delta +o(\delta ).$ This analysis predicts an amplification of the average magnetic energy in the system (\[basic\]) where no such amplification is observed in the absence of noise. The value of the critical noise intensity parameter $\sigma _{cr},$ above which the instability occurs, is proportional to $\varepsilon ^{3}$, that is, very small indeed. To some extent, the amplification process exhibits features similar to those observed in a linear oscillator subjected to parametric noise [@BF].
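As an aside (not part of the Letter), the excitation criterion can be checked numerically from the characteristic equation (\[ch\]) as written above, taking $|g|=1$ and the value $\varepsilon \simeq 3.2\times 10^{-2}$ quoted in the text; a minimal sketch:

```python
# Numerical check of the multiplicative-noise criterion (illustrative only).
import numpy as np

def growth_rate(sigma1, eps, delta=0.0, g=1.0):
    """Largest real root lambda of (lam+2eps)^3 - 4 delta|g|(lam+2eps) - 2 sigma1|g| = 0."""
    mu = np.roots([1.0, 0.0, -4.0 * delta * abs(g), -2.0 * sigma1 * abs(g)])
    mu_real = mu[np.abs(mu.imag) < 1e-9].real      # keep numerically real roots
    return mu_real.max() - 2.0 * eps

eps = 3.2e-2                       # estimate quoted in the text
sigma_cr = 4.0 * eps ** 3          # sigma_cr = 4 eps^3 / |g|, about 1.3e-4
print("sigma_cr =", sigma_cr)
print("lambda at 0.5*sigma_cr:", growth_rate(0.5 * sigma_cr, eps))   # < 0
print("lambda at 2.0*sigma_cr:", growth_rate(2.0 * sigma_cr, eps))   # > 0
# delta = 0 cross-check against lambda_0 = -2 eps + (2 sigma_1 |g|)^(1/3)
s1 = 2.0 * sigma_cr
print(-2.0 * eps + (2.0 * s1) ** (1.0 / 3.0), "vs", growth_rate(s1, eps))
```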
To avoid the divergence of the average magnetic energy, it is necessary to go beyond the kinematic regime and consider the effect of nonlinear saturation.

In summary, we have discussed galactic magnetic field generation that cannot be explained by traditional linear eigenvalue analysis of the dynamo equation. We have presented a simple stochastic model for the $\alpha \Omega $ dynamo involving three factors: (a) non-normality due to differential rotation, (b) nonlinearity of the turbulent dynamo $\alpha -$effect and diffusivity $\beta $, and (c) additive and multiplicative noises. We have shown that even in the *subcritical case* there are three possible scenarios for the generation of a large scale magnetic field. The first mechanism is a deterministic one that describes an interplay between transient growth and nonlinear saturation of the turbulent $\alpha -$effect and diffusivity. We have shown that the trivial state $\mathbf{B}=0$ can be nonlinearly unstable with respect to small but finite initial perturbations. The second and third are stochastic mechanisms that account for the interaction of the non-normal effect generated by differential rotation with random additive and multiplicative fluctuations. We have shown that the multiplicative noise associated with the $\alpha -$effect leads to exponential growth of the average magnetic energy even in the *subcritical case*.

**Acknowledgements** I am grateful to Anvar Shukurov for constructive discussions.

[99]{} H. K. Moffatt, *Magnetic Field Generation in Electrically Conducting Fluids* (Cambridge University Press, New York, 1978). F. Krause and K. H. Radler, *Mean-Field Magnetohydrodynamics and Dynamo Theory* (Academic-Verlag, Berlin, 1980). Ya. B. Zeldovich, A. A. Ruzmaikin and D. D. Sokoloff, *Magnetic Fields in Astrophysics* (Gordon and Breach Science Publishers, New York, 1983). A. A. Ruzmaikin, A. M. Shukurov and D. D. Sokoloff, *Magnetic Fields in Galaxies* (Kluwer Academic Publishers, Dordrecht, 1988). L. Widrow, Rev. Mod. Phys. **74**, 775 (2002). An operator is said to be non-normal if it does not commute with its adjoint in the corresponding scalar product. Typical examples of transient growth can be represented by the functions $f_{1}(t)=t\exp (-t)$ and $f_{2}(t)=\exp (-t)-\exp (-2t)$ for $t\geq 0.$ B. F. Farrell and P. J. Ioannou, ApJ **522**, 1088 (1999). J. R. Gog, I. Opera, M. R. E. Proctor, and A. M. Rucklidge, Proc. R. Soc. Lond. A **455**, 4205 (1999). S. Grossmann, Rev. Mod. Phys. **72**, 603 (2000). P. J. Schmid and D. S. Henningson, *Stability and Transition in Shear Flows* (Springer, Berlin, 2001). B. F. Farrell and P. J. Ioannou, Phys. Fluids **5**, 2600 (1993); Phys. Rev. Lett. **72**, 1188 (1994); J. Atmos. Sci. **53**, 2025 (1996). P. Hoyng, D. Schmitt, and L. J. W. Teuben, Astron. Astrophys. **289**, 265 (1994). S. Fedotov, I. Bashkirtseva, and L. Ryashko, Phys. Rev. E **66**, 066310 (2002). R. Beck, A. Brandenburg, D. Moss, A. Shukurov, and D. Sokoloff, Ann. Rev. Astron. Astrophys. **34**, 155 (1996). L. N. Trefethen, A. E. Trefethen, S. C. Reddy, and T. A. Driscoll, Science **261**, 578 (1993). T. Gebhardt and S. Grossmann, Phys. Rev. E **50**, 3705 (1994); J. S. Baggett, T. A. Driscoll, and L. N. Trefethen, Phys. Fluids **7**, 833 (1995); J. S. Baggett and L. N. Trefethen, Phys. Fluids **9**, 1043 (1997). C. W. Gardiner, *Handbook of Stochastic Methods*, 2nd ed. (Springer, New York, 1996). W. Horsthemke and R. Lefever, *Noise-Induced Transitions* (Springer, Berlin, 1984). S. I. Vainshtein and F.
Cattaneo, ApJ **393**, 165 (1992); F. Cattaneo and D. W. Hughes, Phys. Rev. E **54**, R4532 (1996); A. V. Gruzinov and P. H. Diamond, Phys. Rev. Lett. **72**, 1651 (1994); A. Brandenburg, ApJ **550**, 824 (2001); I. Rogachevskii and N. Kleeorin, Phys. Rev. E **64**, 056307 (2001); E. G. Blackman and G. B. Field, Phys. Rev. Lett. **89**, 265007 (2002). R. Bourret, Physica **54**, 623 (1971); U. Frisch and A. Pouquet, *ibid.*, **65**, 303 (1973).
{ "pile_set_name": "ArXiv" }
--- abstract: 'The aim of this paper is to study the well-posedness and the existence of global attractors for a family of Cahn-Hilliard equations with a mobility depending on the chemical potential. Such models arise from generalizations of the (classical) Cahn-Hilliard equation due to <span style="font-variant:small-caps;">M. E. Gurtin</span>.' author: - 'Maurizio Grasselli[^1], Alain Miranville[^2], Riccarda Rossi[^3], Giulio Schimperna[^4]' date: 'March 11th, 2010' title: | Analysis of the Cahn-Hilliard equation\ with a chemical potential dependent mobility[^5] --- Introduction {#s:1} ============ In this paper, we address the initial and boundary value problem for the following [*generalized Cahn-Hilliard equation*]{}: $$\label{e:1} \chi_t -\Delta \alpha\left(\delta \chi_t -\Delta \chi +\phi(\chi) \right)=0 \qquad \text{in }\Omega \times (0,T),$$ where $\delta \geq 0$, $\Omega \subset {\mathbb{R}}^3$ is a bounded domain, $T>0$ a finite time horizon, and $\alpha:{\mathbb{R}}\to {\mathbb{R}}$ a strictly increasing function. The classical Cahn-Hilliard equation reads $$\chi _t-\Delta w=0,\ \ w=-\Delta \chi +\phi (\chi ) \qquad \text{in }\Omega \times (0,T),$$ where $\chi $ is the order parameter (corresponding to a density of atoms), $w$ is the chemical potential (defined as a variational derivative of the free energy with respect to the order parameter), and $\phi $ is the derivative of a double-well potential. This equation plays an essential role in materials science and describes phase separation processes in binary alloys (see, e.g., [@Cah; @CahH; @NC]). By considering a mechanical version of the second law of thermodynamics and introducing a new balance law for interactions at a microscopic level, <span style="font-variant:small-caps;">M. E. Gurtin</span> proposed in [@Gu] the following equations: $$\begin{cases} \displaystyle{ \chi_t-{\rm div}(A(\chi ,\nabla \chi ,\chi _t,w)\nabla w)=0,} \\ \displaystyle{ w=\delta (\chi ,\nabla \chi ,\chi _t,w)\chi _t-\Delta \chi +\phi (\chi )} \end{cases} \qquad \text{in } \Omega \times (0,T).$$ Taking $\delta $ constant and $A=a(w)I$, with $a: {\mathbb{R}}\to {\mathbb{R}}$ a positive function, we then obtain an equation of the form , in which $\alpha$ is some primitive of the function $a$. In the viscous case $\delta >0$, such equations have been studied in [@rossi05; @rossi06]. Therein, results on the well-posedness and the existence of global attractors have been obtained. Our main aim in this paper is to treat the case $\delta =0$. We also consider the viscous case $\delta >0$ under different (and more general) assumptions on $\alpha $ and $\phi $ from those in [@rossi05; @rossi06]. In particular, we prove the existence of solutions both in the non-viscous case $\delta =0 $ (cf. Theorem \[th:1\]) and in the viscous case $\delta >0$ (see Theorem \[th:2\]). In the latter setting, under more restrictive assumptions on the nonlinearities $\alpha$ and $\phi$, we also obtain (cf. Theorem 3.1) well-posedness and continuous dependence results for (the Cauchy problem for) . For $\delta>0$ we are also able to study the asymptotic behavior of the system and establish the existence of the global attractor (see Theorem \[th:4\]) in a quite general frame of assumptions on $\alpha$ and $\phi$, which may allow for non-uniqueness of solutions.
That is why, for this long-time analysis we rely on the notion of generalized semiflows proposed by <span style="font-variant:small-caps;">J.M. Ball</span> in [@Ball97], and on the extension given in [@rossi-segatti-stefanelli08]. Finally, relying on the short-trajectory approach developed in [@malek-prazak], we also conclude the existence of exponential attractors and, thus, of finite-dimensional global attractors. We recall that an exponential attractor is a compact and semi-invariant set which has finite fractal dimension and attracts the trajectories exponentially fast; note that the global attractor may attract the trajectories at a slow (polynomial) rate (see, e.g., [@BabinVishik92; @EFNT; @MZH]). This paper is organized as follows. In Section 2, we define our notation and give some preliminary results. Then, in Section 3, we state our main results, whose proofs are carried out in the remaining sections. Finally, in Appendix, we introduce the approximation scheme for our problem and justify the a priori estimates (formally) developed throughout the paper. Preliminaries {#ss:2.1} ============= #### Notation and functional setup. Throughout the paper, we consider a bounded domain $\Omega \subset {\mathbb{R}}^3$, with sufficiently smooth boundary $\partial \Omega$, and write $|\mathcal{O}|$ for the Lebesgue measure of any (measurable) subset $ \mathcal{O} \subset \Omega$. Furthermore, given a Banach space $B$, we denote by $\Vert\cdot\Vert_B$ the norm in $B$ and by $_{B'}\langle\cdot,\cdot\rangle_B$ the duality pairing between $B'$ and $B$. We use the notation $$H:=L^2 (\Omega),\quad V:=H^1(\Omega), \quad Z:=\left\{ v \in H^{2}(\Omega) \: : \: \partial_{n} v=0 \right\},$$ and identify $ H $ with its dual space $ H' $, so that $ Z \subset V \subset H \subset V'\subset Z' $, with dense and compact embeddings. We denote by $ \mathcal{H} $, $ \mathcal{V} $, $ \mathcal{Z} $, $ \mathcal{V'} $, and $ \mathcal{Z'} $ the subspaces of the elements $v$ of $H$, $ V $, $Z$, $ V' $, and $Z'$, respectively, with zero mean value $ {m}(v) =\frac{1}{|\Omega|} { \sideset{_{Z' }}{_{ Z}} {\mathop{\langle v , 1 \rangle}}} $. We consider the operator $$\label{def:opA} A: V \rightarrow V', \qquad _{V'}\langle Au, v \rangle _V:=\int_{\Omega} \nabla u \cdot\nabla v \quad \forall u, v \in V,$$ and note that $ Au \in {\mathcal{V'}}$ for every $ u \in V $. Indeed, the restriction of $ A $ to $ {\mathcal{V}}$ is an isomorphism, so that we can introduce its inverse operator $ {\mathcal{N}}: {\mathcal{V'}}\rightarrow {\mathcal{V}}$. 
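For orientation only, we note the standard spectral description of ${\mathcal{N}}$ (the notation $\lambda_k$, $w_k$ is introduced just for this illustration and is not used in the sequel): denoting by $\{w_k\}_{k\geq 0}$ an orthonormal basis of $H$ consisting of eigenfunctions of $-\Delta$ with homogeneous Neumann boundary conditions, with eigenvalues $0=\lambda_0<\lambda_1\leq \lambda_2 \leq \dots$, every $v\in {\mathcal{V'}}$ can be expanded as $v=\sum_{k\geq 1} v_k w_k$ with $v_k = {}_{V'}\langle v, w_k \rangle_{V}$, and $${\mathcal{N}}(v)= \sum_{k\geq 1} \frac{v_k}{\lambda_k}\, w_k\,, \qquad \text{so that} \qquad \int_{\Omega} |\nabla {\mathcal{N}}(v)|^2 = \sum_{k \geq 1} \frac{v_k^2}{\lambda_k}\,.$$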
We recall the relations $$\begin{aligned} && \label{Aa} { \sideset{_{V' }}{_{ V}} {\mathop{\langle Au , {\mathcal{N}}(v) \rangle}}} = { \sideset{_{V' }}{_{ V}} {\mathop{\langle v , u \rangle}}}\quad \forall u \in V, \; \forall v \in {\mathcal{V'}}, \\ && \label{Bb} { \sideset{_{V' }}{_{ V}} {\mathop{\langle u , {\mathcal{N}}(v) \rangle}}} =\int_{\Omega} \nabla({\mathcal{N}}(u)) \cdot\nabla({\mathcal{N}}(v)) \, {\mathrm{d}}x = { \sideset{_{V' }}{_{ V}} {\mathop{\langle v , {\mathcal{N}}(u) \rangle}}} \quad \forall u, v \in {\mathcal{V'}},\end{aligned}$$ and that, on account of Poincaré’s inequality for zero mean value functions, the following norms on $V$ and $V'$: $$\begin{aligned} && \| u {\|_{V}}^{2}:= { \sideset{_{V' }}{_{ V}} {\mathop{\langle Au , u \rangle}}} + \,{m}(u)^2 \quad \forall u \in V, \nonumber \\ && \| v {\|_{V'}}^{2}:= { \sideset{_{V' }}{_{ V}} {\mathop{\langle v , {\mathcal{N}}(v -{m}(v)) \rangle}}} + \,{m}(v)^2\quad \forall v \in V', \nonumber \end{aligned}$$ are equivalent to the standard ones. It follows from the above formulae that $$\| v {\|_{V'}}^2= { \sideset{_{V' }}{_{ V}} {\mathop{\langle v , {\mathcal{N}}(v) \rangle}}}= \|{\mathcal{N}}(v) {\|_{V}}^2 \quad \forall v \in {\mathcal{V'}}.$$ It is well known that the operator $A$  extends to an operator (which will be denoted by the same symbol) $A: H \to \mathcal{Z'} $. The inverse of the restriction of $A$ to $ \mathcal{H}$ is the extension of ${\mathcal{N}}$ to an operator ${\mathcal{N}}: \mathcal{Z'} \to \mathcal{H} $. By means of the latter, we define the space $$\label{e:sob-neg} \begin{gathered} {\mathcal{W}^{-2,q}(\Omega)}:= \left\{v \in \mathcal{Z'}\, : \ {\mathcal{N}}(v) \in L^q (\Omega) \right\} \qquad \text{for a given $q>1$,} \\ \text{with the norm $\| v \|_{{\mathcal{W}^{-2,q}(\Omega)}}:=\| {\mathcal{N}}(v)\|_{L^q (\Omega)}$.} \end{gathered}$$ The following result shows that, for $q\in (2,6)$ (which is the index range relevant to the analysis to be developed in what follows, cf. ), the space ${\mathcal{W}^{-2,q}(\Omega)}$ can be identified with the dual of the space $${\mathcal{W}^{2,q'}(\Omega)}= \left\{ z \in \mathcal{V}\, : \, Az \in L^{q'}(\Omega)\right\}\,,$$ $q'$ being the conjugate exponent of $q$. We endow the latter space with the norm $ \|z \|_{{\mathcal{W}^{2,q'}(\Omega)}}:= \|Az\|_{L^{q'}(\Omega)}$, which is equivalent to the standard $W^{2,q'}$-norm by the (generalized) Poincaré inequality. \[le:sob-dual\] For $q\in (2,6)$, the operator $\mathrm{J}: {\mathcal{W}^{-2,q}(\Omega)} \to ({\mathcal{W}^{2,q'}(\Omega)})' $ defined by $$\label{def:operator} { \sideset{_{({\mathcal{W}^{2,q'}(\Omega)})' }}{_{ {\mathcal{W}^{2,q'}(\Omega)}}} {\mathop{\langle \mathrm{J}(v) , z \rangle}}}:= { \sideset{_{V' }}{_{ V}} {\mathop{\langle Az , {\mathcal{N}}(v) \rangle}}} \qquad \text{for all $z \in {\mathcal{W}^{2,q'}(\Omega)}$ and $v \in {\mathcal{W}^{-2,q}(\Omega)}$}$$ is an isomorphism. 
We preliminarily note that, since $q \in (2,6)$, the conjugate exponent $q'$ belongs to $(6/5,2)$ and, consequently, one has the following embeddings: $$\label{e:embeddings} \mathcal{H} \subset \mathcal{V'} \subset {\mathcal{W}^{-2,q}(\Omega)}, \qquad L^{q'}(\Omega) \subset \mathcal{V'}, \qquad \mathcal{Z} \subset {\mathcal{W}^{2,q'}(\Omega)}\,.$$ Clearly, the operator $\mathrm{J}$ is well defined, linear, and continuous, since, for all $z \in {\mathcal{W}^{2,q'}(\Omega)}$ and $v \in {\mathcal{W}^{-2,q}(\Omega)}$, $$\label{ineq-cont} \left|{ \sideset{_{({\mathcal{W}^{2,q'}(\Omega)})' }}{_{ {\mathcal{W}^{2,q'}(\Omega)}}} {\mathop{\langle \mathrm{J}(v) , z \rangle}}} \right| \leq \| Az\|_{L^{q'}(\Omega)} \|{\mathcal{N}}(v) \|_{L^q(\Omega)} \leq \|z \|_{{\mathcal{W}^{2,q'}(\Omega)}} \| v\|_{{\mathcal{W}^{-2,q}(\Omega)}}\,.$$ Furthermore, for every $v \in {\mathcal{W}^{-2,q}(\Omega)}$, one can choose $z_v = {\mathcal{N}}(|{\mathcal{N}}(v)|^{q-2}{\mathcal{N}}(v) )$ (note that $z_v$ is well defined and belongs to $\mathcal{V}$, since $|{\mathcal{N}}(v)|^{q-2}{\mathcal{N}}(v) \in L^{q'}(\Omega) \subset \mathcal{V'}$ by the second of ). Then, $$\begin{aligned} & { \sideset{_{({\mathcal{W}^{2,q'}(\Omega)})' }}{_{ {\mathcal{W}^{2,q'}(\Omega)}}} {\mathop{\langle \mathrm{J}(v) , z_v \rangle}}}= \|{\mathcal{N}}(v)\|_{L^q (\Omega)}^{q}, \\ & \|Az_v\|_{L^{q'}(\Omega)}= \||{\mathcal{N}}(v)|^{q-2}{\mathcal{N}}(v)\|_{L^{q'}(\Omega)}= \|{\mathcal{N}}(v)\|_{L^q (\Omega)}^{q-1}, \end{aligned}$$ so that $$\|\mathrm{J}(v)\|_{({\mathcal{W}^{2,q'}(\Omega)})'} \geq \frac{\left|{ \sideset{_{({\mathcal{W}^{2,q'}(\Omega)})' }}{_{ {\mathcal{W}^{2,q'}(\Omega)}}} {\mathop{\langle \mathrm{J}(v) , z_v \rangle}}} \right|}{\|z_v \|_{{\mathcal{W}^{2,q'}(\Omega)}}} = \frac{\|{\mathcal{N}}(v)\|_{L^q (\Omega)}^{q}}{\|{\mathcal{N}}(v)\|_{L^q (\Omega)}^{q-1}}=\|{\mathcal{N}}(v)\|_{L^q (\Omega)}= \|v\|_{{\mathcal{W}^{-2,q}(\Omega)}}\,.$$ In view of , we conclude that $\mathrm{J}$ is an isometry. In particular, it is injective and the image $\mathrm{J}({\mathcal{W}^{-2,q}(\Omega)} )$ is closed in $({\mathcal{W}^{2,q'}(\Omega)})'$. To conclude that $\mathrm{J}$ is surjective, we will prove that $$\label{image:dense} \text{$\mathrm{J}({\mathcal{W}^{-2,q}(\Omega)} )$ is dense in $({\mathcal{W}^{2,q'}(\Omega)})'$.}$$ Indeed, let $\bar{z} \in {\mathcal{W}^{2,q'}(\Omega)} $ be such that $$\label{e:test-densita} { \sideset{_{({\mathcal{W}^{2,q'}(\Omega)})' }}{_{ {\mathcal{W}^{2,q'}(\Omega)}}} {\mathop{\langle \mathrm{J}(v) , \bar{z} \rangle}}}=0 \ \ \text{for all $v \in {\mathcal{W}^{-2,q}(\Omega)}$.}$$ In particular, holds for all $v \in \mathcal{H}$, so that, also in view of , $$0 = { \sideset{_{V' }}{_{ V}} {\mathop{\langle A\bar{z} , {\mathcal{N}}(v) \rangle}}} ={ \sideset{_{V' }}{_{ V}} {\mathop{\langle v , \bar{z} \rangle}}} = \int_{\Omega} \bar{z} v \quad \text{for all $v \in \mathcal{H}$\,.}$$ From the above relation, we easily conclude that $\bar{z}=0$, whence . #### A generalization of Poincaré’s inequality. The following result will play an important role in the derivation of the *a priori estimates* of Section \[s:3.1\]. 
\[le-poinca\] Let $X$ and $Y$ be Banach spaces, with $X$ reflexive, and assume that $$\label{e:compact-embed} \text{$X \Subset Y$ with compact embedding.}$$ Consider $$\begin{aligned} & \label{funz-g} G: X \to Y \quad \text{a linear, weakly-weakly continuous functional,} \\ & \label{funz-psi} \begin{aligned} \Psi: X \to [0,+\infty) \quad & \text{a $1$-positively homogeneous,} \\ & \text{sequentially weakly lower-semicontinuous functional.} \end{aligned}\end{aligned}$$ Assume that $G$ and $\Psi$ comply with the following [*compatibility condition*]{}: for all $v \in X$, $$\label{e:comp} Gv=0 \quad \text{and} \quad \Psi(v) =0 \ \Rightarrow \ v=0\,,$$ and that $$\label{e:norm-equivalent} \exists\, C\geq 1 : \ \ \forall\, v \in X \quad \frac{1}C \left( \| v\|_{Y} + \| Gv\|_{Y} \right) \leq \|v\|_{X} \leq C\left( \| v\|_{Y} + \| Gv\|_{Y} \right)\,.$$ Then, $$\label{e:poinc-gen} \exists \, K>0: \ \ \forall\, v\in X \quad \|v\|_{X} \leq K\left( \| Gv\|_{Y} + \Psi(v) \right)\,.$$ [*Proof.*]{} Assume, by contradiction, that  does not hold: then, there exists a sequence $\{ v_n \} \subset X$ such that, for every $n \in {\mathbb{N}}$, $$\label{e:contradd} \|v_n\|_{X} > n \left( \| Gv_n\|_{Y} + \Psi(v_n) \right)\,.$$ In particular, this yields that $\|v_n\|_{X} \neq 0$ for all $n$. Letting $w_n:= v_n / \|v_n\|_{X}$ and using the $1$-homogeneity of $\Psi$, we deduce from  that $$\| Gw_n\|_{Y} + \Psi(w_n) <\frac1n \quad \text{for every $n \in {\mathbb{N}}$}\,,$$ giving $$\label{e:2.1.12}\text{ $Gw_n \to 0$ in $Y$ \ and \ $\Psi(w_n) \to 0$ as $n \to +\infty$.}$$ On the other hand, by the reflexivity of $X$, there exists a subsequence $\{ w_{n_k}\}$ weakly converging in $X$ to some $\bar{w}$. In view of –, we find $$w_{n_k} \to w \ \ \text{in $Y$}, \qquad Gw_{n_k} {\rightharpoonup}Gw \ \ \text{in $Y$}, \qquad \Psi(w) \leq \lim_{k \to +\infty} \Psi(w_{n_k})\,.$$ Hence, yields that $Gw=0$ and $\Psi(w)=0$, so that, by , $w=0$. Thus, by and , $$\lim_{k \to +\infty}\|w_{n_k}\|_{X}\leq C\lim_{k \to +\infty} \left( \|w_{n_k} \|_{Y} + \| Gw_{n_k}\|_{Y} \right) =0,$$ in contrast with the fact that $\| w_n\|_{X}=1$ for all $n \in {\mathbb{N}}$. #### A compactness criterion. Let $$\label{e:setting} \begin{gathered} \text{$\mathcal{O} \subset {\mathbb{R}}^d$, $d\geq 1$, be an open set with $|\mathcal{O}|<+\infty$,} \\ \text{ $B$ be a separable Banach space, and $ 1 \leq p <+\infty$.} \end{gathered}$$ We recall that a sequence $ \{ u_n \} \subset L^{p}(\mathcal{O};B) $ is *$p$-uniformly integrable* (or simply *uniformly integrable* if $p=1$) if $$\label{lour} \forall\, \varepsilon > 0 \quad \exists\, \delta > 0:\quad \forall\, J \subset \mathcal{O} \quad |J| < \delta\ \Rightarrow\ \sup_{n \in {\mathbb{N}}} \int_{J} \|u_n(y) \|_{B}^{p} {\, \mathrm{d}}y \leq \varepsilon.$$ We quote the following result (cf. [@Dunford-Schwartz58 Thm. III.6]) which will be extensively used in what follows. \[t:ds\] In the setting of , given a sequence $ \{ u_n \} \subset L^{p}(\mathcal{O};B) $, assume that there exist a subsequence $\{u_{n_k}\}$ and a measurable function $u :\mathcal{O} \to B$ such that $$u_{n_k} (y) \to u(y) \ \ \text{in $B$} \ \ \text{for almost all}\ y \in \mathcal{O}\,.$$ Then, $u_{n_k} \to u$ in $L^{p}(\mathcal{O};B) $ if and only if it is $ p $-uniformly integrable. Finally, for the reader’s convenience, here below we report the celebrated lower semicontinuity result due to <span style="font-variant:small-caps;">A.D. Ioffe</span> [@ioffe77]. 
\[th-ioffe\] Let $f: \mathcal{O} \times {\mathbb{R}}^n \times {\mathbb{R}}^m \to [0,+\infty]$, $n,\,m\geq 1$, be a measurable non-negative function such that $$\begin{aligned} \label{hyp:ioffe1} & f(x,\cdot,\cdot) \ \ \ \text{is lower semicontinuous on ${\mathbb{R}}^n \times {\mathbb{R}}^m$ for every $x \in \mathcal{O}$,} \\ & f(x,u,\cdot) \ \ \ \text{is convex on $ {\mathbb{R}}^m$ for every $(x,u) \in \mathcal{O} \times {\mathbb{R}}^n$.}\end{aligned}$$ Let $(u_k,v_k), \ (u,v): \mathcal{O} \to {\mathbb{R}}^n \times {\mathbb{R}}^m $ be measurable functions such that $$u_k(x) \to u(x) \quad \text{in measure in $ \mathcal{O}$,} \qquad v_k {\rightharpoonup}v \quad \text{weakly in $L^1 ( \mathcal{O}; {\mathbb{R}}^m)$}.$$ Then, $$\label{integral-lsc} \liminf_{k \to +\infty} \int_{\mathcal{O}} f(x,u_k(x),v_k(x)) \, \mathrm{d}x \geq \int_{\mathcal{O}} f(x,u(x),v(x)) \, \mathrm{d}x\,.$$ Global attractors for generalized semiflows {#ss:2.1.1} ------------------------------------------- As mentioned in the introduction, in order to study the long-time behavior of solutions to the generalized Cahn-Hilliard equation  *in the viscous case*, we rely on the theory of *generalized* semiflows introduced by <span style="font-variant:small-caps;">J.M. Ball</span> in [@Ball97]. In order to make this paper as self-contained as possible, in this section we recall the main definitions and results of this theory, closely following [@Ball97]. \[not:phase-space\] The phase space is a (not necessarily complete) metric space $({X}, {\operatorname{d}_X})$, the distance ${\operatorname{d}_X}$ inducing the *Hausdorff semidistance* $\operatorname{e}$ of two non-empty subsets $A, \, B \subset {X}$ by the formula $\operatorname{e}(A,B):= \sup_{a \in A} \inf_{b \in B} {\operatorname{d}_X}(a,b)$. \[def:generalized-semiflow\] A *generalized semiflow* ${\mathcal{S}}$ on ${X}$ is a family of maps $g:[0,+\infty) \to {X}$ (referred to as “solutions") satisfying the following properties: (P1) : ***(Existence)*** for any $g_0 \in {X}$, there exists at least one $g \in {\mathcal{S}}$ such that $g(0)=g_0$; (P2) : ***(Translates of solutions are solutions)*** for any $g \in {\mathcal{S}}$ and $\tau \geq 0$, the map $g^\tau (t):=g(t+\tau),$ $t \in [0,+\infty),$ belongs to ${\mathcal{S}}$; (P3) : ***(Concatenation)*** for any $ g $, $ h \in {\mathcal{S}}$ and $\tau \geq 0$ with $h(0)=g(\tau)$, then $z \in {\mathcal{S}}$, $ z$ being the map defined by $$\label{def:concaten} z(t):= \begin{cases} g(t) & \text{if $0 \leq t \leq \tau,$} \\ h(t-\tau) & \text{if $ t >\tau$;} \end{cases}$$ (P4) : ***(Upper-semicontinuity w.r.t. the initial data)*** if $\{g_n \} \subset {\mathcal{S}}$ and $g_n (0) \to g_0, $ then there exist a subsequence $\{g_{n_k}\}$ of $\{g_n \}$ and $g \in {\mathcal{S}}$ such that $ g(0)=g_0$ and $g_{n_k}(t) \to g(t)$ for all $t \geq 0.$ #### Orbits, $\omega$-limits and attractors. 
Given a solution $ g \in {\mathcal{S}}$, we recall that the *$\omega$-limit* $\omega(g)$ of $g$ is defined by $$\omega(g):= \{ x \in {X}\ : \ \exists \{t_n\} \subset [0,+\infty), \ t_n \to +\infty, \ \text{such that} \ \ g(t_n) \to x \}\,.$$ Similarly, the *$\omega$-limit* of a set $E \subset {X}$ is given by $$\begin{aligned} \omega(E):=\big\{ x \in {X}\ : \ & \exists \{g_n\} \subset {\mathcal{S}}\ \text{such that $\{g_n (0)\} \subset E$, $\{g_n (0)\}$ is bounded, and} \\ & \exists \{t_n\} \subset [0,+\infty), \ t_n \to +\infty,\ \text{such that $ g_n (t_n) \to x$} \big\}.\nonumber \end{aligned}$$ Furthermore, we say that $w: {\mathbb{R}}\to {X}$ is a *complete orbit* if, for any $s \in {\mathbb{R}}$, the translate map $w^s$, restricted to the positive half-line $[0,+\infty),$ belongs to $ {\mathcal{S}}$. For every $\,t \geq 0 $, we can introduce the operator $\,{T}(t): 2^{X}\to 2^{X}\,$ by setting $$\label{eq:operat-T} {T}(t)E:=\{ g(t) \ : \ g \in {\mathcal{S}}\ \ \text{with} \ \ g(0) \in E\}\quad \text{for all } E \subset {X},$$ and define, for $\tau \geq 0$, the set $$\gamma^{\tau}(E):= \cup_{t \geq \tau} {T}(t)E\,.$$ The family of operators $\{{T}(t)\}_{t \geq 0}$ defines a *semigroup* on the power set $2^{X}$. Given subsets $U, E \subset {X}$, we say that $U $ *attracts* $E$ if $\operatorname{e}(T(t)E,U) \to 0$ as $t \to +\infty$. Furthermore, we say that $U $ is *fully invariant* if $T(t)U = U$ for every $t \geq 0$. Finally, a set $\mathcal{A} \subset {X}$ is the *global attractor* for ${\mathcal{S}}$ iff it is compact, fully invariant under ${\mathcal{S}}$, and attracts all the bounded sets of ${X}$. #### **Compactness and dissipativity properties.** Let $ {\mathcal{S}}$ be a generalized semiflow. We say that $ {\mathcal{S}}$ is ***eventually bounded*** iff, for every bounded set $B \subset {X}$, there exists $\tau \geq 0$ such that $ \gamma^\tau (B)$ is bounded; ***point dissipative*** iff there exists a bounded set $B_0 \subset {X}$ such that, for any $g \in {\mathcal{S}}$, there exists $\tau \geq 0$ such that $g(t) \in B_0 $ for all $t \geq \tau$. The set $B_0$ is then called a (pointwise) *absorbing* set; ***compact*** iff, for any sequence $\{g_n \} \subset {\mathcal{S}}$ with $\{g_n (0)\}$ bounded, there exists a subsequence $\{g_{n_k} \} $ such that $ \{g_{n_k}(t) \} $ is convergent for any $t >0.$ We note that the notions that we have just introduced are not independent one from another (cf. [@Ball97 Props. 3.1 and 3.2] for more details). #### **Lyapunov function.** The notion of a *Lyapunov function* can be introduced starting from the following definitions: we say that a complete orbit $g \in {\mathcal{S}}$ is *stationary* if there exists $x \in {X}$ such that $g(t)=x$ for all $t \in {\mathbb{R}}$ - such an $x$ is then called a *rest point*. Note that the set of rest points of ${\mathcal{S}}$, denoted by $Z({\mathcal{S}})$, is closed in view of **(P4)**. A function $V: {X}\to {\mathbb{R}}$ is said to be a *Lyapunov function* for ${\mathcal{S}}$ if $V$ is continuous, $V(g(t)) \leq V(g(s))$ for all $g \in {\mathcal{S}}$ and $0 \leq s \leq t$ (i.e., $V$ decreases along all solutions), and, whenever the map $t \mapsto V(g(t))$ is constant for some complete orbit $ g$, then $ g $ is a stationary orbit. #### **Existence of the global attractor.** The following theorem subsumes the main results from [@Ball97] (cf. Thms. 3.3, 5.1, and 6.1 therein) and provides the basic criteria for the existence of the global attractor ${\mathcal{A}}$ for a generalized semiflow ${\mathcal{S}}$. 
\[thm:ball1\] Let ${\mathcal{S}}$ be an eventually bounded and compact generalized semiflow. Assume that ${\mathcal{S}}$ also admits a Lyapunov function $V$ and that $$\label{rest-bounded} \text{ the set of its rest points ${Z({\mathcal{S}})}$ is bounded.}$$ Then, ${\mathcal{S}}$ is also point dissipative, and, consequently, it possesses a global attractor. Moreover, the attractor ${\mathcal{A}}$ is unique, it is the maximal compact fully invariant subset of ${X}$, and it can be characterized as $$\label{eqn:attrattore} {\mathcal{A}}= \bigcup \{\omega(B) \ : \ \text{$B \subset {X}$ bounded}\}=\omega({X}).$$ Finally, for every $g \in {\mathcal{S}}$, $$\label{e:additional} \omega(g) \subset {Z({\mathcal{S}})}.$$ \[rem:restriction\_to\_invariant\_set\] Actually, it is immediate to check that, if ${\mathcal{S}}$ is compact, eventually bounded, and admits a Lyapunov function, then condition  can be replaced by $$\label{e:saab} \begin{gathered} \exists\, \mathcal{D} \subset {X}\,, \ \ \mathcal{D}\neq \emptyset\,, \ \ \text{such that} \ \ \begin{cases} {T}(t) \mathcal{D} \subset \mathcal{D} \quad \forall t \geq 0, \\ \text{the set $ Z({\mathcal{S}}) \cap \mathcal{D} $ is bounded in ${X}$}. \end{cases} \end{gathered}$$ Then, under these hypotheses, ${\mathcal{S}}$ also possesses a (unique) global attractor ${\mathcal{A}}\subset \mathcal{D}$ and  holds. Main results {#s:2} ============ A global existence result for the non-viscous problem {#ss:2.2} ----------------------------------------------------- #### Assumptions on the nonlinearities. We assume that $$\label{e:hyp1} \tag{H1} \begin{gathered} \alpha: {\mathbb{R}}\to {\mathbb{R}}\qquad \text{is a strictly increasing, differentiable function such that} \\ \exists\, p \geq 0, \ \ \exists\, C_1,\, C_2 >0 \, : \quad \forall\, r \in {\mathbb{R}}\qquad C_1 \left( |r|^{2p}+ 1\right) \leq \alpha'(r) \leq C_2 \left( |r|^{2p}+1\right)\,. \end{gathered}$$ Clearly, the latter growth condition entails that $$\label{e:conse1} \exists\, C_3,\, C_4,\, C_5 >0 \, : \quad \forall\, r \in {\mathbb{R}}\qquad C_3 |r|^{2p+1}-C_4 \leq \alpha(r)\text{\rm sign}(r) \leq C_5 \left( |r|^{2p+1}+1\right)\,.$$ Concerning the nonlinearity $\phi$, we require that $$\label{e:hyp2} \tag{H2} \begin{gathered} \text{dom}(\phi) = I, \ \ \text{$I$ being an open, possibly unbounded, interval $(a,b)$, $ -\infty \leq a < 0 < b \leq +\infty$,} \\ \phi \in {\mathrm{C}}^1 (I), \\ \lim_{r \searrow a} \phi(r) = -\infty,\qquad \lim_{r \nearrow b} \phi(r) = +\infty, \\ \lim_{r \searrow a} \phi'(r) =\lim_{r \nearrow b} \phi'(r) = +\infty\,. \end{gathered}$$ We shall denote by $\widehat{\phi} $ (one of) the antiderivative(s) of $\phi$. It follows from the above assumptions that $\widehat{\phi}$ is bounded from below. Hereafter, for the sake of simplicity, we assume that $$\label{phicappucciopos} \widehat{\phi}(r) \geq 0 \quad \text{for all $r \in I$.}$$ Furthermore, obviously yields that $$\label{e:hyp2-bis} \exists\, C_{\phi,1} >0 \, : \ \ \forall\, r \in I \qquad \phi'(r) \geq -C_{\phi,1}\,,$$ namely, $\phi$ is a Lipschitz perturbation of a non-decreasing function. In particular, we will use the fact that there exists a non-decreasing function $\beta: I \to {\mathbb{R}}$ such that $$\label{e:hyp2-aftermath} \phi(r) = \beta(r) -C_{\phi,1} r \qquad \forall\, r \in I\,.$$ Consequently, $\widehat{\phi}$ is a quadratic perturbation of a convex function. 
Arguing in the very same way as in [@mirzel04] (where the case $I= (-1,1)$ was considered), it can be proved that, under these conditions, the following crucial estimate holds: $$\label{e:2.2.14} \forall\, m \in (a,b)\ \ \exists\, C_m,\ C_m'>0 \, : \ \ \forall\, r \in (a-m,b-m) \quad |\phi(r+m)| \leq C_m \phi(r+m)r +C_m'\,.$$\ Finally, we also assume that $$\label{hyp:3} \tag{H3} \exists\, \sigma \in (0,1), \ \ \exists\, C_6>0\, : \quad \forall\, r \in (a,b) \quad |\phi(r)|^{\sigma} \leq C_6 \left( \widehat{\phi}(r) +1\right)\,,$$ and that the following [*compatibility condition*]{} holds between $\sigma$ and the growth index $p$ of $\alpha$ in : $$\label{hyp:4} \tag{H4} \sigma > \max\left\{\frac{6p-3}{6p+2},0\right\} \,.$$ Hence, if $p\leq 1/2$, then any $\sigma \in (0,1)$ is admissible, while if, for instance, $p=1$, then the range of admissible $\sigma$’s is $(3/8, 1)$, and it is $(9/{14},1)$ for $p=2$. \[not:2.1\] Hereafter, we will use, for every $p\geq 0$, the short-hand notation $$\label{e:index-notation} \rho_p:= \frac{2p+2}{2p+1}, \qquad \kappa_p:=\frac{6p+6}{2p+1}, \qquad \eta_{p\sigma}= \frac{6-\sigma}{(3-3\sigma)(2p+1)}\,.$$ For later convenience, we note that $\rho_p$ and $\kappa_p$ are decreasing functions of $p$ and $$\label{e:2.17} 1<\rho_p<2, \qquad 3<\kappa_p <6 \qquad \text{for every $p\geq 0$.}$$ Furthermore, it can be checked that $$\label{e:really-necessary} \eta_{p\sigma}>1 \quad \text{for every} \ p \geq 0 \ \text{and for all} \ \sigma > \max\left\{\frac{6p-3}{6p+2},0\right\}\,.$$ #### The existence result. We are now able to give the variational formulation of the boundary value problem associated with  in the non-viscous case. \[p:1\]Find a pair $(\chi,w)$ fulfilling $$\begin{aligned} & \label{1-var} \chi_t + A (\alpha(w)) =0 \qquad \text{in ${\mathcal{W}^{-2,\kappa_p}(\Omega)}$} \quad {\text{a.e.\ in}}\ (0,T)\,, \\ & \label{2-var} A\chi + \phi(\chi) =w \qquad {\text{a.e.\ in}}\ \Omega \times (0,T)\,.\end{aligned}$$ Note that, owing to Lemma \[le:sob-dual\], is equivalent to $$\label{e:difficult} \begin{aligned} { \sideset{_{{\mathcal{W}^{-2,{\kappa_p}}(\Omega)} }}{_{ {\mathcal{W}^{2,{\kappa_p}'}(\Omega)}}} {\mathop{\langle \chi_t , v \rangle}}} & + { \sideset{_{{\mathcal{W}^{-2,{\kappa_p}}(\Omega)} }}{_{ {\mathcal{W}^{2,{\kappa_p}'}(\Omega)}}} {\mathop{\langle A (\alpha(w)) , v \rangle}}}=0 \\ & \text{for all $ v \in {\mathcal{W}^{2,{\kappa_p}'}(\Omega)} \quad {\text{a.e.\ in}}\ (0,T)$.} \end{aligned}$$ \[th:1\] Under assumptions –, for every initial datum $\chi_0$ satisfying $$\label{hyp:initial-datum} \chi_0 \in V, \qquad \widehat{\phi}(\chi_0) \in L^1(\Omega)\,,$$ there exists at least a solution $(\chi,w)$ to Problem \[p:1\], with the regularity $$\begin{aligned} & \label{reg-chi} \chi \in L^2 (0,T;W^{2,6} (\Omega)) \cap L^\infty (0,T;V), \qquad \chi_t \in L^{\eta_{p\sigma}}(0,T; {\mathcal{W}^{-2,\kappa_p}(\Omega)})\,, \\ & \label{reg-w0} w \in L^2 (0,T;V), \qquad \alpha(w) \in L^{\eta_{p\sigma}}(0,T;L^{\kappa_p}(\Omega))\,,\end{aligned}$$ fulfilling the initial condition $$\label{e:init-cond} \chi(0)=\chi_0 \quad \text{in $V$.}$$ A formal proof of this result will be developed in Section \[s:3\] and rigorously justified in Appendix. 
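As an elementary illustration of the exponents appearing in Theorem \[th:1\] (the numbers below are given for orientation only), take $p=1$, so that by (H1) $\alpha$ grows like $|r|^{3}$ up to constants, and choose $\sigma=1/2$, which is admissible for (H4) since $1/2>3/8$. Then $$\rho_1=\frac43\,, \qquad \kappa_1=4\,, \qquad \eta_{1,1/2}=\frac{6-\tfrac12}{\left(3-\tfrac32\right)\cdot 3}=\frac{11}{9}>1\,,$$ so that, in this case, Theorem \[th:1\] yields $\chi_t \in L^{11/9}(0,T;{\mathcal{W}^{-2,4}(\Omega)})$ and $\alpha(w) \in L^{11/9}(0,T;L^{4}(\Omega))$.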
A global existence result for the viscous problem {#ss:2.3} ------------------------------------------------- We replace our assumptions – on $\phi$ and its antiderivative $\widehat{\phi}$ by $$\label{hyp:5} \tag{H5} \begin{gathered} \widehat{\phi} : {\mathbb{R}}\to {\mathbb{R}}\quad \text{belongs to ${\mathrm{C}}^2 ({\mathbb{R}})$ and satisfies} \\ \exists\, C_7>0\, : \quad \forall\,r \in {\mathbb{R}}\quad |\phi(r)| \leq C_7 \left( \widehat{\phi}(r) +1\right)\,. \end{gathered}$$ The latter assumption means that we consider potentials with at most an exponential growth at $\infty$, and it clearly yields that $\widehat{\phi} $ is bounded from below. Hence, as in , we again assume that $\widehat{\phi}$ takes non-negative values. Furthermore, as in the non-viscous case we require that $$\label{e:hyp2-bis-visco} \tag{H6} \begin{gathered} \exists\, C_{\phi,2} >0 \, : \ \ \forall\, r \in {\mathbb{R}}\qquad \phi'(r) \geq -C_{\phi,2}\,. \end{gathered}$$ This and  imply that $$\label{e:lambda-convex} \ r \in {\mathbb{R}}\mapsto \widehat{\phi}(r) +\frac{C_{\phi,2}}2 r^2 \ \ \text{is convex and bounded from below.}$$ Let us point out that  yields $$\label{e:phi-add} |\widehat{\phi}(r)| \leq |\widehat{\phi}(0)| + |\phi(r)||r|+ \frac{C_{\phi,2}}{2}r^2 \quad \text{for all $r \in {\mathbb{R}}$.}$$ Indeed, it follows from  and an elementary convexity inequality that, for every $r \in {\mathbb{R}}$, $$\widehat{\phi}(0) - \widehat{\phi}(r) - \frac{C_{\phi,2}}{2}r^2 \geq -r\left( \phi(r) + C_{\phi,2}r \right)\,,$$ whence we deduce  with straightforward algebraic manipulations. We will address the analysis of the Cahn-Hilliard equation  in the viscous case under the aforementioned assumptions. The related variational formulation reads \[p:2\]Given $\delta >0$, find a pair $(\chi,w)$ fulfilling $$\begin{aligned} & \label{1-var-better} \chi_t + A (\alpha(w)) =0 \qquad {\text{a.e.\ in}}\ \Omega \times (0,T)\,, \\ & \label{2-var-better} \delta \chi_t+ A\chi + \phi(\chi) =w \qquad {\text{a.e.\ in}}\ \Omega \times (0,T)\,. \end{aligned}$$ #### The existence result. \[th:2\] Assume , , and . Then, for every initial datum $\chi_0$ complying with , there exists at least a solution $(\chi,w)$ to Problem \[p:2\], with the regularity $$\begin{aligned} & \label{reg-chi-bis} \chi \in L^2 (0,T;Z) \cap L^\infty (0,T;V) \cap H^1 (0,T;H)\,, \\ & \label{reg-w} w \in L^2 (0,T;V) \cap L^{2p+2}(0,T;L^\infty(\Omega)), \qquad \alpha(w) \in L^{\rho_p}(0,T;Z)\,,\end{aligned}$$ and such that $\chi$ satisfies the initial condition . We refer to Section \[s:3\] for a formal proof of Theorem \[th:2\] and to Appendix for all rigorous calculations. In addition, we also have the following regularity result, which plays a key role in Section \[ss:2.4\]. \[prop:regularized\] Assume , , and . Assume that, in addition, $\phi$ satisfies $$\label{e:addphi} \widehat{\phi} \in \mathrm{C}^2 ({\mathbb{R}}) \ \ \text{and} \ \ \exists\, C_{\phi,3}>0\,: \ \ \forall\, r \in {\mathbb{R}}\quad |\phi{'}(r)| \leq C_{\phi,3}(1+|r|^4)\,.$$ Then, for all $0 <\tau <T$, the pair $(\chi,w)$ has the further regularity $$\begin{aligned} \label{e:further-reg-chi} & \chi \in L^\infty(\tau, T; Z) \cap H^1 (\tau,T; V)\,, \\ & \label{e:further-reg-w} \alpha(w) \in L^{\rho_p} (\tau, T; H^3(\Omega))\,.\end{aligned}$$ In particular, if $\chi_0\in Z$, then the above properties hold for any $\tau\in [0,T)$. 
\[uniform\] From the proof of Proposition \[prop:regularized\], it is not difficult to recover a uniform estimate of the following form: $$\label{uniform-gronwall} \Vert\chi\Vert_{ L^\infty(\tau, T; Z) \cap H^1 (\tau,T; V)} + \Vert\alpha(w)\Vert_{L^{\rho_p} (\tau, T; H^3(\Omega))} \le Q(\tau^{-1},\Vert \chi_0\Vert_V),$$ where $Q$ is a suitable function which is nondecreasing with respect to both arguments. Well-posedness for the viscous problem -------------------------------------- #### Continuous dependence on the initial data and uniqueness. We will prove uniqueness (and continuous dependence) results for Problem \[p:2\] under more restrictive assumptions on $\alpha$ and on the growth of the function $\phi$. In particular, we are going to consider two sets of assumptions. First, we will suppose that $\phi$ behaves like a polynomial of degree at most $3$. For the sake of simplicity and without loss of generality, we will carry out our analysis in the case when $\phi$ is the derivative of the double-well potential $\widehat{\phi}(r)=(r^2-1)^2/4$. Furthermore, we will replace  by $$\label{e:hyp1-bis} \tag{H7} \begin{gathered} \alpha: {\mathbb{R}}\to {\mathbb{R}}\qquad \text{is a strictly increasing and differentiable function such that} \\ \ \exists\, C_{9},\, C_{10} >0 \, : \quad \forall\, r \in {\mathbb{R}}\qquad C_{9} \leq \alpha'(r) \leq C_{10}\,, \end{gathered}$$ and – by $$\label{e:hyp7-bis} \tag{H8} \phi(r) =r^3-r \qquad \forall\, r \in {\mathbb{R}}\,.$$ \[th:3\] Assume  and . Let $\chi_{0}^{1}$ and $\chi_{0}^{2} $ be two initial data for Problem \[p:2\] fulfilling  and set $M_{*}:= \max_{i=1,2} \{\| \chi_{0}^{i}{\|_{V}}\} $; let $ \chi_{i} $, $ i=1,2 $, be the corresponding solutions. Then, for every $\delta>0$, there exists a positive constant $ S_{\delta} $, also depending on $$\label{e:only-depe} \text{$ M_{*} $, $ T $, $ |\Omega| $, $C_{9}$, and $C_{10} $,}$$ such that $$\label{contdepV} \| \chi_1 (t) - \chi_2 (t) \|_{V} + \| \chi_1 - \chi_2 \|_{H^1 (0,t;H) \cap L^2 (0,t;Z)} \leq S_{\delta} \|\chi_{0}^{1} - \chi_{0}^{2} \|_{V} \quad \forall t \in [0,T].$$ Our second continuous dependence result holds in the more general frame of assumptions of Proposition \[prop:regularized\], but for more regular initial data. Indeed, we have \[th:3.2\] Assume that holds for some $p\in [0,1]$, and that $\phi$ complies with , , and . Let $\chi_{0}^{1}$ and $\chi_{0}^{2} $ be two initial data for Problem \[p:2\] such that $\chi_{0}^{i} \in Z$ and $\widehat{\phi}(\chi_{0}^{i}) \in L^1 (\Omega)$ for $i=1,2$, and let $ \chi_{i} $, $ i=1,2 $, be the corresponding solutions. Then, for every $\delta>0$, there exists a positive constant $ S_{\delta} $, also depending on $T$, $|\Omega|$, $C_1$, $C_2$ and $M^{*}:= \max_{i=1,2} \{\| \chi_{0}^{i}\|_Z \}$, such that estimate holds for all $t \in [0,T]$.
Global attractor and exponential attractors for the viscous problem {#ss:2.4} ------------------------------------------------------------------- The *energy functional* associated with Problem \[p:2\] reads $$\label{e:ene-funct} {\mathcal{E}}: {X}\to {\mathbb{R}}, \ \ \ \ {\mathcal{E}}(v):= \frac12 \int_{\Omega} |\nabla v|^2 + \int_{\Omega} \widehat{\phi}(v) \ \ \text{for all $v \in {X}$.}$$ Consequently, we introduce the phase space $({X},{\operatorname{d}_X})$ of energy bounded solutions, defined by $$\label{e:pspace} \begin{aligned} & {X}= \left\{v \in V\, : \ \widehat{\phi}(v) \in L^1 (\Omega) \right\}, \\ & {\operatorname{d}_X}(v_1,v_2) = \| v_1 -v_2 \|_{H^1 (\Omega)} + \left\|\widehat{\phi}(v_1) - {\widehat{\phi}(v_2)} \right\|_{L^1(\Omega)} \qquad \text{for all $v_1, \, v_2 \in {X}$.} \end{aligned}$$ The following definition details the properties of the solutions to Problem \[p:2\] to which our long-time analysis will apply. \[def:solp2\] We say that a function $\chi : [0,+\infty) \to {X}$ is a *solution to Problem \[p:2\] on $(0,+\infty)$* if, for all $T>0$, $\chi$ enjoys regularity  on the interval $(0,T)$ and there exists a function $w$, with regularity  for all $T>0$, such that equations – hold almost everywhere on $\Omega \times (0,+\infty)$. We set $$\label{e:gen-semiflow} \mathcal{S}= \left \{ \chi : [0,+\infty ) \to X \, : \ \text{$\chi$ is a solution to Problem~\ref{p:2} on $(0,+\infty)$} \right\}\,.$$ We assume that, besides , $\alpha$ complies with the following condition, slightly stronger than : $$\label{e:hyp-add-alpha} \tag{H9} \exists\, \mathsf{c}_{\alpha}>0, \ \ \exists\, \Psi: {\mathbb{R}}\to [0,+\infty) \ \text{convex}\,: \ \ \forall \, r \in {\mathbb{R}}\, \quad \alpha(r) r - \mathsf{c}_{\alpha}|r|^{2p+2} = \Psi(r)\,.$$ Hence, our first result asserts that the solution set $\mathcal{S}$ is a *generalized semiflow* in the sense of Definition \[def:generalized-semiflow\]. \[prop:2.1\] Assume , –. Then, 1. every $\chi \in \mathcal{S}$ (cf. ) complies with the *energy identity* $$\label{e:enid} \delta\int_s^t \int_{\Omega} |\chi_t|^2 + \int_s^t \int_{\Omega} \alpha'(w)|\nabla w|^2 + {\mathcal{E}}(\chi(t)) = {\mathcal{E}}(\chi(s)) \quad \text{for all $0 \leq s \leq t$,}$$ the function $w:(0,+\infty) \to V$ being defined by  on $\Omega \times (0,+\infty)$. 2. Assume that $\alpha$ in addition complies with . Then, the set $\mathcal{S}$ is a generalized semiflow in the phase space , and its elements are continuous functions from $[0,+\infty)$ onto $X$. We prove our main result on the long-time behavior of the solutions to Problem \[p:2\] under a further condition on $\phi$, which in particular implies (and thus replaces) , namely $$\label{lim-infty-phi} \tag{H10} \begin{aligned} \lim_{r\to +\infty}\phi(r)=+\infty, \qquad \lim_{r\to -\infty}\phi(r)=-\infty\,, \\ \lim_{r\to +\infty}\phi'(r)=\lim_{r\to -\infty}\phi'(r)=+\infty\,. \end{aligned}$$ \[th:4\] Assume , , , and . 
For a given $\mathrm{m}_0 >0$, denote by $\mathcal{D}_{\mathrm{m}_0}$ the set $$\label{e:fixed-mean-value} \mathcal{D}_{\mathrm{m}_0}= \left \{ \chi \in X\, : \ |m(\chi)| \leq {\mathrm{m}_0} \right\}\,.$$ Then, the semiflow $\mathcal{S}$ possesses a unique global attractor $\mathcal{A}$ in $\mathcal{D}_{\mathrm{m}_0}$, given by $$\label{e:att} \mathcal{A}: =\bigcup\left \{\omega(D)\, : \ D \subset \mathcal{D}_{\mathrm{m}_0} \ \text{bounded}\right\}\,.$$ Finally, we have the following enhanced regularity for the elements of the $\omega$-limit of every trajectory: $$\label{e:enhanced-regularity} \forall\, p \in [1,+\infty) \ \ \exists\,C_p>0\,: \ \ \forall\,\chi \in \mathcal{S}, \ \ \forall\, \bar{\chi} \in \omega(\chi) \quad \| \bar{\chi}\|_{W^{2,p}(\Omega)} + \| \widehat{\phi}(\bar{\chi})\|_{L^p(\Omega)} \leq C_p\,.$$ \[rem:poly-advantages\] Notice that, in the case $$\label{e:special-case} \text{ $\widehat{\phi}$ is a polynomial of even degree $\mathsf{m} \geq 4$, with a positive leading coefficient,}$$ then conditions and are satisfied. \[zeta\] In addition to hypotheses , , , and  of Theorem \[th:4\], assume that $\phi$ complies with . Then, the enhanced regularity estimate  holds for $\chi$. This regularity is reflected in the further regularity $$\label{e:further-reg-att} \mathcal{A}\subset Z,$$ for the global attractor $\mathcal{A}$, which holds provided that one works with the (smaller) set of solutions to Problem \[p:2\] arising from the approximation procedure which will be detailed in Appendix. In fact, the estimates leading to  can be rigorously justified only for this approximate problem, as we will see in the proof of Proposition \[prop:regularized\], cf. Section \[s:3.3\]. Now, the aforementioned family of “approximable” solutions to Problem \[p:2\] (see, e.g., [@BabinVishik92; @rossi-segatti-stefanelli08; @schimperna07; @segatti06] for analogous constructions) complies with the properties defining a generalized semiflow, except for the concatenation axiom. This has motivated the introduction in [@rossi-segatti-stefanelli08; @segatti06] of the (slightly more general) notion of *weak* global attractor, tailored to the *weak* generalized semiflows without the concatenation property. Hence, relying on the abstract results of [@rossi-segatti-stefanelli08; @segatti06] and arguing as in the proof of Theorem \[th:4\], it is straightforward to prove that the semiflow associated with the approximable solutions to Problem \[p:2\] admits a *weak* global attractor for which  holds. On the other hand, Theorem \[th:5\] below shows that, under the stronger assumptions of Theorem \[th:3.2\], the semiflow possesses the standard global attractor $\mathcal{A}$ satisfying , namely, $\mathcal{A}$ is a compact and invariant set which attracts (in the $V$-metric) all bounded sets of initial data as time goes to infinity. We conclude this section by showing that it is also possible to construct an exponential attractor through the short-trajectories approach developed in [@malek-prazak]. Let us first set for a given $\tau>0$ $$X_\tau=L^2(0,\tau;V),\quad Y_\tau=\left\{u\in L^2(0,\tau;Z)\,:\, u_t \in L^2(0,\tau;H)\right\}$$ and observe that $Y_\tau$ is compactly embedded in $X_\tau$. Under assumptions , , , and , we know that, for any $\chi_0\in V$ and any $T>0$, there exists a pair $(\chi,w)$ which solves Problem \[p:2\] with the regularity , , , (cf. Theorem \[th:2\] and Proposition \[prop:regularized\]). In particular, $\chi\in Y_T$. 
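Let us note, in passing, that the compactness of the embedding $Y_\tau \subset X_\tau$ can be justified, for instance, by means of the Aubin–Lions–Simon compactness results collected in [@simon]: since the embedding $Z \subset V$ is compact and the embedding $V \subset H$ is continuous, every set which is bounded in $L^2(0,\tau;Z)$ and whose elements have time derivatives bounded in $L^2(0,\tau;H)$ is relatively compact in $L^2(0,\tau;V)=X_\tau$.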
In addition, thanks to  and arguing in the same way as in the forthcoming Section \[s:3.1\], it is not difficult to show that $\Vert \chi\Vert_{Y_T}$ can be estimated uniformly with respect to $\Vert\chi_0\Vert_V$, . The energy identity also entails the existence of a bounded set $B^0\subset V$ such that, if $(\chi,w)$ is a solution to Problem \[p:2\] with the aforementioned properties, then there exists $t_0>0$, only depending on $\Vert\chi_0\Vert_V$, such that $\chi(t)\in B^0$ for all $t\geq t_0$ and $\chi(t)\in B^0$ for all $t\geq 0$ whenever $\chi_0\in B^0$ (see the proof of the eventual boundedness of $\mathcal{S}$ in Section \[ss:5.2\]). Let us now consider the set $\mathcal{X}_\ell=\{\chi : (0,\ell)\to V\}$ of all the $\ell$-trajectories $\chi$ such that $(\chi,w)$ is a solution to Problem \[p:2\] satisfying , , , . Then, we endow this set with the $X_\ell$-topology (note that it might be a non-complete metric space). Moreover, denoting by $V_w$ the space $V$ endowed with the weak topology, we have $\mathcal{X}_\ell\subset C^0([0,\ell];V_w)$. Consequently, any $\ell$-trajectory makes sense pointwise. From now on, we assume that assumption holds for some $p\in[0,1]$. Thanks to , for any $\ell$-trajectory, there exists $\tau\in (0,\ell)$ such that $\chi(\tau)\in Z$. This is sufficient to conclude that $\chi$ is unique from $\tau$ on, as a consequence of Proposition \[prop:regularized\] and Theorem \[th:3.2\]. Therefore, if $\chi\in \mathcal{X}_\ell$ and $T>\ell$, then there exists a unique $\tilde \chi\in\mathcal{X}_T$ such that $\tilde\chi\vert_{[0,\ell]} = \chi$. Thus, we can define a semigroup $L_t$ on $\mathcal{X}_\ell$ by setting $$(L_t\chi)(\tau):= \tilde\chi(t+\tau),\qquad \tau\in [0,\ell],$$ where $\tilde\chi$ is the unique element of $\mathcal{X}_{\ell+\tau}$ such that $\tilde\chi\vert_{[0,\ell]} = \chi$. Let us now set $$B^0_\ell:=\left\{\chi\in\mathcal{X}_\ell\,:\, \chi(0)\in B^0\right\}.$$ Then, by Proposition \[prop:regularized\], we can infer that the set $\left\{\chi\vert_{[\ell/2,\ell]}\,:\, \chi\in B^0_\ell\right\}$ is bounded in $L^\infty(\ell/2,\ell;Z)$. Hence, we can prove a continuous dependence estimate like , which allows us to apply [@malek-prazak Lemma 2.1] and deduce that $L_t$ is Lipschitz continuous on $B^0_\ell$, uniformly with respect to $t\in [0,\tau]$ for any fixed $\tau>0$. Observe that, arguing as in Section \[s:3.3\], we can prove that $B^1_\ell=\overline{L_\tau(B^0_\ell)}^{X_\ell} \subseteq B^0_\ell$ for some $\tau>0$. From this fact we deduce that the dynamical system $(\mathcal{X}_\ell,L_t)$ has a global attractor $\mathcal{A}_\ell$ (see [@malek-prazak Thm. 2.1]). In addition, $L_\tau: \mathcal{X}_\ell \to Y_\ell$ is Lipschitz continuous for some $\tau>0$. Indeed, recall that $B^1_\ell$ is bounded in $L^\infty(0,\ell;Z)\cap H^1(0,\ell;V)$ and use . Thus, on account of [@malek-prazak Thm. 2.2], we can infer that $\mathcal{A}_\ell$ has finite fractal dimension. In order to go back to the original geometric space $V$, we introduce the evaluation mapping $e: \mathcal{X}_\ell \to V,$ $e(\chi):=\chi(\ell)$. Then, we set $B^1:=e(B^1_\ell)$ and we note that, for any $\chi_0\in B^1$, there is a unique solution to Problem \[p:2\], so that the solution operator $S_t$ is well defined on $B^1$ and $S_t(B^1)\subseteq B^1$, for all $t\geq 0$. In addition, $e$ is (Lipschitz) continuous on $B^1_\ell$ (use and [@malek-prazak Lemma 2.1] once more). Therefore, we use [@malek-prazak Thm. 
2.4] to deduce that $\mathcal{A}:=e(\mathcal{A}_\ell)$ is the finite-dimensional global attractor of the dynamical system $(B^1,S_t)$. It remains to prove the existence of an exponential attractor. We already know that $L_t$ is Lipschitz continuous on $B^1_\ell$, uniformly with respect to $t\in [0,\tau]$ for every fixed $\tau>0$ (see above). Thus, we only need to show that $t\mapsto L_t\chi$ is Hölder continuous with values in $V$, uniformly with respect to $\chi\in B^1_\ell$. This follows from [@malek-prazak Lemma 2.2], recalling that $B^1_\ell$ is, in particular, bounded in $H^1(0,\ell;V)$. Hence, $(\mathcal{X}_\ell,L_t)$ has an exponential attractor $\mathcal{E}_\ell$ and $\mathcal{E}:=e(\mathcal{E}_\ell)$ is an exponential attractor for $(B^1,S_t)$. Summing up, we have proved the following. \[th:5\] Assume that holds for some $p\in[0,1]$. Also, assume , , and . Then, there exists a bounded invariant set $B^1\subset V$ such that Problem \[p:2\] generates a dynamical system $(B^1,S_t)$ which possesses an exponential attractor $\mathcal{E}$. In addition, the system also has a global attractor $\mathcal{A}$ with finite fractal dimension. Note that, in the framework of Theorem \[th:5\], neither assumption  nor  is needed. Proofs of Theorems \[th:1\] and \[th:2\] {#s:3} ======================================== #### Scheme of the proofs of Theorems \[th:1\] and \[th:2\]. We will prove Theorems \[th:1\] and \[th:2\] by passing to the limit in a suitable approximation scheme for Problems \[p:1\] and \[p:2\]. For the sake of readability, we postpone the detailed description of such a scheme to the Appendix. In Section \[s:3.1\], we will instead perform all estimates leading to the aforementioned passage to the limit directly on systems – and –. Note that, at this stage, some of the following calculations will only be formal, cf. Remark \[solo-formale\] below. Their rigorous justification will be given in the Appendix, see Section \[ss:a.1\]. Next, in Section \[s:3.2\] (in Section \[s:3.3\], respectively), we will carry out a passage to the limit in some unspecified approximation scheme for Problem \[p:1\] (for Problem \[p:2\], respectively) and conclude the (formal) proof of Theorem \[th:1\] (of Theorem \[th:2\], respectively). In Section \[ss:a.2\], we will adapt the limiting arguments developed in Sections \[s:3.2\] and \[s:3.3\] to the approximation scheme for Problems \[p:1\] and \[p:2\] and carry out the rigorous proofs of the related existence theorems. \[not:3.1\] We will perform the a priori estimates on systems – and –, distinguishing the ones which hold both in the viscous and the non-viscous cases from the ones which depend on the constant $\delta$ in  (which can be either strictly positive or equal to zero), and on our different assumptions on the nonlinearity $\phi$ in the viscous and non-viscous cases. Accordingly, we will use the generic notation $C$ for most of the constants appearing in the forthcoming calculations and depending on the problem data, and $C_\delta$ ($C_0$, respectively) for those constants *substantially* depending on the problem data and on $\delta >0$ (on $\delta =0$, respectively). We will adopt the same convention for the constants $S^i$, $S^i_\delta$, $S_0^i$, $i\geq 1$. A priori estimates {#s:3.1} ------------------ #### First a priori estimate. We test  by $w$, by $\chi_t$, add the resulting relations, and integrate over some time interval $(0,t) \subset (0,T)$.
Elementary calculations lead to $$\label{est:1} \int_0^t \int_{\Omega} \alpha'(w) |\nabla w|^2 + \delta \int_0^t \| \chi_t \|_{H}^2 + \frac12 \| \nabla \chi(t) \|_{H}^2 + \int_{\Omega} \widehat{\phi} \left( \chi(t)\right) =\frac12 \| \nabla \chi_0 \|_{H}^2 + \int_{\Omega} \widehat{\phi} \left( \chi_0\right)\,.$$ Recalling , the second of  (which, in particular, yields that $\alpha'$ is bounded from below on ${\mathbb{R}}$ by a positive constant) and the positivity of $\widehat{\phi}$ (cf. ), we conclude that, for some constant $S^1 >0$, $$\label{e:stima1} \| \nabla w \|_{L^2 (0,T;H)} + \| \nabla \chi \|_{L^\infty (0,T;H)} + \| \widehat{\phi}(\chi)\|_{L^\infty (0,T;L^1 (\Omega))} \leq S^1.$$ #### First a priori estimate in the viscous case. In the case $\delta>0$, from the previous a priori estimate we also have $$\label{e:stima1-delta} \| \chi_t \|_{L^2 (0,T;H)} + \| \chi\|_{L^\infty (0,T;V)} \leq S_\delta^1\,.$$ #### Second a priori estimate. We test  by $1$ and find ${m}(\chi_t)=0$ a.e. in $(0,T)$, so that, in particular, $$\label{e:constant-m-value} {m}(\chi(t))=m_0:={m}(\chi_0) \qquad \forall\, t \in [0,T].$$ Hence, testing  by $1$, we obtain $$\label{est:2} {m}(\phi(\chi(t))) = {m}(w(t)) \qquad {\text{for a.a.}}\ t \in (0,T).$$ #### Second a priori estimate in the non-viscous case. It follows from and the Poincaré inequality that $$\label{est:2-deltazero} \| \chi\|_{L^\infty (0,T;V)} \leq S^2\,.$$ #### Third a priori estimate in the non-viscous case. We test  by $\chi-{m}(\chi)$: we have, for a.e. $t \in (0,T)$, $$\label{est:3} \begin{aligned} \| \nabla \chi(t) \|_{H}^2 + \int_{\Omega} \phi(\chi(t)) \left( \chi(t) - {m}(\chi(t)) \right) &= \int_{\Omega} w(t) \left( \chi(t) - {m}(\chi(t)) \right) \\ & = \int_{\Omega} \left( w(t) - {m}(w(t)) \right) \left( \chi(t) - {m}(\chi(t)) \right)\\ & \leq C \| \nabla \chi(t) \|_{H} \| \nabla w(t) \|_{H} \leq C S^1 \| \nabla w(t) \|_{H} , \end{aligned}$$ the latter estimate ensuing from the Poincaré inequality for zero mean value functions and the previous . On the other hand, and  yield that there exist constants $C_{m_0}, C_{m_0}'>0$ such that, for a.e. $t \in [0,T]$, $$\label{e:useful-later} \int_{\Omega} |\phi( \chi(t))| \leq C_{m_0} \int_{\Omega} \phi(\chi(t)) \left( \chi(t) - {m}(\chi(t)) \right) + C_{m_0}' \,.$$ Combining this with , we deduce that there exists $C>0$, also depending on $C_{m_0}$ and on $C_{m_0}'$, such that, for a.e. $t \in (0,T)$, $$\label{est:4} \int_{\Omega} |\phi( \chi(t))| \leq C \left(\| \nabla w(t) \|_{H} + 1\right)\,.$$ Thus, in view of , we obtain an estimate for $\phi(\chi)$ in $L^2 (0,T; L^1 (\Omega))$. Finally, due to , we find $$\| {m}(w) \|_{L^2 (0,T)} \leq C_0.$$ Hence, by  and the Poincaré inequality, we conclude that $$\label{e:stima3} \| w \|_{L^2 (0,T;V)} \leq S^1_0\,.$$ #### Third a priori estimate in the viscous case. Estimate  for $\widehat{\phi}(\chi)$ and  yield that $$\label{est:3-delta} \| \phi(\chi) \|_{L^\infty (0,T; L^1 (\Omega)) } \leq S_\delta^2\,.$$ Recalling , we immediately infer that $$\label{est:4-delta} \| {m}(w) \|_{L^\infty (0,T)} \leq S_\delta^3,$$ whence, again, $$\label{est:5-delta} \| w \|_{L^2 (0,T; V)} \leq S_\delta^4.$$ #### Fourth a priori estimate in the non-viscous case. We preliminarily observe that, thanks to , equation  can be rewritten as $$\label{e:more-convenient_form} A\chi + \beta(\chi) = w + C_{\phi,1} \chi \qquad {\text{a.e.\ in}}\ \Omega \times (0,T)\,.$$ Notice that, in view of  and , the right-hand side of  belongs to $L^2 (0,T; L^6 (\Omega))$. 
Hence, we can test  by $|\beta(\chi)|^4 \beta(\chi)$ and easily conclude that $$\|A\chi\|_{L^2 (0,T; L^6 (\Omega))} +\|\beta(\chi)\|_{L^2 (0,T; L^6 (\Omega))} \leq C_0\,.$$ Then, also by standard elliptic regularity results, we find $$\label{e:stima3-bis} \| \phi(\chi)\|_{L^2 (0,T; L^6 (\Omega))} + \| \chi \|_{L^2 (0,T;W^{2,6} (\Omega))} \leq S^2_0\,.$$ #### Fourth a priori estimate in the viscous case. We combine  and and argue by comparison in . Relying on  and on the related elliptic regularity estimate, we have $$\label{e:useful-again} \| \phi(\chi) \|_{L^2 (0,T;H)} \leq S^5_\delta\,,$$ as well as an estimate for $A\chi$ in $L^2 (0,T; H)$, so that $$\label{e:stima3-delta} \| \chi \|_{L^2 (0,T;Z)} \leq S^6_\delta\,.$$ #### Fifth a priori estimate. It follows from  and  that $\int_0^T \int_{\Omega} |w|^{2p} |\nabla w|^2 \leq C$, whence we conclude that $$\label{est:3.14} \| \nabla (|w|^{p}w) \|_{L^2 (0,T;H)} \leq S^3\,.$$ #### Sixth a priori estimate in the non-viscous case. From , , and , we deduce that $$\label{e:crucial0} \||\phi(\chi)|^{\sigma}\|_{L^{2/\sigma}(0,T; L^{6/\sigma} (\Omega)) \cap L^\infty (0,T; L^1 (\Omega))} \leq C_0\,.$$ Using the interpolation inequality $$\forall\, v \in L^1 (\Omega) \cap L^{6/\sigma} (\Omega) \quad \| v \|_{L^{1/\sigma} (\Omega)} \leq \| v \|_{L^1 (\Omega)}^{\theta}\, \| v \|_{L^{6/\sigma} (\Omega)}^{1-{\theta}}, \quad \text{with ${\theta}= \frac{5\sigma}{6-\sigma}$,}$$ we obtain the estimate $$\label{e:crucial} \||\phi(\chi)|^{\sigma}\|_{L^{q_\sigma}(0,T; L^{1/\sigma} (\Omega))} \leq C_0, \quad \text{with $q_\sigma=\frac{2}{\sigma} \, \frac{1}{1-{\theta}}= \frac{6-\sigma}{3\sigma-3\sigma^2}$,}$$ whence a bound for $\phi(\chi)$ in $L^{\sigma q_\sigma} (0,T; L^1 (\Omega))$. Taking into account , we conclude that $$\label{e:3.15} \| {m}(w) \|_{L^{\sigma q_\sigma}(0,T)} \leq C_0\,, \quad \text{whence} \quad \| |{m}(w)|^{p+1} \|_{L^{(\sigma q_\sigma)/(p+1)}(0,T)} \leq C_0\,.$$ On the other hand, applying the *nonlinear* Poincaré inequality  with the choices $X=V$, $Y=H$, $Gv= \nabla v$, and $\Psi(v)= |\Omega|^{-p-1}|\int_{\Omega} |v|^{\frac{1}{p+1}} \mathrm{sign}(v)|^{p+1}$, where $v=|w|^p w$, we find $$\label{e:poinc-est} \||w|^p w \|_{V} \leq K \left( \| \nabla (|w|^{p}w) \|_{H} + \left| {m}(w)\right|^{p+1}\right)\,.$$ Therefore, combining estimate  for $|{m}(w)|^{p+1}$ with , we finally obtain, owing to the Poincaré inequality , $$\label{e:to-be-cited3} \| |w|^p w\|_{L^{(\sigma q_\sigma)/(p+1)}(0,T; V)} \leq C_0.$$ Using the embedding $V \subset L^6 (\Omega)$ and the growth  for $\alpha$, we infer \[e:est-refined\] $$\label{e:est-refined_1} \| \alpha(w) \|_{L^{\eta_{p\sigma}} (0,T; L^{\kappa_p} (\Omega))} \leq S_0^3\,$$ (where we have used the fact that $(\sigma q_\sigma)/(2p+1)$ equals the index $\eta_{p\sigma}$ defined in ). Hence, by comparison in , we also conclude that $$\label{e:3.18} \| \chi_t \|_{L^{\eta_{p\sigma}} (0,T; {\mathcal{W}^{-2,{\kappa_p}}(\Omega)})} \leq S_0^4$$ (see again  for the definition of $\kappa_p$). #### Sixth a priori estimate in the viscous case. Combining , and the Poincaré-type inequality , we deduce an estimate for $|w|^{p}w $ in $L^2 (0,T;V)$. Then, arguing in the same way as for , we have $$\label{e:to-be-quoted} \| \alpha(w) \|_{L^{\rho_p} (0,T; L^{\kappa_p} (\Omega))} \leq C_\delta,$$ the index $\rho_p$ being defined in . Now, in view of estimate  for $\chi_t$ in $L^2 (0,T;H)$, a comparison in  yields an estimate for $A(\alpha (w))$ in $L^2 (0,T;H)$.
By elliptic regularity results, we finally conclude that $$\label{e:3.19} \| \alpha(w) \|_{L^{\rho_p} (0,T; Z)} \leq S_\delta^7\,.$$ #### Seventh a priori estimate in the non-viscous case. Our aim is now to show that $$\label{useful2} \| \phi(\chi) \|_{L^{\sigma q_\sigma} (0,T;L^{6}(\Omega))} \leq S_0^5, \qquad \text{with $\sigma q_\sigma = \frac{6-\sigma}{3-3\sigma}>2$\,.}$$ Indeed, again recalling the embedding $V \subset L^6 (\Omega)$, we observe that yields an estimate for $w$ in $L^{\sigma q_\sigma}(0,T; L^{6p+6}(\Omega))$. Then, taking into account estimate  for $\chi$ in $L^\infty (0,T; L^6 (\Omega))$, together with the aforementioned elliptic regularity argument, we find estimate  by a comparison in . #### Seventh a priori estimate in the viscous case. We combine estimate , the continuous embedding $Z \subset L^\infty(\Omega)$, and the growth condition  to deduce an estimate for $w$ in $L^{\rho_p (2p+1)} (0,T;L^\infty (\Omega))$, whence $$\label{e:3.22} \| w\|_{L^{2p+2} (0,T; L^{\infty}(\Omega))} \leq S_\delta^8\,.$$ \[solo-formale\] Notice that all the a priori estimates for the viscous Problem \[p:2\] are in fact rigorously justified on system –. This has significant repercussions on the long-time analysis of Problem \[p:2\]. Indeed, this allows us to work with the semiflow associated with the solutions to Problem \[p:2\] (cf. ) and prove the existence of a global attractor in the sense of [@Ball97]. However, as pointed out in Remark \[zeta\], if we address further regularity properties of the attractor (e.g., ), then we need additional estimates which cannot be performed directly on system –, due to insufficient regularity of the solutions. Thus, we have to rely on some approximation. On the one hand, this leads to a smoother attractor $\mathcal{A}$, but, on the other hand, we lose the concatenation property of the trajectories (cf. [@rossi-segatti-stefanelli08; @segatti06]); moreover, only trajectories which are limits of the approximation scheme will be attracted by the smoother attractor $\mathcal{A}$. We also point out that the viscous system – cannot be used as an approximation for the non-viscous problem. Indeed, it is not difficult to realize that the fourth a priori estimates – (yielding a bound for $\phi(\chi)$ which plays a crucial role in the ensuing calculations) are not compatible with the term $\delta\chi_t$ in . This fact seems to suggest the use of two different approximation schemes for Problem \[p:1\] and Problem \[p:2\], which would lead to cumbersome and repetitious calculations. In order to circumvent this problem, we will construct in the Appendix an approximation scheme depending on two distinct regularization parameters (in addition to the viscosity coefficient $\delta$) and prove the existence of solutions to Problem \[p:1\] by passing to the limit in three steps. Since the (rigorous) proof of existence for Problem \[p:2\] can be performed along the very same lines, we have chosen not to detail it in the Appendix. Proof of Theorem \[th:1\] {#s:3.2} ------------------------- Let $\{ (\chi_n, w_n )\} $ be some sequence of approximate solutions to Problem \[p:1\].
Due to estimates , , , , and , applying standard compactness and weak compactness results (see [@simon]), we find that there exists a pair $(\chi,w)$ with the regularities specified by – such that, along a (not relabeled) subsequence, the following strong, weak, and weak$^*$ convergences hold as $n \to +\infty$: $$\begin{aligned} \label{e:conv-chi-strong} & \begin{aligned} \chi_n \to \chi \quad &\text{in \ $L^2 (0,T; W^{2-{\varepsilon},6} (\Omega)) \cap L^q (0,T;V) \cap {\mathrm{C}}^0 ([0,T]; H^{1-{\varepsilon}} (\Omega))$} \\ & \text{for every ${\varepsilon}>0 $ and $1 \leq q < +\infty$,} \end{aligned} \\ & \label{e:conv-chi-weak1} \chi_n {{\stackrel{*}{\rightharpoonup}\,}}\chi \quad \text{in \ $L^2 (0,T; W^{2,6} (\Omega)) \cap L^\infty (0,T;V)$,} \\ & \label{e:conv-chi-weak2} \chi_{n,t} {\rightharpoonup}\chi_t \quad \text{in \ $L^{\eta_{p\sigma}} (0,T; {\mathcal{W}^{-2,\kappa_p}(\Omega)})$,} \\ & \label{e:conv-w-strong} w_n {\rightharpoonup}w \quad \text{in \ $L^2 (0,T; V)$.}\end{aligned}$$ Furthermore, there exists $\bar{\alpha} \in L^{\eta_{p\sigma}} (0,T; L^{\kappa_p} (\Omega)) $ such that $$\label{e:conv-alpha-weak} \alpha(w_n) {\rightharpoonup}\bar{\alpha} \quad \text{in \ $L^{\eta_{p\sigma}} (0,T; L^{\kappa_p} (\Omega))$.}$$ Now, estimate in particular yields (recall that $\sigma q_\sigma >2$) that $$\label{e:cruc-pass1} \text{the sequence $\{ \phi(\chi_n)\}$ is uniformly integrable in $L^2 (0,T;H)$.}$$ Furthermore, we have, up to a further subsequence, $$\label{e:cruc-pass2} \phi(\chi_n(x,t)) \to \phi(\chi(x,t)) \qquad {\text{for a.a.}}\, (x,t) \in \Omega \times (0,T)\,,$$ which is a consequence of the continuity of $\phi$ and of the pointwise convergence (up to a further subsequence) $\chi_n(x,t) \to \chi(x,t)$ a.e. in $\Omega \times (0,T)$ (cf. ). Combining  and  and recalling the compactness criterion Theorem \[t:ds\], we conclude that $$\label{e:strong-phi} \phi(\chi_n) \to \phi(\chi) \qquad \text{in \ $L^2 (0,T; H). $}$$ Exploiting –, one easily concludes that the triplet $(\chi,w,\bar{\alpha})$ satisfies $$\begin{aligned} & \label{1-var-lim} \chi_t + A \bar{\alpha} =0 \qquad \text{in ${\mathcal{W}^{-2,\kappa_p}(\Omega)}$} \quad {\text{a.e.\ in}}\ (0,T)\,, \\ & \label{2-var-lim} A\chi + \phi(\chi) =w \qquad {\text{a.e.\ in}}\ \Omega \times (0,T)\,.\end{aligned}$$ Finally, in order to prove that $$\label{final-step} \bar{\alpha} (x,t) = \alpha (w(x,t)) \qquad {\text{for a.a.}}\ (x,t) \in \Omega \times (0,T)\,,$$ we test the equation approximating by $w_n$ and integrate in time. We thus have $$\begin{aligned} \lim_{n \to +\infty}\int_0^T \int_{\Omega} |w_n|^2 &= \lim_{n \to +\infty} \int_0^T \int_{\Omega}\phi(\chi_n) w_n + \lim_{n \to +\infty} \int_0^T \int_{\Omega}\nabla \chi_n \cdot \nabla w_n \\ & = \int_0^T \int_{\Omega}\phi(\chi) w+ \int_0^T \int_{\Omega}\nabla \chi \cdot\nabla w\\ & =\int_0^T \int_{\Omega} |w|^2, \end{aligned}$$ where the second equality follows from convergences , , and , and the last one from . Hence, we conclude that $$w_n \to w \ \ \text{in $L^2 (0,T;H)$, \ whence} \quad w_n \to w \ \ {\text{a.e.\ in}}\ \Omega \times (0,T)$$ (the latter convergence holding up to a subsequence). By continuity of $\alpha$, we also have $\alpha(w_n) \to \alpha (w)$ a.e. in $\Omega \times (0,T)$. Estimate  (recall ) and again Theorem \[t:ds\] yield, for instance, that $$\alpha(w_n) \to \alpha(w) \quad \text{in $L^1 (0,T; L^1 (\Omega))$,}$$ whence the desired equality . 
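Merely as a consistency check on the exponents appearing in the above proof, we observe that, for the sample value $\sigma=1/2$, the quantities introduced in the sixth a priori estimate of Section \[s:3.1\] reduce to $$\theta=\frac{5\sigma}{6-\sigma}=\frac{5}{11}\,, \qquad q_\sigma=\frac{6-\sigma}{3\sigma-3\sigma^2}=\frac{22}{3}\,, \qquad \sigma q_\sigma=\frac{6-\sigma}{3-3\sigma}=\frac{11}{3}>2\,,$$ in accordance with the condition $\sigma q_\sigma>2$ recalled at the beginning of this proof.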
Proofs of Theorem \[th:2\] and Proposition \[prop:regularized\] {#s:3.3} --------------------------------------------------------------- #### Proof of Theorem \[th:2\]. Let $\{ (\chi_n, w_n )\} $ be some sequence of approximate solutions to Problem \[p:2\]. Thanks to estimates , , , , , and , applying standard compactness and weak compactness results (see [@simon]), we find a triplet $(\chi,w,\bar{\alpha})$ such that, along a (not relabeled) subsequence, the following strong, weak, and weak$^*$ convergences hold as $n \to +\infty$: $$\begin{aligned} \label{e:conv-chi-strong-v} & \begin{aligned} \chi_n \to \chi \quad &\text{in \ $L^2 (0,T; H^{2-{\varepsilon}} (\Omega)) \cap L^q (0,T;V) \cap {\mathrm{C}}^0 ([0,T]; H^{1-{\varepsilon}} (\Omega))$} \\ & \text{for every ${\varepsilon}>0 $ and $1 \leq q < +\infty$,} \end{aligned} \\ & \label{e:conv-chi-weak-v} \begin{aligned} \chi_n {{\stackrel{*}{\rightharpoonup}\,}}\chi \quad & \text{in \ $L^2 (0,T; Z) \cap L^\infty (0,T;V) \cap H^1 (0,T;H)$,} \end{aligned} \\ & \label{e:conv-w-strong-v} w_n {{\stackrel{*}{\rightharpoonup}\,}}w \quad \text{in \ $L^2 (0,T; V) \cap L^{2p+2} (0,T; L^{\infty}(\Omega))$.}\end{aligned}$$ In particular, from , we deduce that $$\label{step-1} {\mathcal{N}}(\chi_{n,t}) {\rightharpoonup}{\mathcal{N}}(\chi_t) \qquad \text{in $L^2 (0,T; Z)$.}$$ Furthermore, by , there exists $\bar{\alpha} \in L^{\rho_p} (0,T; L^{\kappa_p} (\Omega)) $ such that $$\label{e:conv-alpha-weak-v} \alpha(w_n) {\rightharpoonup}\bar{\alpha} \quad \text{in \ $L^{\rho_p} (0,T; L^{\kappa_p} (\Omega))$.}$$ Now, up to a subsequence, by the last of  and by continuity of $\phi$, we have, for all $t \in [0,T]$, $$\label{step0} \phi(\chi_n (\cdot,t)) \to \phi(\chi(\cdot,t)) \qquad {\text{a.e.\ in}}\ \Omega\,.$$ On the other hand, it follows from estimate  that $$\label{step00} \text{the sequence $\{ \phi(\chi_n )\}$ is uniformly integrable in $L^1 (0,T;L^1 (\Omega))$.}$$ Then, by – and Theorem \[t:ds\], we conclude that, along the same subsequence as in , $\phi(\chi_n) \to \phi(\chi)$ in $L^1 (0,T; L^1 (\Omega))$. We then have, up to a subsequence, $$\label{step1} \phi(\chi_n (t)) \to \phi(\chi(t)) \qquad \text{in $L^1 (\Omega)$} \ \ {\text{for a.a.}}\ t \in (0,T)\,.$$ Next, using , we see that $\phi(\chi_n)$ is uniformly integrable in $L^{\nu} (0,T; L^1 (\Omega))$ for all $\nu \in [1,+\infty)$. Applying Theorem \[t:ds\], from , we deduce that $$\label{step2} \phi(\chi_n ) \to \phi(\chi) \qquad \text{in $L^{\nu} (0,T; L^1(\Omega))$ \ for every $\nu \in [1,+\infty)$.}$$ Collecting – and , we conclude that the triplet $(\chi,w,\bar{\alpha})$ satisfies $$\begin{aligned} & \label{1-var-lim-v} \chi_t + A \bar{\alpha} =0 \qquad {\text{a.e.\ in}}\ \Omega \times (0,T)\,, \\ & \label{2-var-lim-v} \delta \chi_t + A\chi + \phi(\chi) =w \qquad {\text{a.e.\ in}}\ \Omega \times (0,T)\,.\end{aligned}$$ It remains to show that $\bar{\alpha} \equiv \alpha(w)$. To this aim, we note that $\alpha$ defines a maximal monotone graph in the duality $(L^{2p+2} (\Omega\times (0,T)), L^{\rho_p} (\Omega\times (0,T)))$ (note that $\rho_p$ and $2p+2$ are conjugate exponents). Taking into account relations and , and applying a well-known result from the theory of maximal monotone operators in Banach spaces (see [@barbu Lemma 1.3, p. 
42]), it is then sufficient to prove that $$\label{3.40} \limsup_{n\to +\infty} \int_{0}^T \int_{\Omega} \alpha(w_n) w_n \leq \int_{0}^T \int_{\Omega} \bar{\alpha} w\,.$$ Now, $$\label{a1} \begin{aligned} \int_{0}^T \int_{\Omega} \alpha(w_n) w_n & =\int_{0}^T \int_{\Omega} \big(\alpha(w_n)-{m}(\alpha(w_n))\big)\, w_n +|\Omega| \int_{0}^T {m}(\alpha(w_n))\, {m}(w_n)\\ & = -\int_{0}^T\int_{\Omega} {w_n}\,{{\mathcal{N}}(\chi_{n,t})} +|\Omega| \int_{0}^T {m}(\alpha(w_n))\, {m}(w_n)\,, \end{aligned}$$ where the second equality follows from . Then, using  and , we find the chain of inequalities $$\label{e:3.40} \begin{aligned} \liminf_{n \to +\infty} & \Big( \int_{0}^T \int_{\Omega}{w_n}\,{{\mathcal{N}}(\chi_{n,t})}\Big) \\ & \geq \liminf_{n \to +\infty} \delta \int_{0}^T \| \chi_{n,t} \|_{V'}^2 + \lim_{n \to +\infty} \int_{0}^T \int_{\Omega} \chi_{n,t} \chi_n + \lim_{n \to +\infty} \int_{0}^T \int_{\Omega}\phi(\chi_n) {\mathcal{N}}(\chi_{n,t})\\ & \geq \delta \int_{0}^T \| \chi_{t} \|_{V'}^2 + \int_0^T \int_{\Omega} \chi_t \chi + \int_{0}^T \int_{\Omega}\phi(\chi) {\mathcal{N}}(\chi_{t})\\ &= \int_{0}^T \int_{\Omega} {w}\,{{\mathcal{N}}(\chi_{t})} =-\int_{0}^T \int_{\Omega} \big(\bar{\alpha}-{m}(\bar{\alpha})\big)\, w\,, \end{aligned}$$ where the second inequality follows from convergences  and  for $\chi_n$ and from combining  with , while the subsequent identities are due to –. On the other hand, it follows from  that $$\label{step3} {m}(\alpha(w_n)) {\rightharpoonup}{m}(\bar{\alpha}) \qquad \text{in $L^{\rho_p} (0,T)$,}$$ whereas, from , we gather that $$\label{step4} {m}(w_n)= {m}(\phi (\chi_n)) \to {m}(\phi(\chi))={m}(w) \qquad \text{in $L^{2p+2} (0,T)$.}$$ Combining –, we conclude that $$\label{e:3.41} \lim_{n \to +\infty} |\Omega| \int_{0}^T {m}(\alpha(w_n))\, {m}(w_n)= |\Omega| \int_{0}^T {m}(\bar{\alpha})\, {m}(w)\,.$$ Collecting , , and , we infer the desired . Ultimately, we have proved that $$\label{e:for-later-convenience} \alpha(w_n) {\rightharpoonup}\alpha(w) \ \ \text{in \ $L^{\rho_p} (0,T; L^{\kappa_p} (\Omega))$ \ \ and} \ \ \lim_{n\to +\infty} \int_{0}^T \int_{\Omega} \alpha(w_n) w_n = \int_{0}^T \int_{\Omega} \alpha(w) w\,.$$ #### Proof of Proposition \[prop:regularized\]. In order to prove that system – enjoys the regularization in time –, using the Gagliardo-Nirenberg interpolation inequality we note that $L^2(0,T; Z) \cap L^\infty (0,T;V) \subset L^8 (0,T; W^{1,12/5}(\Omega)) $ with continuous embedding. Therefore, regularity  for $\chi$ and standard Sobolev embeddings yield $$\label{e:stichi} \| \chi\|_{L^8 (0,T; L^{12}(\Omega))} \leq C\,.$$ Now, we test by $A\chi_t$. Note that all the forthcoming computations are rigorous on the approximation scheme for Problem \[p:2\] which we will detail in Appendix. 
Elementary calculations yield $$\label{e:elem1} \begin{aligned} \frac{{\mathrm{d}}}{ {\mathrm{d}}t}\left(\frac12 \int_{\Omega}|A\chi|^2 \right) + \delta\int_{\Omega}|\nabla \chi_t|^2 = I_1 + I_2\,, \end{aligned}$$ with $$\begin{aligned} & \label{e:elem2} I_1 := \int_{\Omega} \nabla w \,\cdot\, \nabla \chi_t \leq \frac{\delta}2\int_{\Omega} |\nabla \chi_t|^2 + \frac{1}{2\delta}\int_{\Omega} |\nabla w|^2\,, \\ & \label{e:elem3} \begin{aligned} I_2 & := - \int_{\Omega} \phi{'}(\chi) \left(\nabla\chi \cdot \nabla \chi_t\right) \\ & \leq C_{\phi,3} \int_{\Omega}|\nabla \chi||\nabla \chi_t| \left( 1+|\chi|^4\right)\\ & \leq C \| \nabla \chi_t \|_{H} \| \nabla \chi \|_{L^6(\Omega)} \left(\| \chi\|_{L^{12}(\Omega)}^4 + 1 \right)\\ & \leq \frac{\delta}{4}\| \nabla \chi_t \|_{H}^2 + C \left(\| \chi\|_{L^{12}(\Omega)}^8 + 1 \right) \| \chi \|_{Z}^2\,, \end{aligned}\end{aligned}$$ where the second inequality follows from , the third one from the Hölder inequality, and the last one by taking into account the continuous embedding $Z \subset W^{1,6}(\Omega)$. Collecting –, taking into account , and applying the uniform Gronwall Lemma (see [@temam Lemma III.1.1]), we find for every $\tau>0$ an estimate of the form for $\nabla \chi_t $ in $L^2 (\tau,T;H) $ and for $A\chi $ in $L^\infty (\tau,T;H)$, whence $$\chi \in L^\infty (\tau,T;Z) \cap H^1 (\tau,T;V)\quad \text{for all } 0<\tau<T\,.$$ Then, a comparison in  also yields a bound for $A(\alpha(w))$ in $L^{\rho_p}(\tau,T;V)$, whence an estimate for $\alpha(w) $ in $L^{\rho_p}(\tau,T;H^3(\Omega))$, in view of . Thus, we conclude –, as well as estimate . Global attractor for Problem \[p:2\] {#s:5} ==================================== Proof of Proposition \[prop:2.1\] {#ss:5.1} --------------------------------- We need two preliminary lemmas. The first one clarifies some properties of the energy functional ${\mathcal{E}}$ . \[l:5.1\] Assume –. Then, the functional ${\mathcal{E}}: {X}\to {\mathbb{R}}$ defined by  is bounded from below, lower-semicontinuous w.r.t. the $H$-topology, and satisfies the chain rule $$\label{e:l5.1} \begin{gathered} \text{for all $v \in H^1(0,T;H)$ with $Av+ \phi(v) \in L^2(0,T;H)$,} \\ \text{ the map $t \in [0,T] \mapsto {\mathcal{E}}(v(t))$ is absolutely continuous, and} \\ \frac{\mathrm{d}}{\mathrm{d}t} {\mathcal{E}}(v(t)) = \int_{\Omega} v_t(t) \left(Av(t)+ \phi(v(t)) \right) \qquad {\text{for a.a.}}\ t \in (0,T)\,. \end{gathered}$$ In order to prove the lower-semicontinuity property, we fix a sequence $\{v_n \}$ converging to some $v $ in $H$ and assume, without loss of generality, that $\sup_n {\mathcal{E}}(v_n) <+\infty$. Since $\widehat{\phi}$ is bounded from below, we conclude that $\{v_n\}$ is actually bounded in $V$, and thus $v_n {\rightharpoonup}v$ in $V$, yielding $\textstyle \int_{\Omega} |\nabla v|^2 \leq \liminf_{n} \int_{\Omega} |\nabla v_n|^2$. On the other hand, $$\begin{aligned} \liminf_{n \to +\infty} \int_{\Omega} \widehat{\phi}(v_n) & = \liminf_{n \to +\infty} \int_{\Omega} \left(\widehat{\phi}(v_n)+\frac{C_{\phi,2}}{2} |v_n|^2 \right) - \frac{C_{\phi,2}}2\lim_{n \to +\infty} \int_{\Omega}|v_n|^2 \\ & \geq \int_{\Omega}\left( \widehat{\phi}(v)+\frac{C_{\phi,2}}2 |v|^2\right) -\frac{C_{\phi,2}}2\int_{\Omega}|v|^2\,, \end{aligned}$$ the latter inequality following from  and, for instance, from Ioffe’s Theorem \[th-ioffe\]. 
Finally, to check the chain rule , we observe that the functional $$\label{funz-bar} \mathcal{E}_{\mathrm{cv}} (v): = \mathcal{E}(v) + \frac{C_{\phi,2}}{2}\int_{\Omega} |v|^2 \qquad \text{for all $v \in X$}$$ is convex, thanks to . Then, follows from the chain rule for $ \mathcal{E}_{\mathrm{cv}}$, see [@brezis73 Lemma III.3.3]. \[l:5.2\] Assume . Then, $$\label{e:5.2.2} \text{for all $w \in V \cap L^\infty (\Omega)$, there holds} \ \ \nabla \alpha(w(x)) = \alpha'(w(x)) \nabla w(x) \ \ {\text{for a.a.}}\ x \in \Omega\,.$$ Since $\Omega$ is smooth, we can take a sequence $\{w_k \}\subset \mathrm{C}^1 (\overline{\Omega})$ such that $w_k \to w$ in $V \cap L^q (\Omega)$ for all $1\leq q<+\infty$. Clearly, for all $k \in {\mathbb{N}}$, there holds $$\label{e:clearl} \nabla \alpha(w_k(x)) = \alpha'(w_k(x)) \nabla w_k(x) \qquad \forall\, x \in \Omega\,.$$ Now, since $\alpha'(r)$ grows like $|r|^{2p}$ by , we conclude that $\alpha'(w_k)\to \alpha'(w) $ and $\alpha(w_k)\to \alpha(w) $ in $L^q (\Omega)$ for all $1\leq q<+\infty$. Therefore, $\nabla \alpha(w_k) = \alpha'(w_k) \nabla w_k \to \alpha'(w) \nabla w $ in $L^{\rho}(\Omega)$ for all $\rho\in[1,2)$ and  follows. #### Proof of Proposition \[prop:2.1\]. Thanks to Theorem \[th:2\], the set ${\mathcal{S}}$ complies with the existence axiom **(P1)** in Definition \[def:generalized-semiflow\]. The translation property **(P2)** is immediate to check. Concerning the concatenation axiom, let $\chi_1$ and $\chi_2$ be two solutions to Problem \[p:2\] on $(0,+\infty)$, satisfying $\chi_1(\tau) = \chi_2 (0)$ for some $\tau \geq 0$, and let the functions $w_1$ and $w_2$ be such that, for $i=1,2$, the pairs $(\chi_i,w_i)$ satisfy equations –, with regularities  and . Then, one easily sees that the concatenations (cf. ) $\tilde \chi$ and $\tilde w$ of $\chi_1, \, \chi_2$ and $w_1,\, w_2$, respectively, satisfy equations –, and still enjoy regularities  and , respectively (the fact that $\chi_1(\tau) = \chi_2 (0)$ is crucial for the time-regularity of $\tilde{\chi}$). To prove that all solutions $\chi \in {\mathcal{S}}$ are continuous w.r.t. the phase space topology , let us fix $\{t_n\}, t_0$ in $[0,+\infty)$, and show that $$\label{e:continuity} t_n \to t_0 \ \Rightarrow \ \left( \| \chi(t_n) -\chi(t_0)\|_{V} + \left\|\widehat{\phi}(\chi(t_n))- \widehat{\phi}(\chi(t_0)) \right\|_{L^1(\Omega)} \right)\to 0\quad \text{as $n \to +\infty$}\,.$$ Indeed, thanks to regularity , for all $T>0$, the function $\chi:[0,T] \to V$ is continuous w.r.t. the weak $V$-topology, hence $$\label{e:weakV} \chi(t_n) {\rightharpoonup}\chi(t_0) \qquad \text{in $V$.}$$ Therefore, by Lemma \[l:5.1\], we have $$\liminf_{n\to +\infty} \mathcal{E}(\chi(t_n)) \geq \mathcal{E}(\chi(t_0))\,.$$ Combining this inequality with the continuity of the map $t \in [0,T] \mapsto \mathcal{E}(\chi(t))$, one concludes that \[e:converg\] $$\begin{aligned} & \label{e:c1} \lim_{n \to +\infty} \int_{\Omega} |\nabla \chi(t_n)|^2 = \int_{\Omega} |\nabla \chi(t_0)|^2, \\ & \label{e:c2} \lim_{n \to +\infty} \int_{\Omega} \widehat{\phi}(\chi(t_n))= \int_{\Omega} \widehat{\phi}(\chi (t_0))\,.\end{aligned}$$ Clearly, , combined with , yields that $\chi(t_n) \to \chi(t_0)$ in $V$. 
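Indeed, since $H$ is a Hilbert space, the weak convergence of the gradients, combined with the convergence of the corresponding norms, gives $$\| \nabla \chi(t_n)-\nabla \chi(t_0)\|_{H}^2 = \| \nabla \chi(t_n)\|_{H}^2 - 2\int_{\Omega} \nabla \chi(t_n)\cdot \nabla \chi(t_0) + \| \nabla \chi(t_0)\|_{H}^2 \,\longrightarrow\, 0\,,$$ while the compactness of the embedding $V \subset H$ ensures that $\chi(t_n) \to \chi(t_0)$ strongly in $H$.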
In order to prove the additional convergence $$\label{e:stronger-conv} \|\widehat{\phi}(\chi(t_n))- \widehat{\phi}(\chi(t_0)) \|_{L^1 (\Omega)} \to 0 \quad \text{as $n \to +\infty$}\,,$$ we note that implies, in particular, that $$\label{e:ci-serve} \widehat{\phi}(\chi(\cdot, t_n)) \to \widehat{\phi}(\chi(\cdot,t_0)) \qquad {\text{a.e.\ in}}\ \Omega\,.$$ In view of [@rocca-schimperna04 Lemma 4.2], , combined with and the fact that $\widehat{\phi}$ takes non-negative values, yields . The energy identity  follows by multiplying  by $w$ (note that the latter is an admissible test function, thanks to ), by $\chi_t$, adding the resulting relations, taking into account the chain rule  and formula , and integrating in time. It remains to prove the upper-semicontinuity with respect to the initial data. To this aim, we will exploit . Thus, let us fix a sequence of solutions $\{\chi_n \} \subset {\mathcal{S}}$ and $\chi_0 \in {X}$, with $$\label{conv-zero} {\operatorname{d}_X}(\chi_n(0),\chi_0) \to 0 \ \text{as $n \to +\infty$, so that, in particular, ${\mathcal{E}}(\chi_n(0)) \to {\mathcal{E}}(\chi_0)$.}$$ Identity  yields that there exists a constant $C>0$ such that, for all $n \in {\mathbb{N}}$, $$\label{aprio1} \delta\int_0^t \int_{\Omega} |\partial_t \chi_n|^2 + \int_0^t \int_{\Omega} \alpha'(w_n)|\nabla w_n|^2 + {\mathcal{E}}(\chi_n(t)) = {\mathcal{E}}(\chi_n(0)) \leq C \ \ \text{for all $t\geq 0$}.$$ Arguing as in Section \[s:3.1\], we obtain estimates , , , , , and  for the sequence $\{(\chi_n,w_n)\} $, on every interval $(0,T)$. Therefore, with a diagonalization procedure, we find a subsequence $\{(\chi_{n_k}, w_{n_k}) \}$ and functions $(\chi,w):(0,+\infty) \to {X}\times V$ for which –, , and hold on every interval $(0,T)$, for all $T>0$. Using all the aforementioned relations, we have $\chi(0)=\chi_0$ and, arguing as in Section \[s:3.3\], we conclude that $\chi \in {\mathcal{S}}$. In order to prove that $$\label{e:conv-chink} \text{for all $t \geq 0$}, \ \ \left( \| \chi_{n_k}(t) -\chi(t)\|_{V} + \left\| \widehat{\phi}(\chi_{n_k}(t))- \widehat{\phi}(\chi(t)) \right\|_{L^1 (\Omega)} \right)\to 0 \ \ \ \text{as $k \to +\infty$,}$$ we first obtain some *enhanced* convergence for the sequence $\{ w_{n_k}\}$. To this aim, we note that, for every $T>0$, there holds $$\begin{aligned} \mathsf{c}_\alpha \limsup_{k \to +\infty} \int_0^T \int_{\Omega} |w_{n_k}|^{2p+2} & \leq \limsup_{k \to +\infty} \int_0^T \int_{\Omega} \alpha(w_{n_k}) w_{n_k} - \liminf_{k \to +\infty} \int_0^T \int_{\Omega} \Psi(w_{n_k}) \\ & \leq \int_0^T \int_{\Omega} \alpha(w) w - \int_0^T \int_{\Omega} \Psi(w) = \mathsf{c}_\alpha \int_0^T \int_{\Omega} |w|^{2p+2}\,. \end{aligned}$$ Indeed, the first inequality follows from , the second one from the second convergence in , and from , together with the convexity of $\Psi$ (thanks to Ioffe’s Theorem [@ioffe77]), and the third one from  again. 
Taking into account the fact that $$\liminf_{k \to +\infty} \int_0^T \int_{\Omega} |w_{n_k}|^{2p+2} \geq \int_0^T \int_{\Omega} |w|^{2p+2},$$ due to , we have $$w_{n_k} \to w \qquad \text{in $L^{2p+2}(0,T; L^{2p+2} (\Omega))$ \ \ for all $T>0$}$$ and, thus, finally, $$\label{strong-w} w_{n_k} \to w \qquad \text{in measure in $\Omega \times (0,T)$ for all $T>0$.}$$ As a consequence, for all $t \geq 0$, $$\label{e:ioffe-again} \liminf_{k \to +\infty} \int_0^t \int_{\Omega} \alpha'(w_{n_k}) |\nabla w_{n_k}|^2 \geq \int_0^t \int_{\Omega} \alpha'(w) |\nabla w|^2\,,$$ thanks to the convergence in measure , the weak convergence  for $\{ \nabla w_{n_k}\}$ in $L^2 (0,T; H)$ for all $T>0$, and again Ioffe’s Theorem \[th-ioffe\]. Hence, passing to the limit in the energy identity  (written for the functions $(\chi_{n_k},w_{n_k})$), we infer, for all $t \geq 0$, $$\label{e:elem-arg} \begin{aligned} \delta\int_0^t \int_{\Omega} |\partial_t \chi|^2 & + \int_0^t \int_{\Omega} \alpha'(w)|\nabla w|^2 + {\mathcal{E}}(\chi(t)) \\ & \leq \liminf_{k \to +\infty} \left( \delta\int_0^t \int_{\Omega} |\partial_t \chi_{n_k}|^2 + \int_0^t \int_{\Omega} \alpha'(w_{n_k})|\nabla w_{n_k}|^2 + {\mathcal{E}}(\chi_{n_k}(t)) \right) \\ & \leq \limsup_{k \to +\infty} \left( \delta\int_0^t \int_{\Omega} |\partial_t \chi_{n_k}|^2 + \int_0^t \int_{\Omega} \alpha'(w_{n_k})|\nabla w_{n_k}|^2 + {\mathcal{E}}(\chi_{n_k}(t)) \right) \\ & = \lim_{k \to +\infty} {\mathcal{E}}(\chi_{n_k}(0))= {\mathcal{E}}(\chi_0) = \delta\int_0^t \int_{\Omega} |\partial_t \chi|^2 + \int_0^t \int_{\Omega} \alpha'(w)|\nabla w|^2 + {\mathcal{E}}(\chi(t))\,, \end{aligned}$$ where the first inequality follows from –, , and the fact that ${\mathcal{E}}$ is lower-semicontinuous w.r.t. the $H$-topology, the third one from , the fourth one from , and the last equality from the *energy identity*  satisfied by all solutions in $\mathcal{S}$. With an elementary argument, we deduce from  that, for all $t>0$, $$\int_0^t \int_{\Omega} |\partial_t \chi_{n_k}|^2 \to \int_0^t \int_{\Omega} |\partial_t \chi|^2, \quad \text{whence} \quad \chi_{n_k} \to \chi \ \ \text{in $H^1(0,t;H)$,}$$ as well as $${\mathcal{E}}(\chi_{n_k}(t)) \to {\mathcal{E}}(\chi(t)).$$ Arguing in the same way as throughout – and again invoking [@rocca-schimperna04 Lemma 4.2], we obtain . This concludes the proof. Proof of Theorem \[th:4\] {#ss:5.2} ------------------------- #### Eventual boundedness. In order to check that ${\mathcal{S}}$ is eventually bounded, we fix a ball $B(0,R)$ centered at $0$ of radius $R$ in ${X}$, some initial datum $\chi_0 \in B_{X}(0,R)$, namely satisfying (recall that we can assume that $\widehat{\phi}$ is a positive function) $$\label{e:in-a-ball} \|\chi_0\|_V+\int_{\Omega}\widehat{\phi}(\chi_0) \leq R,$$ and consider a generic trajectory $\chi \in {\mathcal{S}}$ starting from $\chi_0$. Recalling the energy identity , we find, for all $t \geq 0$, $$\label{e:eb1} \int_{\Omega}\widehat{\phi}(\chi(t)) \leq {\mathcal{E}}(\chi(t)) \leq {\mathcal{E}}(\chi_0) \leq R, \qquad \int_{\Omega}|\nabla \chi(t)|^2 \leq 2R\,.$$ Now, taking into account the fact that $m(\chi(t)) = m(\chi_0)$ for all $t \geq 0$ (cf. ), we deduce from a bound for $\| \chi \|_{L^\infty (0,+\infty;V)}$. Hence, there exists $R'>0$ such that ${\operatorname{d}_X}(\chi(t),0) \leq R'$ for all $t \geq 0$. Since $\chi_0$ is arbitrary, we conclude that the evolution of the ball $B_{X}(0,R)$ is contained in the ball $B_{X}(0,R')$. #### Compactness. 
In order to verify that ${\mathcal{S}}$ is compact, we consider a sequence $\{\chi_n\}\subset{\mathcal{S}}$ such that $\{\chi_n(0)\}$ is bounded in ${X}$. We write the energy identity  and, as in the proof of Proposition \[prop:2.1\], deduce that there exist a subsequence $\{(\chi_{n_k}, w_{n_k}) \}$ and functions $(\chi,w):(0,+\infty) \to {X}\times V$ for which convergences –, , and hold on every interval $(0,T)$ for all $T>0$. However, we cannot prove that $$\label{e:conv-chink-comp} \left( \| \chi_{n_k}(t) -\chi(t)\|_{V} + \left\|\widehat{\phi}(\chi_{n_k}(t))- \widehat{\phi}(\chi(t)) \right\|_{L^1(\Omega)} \right)\to 0 \ \ \ \text{for all $t>0$},$$ arguing in the same way as throughout –, for, in this case, we do not have the convergence of the initial energies ${\mathcal{E}}(\chi_{n_k}(0))$ at our disposal. Then, we rely on the following procedure (see also [@rossi-segatti-stefanelli08; @segatti06] for the use of an analogous argument). First, we apply Helly’s compactness principle (with respect to the pointwise convergence) for monotone functions to the functions $t \mapsto {\mathcal{E}}(\chi_{n_k}(t))$, which are non-increasing in view of the energy identity . Thus, up to a (not relabeled) subsequence, there exists a non-increasing function $\mathscr{E}: [0,+\infty) \to {\mathbb{R}}$ such that $$\label{helly} \mathscr{E}(t):= \lim_{k \to +\infty} {\mathcal{E}}(\chi_{n_k}(t)) \qquad \text{for all $t \geq 0$}.$$ By the lower-semicontinuity of ${\mathcal{E}}$ (w.r.t. the $H$-topology), we find $$\label{helly-ineq} {\mathcal{E}}(\chi(t)) \leq \mathscr{E}(t) \qquad \text{for all $t \geq 0$.}$$ On the other hand, ensures that, up to a further extraction, for almost all $s \in (0,t)$, $$\label{e:enhanced-conv} \chi_{n_k}(s) \to \chi(s) \ \ \text{in $H^{2-{\varepsilon}}(\Omega)$ for all ${\varepsilon}>0$, whence} \ \ \chi_{n_k}(s) \to \chi(s) \ \ \text{in $H^1 (\Omega) \cap L^\infty(\Omega) $.}$$ Thus, in particular, $$\label{e:pointwise} \widehat{\phi}(\chi_{n_k}(\cdot,s)) \to \widehat{\phi}(\chi(\cdot,s)) \qquad {\text{a.e.\ in}}\ \Omega\,.$$ Moreover, for every $\mathcal{O} \subset \Omega$, there holds $$\label{e:unifinte} \begin{aligned} \int_{\mathcal{O}} |\widehat{\phi}(\chi_{n_k}(s))| & \leq |\mathcal{O}||\widehat{\phi}(0)| +\frac{C_{\phi,2}}2 \int_{\mathcal{O}} |\chi_{n_k}(s)|^2 + \int_{\mathcal{O}} |{\phi}(\chi_{n_k}(s))| |\chi_{n_k}(s)| \\ & \leq C \left(|\mathcal{O}|+ \int_{\mathcal{O}}|{\phi}(\chi_{n_k}(s))|\right)\,, \end{aligned}$$ where the first inequality follows from  and the second one from . Notice that the right-hand side of  tends to zero as $|\mathcal{O}| \to 0$, since the sequence $\{ {\phi}(\chi_{n_k}(s)) \}$ is uniformly integrable in $L^1 (\Omega)$ thanks to . Hence, yields that $\{ \widehat{\phi}(\chi_{n_k}(s)) \}$ is itself uniformly integrable in $L^1 (\Omega)$. Combining this with , in view of Theorem \[t:ds\] we conclude that $\widehat{\phi}(\chi_{n_k}(s)) \to \widehat{\phi}(\chi(s))$ in $L^1 (\Omega)$. Finally, we have shown that there exists a negligible set $\mathscr{N} \subset (0,+\infty)$ such that $$\label{e:convene} \mathscr{E}(s) = \lim_{k \to +\infty} {\mathcal{E}}(\chi_{n_k}(s)) = {\mathcal{E}}(\chi(s)) \qquad {\text{for a.a.}}\, s \in (0,+\infty) \setminus \mathscr{N}\,.$$ We are now in a position to carry out the argument for  (which bypasses the lack of convergence of the initial data in the phase space ), using the fact that the energy identity  holds for all $t>0$. 
Indeed, for every fixed $t>0$ and for all $s \in (0,t) \setminus \mathscr{N}$, we can pass to the limit in the energy identity , written for the sequence $(\chi_{n_k}, w_{n_k})$ on the interval $(s,t)$. Note indeed that convergences – and for $(\chi_{n_k}, w_{n_k})$ hold on $(s,t)$. Proceeding as above, we then deduce once more that $$\lim_{k \to +\infty} \int_s^t \int_{\Omega} \alpha(w_{n_k}) w_{n_k} = \int_s^t \int_{\Omega} \alpha(w) w\,,$$ whence $w_{n_k} \to w$ in $L^{2p+2}(s,t;L^{2p+2}(\Omega))$. Therefore, repeating the very same passages as in  and relying on , we find $$\begin{aligned} \delta\int_s^t \int_{\Omega} |\partial_t \chi|^2 & + \int_s^t \int_{\Omega} \alpha'(w)|\nabla w|^2 + {\mathcal{E}}(\chi(t)) \\ & = \lim_{k \to +\infty} \left(\delta\int_s^t \int_{\Omega} |\partial_t \chi_{n_k}|^2 + \int_s^t \int_{\Omega} \alpha'(w_{n_k})|\nabla w_{n_k}|^2 + {\mathcal{E}}(\chi_{n_k}(t)) \right)\,, \end{aligned}$$ which gives $$\mathscr{E}(t) = \lim_{k \to +\infty} {\mathcal{E}}(\chi_{n_k}(t)) = {\mathcal{E}}(\chi(t)) \quad\text{for all $t>0$},$$ and, finally, . #### Lyapunov function and rest points. We now verify that ${\mathcal{E}}$ acts as a Lyapunov functional for ${\mathcal{S}}$. Actually, ${\mathcal{E}}$ clearly is continuous on ${X}$ and decreasing along all solutions, thanks to the energy identity . Furthermore, assume that, along some $\chi \in {\mathcal{S}}$, the map $t \in [0,+\infty) \mapsto {\mathcal{E}}(\chi(t))$ is constant. Then, in view of , we find $\nabla w \equiv 0$ and $\chi_t \equiv 0$ a.e. in $(0,+\infty)$, so that $\chi(t) \equiv \chi(0) $ for all $t \in [0,+\infty)$. Analogously, we immediately find that $\bar{\chi} \in {X}$ is a rest point for ${\mathcal{S}}$ if and only if it satisfies the stationary system \[e:sub\] $$\begin{aligned} & \label{e:sub1} A (\alpha(\bar{w})) =0 \quad {\text{a.e.\ in}}\ \Omega\,, \\ & \label{e:sub2} A \bar{\chi}+ \phi(\bar{\chi}) =\bar{w} \quad {\text{a.e.\ in}}\ \Omega\,.\end{aligned}$$ #### Conclusion of the proof. We apply Theorem \[thm:ball1\] and Remark \[rem:restriction\_to\_invariant\_set\] with the choice $\mathcal{D}:=\mathcal{D}_{m_0} $ for some $m_0>0$ (cf. ). Thanks to  (recall the second a priori estimate in Section \[s:3.1\]), for all $\chi_0 \in \mathcal{D}_{m_0} $, every solution starting from the initial datum $\chi_0$ remains in $\mathcal{D}_{m_0} $, so that the first condition in  is satisfied. To check the second one, we fix some $\bar{\chi} \in {Z({\mathcal{S}})}$ with $|m(\bar{\chi})| \leq m_0 $. It follows from  and  that $\nabla \bar{w} \equiv 0$, so that $\bar{w}$ is constant in $\Omega$. Hence, we test  by $\bar{\chi} - m(\bar{\chi})$. 
Since $\bar{w}= m(\bar{w})$, we infer that $$\label{est:rest-points} \| \nabla \bar{\chi} \|_H^2 + \int_{\Omega} \phi(\bar{\chi}) (\bar{\chi}-m(\bar{\chi})) \leq 0\,.$$ On the other hand, ensures that estimate  holds, so that there exist constants $\mathcal{K}_{m_0}, \,\mathcal{K}_{m_0}^1>0$, only depending on $m_0$, such that $$\label{est-aggiunta} \int_{\Omega}|\phi(\bar{\chi})| \leq \mathcal{K}_{m_0} \int_{\Omega} \phi(\bar{\chi}) (\bar{\chi}-m(\bar{\chi})) + \mathcal{K}_{m_0}^1\,.$$ Collecting  and , we deduce that $$\| \nabla \bar{\chi} \|_H^2 + \frac1{\mathcal{K}_{m_0}}\int_{\Omega}|\phi(\bar{\chi})| \leq \frac{\mathcal{K}_{m_0}^1}{\mathcal{K}_{m_0}}\,,$$ whence, in particular, $$|m(\bar{w})|=|m(\phi(\bar{\chi}))| \leq \frac{\mathcal{K}_{m_0}^1}{|\Omega|}.$$ Taking into account the fact that $\nabla \bar{w}=0$ (so that $\bar{w}$ is a constant) and that $|m(\bar{\chi})| \leq m_0 $, we conclude that $$\label{e:est-rest} \exists\,\mathcal{K}_{m_0}^{2}>0\,: \ \ \forall\, \bar{\chi} \in {Z({\mathcal{S}})}\cap\mathcal{D}_{m_0}\ \ \| \bar{\chi} \|_{V} + | \bar{w} | \leq \mathcal{K}_{m_0}^2\,.$$ Thus, a comparison in  and the standard elliptic regularity estimate (cf. also the calculations developed throughout –), yield $$\label{e:later} \exists\,\mathcal{K}_{m_0}^{3}>0\,: \ \ \forall\, \bar{\chi} \in {Z({\mathcal{S}})}\cap\mathcal{D}_{m_0} \ \ \| {\phi}(\bar{\chi}) \|_{L^6 (\Omega)}+ \| \bar{\chi}\|_{W^{2,6}(\Omega)} \leq \mathcal{K}_{m_0}^3\,,$$ whence, in particular, an estimate for $\bar{\chi}$ in $L^\infty (\Omega)$. Then, using , we readily infer that $$\label{e:est-rest2} \exists\,\mathcal{K}_{m_0}^{4}>0\,: \ \ \forall\, \bar{\chi} \in {Z({\mathcal{S}})}\cap\mathcal{D}_{m_0} \ \ \| \widehat{\phi}(\bar{\chi}) \|_{L^6 (\Omega)} \leq \mathcal{K}_{m_0}^4\,.$$ Finally, and yield that ${Z({\mathcal{S}})}\cap\mathcal{D}_{m_0} $ is bounded in the phase space ${X}$, and the existence of the global attractor follows from Theorem \[thm:ball1\]. In fact, with the same calculations as in the above lines, joint with a boot-strap argument, one easily proves that $$\label{rest} \forall\, p \in [1,+\infty)\,: \ \ \exists\,C_p>0 \ \ \bar{\chi} \in {Z({\mathcal{S}})}\cap\mathcal{D}_{m_0} \quad \| \bar{\chi}\|_{W^{2,p}(\Omega)} + \| \widehat{\phi}(\bar{\chi})\|_{L^p(\Omega)} \leq C_p.$$ Then, estimate  is a straightforward consequence of  and . \[s:6\] Proof of Theorems \[th:3\] and \[th:3.2\] {#ss:6.1} ========================================= #### Proof of Theorem \[th:3\]. Within this proof, we denote by $c_\delta$ a positive constant depending on $\delta>0$ and on quantities . Referring to the notation of the statement of Theorem \[th:3\], let us set ${\underline{\chi}_{0}}:= \chi_{0}^{1}-\chi_{0}^{2}$, $ {\underline{\chi}}:=\chi_1 -\chi_2 $, and ${\underline{w}}:= w_1-w_2$. The pair $ ({\underline{\chi}},{\underline{w}}) $ obviously satisfies $$\begin{aligned} && {\underline{\chi}}_t + A(\alpha (w_1))- A(\alpha (w_2)) =0 \quad {\text{a.e.\ in}}\ \Omega \times (0,T), \label{unodifb}\\ && \label{duedifb} \delta {\underline{\chi}}_t +A {\underline{\chi}}+ \chi_{1}^3-\chi_{2}^3 -{\underline{\chi}}= {\underline{w}}\quad {\text{a.e.\ in}}\ \Omega \times (0,T).\end{aligned}$$ Following the proof of [@rossi05 Prop. 2.1], we test by $ \mathcal{N}\left({\underline{w}}- {m}({\underline{w}}) \right)$, by $ \mathcal{N}( {\underline{\chi}}_t)+ {\underline{\chi}}$, add the resulting equations, and integrate over $(0,t)$, $ t \in (0,T)$. We refer to the proof of [@rossi05 Prop. 
2.1] for all the detailed computations, leading to (cf. [@rossi05 (3.51)]) $$\label{basic-cont-dep} \begin{aligned} \int_0^t \| {\underline{w}}\|_{H}^2 + \delta \int_0^t \|{\mathcal{N}}({\underline{\chi}}_t) {\|_{V}}^2+ \delta \|{\underline{\chi}}(t) \|_{H}^2 + \int_0^t \| \nabla {\underline{\chi}}\|_{H}^2 \leq C\left(\|{\underline{\chi}_{0}}\|_H^2 + \int_0^t \|{\underline{\chi}}\|_H^{2} \right). \end{aligned}$$ An easy application of Gronwall’s lemma to the function $t \mapsto \|{\underline{\chi}}(t)\|_H^{2} $ entails $$\label{eq:stima-cont-dep-h} \| {\underline{\chi}}\|_{C^0 ([0,t];H) \cap L^2 (0,t;V)}+ \| {\underline{\chi}}_t\|_{L^2 (0,t;V')}+ \| {\underline{w}}\|_{L^2 (0,t;H)}\leq c_\delta \| {\underline{\chi}_{0}}\|_H.$$ Furthermore, exploiting  and the above , it follows from the Hölder inequality that $$\label{b} \begin{aligned} &\| \phi(\chi_1) -\phi(\chi_2) \|_{L^2 (0,t;H)}^2 \\ & \leq C \int_0^t \int_\Omega |{\underline{\chi}}|^2 \left(\chi_1^2 +\chi_2^2 +1\right)^2 \\ &\leq C \int_0^t \left(\| \chi_1 \|_{L^6 (\Omega)}^4 + \| \chi_2 \|_{L^6 (\Omega)}^4 \right) \| {\underline{\chi}}\|_{L^6 (\Omega)}^2 + C \int_0^t \int_\Omega |{\underline{\chi}}|^2 \\ &\leq C \left(\| \chi_1 \|_{L^{\infty}(0,T;L^{6}(\Omega))}^4 + \| \chi_2 \|_{L^{\infty}(0,T;L^{6}(\Omega))}^4 +1\right) \| {\underline{\chi}}\|_{L^2 (0,t;V)}^2 \leq c_\delta \| {\underline{\chi}_{0}}\|_H^2. \end{aligned}$$ Next, we test  by ${\underline{\chi}}_t $ and integrate in time to obtain $$\label{e:6.5} \begin{aligned} \frac\delta2\int_0^t \| {\underline{\chi}}_t \|_H^2 + \frac12 \| \nabla({\underline{\chi}}(t)) \|_H^2 \leq \frac12 \|\nabla{\underline{\chi}_{0}}\|_{H}^2 +c_\delta \left(\int_0^t \| {\underline{w}}\|_{H}^2 +\int_0^t \|\phi(\chi_1) -\phi(\chi_2)\|_H^2 + \int_0^t \| {\underline{\chi}}\|_{H}^2 \right)\,. \end{aligned}$$ In view of –, we readily infer the continuous dependence estimate  for ${\underline{\chi}}$ in $C^0 ([0,t];V) \cap H^1 (0,t;H)$. Then, the estimate for $\| {\underline{\chi}}\|_{L^2 (0,t;Z)}$ follows from – by a comparison argument. #### Proof of Theorem \[th:3.2\]. Referring to the notation of the proof of Theorem \[th:3\], we again test by $ \mathcal{N}\left({\underline{w}}- {m}({\underline{w}}) \right)$, by $ \mathcal{N}( {\underline{\chi}}_t)+ {\underline{\chi}}$, add the resulting equations, and integrate over $(0,t)$, $ t \in (0,T)$. Developing the same calculations as in the above lines, we note that the chain of inequalities is now trivial, since under the present assumptions the functions $\chi_1$ and $\chi_2$ are estimated in $L^\infty (0,T;Z)$ (see Proposition \[prop:regularized\]). On the other hand, the following term: $$I :=\int^t_0 \int_\Omega m({\underline{w}})\left(\alpha(w_1) -\alpha(w_2)\right),$$ which was easily estimated in the proof of Theorem \[th:3\], now needs to be carefully handled because of the (at most) quadratic controlled growth of $\alpha^\prime$. 
Indeed, observe that $$\begin{aligned} \vert I\vert &\le \int_0^t \Vert m({\underline{w}})\Vert_{L^\infty(\Omega)} \Vert\alpha(w_1) -\alpha(w_2)\Vert_{L^1(\Omega)} \\ &\le C\int_0^t \left(\Vert m({\underline{w}})\Vert_{L^1(\Omega)}\int_\Omega (1+\vert w_1\vert^{2p} +\vert w_2\vert^{2p})\vert{\underline{w}}\vert\right) \\ &\le C \int_0^t \left(\Vert \phi(\chi_1) -\phi(\chi_2)\Vert_{L^1(\Omega)} (1+\Vert w_1\Vert^{2p}_{L^\infty(\Omega)} +\Vert w_2\Vert^{2p}_{L^\infty(\Omega)})\Vert {\underline{w}}\Vert_{L^1(\Omega)} \right) \\ &\le C \int_0^t \left(\Vert{\underline{\chi}}\Vert_{L^1(\Omega)} (1+\Vert w_1\Vert^{2p}_{L^\infty(\Omega)} +\Vert w_2\Vert^{2p}_{L^\infty(\Omega)})\Vert {\underline{w}}\Vert_{L^1(\Omega)} \right) \\ &\le \varrho \int_0^t\Vert {\underline{w}}\Vert^2_{H} + C_\varrho \int_0^t (1+\Vert w_1\Vert^{4p}_{L^\infty(\Omega)} +\Vert w_2\Vert^{4p}_{L^\infty(\Omega)}) \Vert{\underline{\chi}}\Vert^2_{H} \end{aligned}$$ for some $\varrho \in (0,1)$ and $C_{\varrho}>0$. This modification gives, in place of , $$\begin{aligned} &(1-\varrho)\int_0^t \| {\underline{w}}\|_{H}^2 + \delta \int_0^t \|{\mathcal{N}}({\underline{\chi}}_t) {\|_{V}}^2+ \delta \|{\underline{\chi}}(t) \|_{H}^2 + \int_0^t \| \nabla {\underline{\chi}}\|_{H}^2\\ &\leq C\left(\|{\underline{\chi}_{0}}\|_H^2 + \int_0^t (1+\Vert w_1\Vert^{4p}_{L^\infty(\Omega)} +\Vert w_2\Vert^{4p}_{L^\infty(\Omega)}) \Vert{\underline{\chi}}\Vert^2_{H}\right). \end{aligned}$$ Thus, recalling , we can use Gronwall's lemma to deduce . Estimate can be obtained by arguing as in the proof of Theorem \[th:3\], hence the result. Appendix ======== We propose the following approximate system for *both* Problem \[p:1\] and Problem \[p:2\]: $$\begin{aligned} & \label{1-var-better-mudelta} \chi_t + A (\alpha_M(w)) =0 \qquad {\text{a.e.\ in}}\ \Omega \times (0,T)\,,\\ & \label{2-var-better-mudelta} \delta \chi_t+ A\chi + \phi_{{\mu}}(\chi) =w \qquad {\text{a.e.\ in}}\ \Omega \times (0,T)\,,\end{aligned}$$ depending on the parameters $\delta,\, M, \, {\mu}>0$, where $$\label{approalfa} \alpha_M(r)=\left\{\begin{array}{lll} \alpha(-M) + C_1(r+M) &\quad\text{if }\,r<- M, \\ \alpha(r)& \quad\text{if }\,|r|\le M,\\ \alpha(M) + C_1(r-M) &\quad\text{if }\,r> M, \end{array} \right.$$ $C_1$ being the same constant as in , and $$\label{approphi} \phi_{{\mu}}(r)= \left\{ \begin{array}{lll} \phi(r) &\quad\text{if }\,|\phi(r)|\le\frac1{{\mu}},\\ \frac1{{\mu}}\operatorname{sign}(r) &\quad\text{otherwise}. \end{array} \right.$$ It is immediate to check that, for any choice of the approximation parameters $M$ and ${\mu}$, the functions $\alpha_M$ and $\phi_{{\mu}}$ are Lipschitz continuous on ${\mathbb{R}}$ and that $$\label{e:uniformly-cpt} \begin{aligned} & \alpha_M \to \alpha \qquad \text{uniformly on compact subsets of ${\mathbb{R}}$ as $M \nearrow +\infty$,} \\ & \phi_\mu \to \phi \qquad \text{uniformly on compact subsets of ${\text{dom}}(\phi)$ as $\mu \searrow 0$.} \end{aligned}$$ Of course, the Lipschitz constants of $\alpha_M$ and $\phi_\mu$ blow up as $M \nearrow +\infty$ and $\mu \searrow 0$, respectively.
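To fix ideas, a model nonlinearity compatible with this construction (at least as far as the conditions recalled in this Appendix are concerned) is, for instance, $\alpha(r)=C_1 r + r|r|^{2p}$, where $C_1$ is thought of as the constant entering the definition of $\alpha_M$: in this case $\alpha'(r)=C_1+(2p+1)|r|^{2p}\geq C_1$, the truncation $\alpha_M$ simply continues $\alpha$ outside $[-M,M]$ as an affine function with the minimal admissible slope $C_1$, and $$\alpha(r) r - \mathsf{c}_{\alpha}|r|^{2p+2} = C_1 r^2 + (1-\mathsf{c}_{\alpha})|r|^{2p+2} \qquad \forall\, r \in {\mathbb{R}}$$ is convex and nonnegative for every $\mathsf{c}_{\alpha}\in(0,1]$, so that also the convexity condition used in Section \[ss:2.4\] is fulfilled.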
Let us also point out that, by construction, $$\label{e:coercivity} \alpha_{M}'(r) \geq C_1>0 \qquad \text{for all $r \in {\mathbb{R}}$, $M >0,$}$$ which yields that the inverse $\rho_M : {\mathbb{R}}\to {\mathbb{R}}$ of $\alpha_M$ is Lipschitz continuous, with $$\label{e:impo-conse} |\rho_M(x) -\rho_M(y)| \leq \frac{1}{C_1}|x-y| \quad \text{for all $x,y \in {\mathbb{R}}$, $M >0.$}$$ What is more, relying on convergence of $\phi_{\mu}$ to $\phi$, one can also check that, for $\mu>0$ sufficiently small (say $0<\mu\leq \mu_*$), and hold on this approximate level as well, i.e., $$\label{e:2.2.14-approx} \begin{aligned} \forall\, m \in {\text{dom}}(\phi)=(a,b)\ \ \exists\, C_m,\ C_m'>0 \, : \ \ &\forall\, 0<\mu\leq \mu_* \quad \forall\, r \in (a-m,b-m) \\ & |\phi_{\mu}(r+m)| \leq C_m \phi_{\mu}(r+m)r +C_m'\,, \end{aligned}$$ as well as $$\label{hyp:3-appr} \ \ \exists\, C>0\, : \ \ \forall\, 0<\mu\leq \mu_* \quad \quad \forall\, r \in (a,b) \quad |\phi_{\mu}(r)|^{\sigma} \leq C \left( \widehat{\phi_{\mu}}(r) +1\right)\,,$$ with $\sigma \in (0,1)$ as in , in particular, complying with the compatibility condition . It was proved in [@rossi05 Thm. 2.1] that, for every $\delta,\, M, \, {\mu}>0$, there exists a unique pair $({\chi},{w})$, with $$\label{initial-regularity} \begin{aligned} & {\chi}\in L^2 (0,T;Z) \cap L^\infty (0,T;V) \cap H^1 (0,T;H),\\ &{w}\in L^2 (0,T;V), \end{aligned}$$ solving the Cauchy problem for system –, supplemented with some initial datum $\chi^0 \in V$. #### Problem $\mathbf{P}_{\delta,{\mu}}$. In what follows, we approximate the initial datum $\chi_0 \in V$ in  by a sequence $$\label{basta} \{\chi_{0,\mu}\} \subset H^4(\Omega) \quad \text{with} \quad \chi_{0,\mu} {\rightharpoonup}\chi_0 \ \ \text{in $V$} \ \ \text{and} \ \ \sup_{\mu>0}\| \widehat{\phi}_{{\mu}}(\chi_{0,\mu})\|_{L^1(\Omega)}<+\infty$$ (for example, we can construct $\{ \chi_{0,\mu}\}$ by applying (twice) the elliptic regularization procedure developed in the proof of [@bcgg Prop. 2.6]). For every $\delta,\, M, \, {\mu}>0$, we call $\mathbf{P}_{\delta,M,{\mu}}$ the initial and boundary value problem obtained by supplementing the PDE system – with the initial condition $$\label{app-init} \chi(0)=\chi_{0,\mu} \quad \text{in $H^4(\Omega)$}.$$ In the following Section \[ss:a.1\], we will prove some further regularity of the approximate solutions. In this way, we will justify, on the level of the approximate Problem $\mathbf{P}_{\delta,M,{\mu}}$, the estimates formally performed in Section \[s:3.1\]. Hence, in Section \[ss:a.2\], we will develop the rigorous proof of Theorem \[th:1\] by relying on the aforementioned estimates and by passing to the limit in Problem $\mathbf{P}_{\delta,M, {\mu}}$, first as $\delta\searrow 0$ for $M,\,{\mu}>0$ fixed, then as $M \nearrow +\infty$ for ${\mu}>0$ fixed, and, finally, as ${\mu}\searrow 0$. Furthermore, it would be possible to give a rigorous proof of Theorem \[th:2\] by passing to the limit in Problem $\mathbf{P}_{\delta,M,{\mu}}$ first as $M \nearrow +\infty$ for ${\mu}>0$ fixed, and then as ${\mu}\searrow 0$. However, we are not going to enter into the details of the latter procedure, which follows the very same lines as the one for Theorem \[th:1\]. 
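Before moving on, let us also record, for completeness, the one-line argument behind the Lipschitz bound for the inverse $\rho_M$ noted above: for $x,y\in{\mathbb{R}}$, since $\alpha_M$ is piecewise $C^1$, increasing, and satisfies $\alpha_M'\ge C_1$ where it is differentiable, $$|x-y|=|\alpha_M(\rho_M(x))-\alpha_M(\rho_M(y))|\ge C_1\,|\rho_M(x)-\rho_M(y)|.$$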
In what follows, we denote by $C_{\delta,M,{\mu}}$ various constants (which can differ from occurrence to occurrence, even in the same line), depending on the parameters $\delta$, $M$, and ${\mu}$, and such that $C_{\delta,M,{\mu}} \nearrow +\infty$ as either $\delta \searrow 0$, or $M \nearrow +\infty$, or ${\mu}\searrow 0$. The symbols $C_{\delta,\mu}$, $C_{M,\mu}$, and $C_{\mu}$ have an analogous meaning. Enhanced regularity estimates on the approximate problem {#ss:a.1} -------------------------------------------------------- #### First estimate. We note that ${w}\in L^2 (0,T;V)$ and that, since $\phi_{\mu}$ is a Lipschitz continuous function, ${\chi}\in L^\infty(0,T;V)$ (cf. ) implies $\phi_{\mu}({\chi}) \in L^\infty(0,T;V)$. Thus, by comparison in , we have $\delta \partial_t {\chi}+ A{\chi}\in L^2 (0,T;V)$. Hence, testing by $A(\partial_t {\chi})$ and using the fact that ${\chi}(0)= \chi_{0,{\mu}} \in Z$, we deduce the estimate $$\label{f1} \|\nabla \partial_t {\chi}\|_{L^2 (0,T;H)} + \| A {\chi}\|_{L^\infty (0,T;H)} \leq C_{\delta,{\mu}},$$ whence $$\label{f1-bis} {\chi}\in L^\infty (0,T;Z) \cap H^1 (0,T;V).$$ #### Second estimate. Since $\alpha_M$ is Lipschitz continuous and $w \in L^2(0,T;V)$, we have $\alpha_M({w}) \in L^2(0,T;V)$. Estimate  and a comparison in  yield a bound for $A(\alpha_M({w}))$ in $L^2 (0,T;V)$, whence $$\alpha_M(w) \in L^2(0,T;H^3(\Omega)) \subset L^2(0,T;W^{1,\infty}(\Omega)).$$ Recalling and using the fact that $w=\rho_M (\alpha_M(w))$, we readily deduce the estimate $$\label{regosig4} \|{w}\|_{L^2(0,T;W^{1,\infty}(\Omega))} \leq C_{\delta,M,{\mu}}.$$ #### Third estimate. Using a parabolic regularity argument in and relying on regularity for the approximate initial datum $\chi_{0,{\mu}}$, we deduce that $$\label{cf-later} \| \partial_t {\chi}\|_{L^2 (0,T;W^{1,3+\epsilon}(\Omega))} + \| A {\chi}\|_{L^2 (0,T;W^{1,3+\epsilon}(\Omega))} \leq C_{\delta,M,\mu},$$ where $\epsilon>0$ is a suitable number. More precisely, since $\chi_{0,\mu}\in H^4(\Omega)\subset W^{3,6}(\Omega)$, the above formula holds for any $\epsilon\in(0,3]$ (cf. inequality below for a justification). Thus, by interpolation, we obtain that $\nabla {\chi}$ belongs to $ H^{1/2} (0,T; W^{1,3+\epsilon}(\Omega))$ and, recalling the continuous embedding $W^{1,3+\epsilon}(\Omega) \subset L^\infty (\Omega)$, we conclude that $$\label{regosig5} \| \nabla{\chi}\|_{L^\infty (0,T;L^\infty(\Omega))} \leq C_{\delta,M,{\mu}}.$$ #### Fourth estimate. Notice that, for almost all $t \in (0,T)$, the function $\nabla(|{w}(t)|^p {w}(t))= (p+1)|{w}(t)|^p \nabla {w}(t)$ belongs to $L^2(\Omega)$, thanks to  and . Hence, for a.a. $t \in (0,T)$, we can test  by $|{w}(t)|^p {w}(t)$, which yields $$\label{f3} \begin{aligned} \int_{\Omega} & |{w}(t)|^{p+2} \\ & = \int_{\Omega} \nabla {\chi}(t) \cdot \nabla(|{w}(t)|^p {w}(t)) + \int_{\Omega} \phi_{\mu}({\chi}(t)) |{w}(t)|^p {w}(t) + \delta \int_{\Omega} \partial_t {\chi}(t) |{w}(t)|^p {w}(t) \\ & \begin{aligned} = (p+1)\int_{\Omega} |{w}(t)|^p \nabla {\chi}(t) \cdot \nabla {w}(t) & + \int_{\Omega} \phi_{\mu}({\chi}(t)) |{w}(t)|^p {w}(t) \\ & - \delta(p+1) \int_{\Omega} \alpha_M'({w}(t))|{w}(t)|^p | \nabla {w}(t)|^2, \end{aligned} \end{aligned}$$ the second equality following from equation . We estimate the second term on the right-hand side of the above equality by using the bound for $\phi_{\mu}({\chi})$ in $L^\infty (0,T; L^\infty (\Omega))$, due to  and the Lipschitz continuity of $\phi_{\mu}$. 
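Spelled out, this step reads $$\left|\int_{\Omega} \phi_{\mu}({\chi}(t))\, |{w}(t)|^p {w}(t)\right| \le \| \phi_{\mu}({\chi})\|_{L^\infty (0,T; L^\infty (\Omega))} \int_{\Omega} |{w}(t)|^{p+1} \le C_{\delta,{\mu}} \int_{\Omega} |{w}(t)|^{p+1},$$ which accounts for the term $\int_{\Omega}|{w}(t)|^{p+1}$ on the right-hand side of the inequality obtained below.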
We deal with the first integral term as follows: $$\label{e:f2} \begin{aligned} \left|\int_{\Omega} |{w}(t)|^p \nabla {\chi}(t) \cdot \nabla {w}(t) \right| & \le \big\| |{w}(t)|^{p/2}\nabla {w}(t)\big\|_{L^2(\Omega)} \big\| |{w}(t)|^{p/2} \big\|_{L^2(\Omega)} \big\|\nabla {\chi}(t)\big\|_{L^\infty} \\ & \le \varrho {\int_\Omega}|w(t)|^p |\nabla w(t)|^2 + C_{\delta,M,\mu} \int_{\Omega}| w(t) |^p \end{aligned}$$ for some suitable positive constant $\varrho$, where we have also used . Now, recalling , we estimate the last summand on the right-hand side of  by $$-\delta(p+1) \int_{\Omega} \alpha_M'({w}(t))|{w}(t)|^p | \nabla {w}(t)|^2 \leq -\delta(p+1)C_1 \int_{\Omega} |{w}(t)|^p |\nabla {w}(t)|^2,$$ and we move the above term to the left-hand side of . Then, we combine the latter inequality with , in which we choose $\varrho=\frac{\delta (p+1)C_1}{4}$. We thus obtain, for a.a. $t \in (0,T)$, $$\label{co31} \int_{\Omega} |{w}(t)|^{p+2} +\frac34\delta(p+1)C_1 \int_{\Omega} |{w}(t)|^p |\nabla {w}(t)|^2 \le C_{\delta,M,{\mu}} \left(\int_{\Omega}|{w}(t)|^{p+1} + \int_{\Omega}| w(t) |^p\right)\,.$$ Thus, we finally infer that $$\label{regosig6} w \in L^\infty(0,T;L^p(\Omega)) \quad \text{for all $p\in[1,\infty)$,}$$ whence, by the Lipschitz continuity of $\alpha_M$, $$\label{regosig7} \alpha_M(w) \in L^\infty(0,T;L^p(\Omega)) \quad \text{for all $p\in[1,+\infty)$.}$$ Rigorous proof of Theorem \[th:1\] {#ss:a.2} ---------------------------------- Within this section, for all $\delta,\,{\mu}>0$, we will denote by $\{({\chi_{\delta,M,{\mu}}},{w_{\delta,M,{\mu}}})\}_{\delta,M,{\mu}}$ the family of solutions to Problem $\mathbf{P}_{\delta,M,{\mu}}$. . For fixed ${\mu},M>0 $, we pass to the limit in Problem $\mathbf{P}_{\delta,{\mu}}$ as $\delta\searrow 0$. We then perform the same calculations as in Section \[s:3.1\] (cf. –, , –). Also relying on –, we conclude that $$\label{ff1} \begin{aligned} & \exists\, C>0\,: \ \ \forall\, \delta,\, M, \, \mu>0 \quad \|{\chi_{\delta,M,{\mu}}}\|_{L^\infty (0,T;V)} + \| {w_{\delta,M,{\mu}}}\|_{L^2 (0,T;V)} + \| \widehat{\phi}_{\mu}({\chi_{\delta,M,{\mu}}})\|_{L^\infty (0,T;L^1 (\Omega))} \\ & \mbox{}~~~~~~~~~~ + \delta^{1/2} \| \partial_t {\chi_{\delta,M,{\mu}}}\|_{L^2 (0,T;L^2(\Omega))} + \| (\alpha_M'({w_{\delta,M,{\mu}}}))^{1/2} \nabla {w_{\delta,M,{\mu}}}\|_{L^2 (0,T;L^2(\Omega))} \leq C. \end{aligned}$$ Recalling the definition of $\phi_{\mu}$ and its Lipschitz continuity, we also have $$\label{ff1-bis} \begin{aligned} \exists\, C_\mu>0\,: \ \ \forall\, \delta,\, M>0\quad \| \phi_{\mu}({\chi_{\delta,M,{\mu}}})\|_{L^\infty (0,T;V)\cap L^\infty (0,T;L^\infty (\Omega))} \leq C_\mu. 
\end{aligned}$$ In the same way, estimate for ${w_{\delta,M,{\mu}}}$ and the Lipschitz continuity of $\alpha_M$ yield $$\label{est-intermediate} \exists\, C_{M}>0\,: \ \ \forall\, \delta,\, {\mu}>0 \quad \|\alpha_M({w_{\delta,M,{\mu}}})\|_{L^2 (0,T;V)} \leq C_{M}.$$ Next, a comparison in and the maximal parabolic regularity result from [@hieber-pruss] yield $$\label{hieber-pruss} c(\delta)\int_0^T \| \partial_t {\chi_{\delta,M,{\mu}}}\|_{L^6 (\Omega)}^2 + \int_0^T \| A {\chi_{\delta,M,{\mu}}}\|_{L^6 (\Omega)}^2 \leq C \int_0^T \| \ell_{\delta,M,{\mu}}\|_{L^6 (\Omega)}^2,$$ for some $c(\delta)$ such that $c(\delta)\to 0$ as $\delta \to 0$, where we have set $$\ell_{\delta,M,{\mu}} = {w_{\delta,M,{\mu}}}-\phi_{\mu}({\chi_{\delta,M,{\mu}}}) -A\chi_{0,{\mu}}.$$ In view of estimates for ${w_{\delta,M,{\mu}}}$, for $\phi_{\mu}({\chi_{\delta,M,{\mu}}})$ in $L^\infty (0,T;V)$, and for $\{\chi_{0,{\mu}}\}$, we conclude that $$\| \ell_{\delta,M,{\mu}} \|_{L^2 (0,T;L^6(\Omega))} \leq C_\mu.$$ Therefore, gives $$\label{inghippo} \exists\, C_\mu>0\,: \ \ \forall\, \delta,\, M>0 \quad \|{\chi_{\delta,M,{\mu}}}\|_{L^2 (0,T;W^{2,6}(\Omega))} \leq C_{\mu}.$$ On the other hand, estimate and a comparison in  imply $$\label{est-chit} \exists\, C_{M}>0\,: \ \ \forall\, \delta, \, {\mu}>0\quad \|\partial_t {\chi_{\delta,M,{\mu}}}\|_{L^2 (0,T;V')} \leq C_{M}.$$ On behalf of the above estimates and arguing in the very same way as in Section \[s:3.2\], we see that, for every fixed $M>0$ and $\mu>0$, there exist a sequence $\delta_k \searrow 0$ (for notational simplicity, we do not highlight its dependence on the parameters $M$ and $\mu$) and functions $(\chi_{M,{\mu}},w_{M,{\mu}}, \bar{\alpha}_{M,{\mu}})$ such that the sequence $\{(\chi_{\delta_k,M,\mu},w_{\delta_k,M,\mu}) \}$ converges to $(\chi_{M,{\mu}},w_{M,{\mu}})$, as $k \to +\infty$, in the sense specified by –, , as well as $$\begin{aligned} & \partial_t \chi_{\delta_k,M,\mu} {\rightharpoonup}\partial_t \chi_{M,{\mu}} \quad \text{in $L^2 (0,T;V')$,}\\ & \delta_k^{1/2} \partial_t \chi_{\delta_k,M,\mu} {\rightharpoonup}0 \quad \text{in $L^2(0,T;H)$,}\\ & \alpha_M(w_{\delta_k,M,\mu}) {\rightharpoonup}\bar{\alpha}_{M,{\mu}} \quad \text{in $L^2 (0,T;V)$.} \end{aligned}$$ Next, arguing similarly to the (formal) proof of Theorem \[th:1\], we conclude that $$\label{post} \phi_{\mu}(\chi_{\delta_k,M,\mu}) \to \phi_{\mu}(\chi_{M,{\mu}}) \quad \text{in $L^2 (0,T;H)$.}$$ Finally, we use in the very same way as in Section \[s:3.2\] to infer $$\bar{\alpha}_{M,{\mu}} =\alpha_M (w_{M,{\mu}})$$ and $$\alpha_M(w_{\delta_k,M,\mu}) \to \alpha_M (w_{M,{\mu}}) \quad \text{in $L^2 (0,T;H)$.}$$ Therefore, we conclude that the pair $(\chi_{M,{\mu}},w_{M,{\mu}})$ is a solution to the PDE system $$\begin{aligned} & \label{1-var-better-mu} \chi_t + A (\alpha_M(w)) =0 \qquad {\text{a.e.\ in}}\ \Omega \times (0,T)\,,\\ & \label{2-var-better-mu} A\chi + \phi_{{\mu}}(\chi) = w \qquad {\text{a.e.\ in}}\ \Omega \times (0,T)\,,\end{aligned}$$ supplemented with the initial condition .\ . We now take the limit $M\nearrow +\infty$ in (the Cauchy problem for) –. Estimates  and hold for the sequence of solutions $\{(\chi_{M,{\mu}},w_{M,{\mu}})\}_M$ as well. 
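Indeed, bounds of this type are inherited by the limit functions thanks to the sequential lower semicontinuity of norms with respect to weak (and weak$^*$) convergence: in a generic Banach space $X$, $$u_k \rightharpoonup u \ \text{ in } X \quad \Longrightarrow \quad \|u\|_{X}\le \liminf_{k\to+\infty} \|u_k\|_{X},$$ and similarly for weak$^*$ convergence in a dual space; it suffices to apply this along the convergences established in the previous step.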
Furthermore, using a lower-semicontinuity argument, we also deduce from that $$\label{bor} \|\chi_{M,{\mu}}\|_{L^2 (0,T;W^{2,6}(\Omega))}\leq C_\mu\quad \text{for all $M>0.$}$$ Now, we point out that entails $$\label{stima_g} \int_0^T \int_{\Omega} \alpha_M'(w_{M,{\mu}})|\nabla w_{M,{\mu}} |^2 \leq C \quad \text{for all $M>0$}.$$ Let us denote by $\mathcal{T}_M$ the truncation operator at level $M$ and define $$\label{def-taum} \tau_M:=\mathcal{T}_M(w_{M,{\mu}} ):= \left\{ \begin{array}{lll} -M &\text{if } w_{M,{\mu}} <-M, \\ w_{M,{\mu}} &\text{if } |w_{M,{\mu}}| \leq M, \\ M & \text{if } w_{M,{\mu}} > M, \end{array} \right. \qquad \text{a.e.}\ t\in \Omega \times (0,T)$$ (to simplify, we omit the index $\mu$ in the notation for $\tau_M$). For later use, we also introduce ${\text{for a.a.}}\ t \in (0,T)$ the sets $$\label{def-om} \begin{cases} \mathcal{A}_M:= \left\{(x,t)\in \Omega \times (0,T)\, : \ |w_{M,{\mu}}(x,t)| \leq M \right\}, \\ \mathcal{O}_M:= \left\{(x,t)\in \Omega \times (0,T)\, : \ |w_{M,{\mu}}(x,t)|> M \right\}, \\ \mathcal{O}_M^t:= \left\{x\in \Omega\, : \ (x,t) \in \mathcal{O}_M \right\}. \end{cases}$$ From , we also infer $$\int_0^T \int_{\Omega} \alpha'(\tau_M)|\nabla \tau_M |^2 \leq C \quad \text{for all $M>0$},$$ whence, in view of , $$\label{stima_h} \| |\tau_M|^p\, \nabla \tau_M \|_{L^2 (0,T;H)} \leq C \quad \text{for all $M>0$}.$$ Now, in order to reproduce estimates – in the present approximate setting, we test by $|\tau_{M}(t)|^p \tau_{M}(t)$ for a.e. $t \in (0,T)$. Clearly, $$\int_{\Omega} w_{M,{\mu}}(t)|\tau_{M}(t)|^p \tau_{M}(t) \ge\int_{\Omega} | \tau_{M}(t)|^{p+2},$$ so that we have (cf. also ) $$\label{stima_j} \begin{aligned} \int_{\Omega} | \tau_{M}(t)|^{p+2} & \le (p+1)\int_{\Omega} | \tau_{M}(t)|^p \,\nabla \chi_{M}(t) \cdot \nabla \tau_{M}(t) + \int_{\Omega} \phi_{\mu}(\chi_{M,{\mu}}(t))|\tau_{M}(t)|^p \tau_{M}(t)\\ & \leq (p+1) \| | \tau_{M}(t) |^p \,\nabla \tau_{M}(t) \|_H \| \nabla \chi_{M}(t)\|_H + C_\mu \int_{\Omega} | \tau_M(t)|^{p+1} \\ & \leq C \||\tau_{M}(t)|^p \,\nabla \tau_{M}(t) \|_{H} + \frac12 \int_{\Omega} | \tau_M (t)|^{p+2} + C_\mu, \end{aligned}$$ where the second inequality follows from estimate and the last one from and Young’s inequality. Therefore, combining and , we find an estimate for $\| \tau_M\|_{L^{p+2}(\Omega)}^{p+2}$ in $L^2 (0,T)$ with some constant $C_\mu>0$ which is independent of $M>0$. On behalf of –, from the latter bound, we infer (recall that $|\cdot|$ also denotes the Lebesgue measure) $$\label{stima_m} \begin{aligned} C_\mu \geq \int_{0}^T \left( \int_{\mathcal{O}_M^t} |M|^{p+2}\, {\mathrm{d}}x\right)^2 \, {\mathrm{d}}t = M^{2p+4} \int_0^T |\mathcal{O}_M^t|^2 \, {\mathrm{d}}t \geq \frac{ M^{2p+4}}{T}|\mathcal{O}_M|^2 \quad \text{for all $M>0$}, \end{aligned}$$ where the last inequality is a direct consequence of Jensen’s inequality. Next, we apply the nonlinear Poincaré inequality to $|\tau_M|^p \tau_M$, thus obtaining (cf. ) $$\||\tau_M|^p \tau_M \|_{V} \leq K \left( \| \nabla (|\tau_M|^{p} \tau_M) \|_{H} + \left| {m}(\tau_M)\right|^{p+1}\right)\,.$$ In view of and of the definition of $\tau_M$, we find an estimate for $|\tau_M|^p \tau_M $ in $L^2 (0,T;V)$, again with some constant $C_{\mu}$ which is independent of $M>0$. 
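In more detail: the gradient term is controlled thanks to the bound on $|\tau_M|^p\, \nabla \tau_M$ in $L^2(0,T;H)$ obtained above, since $\nabla (|\tau_M|^{p} \tau_M)=(p+1)|\tau_M|^p\nabla\tau_M$, while the Hölder inequality gives $$\left| {m}(\tau_M)\right|^{p+1}\le C \|\tau_M\|_{L^{p+2}(\Omega)}^{p+1},$$ and the right-hand side is bounded in $L^2(0,T)$ (in fact in $L^{2(p+2)/(p+1)}(0,T)$) thanks to the estimate for $\| \tau_M\|_{L^{p+2}(\Omega)}^{p+2}$ in $L^2 (0,T)$ established above.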
Hence, using the fact that $V \subset L^6 (\Omega)$ and the growth condition for $\alpha$, we conclude that $$\label{stima_l} \| \alpha(\tau_M)\|_{L^{\rho_p}(0,T; L^{\kappa_p} (\Omega))} \leq C_\mu \quad \text{for all $M>0$}$$ (where the indexes $\rho_p$ and $\kappa_p$ are as in : in particular, $1<\rho_p<2$). Therefore, we have $$\begin{aligned} \int_0^T \int_{\Omega} |\alpha_M (w_{M,{\mu}})|^{\rho_p} & \leq \iint_{\mathcal{A}_M} |\alpha (\tau_{M})|^{\rho_p} + 2^{\rho_p-1} \iint_{\mathcal{O}_M}|\alpha(M)|^{\rho_p} + 2^{\rho_p-1} C_1^{\rho_p} \iint_{\mathcal{O}_M}| w_{M,{\mu}} - M|^{\rho_p}\\ & \leq 2^{\rho_p-1} \int_0^T \int_{\Omega} |\alpha(\tau_M)|^{\rho_p}+ C\| w_{M,{\mu}}\|_{L^{\rho_p}(0,T; L^{\rho_p}(\Omega))}^{\rho_p} + CM^{\rho_p} |\mathcal{O}_M| \\ & \leq C_\mu + C + C \frac{M^{\rho_p}}{M^{p+2}}, \end{aligned}$$ where the first inequality follows from the very definition of $\alpha_M$, the second one from trivial calculations, and the last one from estimates for $w_{M,{\mu}}$, for $|\mathcal{O}_M|$, and for $\alpha(\tau_M)$. Note that, since $\rho_p<2$, we have $M^{\rho_p}/ M^{p+2} \to 0 $ as $M \to +\infty$. Altogether, we find $$\label{stima_n} \|\alpha_M (w_{M,{\mu}})\|_{L^{\rho_p}(0,T; L^{\rho_p}(\Omega))} \leq C_\mu \quad \text{for all $M>0$,}$$ which yields, by comparison in , $$\label{stima_p} \| \partial_t \chi_{M,{\mu}} \|_{L^{\rho_p} (0,T;W^{-2,\rho_p}(\Omega))} \leq C_\mu \quad \text{for all $M>0$,}$$ $W^{-2,\rho_p}(\Omega)$ denoting here the standard negative order Sobolev space. Collecting estimates , , , and –, we then argue in the same way as in Section \[s:3.2\]. Thus, we conclude that there exist a subsequence $M_k \nearrow +\infty$ as $k \to +\infty$ (whose dependence on the index $\mu>0$ is not highlighted) and functions $(\chi_{\mu},w_{\mu})$ fulfilling – such that the functions $(\chi_{M_k,{\mu}}, w_{M_k,{\mu}})$ converge, as $k \to +\infty,$ to $(\chi_{\mu},w_{\mu})$ in the same sense as in - and , while, in place of , we only have $${\partial_t}\chi_{M_k,{\mu}} {\rightharpoonup}{\partial_t}\chi_{{\mu}} \quad \text{in \ $L^{\rho_p} (0,T;W^{-2,\rho_p}(\Omega))$,}$$ which is, anyway, sufficient for what follows. Furthermore, there exists some $\bar{\alpha}_\mu \in L^{\rho_p}(0,T; L^{\rho_p}(\Omega))$ such that $$\alpha_M (w_{M_k,{\mu}}) {\rightharpoonup}\bar{\alpha}_\mu \quad \text{in $L^{\rho_p}(0,T; L^{\rho_p}(\Omega))$.}$$ Again, we prove that $$\label{post-2} \phi_{\mu}(\chi_{M_k,\mu}) \to \phi_{\mu}(\chi_{{\mu}}) \quad \text{in $L^2 (0,T;H)$}$$ and, proceeding as in Section \[s:3.2\], with we show that $\bar{\alpha}_\mu =\alpha(w_\mu)$ and $$\alpha_{M_k}(w_{M_k,\mu}) \to \alpha(w_{\mu}) \qquad \text{in $L^1 (0,T; L^1 (\Omega))$.}$$ Having this, we conclude that the pair $(\chi_{\mu},w_{\mu})$ is solution to the PDE system $$\begin{aligned} & \label{1-var-better-mu-bis} \chi_t + A (\alpha(w)) =0 \qquad {\text{a.e.\ in}}\ \Omega \times (0,T)\,,\\ & \label{2-var-better-mu-bis} A\chi + \phi_{{\mu}}(\chi) = w \qquad {\text{a.e.\ in}}\ \Omega \times (0,T)\,,\end{aligned}$$ supplemented with the initial condition . . Finally, we take the limit $\mu\searrow 0$ in (the Cauchy problem for) –. Estimate , with $\alpha_M$ replaced by $\alpha$, holds for the sequence $\{(\chi_{\mu},w_{\mu})\}_{\mu}$ for a constant $C>0$ which is *independent* of the parameter $\mu>0$. 
Furthermore, using the fact that system – has the same structure as –, we argue as in – and conclude that $$\exists\, C>0 \ \ \forall\, \mu>0\, : \quad \|\chi_{{\mu}}\|_{L^2 (0,T;W^{2,6}(\Omega))} + \| \phi_{\mu}(\chi_{\mu})\|_{L^2 (0,T; L^6(\Omega))} \leq C.$$ From the bound for $(\alpha'(w_{\mu}))^{1/2} \nabla w_{\mu}$ in $ L^2(0,T;H)$ (which follows from by applying once more Ioffe’s theorem), developing the very same calculations as throughout –, we find $$\label{giulio11} \begin{aligned} \exists\, C>0 \ \ \forall\, \mu>0\,: \quad \| \partial_t \chi_{\mu}\|_{L^{\eta_{p\sigma}}} (0,T; {\mathcal{W}^{-2,{\kappa_p}}(\Omega)}) & + \| \alpha(w_{\mu}) \|_{L^{\eta_{p\sigma}} (0,T; L^{\kappa_p} (\Omega))} \\ & + \| \phi_{\mu}(\chi_{\mu}) \|_{L^{\sigma q_\sigma} (0,T;L^{6}(\Omega))} \leq C \end{aligned}$$ (where the indexes $\eta_{p\sigma}$ and $q_\sigma$ are as in and , respectively). Thanks to the above estimates, we conclude that there exist a vanishing sequence ${\mu}_k \searrow 0$ and functions $(\chi,w)$ satisfying – such that $(\chi_{{\mu}_k}, w_{{\mu}_k})$ converges to $(\chi,w)$ in the topologies of –. We then pass to the limit as $k \to +\infty$ in  and, also in view of , infer that $\chi$ complies with the initial condition . Furthermore, we deduce from the strong convergence of $\chi_{\mu_k}$ to $\chi$ in $L^2 (0,T;H)$ that $\chi_{\mu_k} \to \chi$ almost everywhere in $\Omega \times (0,T)$. Using the uniform convergence  of $\{\phi_{\mu_k}\}$ to $\phi$, we infer that $$\phi_{\mu_k} (\chi_{\mu_k}(x,t)) \to \phi(\chi(x,t)) \quad {{\text{for a.a.}}}\, (x,t) \in \Omega \times (0,T).$$ Then, taking into account the uniform integrability of $\{ \phi_{\mu}(\chi_{\mu_k})\}$ in $L^2 (0,T;H)$ (which follows from , noting that $\sigma q_\sigma>2$), in view of Theorem \[t:ds\] we obtain $$\label{step-fundamental} \phi_{\mu_k}(\chi_{\mu_k}) \to \phi(\chi) \qquad \text{in $L^2 (0,T;H)$.}$$ Then, we again argue as in Section \[s:3.2\] and use to prove that $$\alpha(w_{\mu_k}) \to \alpha(w) \qquad \text{in $L^1 (0,T; L^1 (\Omega))$.}$$ Having this, we conclude that the pair $(\chi,w)$ is solution to Problem \[p:1\], which finishes the proof. [99]{} A.V. Babin and M.I. Vishik: *Attractors of evolution equations*. Translated and revised from the 1989 Russian original by Babin. Studies in Mathematics and its Applications, 25. North-Holland Publishing Co., Amsterdam, 1992. J.M. Ball: Continuity properties and global attractors of generalized semiflows and the Navier-Stokes equations. *J. Nonlinear Sci.*, **7** (1997), 475–502. V. Barbu: . Noordhoff, Leyden, 1976. V. Barbu, P. Colli, G. Gilardi, and M. Grasselli: Existence, uniqueness, and long-time behaviour for a nonlinear Volterra integrodifferential equation. , **13** (2000), 1233–1262. H. Brézis: . North Holland Math. Studies, 5. North-Holland, Amsterdam, 1973. : . , **9** (1961), [795–801]{}. : . , **2** (1958), [258–267]{}. N. Dunford and J.T. Schwartz: . Interscience Publishers, New York, 1958. : . . M. E. Gurtin: Generalized Ginzburg-Landau and Cahn-Hilliard equations based on a microforce balance. *Phys. D*, **92** (1996), [178–192]{}. M. Hieber and J. Prüss: Heat kernels and maximal $L^p$-$L^q$ estimates for parabolic evolution equations. *Comm. Partial Differential Equations*, **22** (1997), 1647–1669. A.D. Ioffe: On lower semicontinuity of integral functionals. I. *SIAM J. Control Optimization*, **15** (1977), 521–538. J. Málek and D. Pražák: Large time behavior via the method of $l$-trajectories. *J. 
Differential Equations*, **181** (2002), 243–279. A. Miranville and S. Zelik: Robust exponential attractors for Cahn-Hilliard type equations with singular potentials. *Math. Methods Appl. Sci.*, [**27**]{} (2004), 545–582. : *Attractors for dissipative partial differential equations in bounded and unbounded domains*. . : *The Cahn-Hilliard equation*. . E. Rocca and G. Schimperna: Universal attractor for some singular phase transition systems. *Phys. D*, [**192**]{} (2004), 279–307. R. Rossi: On two classes of generalized viscous Cahn-Hilliard equations. *Commun. Pure Appl. Anal.*, [**4**]{} (2005), 405–430. R. Rossi: Global attractor for the weak solutions of a class of viscous Cahn-Hilliard equations. In: *Dissipative Phase Transitions*, pp. 247–268. Series on Advances in Mathematics for Applied Sciences, Vol. 71, World Sci. Publ., Hackensack, NJ, 2006. R. Rossi, A. Segatti, and U. Stefanelli: Attractors for gradient flows of non convex functionals and applications to quasistationary phase field models. *Arch. Ration. Mech. Anal.*, [**187**]{} (2008), 91–135. G. Schimperna: Global attractors for Cahn-Hilliard equations with nonconstant mobility. *Nonlinearity*, **20** (2007), 2365–2387. A. Segatti: On the hyperbolic relaxation of the Cahn-Hilliard equation in 3D: approximation and long time behaviour. *Math. Models Methods Appl. Sci.*, **17** (2007), 411–437. J. Simon: Compact sets in the space [$L^p(0,T;B)$]{}. *Ann. Mat. Pura Appl. (4)*, [**146**]{} (1987), 65–96. : *Infinite-dimensional dynamical systems in mechanics and physics*, . [^1]: *Dipartimento di Matematica “F. Brioschi”, Politecnico di Milano. Via Bonardi, 9. I–20133 Milano, Italy. Email: [maurizio.grasselli@polimi.it]{}* [^2]: *Laboratoire de Mathématiques et Applications–SP2MI, Université de Poitiers. Boulevard Marie et Pierre Curie–Téléport 2. F–86962 Chasseneuil Futuroscope Cedex, France. Email: [miranv@math.univ-poitiers.fr]{}* [^3]: *Dipartimento di Matematica, Università di Brescia. Via Valotti 9. I–25133 Brescia, Italy.* E-mail: [riccarda.rossi@ing.unibs.it]{} [^4]: *Dipartimento di Matematica “F.Casorati”, Università di Pavia. Via Ferrata, 1. I–27100 Pavia, Italy. Email: [giusch04@unipv.it]{}* [^5]: All authors have been supported by the [project *Programma Galileo, Università Italo-Francese/Projet Galilée “Modelli matematici in scienza dei materiali/Modèles mathématiques en science des matériaux”.*]{} This paper was initiated during a stay of M.G., R.R., and G.S. in the *Laboratoire de Mathématiques et Applications* (Université de Poitiers), whose hospitality is gratefully acknowledged.
{ "pile_set_name": "ArXiv" }
--- abstract: 'We study a skew product with a curve of neutral points. We show that there exists a unique absolutely continuous invariant probability measure, and that the Birkhoff averages of a sufficiently smooth observable converge to a normal law or a stable law, depending on the average of the observable along the neutral curve.' author: - 'S. Gouëzel' - 'Sébastien Gouëzel [^1]' bibliography: - 'biblio.bib' date: November 2003 title: 'Statistical properties of a skew product with a curve of neutral points [^2]' --- Introduction ============ Let $T:M\to M$ be a map on a compact space. While uniformly hyperbolic or uniformly expanding dynamics are well understood, problems arise when there are neutral fixed points (where the differential of $T$ has an eigenvalue equal to $1$). The one-dimensional case has been thoroughly studied, particularly when $T$ has only one neutral fixed point (see [@liverani_saussol_vaienti] and references therein). The normal form at the fixed point dictates the asymptotics of the dynamics, and in particular the speed of mixing, and the convergence of Birkhoff sums to limit laws ([@gouezel:stable]). In this article, we study the same type of phenomenon, but in higher dimension. Contrary to [@hu:almost_hyperbolic], [@pollicott_yuri:indifferent] (where the case of isolated fixed points is considered), our models admit a whole invariant neutral curve. We show that the one-dimensional results remain essentially true. More precisely, define a map $T_\alpha$ on $[0,1]$ by $$T_\alpha(x)=\left\{ \begin{array}{cl} x(1+2^\alpha x^\alpha) &\text{if }0{\leqslant}x{\leqslant}1/2 \\ 2x-1 &\text{if }1/2<x{\leqslant}1 \end{array}\right.$$ It has a neutral fixed point at $0$, behaving like $x(1+x^\alpha)$. To mix different such behaviors, we consider a skew product, similar to the Alves-Viana map ([@viana:multidim_attr]) but where the unimodal maps are replaced by $T_\alpha$. Let $\alpha:S^1 \to (0,1)$ be a map with minimum $\operatorname{{\alpha_{\text{min}}}}$ and maximum $\operatorname{{\alpha_{\text{max}}}}$. Assume that 1. $\alpha$ is $C^2$. 2. $0<\operatorname{{\alpha_{\text{min}}}}<\operatorname{{\alpha_{\text{max}}}}<1$. 3. $\alpha$ takes the value $\operatorname{{\alpha_{\text{min}}}}$ at a unique point $x_0$, with $\alpha''(x_0)>0$. 4. $\operatorname{{\alpha_{\text{max}}}}< \frac{3}{2}\operatorname{{\alpha_{\text{min}}}}$ (which implies $\operatorname{{\alpha_{\text{max}}}}<\operatorname{{\alpha_{\text{min}}}}+1/2$). These conditions are for example satisfied by $\alpha(\omega)=\operatorname{{\alpha_{\text{min}}}}+{\varepsilon}(1+\sin(2\pi \omega))$ where $\operatorname{{\alpha_{\text{min}}}}\in (0,1)$ and ${\varepsilon}$ is small enough. We define a map $T$ on $S^1 \times [0,1]$ by $$\label{definit_T} T(\omega,x)= (F(\omega), T_{\alpha(\omega)}(x))$$ where $F(\omega)=4\omega$. In the following, we will generalize to this skew product the one-dimensional results on the maps $T_\alpha$. First of all, in Section \[section\_invariante\], we prove that there exists a unique absolutely continuous invariant probability measure $m$, whose density $h$ is in fact Lipschitz on every compact subset of $S^1\times (0,1]$ (Theorem \[existence\_mesure\_invariante\]). In Section \[limit\_Markov\], we prove limit theorems for abstract Markov maps (using a method essentially due to [@melbourne_torok] and recalled in Appendix \[appendice:loi\_stable\], and estimates of [@aaronson_denker] and [@gouezel:stable]). 
Finally, in Sections \[section\_estimee\_Xn\] and \[section:limite\], we study the limit laws of Birkhoff sums for the skew product $T$, and we obtain the convergence to a normal law or a stable law, depending on the value of $\operatorname{{\alpha_{\text{min}}}}$. We obtain the following theorem (see Theorem \[enonce\_theoreme\_limite\] for more details). Set $$A=\frac{1}{4\left( \operatorname{{\alpha_{\text{min}}}}^{3/2} \sqrt{\frac{\pi}{2\alpha''(x_0)}}\right)^{1/\operatorname{{\alpha_{\text{min}}}}}}\int_{S^1\times \{1/2\}} h\operatorname{dLeb},$$ where $h$ is the density of the absolutely continuous invariant probability measure. Let $f$ be a Lipschitz function on $S^1\times [0,1]$, with $\int f{\, {\rm d}}m=0$. Write $c=\int_{S^1\times\{0\}} f \operatorname{dLeb}$ and $S_n f=\sum_{k=0}^{n-1} f\circ T^k$. Then - If $\operatorname{{\alpha_{\text{min}}}}<1/2$, there exists $\sigma^2 {\geqslant}0$ such that $\frac{1}{\sqrt{n}} S_n f \to {\mathcal{N}}(0,\sigma^2)$. - If $\operatorname{{\alpha_{\text{min}}}}=1/2$ and $c\not=0$, then $\frac{S_n f}{\sqrt{ \frac{c^2 A }{4} n (\ln n)^2}} \to {\mathcal{N}}(0,1)$. - If $1/2<\operatorname{{\alpha_{\text{min}}}}<1$ and $c\not =0$, then $\frac{S_n f}{n^{\operatorname{{\alpha_{\text{min}}}}}\sqrt{\operatorname{{\alpha_{\text{min}}}}\ln n}} \to Z$, where the random variable $Z$ has an explicit stable distribution. - If $1/2{\leqslant}\operatorname{{\alpha_{\text{min}}}}<1$ and $c=0$, then there exists $\sigma^2 {\geqslant}0$ such that $\frac{1}{\sqrt{n}} S_n f \to {\mathcal{N}}(0,\sigma^2)$. An interesting feature of this example is that its study involves sophisticated mixing properties of $F$, particularly a multiple decorrelation property, proved in Appendix \[appendice:pene\] using [@pene:averaging]. Theorems of [@gouezel:stable] could be used instead of the method of [@melbourne_torok] to get the limit laws. However, this elementary method is interesting in its own right, and can be generalized more easily to other settings than the results of [@gouezel:stable] (in particular to the case of more neutral fixed points). The specific form of $F$ is of no importance at all, the results remain true when $F$ is $C^2$ with $|F'| {\geqslant}4$ (for example $F(\omega)=d\omega$ with $d {\geqslant}4$). In the same way, the only important properties of the maps $T_\alpha$ are their normal form close to $0$ and the fact that they are Markov. Finally, the hypothesis $\alpha''(x_0)\not=0$ is only useful for limit theorems, and can be replaced by: $\exists m, \alpha^{(m)}(x_0)\not=0$ (but the normalizing factors have to be modified accordingly). For the sake of simplicity, we will restrict ourselves in what follows to the aforementioned case. In this article, $a(n) \sim b(n)$ means that $a(n)/b(n) \to 1$ when $n\to \infty$. The integral with respect to a probability measure will sometimes be denoted by $E(\cdot)$. Finally, $\lfloor x \rfloor$ will denote the integer part of $x$. Invariant measure {#section_invariante} ================= An important property of the map $T$, that will be used thoroughly in what follows, is that it is Markov: there exists a partition of the space such that every element of this partition is mapped by $T$ on a union of elements of this partition. In fact, we will consider $T_Y$ (the induced map on $Y=S^1 \times (1/2,1]$), which is also Markov, and expanding, contrary to $T$. We will apply to $T_Y$ classical results on expanding Markov maps (also called *Gibbs-Markov* maps), which we recall in the next paragraph. 
Markov maps and invariant measures ---------------------------------- Let $(Y,{\mathcal{B}},m_Y)$ be a standard probability space, endowed with a bounded metric $d$. A non-singular map $T_Y$ defined on $Y$ is said to be a *Markov map* if there exists a finite or countable partition $\alpha$ of $Y$ such that $\forall a\in \alpha$, $m_Y(a)>0$, $T_Y(a)$ is a union (mod $0$) of sets of $\alpha$, and $T_Y:a \to T_Y(a)$ is invertible. In this case, $\alpha$ is a *Markov partition* for $T_Y$. A Markov map $T_Y$ (with a Markov partition $\alpha$) is a *Gibbs-Markov map* ([@aaronson:book]) if 1. $T_Y$ has the big image property: $\inf_{a\in \alpha} m_Y(T_Y(a))>0$. 2. \[enumere\_expansion\] There exists $\lambda>1$ such that $\forall a\in \alpha, \forall x,y\in a, d(T_Yx, T_Y y){\geqslant}\lambda d(x,y)$. 3. \[enumere\_distortion\] Let $g$ be the inverse of the jacobian of $T_Y$, i.e. on a set $a\in \alpha$, $g(x)=\frac{{\, {\rm d}}m_Y}{{\, {\rm d}}\left(m_Y \circ (T_Y)_{|a}\right)} (x)$. Then there exists $C>0$ such that for all $a\in \alpha$, for almost all $x,y\in a$, $$\left|1-\frac{g(x)}{g(y)} \right| {\leqslant}C d(T_Yx,T_Yy).$$ This definition is slightly more general than the definition of [@aaronson:book]: the distance $d=d_\tau$ considered there is given by $d_\tau(x,y)=\tau^{s(x,y)}$ where $\tau<1$ and $s(x,y)$ is the separation time of $x$ and $y$, i.e.  $$\label{definit_separation} s(x,y)=\inf\{ n\in {\mathbb{N}}{\ |\ }\nexists a\in \alpha, T^n x\in a, T^n y\in a\}.$$ The proof of [@aaronson:book Theorem 4.7.4] still works in our context, and gives: \[existe\_mesure\_invariante\] Let $T_Y$ be a transitive Gibbs-Markov map ($\forall a,b \in \alpha, \exists n\in {\mathbb{N}}, m_Y(T_Y^n a \cap b)>0$) such that $\operatorname{Card}(\alpha_*)<\infty$, where $\alpha_*$ is the partition generated by the images $T_Y(a)$ for $a\in \alpha$. Then $T_Y$ is ergodic, and there exists a unique absolutely continuous (with respect to $m_Y$) invariant probability measure, denoted by $\mu_Y$. Moreover, $\mu_Y=h m_Y$ where the density $h$ is bounded and bounded away from $0$, and Lipschitz on every set of $\alpha_*$. Preliminary estimates --------------------- To apply Theorem \[existe\_mesure\_invariante\], we will construct a Markov partition, and control the distortion of the inverse branches of $T_Y$. We will write $T_\omega^n=T_{\alpha(F^{n-1}\omega)} \circ\cdots\circ T_{\alpha(\omega)}$, whence $T^n(\omega,x)=(F^n\omega, T_\omega^n(x))$. Write also $d((\omega_1,x_1),(\omega_2,x_2))= |\omega_1-\omega_2|+|x_1-x_2|$. A point of $S^1 \times [0,1]$ will be denoted by ${\textbf{x}}=(\omega,x)$. Finally, set $\operatorname{{d_{\text{vert}}}}((\omega_1,x_1),(\omega_2,x_2))=|x_2-x_1|$. Define $X_0(\omega)=1$, $X_1(\omega)=1/2$, and for $n{\geqslant}2$, $X_n(\omega)$ is the preimage in $[0,1/2]$ of $X_{n-1}(F\omega)$ by $T_{\alpha(\omega)}$. These $X_n$ will be useful in the construction of a Markov partition for $T$ (paragraph \[construit\_markov\]). \[estime\_croissance\_grossiere\_Xn\] There exists $C>0$ such that $\forall n\in {\mathbb{N}}^*, \forall \omega\in S^1$, $$\frac{1}{Cn^{1/\operatorname{{\alpha_{\text{min}}}}}} {\leqslant}X_n(\omega) {\leqslant}\frac{C}{n^{1/\operatorname{{\alpha_{\text{max}}}}}}.$$ Write $Z_1=1/2$ and $T(Z_{n+1})=Z_n$ where $T(x)=x(1+2^{\operatorname{{\alpha_{\text{max}}}}} x^{\operatorname{{\alpha_{\text{min}}}}})$. We easily check inductively that $Z_n{\leqslant}X_n(\omega)$ for every $\omega$, since $T(x){\geqslant}T_{\alpha(\omega)}(x)$ for every $\omega$. 
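In more detail, both facts can be checked as follows. For $0{\leqslant}x{\leqslant}1/2$ we have $2x{\leqslant}1$, whence $$(2x)^{\alpha(\omega)} {\leqslant}(2x)^{\operatorname{{\alpha_{\text{min}}}}} = 2^{\operatorname{{\alpha_{\text{min}}}}} x^{\operatorname{{\alpha_{\text{min}}}}} {\leqslant}2^{\operatorname{{\alpha_{\text{max}}}}} x^{\operatorname{{\alpha_{\text{min}}}}}, \quad \text{so that} \quad T_{\alpha(\omega)}(x) {\leqslant}T(x).$$ Moreover, $Z_1=1/2=X_1(\omega)$ and, if $Z_n {\leqslant}X_n(\omega')$ for every $\omega'$, then $T(Z_{n+1})=Z_n {\leqslant}X_n(F\omega)=T_{\alpha(\omega)}(X_{n+1}(\omega)) {\leqslant}T(X_{n+1}(\omega))$, whence $Z_{n+1} {\leqslant}X_{n+1}(\omega)$ since $T$ is increasing.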
It is thus sufficient to estimate $Z_n$ to get the lower bound. As $T(x){\geqslant}x$, the sequence $Z_n$ is decreasing, and nonnegative, whence it tends to a fixed point of $T$, necessarily $0$. We have $$\begin{aligned} \frac{1}{Z_n^{\operatorname{{\alpha_{\text{min}}}}}}& =\frac{1}{Z_{n+1}^{\operatorname{{\alpha_{\text{min}}}}}}\left(1+2^{\operatorname{{\alpha_{\text{max}}}}} Z_{n+1}^{\operatorname{{\alpha_{\text{min}}}}}\right)^{-\operatorname{{\alpha_{\text{min}}}}} =\frac{1}{Z_{n+1}^{\operatorname{{\alpha_{\text{min}}}}}}\left(1-\operatorname{{\alpha_{\text{min}}}}2^{\operatorname{{\alpha_{\text{max}}}}} Z_{n+1}^{\operatorname{{\alpha_{\text{min}}}}} +o(Z_{n+1}^{\operatorname{{\alpha_{\text{min}}}}})\right) \\& =\frac{1}{Z_{n+1}^{\operatorname{{\alpha_{\text{min}}}}}}-\operatorname{{\alpha_{\text{min}}}}2^{\operatorname{{\alpha_{\text{max}}}}}+o(1). \end{aligned}$$ A summation gives $\frac{1}{Z_m^{\operatorname{{\alpha_{\text{min}}}}}}\sim m \operatorname{{\alpha_{\text{min}}}}2^{\operatorname{{\alpha_{\text{max}}}}}$, whence $Z_m \sim C/m^{1/\operatorname{{\alpha_{\text{min}}}}}$, which proves the lower bound. The upper bound is obtained similarly, using a sequence $Z'_n$ with $Z'_n {\geqslant}X_n(\omega)$. *We fix once and for all a large enough constant $D$.* The following definition is analogous to a definition of Viana ([@viana:multidim_attr]). Let $\psi: K\to [0,1]$, where $K$ is a subinterval of $S^1$. We say that the graph of $\psi$ is an admissible curve if $\psi$ is $C^1$ with $|\psi'|{\leqslant}D$. Let $\psi$ be an admissible curve, defined on $K$ with $|K| <1/4$, and included in $K\times [0,1/2]$ or $K\times (1/2,1]$. Then the image of $\psi$ by $T$ is still an admissible curve. Let $(u,v)$ be a tangent vector at $(\omega,x)$ with $|v|{\leqslant}D |u|$; we have to check that its image $(u',v')$ by $DT(\omega,x)$ still satisfies $|v'|{\leqslant}D |u'|$. Assume first that $x{\leqslant}1/2$, whence $u'=4u$ and $v'=(1+(2x)^{\alpha(\omega)} (\alpha(\omega)+1))v+ x \ln(2x)\alpha'(\omega)(2x)^{\alpha(\omega)}u$. As $\alpha(\omega){\leqslant}\operatorname{{\alpha_{\text{max}}}}{\leqslant}1$, we get $|v'|{\leqslant}3|v|+C|u|$ for a constant $C$ (which depends only on ${\left\| \alpha' \right\|}_\infty$). Thus, $$\frac{|v'|}{|u'|} {\leqslant}\frac{3}{4} \frac{|v|}{|u|} +\frac{C}{4}.$$ This will give $|v'|/|u'| {\leqslant}D$ if $\frac{3}{4}D +\frac{C}{4} {\leqslant}D$, which is true if $D$ is large enough. Assume then that $x>1/2$. Then $u'=4u$ and $v'=2v$, and there is nothing to prove. \[controle\_rapport\_horz\_vert\] Let $(\omega_1,x_1)$ and $(\omega_2,x_2)$ be two points in $S^1\times [0,1/2]$ with $|x_1-x_2|{\leqslant}D|\omega_1-\omega_2|$ and $|\omega_1-\omega_2|{\leqslant}\frac{1}{8}$. Then their images satisfy $|x'_1-x'_2|{\leqslant}D|\omega'_1-\omega'_2|$. Use a segment between the two points: it is an admissible curve, whence its image is still admissible. The Markov partition {#construit_markov} -------------------- Set $Y=S^1 \times (1/2,1]$. For ${\textbf{x}}\in Y$, set ${\varphi}_Y({\textbf{x}})=\inf\{n>0 {\ |\ }T^n({\textbf{x}})\in Y\}$: this is the first return time to $Y$, everywhere finite. The map $T_Y({\textbf{x}}):=T^{{\varphi}_Y({\textbf{x}})}({\textbf{x}})$ is the map induced by $T$ on $Y$. We will show that $T_Y$ is a Gibbs-Markov map, by constructing an appropriate Markov partition. If $I$ is an interval of $S^1$, we will abusively write $I\times [X_{n+1},X_n]$ for $\{(\omega,x){\ |\ }\omega \in I, x\in [X_{n+1}(\omega), X_n(\omega)]\}$.
Set $I_n(\omega)=[X_{n+1}(\omega),X_n(\omega)]$ (or $\{\omega\} \times [X_{n+1}(\omega),X_n(\omega)]$, depending on the context). By definition of $X_n$, $T$ maps $\{\omega\}\times I_n(\omega)$ bijectively on $\{F\omega\} \times I_{n-1}(F\omega)$. Thus, the interval $I_n(\omega)$ returns to $[1/2,1]$ in exactly $n$ steps. Let $Y_n(\omega)$ be the preimage in $[1/2,1]$ of $X_{n-1}(F\omega)$ under $T_{\alpha(\omega)}$. Thus, the interval $J_n(\omega)= [Y_{n+1}(\omega),Y_n(\omega)]$ returns to $[1/2,1]$ in $n$ steps. *We fix once and for all $0<{{\varepsilon}_0}<\frac{1}{8}$, small enough so that $D{{\varepsilon}_0}$ is less than the length of every interval $I_1(\omega)$*. (This condition will be useful in distortion estimates). *Let $q$ be large enough so that $\frac{1}{4^q}<{{\varepsilon}_0}$, and consider $A_{s,n}=\left[ \frac{s}{4^{q+n}},\frac{s+1}{4^{q+n}}\right] \times J_n$, for $n\in {\mathbb{N}}^*$ and $0{\leqslant}s{\leqslant}4^{q+n}-1$*: this set is mapped by $T^n$ on $\left[\frac{s}{4^q},\frac{s+1}{4^q}\right] \times [1/2,1]$. Let $K_0,\ldots,K_{4^q-1}$ be the sets $\left[\frac{i}{4^q},\frac{i+1}{4^q}\right] \times [1/2,1]$. Then the map $T_Y$ is an isomorphism between each $A_{s,n}$ and some $K_i$. Consequently, the map $T_Y$ is Markov for the partition $\{A_{s,n}\}$, and it has the big image property. To apply Theorem \[existe\_mesure\_invariante\], we need expansion (for in the definition of Gibbs-Markov maps) and distortion control (for ). The expansion is given by the next proposition, and the distortion is estimated in the following paragraph. On the intervals $[X_3(\omega),X_1(\omega)]$, the derivative of $T_{\alpha(\omega)}$ is greater than $1$, whence greater than a constant $2>\lambda>1$, independent of $\omega$. For $(\omega_1,x_1)$ and $(\omega_2,x_2)\in S^1\times [0,1]$, set $$\label{definit_d'} d'((\omega_1,x_1),(\omega_2,x_2))=a|x_1-x_2|+|\omega_1-\omega_2|$$ where $a=\frac{1-\lambda/4}{D}$. \[dilate\_markov\] On each $A_{s,n}$, the map $T^n$ is expanding by at least $\lambda$ for the distance $d'$. For $n=1$ (the points return directly to $S^1\times [1/2,1]$), everything is linear, and the result is clear. Assume $n{\geqslant}2$. Take $(\omega_1,x_1)$ and $(\omega_2,x_2)\in A_{s,n}$, with for example $x_2{\geqslant}x_1$. The points $(\omega_1,x_1)$ and $(\omega_2,x_1)$ return to $S^1\times [1/2,1]$ after at least $n$ iterations (by hypothesis for the first point, and the second point is under $(\omega_2,x_2)$). We can use Corollary \[controle\_rapport\_horz\_vert\] $n-1$ times, and get that in vertical distance, $\operatorname{{d_{\text{vert}}}}( T^n(\omega_1,x_1),T^n(\omega_2,x_1)) {\leqslant}D|F^n\omega_1-F^n\omega_2|$. In particular, $T_{\omega_2}^n(x_1) {\geqslant}T_{\omega_1}^n(x_1) - D {\varepsilon}_0 {\geqslant}1/2-D {\varepsilon}_0$. Thus, by definition of ${\varepsilon}_0$, $T^n(\omega_2,x_1) \in I_i(F^n \omega_2)$ for $i=0$ or $1$, whence $T^{n-1}(\omega_2,x_1) \in [X_3(F^{n-1} \omega_2), X_1(F^{n-1}\omega_2)]$. Note that $T^{n-1}(\omega_2,x_2)$ belongs to the same interval (in fact, $T^{n-1}_{\omega_2}(x_2) \in [X_2(F^{n-1}\omega_2), X_1(F^{n-1}\omega_2)]$). Moreover, the $T_\alpha$ are expanding, whence $\operatorname{{d_{\text{vert}}}}(T^{n-1}(\omega_2,x_1), T^{n-1}(\omega_2,x_2)) {\geqslant}|x_1-x_2|$. We apply once more $T$, which expands at least by $\lambda$ on $[X_3(F^{n-1}\omega_2), X_1(F^{n-1} \omega_2)]$ by definition of $\lambda$, and get $\operatorname{{d_{\text{vert}}}}(T^n(\omega_2,x_1),T^n(\omega_2,x_2)) {\geqslant}\lambda|x_1-x_2|$. 
Finally, $$\begin{aligned} d'(T^n(\omega_1,x_1),T^n(\omega_2,x_2))& =a \operatorname{{d_{\text{vert}}}}(T^n(\omega_1,x_1),T^n(\omega_2,x_2))+|F^n\omega_1-F^n\omega_2| \\& {\geqslant}a \operatorname{{d_{\text{vert}}}}(T^n(\omega_2,x_1),T^n(\omega_2,x_2))-a \operatorname{{d_{\text{vert}}}}( T^n(\omega_1,x_1),T^n(\omega_2,x_1)) \\& \hphantom{=\ }+|F^n\omega_1-F^n\omega_2| \\& {\geqslant}a \lambda|x_1-x_2| -aD |F^n\omega_1-F^n\omega_2|+|F^n\omega_1-F^n\omega_2|. \end{aligned}$$ The proposition will be proved if $(1-aD)|F^n\omega_1-F^n\omega_2| {\geqslant}\lambda |\omega_1-\omega_2|$. But $$(1-aD)|F^n\omega_1-F^n\omega_2| =(1-aD)4^n |\omega_1-\omega_2| {\geqslant}(1-aD)4 |\omega_1-\omega_2| =\lambda |\omega_1-\omega_2|.$$ Distortion bounds ----------------- \[lemme\_distortion\_deplac\_horz\] There exists a constant $E>0$ such that $$\begin{split} \forall n>0, \forall \omega_1,\omega_2&\in S^1 \text{ with }|\omega_1-\omega_2|{\leqslant}\frac{{{\varepsilon}_0}}{4^n}, \forall x_1 \in J_n(\omega_1) \text{ with }T_{\omega_2}^{n-1} x_1{\leqslant}1/2,\\& \left| \ln (T_{\omega_1}^n)'(x_1)-\ln (T_{\omega_2}^n)'(x_1)\right| {\leqslant}E |F^n \omega_1 -F^n \omega_2|. \end{split}$$ We use Corollary \[controle\_rapport\_horz\_vert\] $n-1$ times and get for $0{\leqslant}k{\leqslant}n$ that $|T_{\omega_1}^k x_1 -T_{\omega_2}^k x_1|{\leqslant}D|F^k\omega_1-F^k\omega_2|$. In particular, for $k=n$, $|T_{\omega_1}^n x_1|{\geqslant}1/2$, whence $|T_{\omega_2}^n x_1|{\geqslant}1/2-D{{\varepsilon}_0}$. Consequently, $T^n(\omega_2,x_1)\in I_i(F^n\omega_2)$ for some $i\in \{0,1\}$, by definition of ${{\varepsilon}_0}$. An inverse induction gives $T^k(\omega_2,x_1) \in I_{n-k+i}(F^k\omega_2)$. For $x{\leqslant}1/2$ and $\omega\in S^1$, write $G(\omega,x)=\ln T_{\alpha(\omega)}'(x)= \ln\left(1+(\alpha(\omega)+1)(2x)^{\alpha(\omega)}\right)$. Then $$\frac{\partial G}{\partial x}(\omega,x) =\frac{(\alpha(\omega)+1)\alpha(\omega)2^{\alpha(\omega)} x^{\alpha(\omega)-1}}{1+(\alpha(\omega)+1)(2x)^{\alpha(\omega)}} {\leqslant}C x^{\operatorname{{\alpha_{\text{min}}}}-1}$$ and $$\left|\frac{\partial G}{\partial \omega}(\omega,x)\right| =\left|\frac{\alpha'(\omega)(2x)^{\alpha(\omega)} +(\alpha(\omega)+1)\alpha'(\omega)\ln(2x)(2x)^{\alpha(\omega)}} {1+(\alpha(\omega)+1)(2x)^{\alpha(\omega)}} \right| {\leqslant}C.$$ Lemma \[estime\_croissance\_grossiere\_Xn\], and the fact that $T^k(\omega_1,x_1)\in I_{n-k}(F^k\omega_1)$ and $T^k(\omega_2,x_1) \in I_{n-k+i}(F^k\omega_2)$ with $i{\leqslant}1$, give that the second coordinates of $T^k(\omega_1,x_1)$ and $T^k(\omega_2,x_1)$ are ${\geqslant}\frac{1}{C (n-k+1)^{1/\operatorname{{\alpha_{\text{min}}}}}}$. On the set of points $(\omega,x)$ with $x{\geqslant}\frac{1}{C (n-k+1)^{1/\operatorname{{\alpha_{\text{min}}}}}}$, the estimates on the partial derivatives of $G$ show that this function is $C (n-k+1)^{1/\operatorname{{\alpha_{\text{min}}}}-1}$-Lipschitz, whence $$\begin{aligned} |G(T^k(\omega_1,x_1))-G(T^k(\omega_2,x_1))| & {\leqslant}C (n-k+1)^{1/\operatorname{{\alpha_{\text{min}}}}-1} d((T^k(\omega_1,x_1),T^k(\omega_2,x_1)) \\& {\leqslant}C (n-k+1)^{1/\operatorname{{\alpha_{\text{min}}}}-1} (1+D)|F^k \omega_1-F^k \omega_2| \\& {\leqslant}C (n-k+1)^{1/\operatorname{{\alpha_{\text{min}}}}-1} (1+D) 4^k |\omega_1-\omega_2|. 
\end{aligned}$$ Finally, $$\begin{aligned} \left| \ln (T_{\omega_1}^n)'(x_1)-\ln (T_{\omega_2}^n)'(x_1)\right|& {\leqslant}\sum_{k=0}^{n-1}|G(T^k(\omega_1,x_1))-G(T^k(\omega_2,x_1))| \\& {\leqslant}C 4^n |\omega_1-\omega_2| \sum_{k=0}^{n-1} (n-k+1)^{1/\operatorname{{\alpha_{\text{min}}}}-1} 4^{k-n} \\& {\leqslant}C |F^n\omega_1-F^n\omega_2| \sum_{l=1}^\infty (l+1)^{1/\operatorname{{\alpha_{\text{min}}}}-1} 4^{-l}. \end{aligned}$$ The last sum is finite, which concludes the proof. For $n{\geqslant}2$, write $J_n^+(\omega)=[Y_{n+2}(\omega),Y_n(\omega)]$. Thus, if $n{\geqslant}1$, $J_{n+1}^+(\omega)$ is the preimage of $I_n^+(F\omega)$, defined by $I_n^+(F\omega)=[X_{n+2}(F\omega),X_{n}(F\omega)]$. These intervals will appear naturally in distortion controls, since we have seen in the proof of Lemma \[lemme\_distortion\_deplac\_horz\] that, if we move away horizontally from a point of $J_n(\omega_1)$, we find a point of $J_{n+i}(\omega_2)$ for $i\in \{0,1\}$, i.e. in $J_n^+(\omega_2)$. \[distortion\_verticale\_bornee\] There exists a constant $C$ such that $$\forall n{\geqslant}0, \forall \omega\in S^1, \forall x,y\in J_n^+(\omega),\ \left|\ln (T_\omega^n)'(x)-\ln(T_\omega^n)'(y) \right| {\leqslant}C |T_\omega^n(x)-T_\omega^n(y)|.$$ Recall that the Schwarzian derivative of an increasing diffeomorphism $g$ of class $C^3$ is $Sg(x)=\frac{g'''(x)}{g'(x)}-\frac{3}{2}\left(\frac{g''(x)}{g'(x)} \right)^2$. The composition of two functions with nonpositive Schwarzian derivative still has a nonpositive Schwarzian derivative. For $\tau>0$, the Koebe principle ([@demelo_vanstrien Theorem IV.1.2]) states that, if $Sg{\leqslant}0$, and $J\subset J'$ are two intervals such that $g(J')$ contains a $\tau$-scaled neighborhood of $g(J)$ (i.e. the intervals on the left and on the right of $g(J)$ in $g(J')$ have length at least $\tau |g(J)|$), then there exists a constant $K(\tau)$ such that $$\forall x,y\in J, \left|\ln g'(x) - \ln g'(y)\right| {\leqslant}K(\tau) \frac{|x-y|}{|J|}.$$ This implies that the distortion of $g$ is bounded on $J$, whence it is possible to replace the bound on the right with $K'(\tau) \frac{|g(x)-g(y)|}{|g(J)|}$. In our case, if $0<\alpha<1$, the left branch of $T_\alpha$ has nonpositive Schwarzian derivative, since $T_\alpha'''<0$ and $T_\alpha'>0$. Let in particular $g$ be the composition of the left branches of $T_{\alpha(F^{n-1}\omega)},\ldots, T_{\alpha(F\omega)}$, and of the right branch of $T_{\alpha(\omega)}$. Then, on $J_n^+$, we have $T_\omega^n=g$, and $g$ has nonpositive Schwarzian derivative. We want to see that $\left|\ln (T_\omega^n)'(x) - \ln (T_\omega^n)'(y)\right|{\leqslant}C|T_\omega^n(x)-T_\omega^n(y)|$. For this, we apply the Koebe principle to $J=J_n^+$ and $J'=[1/2+\delta,2]$ for $\delta$ very small. Then $g(J)=[X_2,1]$ while $g(J')$ contains $[\delta',2]$ for $\delta'>0$, arbitrarily small if $\delta$ is small enough. As the $X_{2}$ are uniformly bounded away from $0$, there exists $\tau>0$ (independent of $\omega$ and $n$) such that $g(J')$ contains a $\tau$-scaled neighborhood of $g(J)$. The Koebe principle then gives the desired result. \[prop\_distortion\_bornee\] There exists a constant $C$ such that, for every $A_{s,n}$, for every $(\omega_1,x_1)$ and $(\omega_2,x_2)\in A_{s,n}$, $$\left| \frac{\det DT^n(\omega_1,x_1)}{\det DT^n(\omega_2,x_2)}-1 \right|{\leqslant}C d(T^n(\omega_1,x_1),T^n(\omega_2,x_2)).$$ The matrix $DT^n(\omega,x)$ is upper triangular, with $4^n$ in the upper left corner. 
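Indeed, since the first coordinate of $T$ depends only on $\omega$ and $F'(\omega)=4$, the determinant of this triangular matrix is the product of its diagonal entries, whence, by the chain rule, $$\det DT^n(\omega,x) = 4^n\, (T_\omega^n)'(x).$$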
Thus, we have to show that $$\left| \ln(T_{\omega_1}^n)'(x_1)-\ln(T^n_{\omega_2})'(x_2) \right|{\leqslant}C d(T^n(\omega_1,x_1),T^n(\omega_2,x_2)).$$ Assume for example $x_2{\geqslant}x_1$, which implies that $T_{\omega_2}^k(x_1){\leqslant}1/2$ for $k=0,\ldots,n-1$. Lemma \[lemme\_distortion\_deplac\_horz\] can be applied to $x_1$, $\omega_1$ and $\omega_2$, and gives in particular that $x_1\in J_n^+(\omega_2)$. Write $$\begin{aligned} \left| \ln(T_{\omega_2}^n)'(x_2)-\ln(T^n_{\omega_1})'(x_1) \right| &{\leqslant}\left| \ln(T_{\omega_2}^n)'(x_2)-\ln(T^n_{\omega_2})'(x_1) \right| + \left| \ln(T_{\omega_2}^n)'(x_1)-\ln(T^n_{\omega_1})'(x_1) \right| \\& {\leqslant}C d(T^n(\omega_2,x_2)),T^n(\omega_2,x_1)) + E|F^n\omega_2-F^n \omega_1| \end{aligned}$$ by Lemmas \[lemme\_distortion\_deplac\_horz\] and \[distortion\_verticale\_bornee\]. For the first term, $$\begin{aligned} d(T^n(\omega_2,x_2),T^n(\omega_2,x_1)) &{\leqslant}d(T^n(\omega_2,x_2),T^n(\omega_1,x_1)) +d(T^n(\omega_1,x_1),T^n(\omega_2,x_1)) \\& {\leqslant}d(T^n(\omega_2,x_2),T^n(\omega_1,x_1)) + (D+1) |F^n \omega_1-F^n \omega_2| \end{aligned}$$ using admissible curves. As $|F^n \omega_1-F^n \omega_2|{\leqslant}d(T^n(\omega_1,x_1),T^n(\omega_2,x_2))$, we get the conclusion. Construction of the invariant measure ------------------------------------- The previous estimates and Theorem \[existe\_mesure\_invariante\] easily give that $T_Y$ admits an invariant measure, with Lipschitz density. Inducing gives an invariant measure for $T$, whose density is Lipschitz on each set $S^1\times (X_{n+1},X_n)$. However, this does not exclude discontinuities on $S^1\times X_n$, which is not surprising since $T$ itself has a discontinuity on $S^1 \times \{1/2\}$, which will then propagate to the other $X_n$, since the measure is invariant. However, in the one-dimensional case, Liverani, Saussol and Vaienti ([@liverani_saussol_vaienti]) have proved that the density is really continuous everywhere, since they constructed it as an element of a cone of continuous functions. This fact remains true here: \[existence\_mesure\_invariante\] The map $T$ admits a unique absolutely continuous invariant probability measure ${\, {\rm d}}m$. Moreover, this measure is ergodic. Finally, the density $h=\frac{{\, {\rm d}}m}{\operatorname{dLeb}}$ is Lipschitz on every compact subset of $S^1 \times (0,1]$. Consider the map $T_Y$ induced by $T$ on $Y=S^1\times (1/2,1]$. It is Markov for the partition $\alpha=\{A_{s,n}\}$, and transitive for this partition since $T_Y^2(a)=Y$ for all $a\in \alpha$. Moreover, it is expanding for $d'$ on each set of the partition (Proposition \[dilate\_markov\]) and its distortion is Lipschitz (Proposition \[prop\_distortion\_bornee\], and $d$ equivalent to $d'$). Theorem \[existe\_mesure\_invariante\] shows that $T_Y$ admits a unique absolutely continuous invariant probability measure ${\, {\rm d}}m_Y=h \operatorname{dLeb}$, which is ergodic. Moreover, the density $h$ is Lipschitz (for the distance $d'$, whence for the usual one) on each element of the partition $\alpha_*$ generated by the sets $T_Y(a)$, i.e. on the sets $K_i$. To construct an invariant measure for the initial map $T$, we use the classical induction process ([@aaronson:book Section 1.1.5]): let ${\varphi}_Y$ be the return time to $Y$ under $T$, then $\mu=\sum_{n=0}^\infty T_*^n(m_Y | {\varphi}_Y>n)$ is invariant. To check that the new measure has finite mass, we have to see that $\sum m_Y({\varphi}_Y>n) <\infty$. 
As ${\, {\rm d}}m_Y$ and $\operatorname{dLeb}$ are equivalent, we check it for $\operatorname{dLeb}$. We have $$\operatorname{Leb}({\varphi}_Y>n)=\operatorname{Leb}(S^1 \times [1/2,Y_{n+1}])=\frac{1}{2} \operatorname{Leb}(S^1\times [0,X_n]) {\leqslant}\frac{1}{2} \frac{C}{n^{1/\operatorname{{\alpha_{\text{max}}}}}},$$ using Lemma \[estime\_croissance\_grossiere\_Xn\]. As $\operatorname{{\alpha_{\text{max}}}}<1$, this is summable. We know that $h$ is Lipschitz on the sets $[\frac{s}{4^q},\frac{s+1}{4^q}]\times [1/2,1]$, we have to prove the continuity on $\{s/4^q\}\times [1/2,1]$, which is not hard: these numbers $s/4^q$ are artificial, since they depend on the arbitrary choice of a Markov partition on $S^1$. We can do the same construction using other sets than the $A_{s,n}$. For example, set $A'_{s,n}=\left[ \frac{1}{3}+\frac{s}{4^{q+n}}, \frac{1}{3}+\frac{s+1}{4^{q+n}}\right]\times J_n$, and $K'_i=\left[\frac{1}{3}+\frac{i}{4^q}, \frac{1}{3}+\frac{i+1}{4^q}\right]$. Since $1/3$ is a fixed point of $F$, the map $T_Y$ is Markov for the partition $\{A'_{s,n}\}$, and each of these sets is mapped on a set $K'_i$. Thus, the same arguments as above apply, and prove that $h$ is Lipschitz on each set $K'_i$. Since the boundaries of the sets $K_i$ and $K'_i$ are different, this shows that $h$ is in fact Lipschitz on $S^1\times [1/2,1]$. We show now that $h$ is Lipschitz on $S^1 \times [X_2,1]$. Note that it is slightly incorrect to say that $h$ is Lipschitz, since $h$ is defined only almost everywhere. Nevertheless, if we prove that $|h({\textbf{x}})-h({\textbf{y}})|{\leqslant}Cd({\textbf{x}},{\textbf{y}})$ for almost all ${\textbf{x}}$ and ${\textbf{y}}$, then there will exist a unique version of $h$ which is really Lipschitz. Thus, all the equalities we will write until the end of this proof will be true only almost everywhere. Let $A_{s,n}^+ = \left[\frac{s}{4^{q+n}}, \frac{s+1}{4^{q+n}}\right]\times J_n^+$: $T^n$ is a diffeomorphism between $A_{s,n}^+$ and $K_i^+ = \left[\frac{i}{4^q},\frac{i+1}{4^q}\right]\times [X_2,1]$. We fix some $K^+=K_i^+=I\times [X_2,1]$, and we show that $h$ is Lipschitz on $K^+$. Let $U_1,U_2,\ldots$ be the inverse branches of $T^{n_1},T^{n_2},\ldots$ whose images all coincide with $K^+$. Let $T_Y$ be the map induced by $T$ on $Y=S^1\times [1/2,1]$. Then $h\operatorname{dLeb}_{|Y}$ is invariant under $T_Y$, which means that, for each ${\textbf{x}}\in I\times [1/2,1]$, $$h({\textbf{x}})=\sum JU_j({\textbf{x}}) h(U_j {\textbf{x}})$$ where $JU_j$ is the jacobian of $U_j$. Let $Z=S^1\times [X_2,1]$, and $T_Z$ be the map induced by $T$ on $Z$. Since $h\operatorname{dLeb}_{|Z}$ is also invariant under $T_Z$, we have the same kind of equation as above. For ${\textbf{x}}\in I\times [X_2,1/2]$, all its preimages under $T_Z$ are in $S^1 \times [1/2,1]$, and the invariance gives that $$h({\textbf{x}})=\sum JU_j({\textbf{x}}) h(U_j {\textbf{x}}).$$ We have shown that, for every ${\textbf{x}}\in S^1 \times [X_2,1]$, $$h({\textbf{x}})=\sum JU_j({\textbf{x}}) h(U_j {\textbf{x}}).$$ This means that $h$ is invariant under some kind of transfer operator, even though it is not a real transfer operator since the images of the maps $U_j$ are not disjoint, and since they do not cover the space. In particular, the images of the $U_j$ are included in $S^1 \times [1/2,1]$, and we already know that $h$ is Lipschitz on this set. The bounds of the previous paragraphs still apply to the distortion of the $U_j$, and their expansion. 
In particular, $\left|1-\frac{JU_j({\textbf{y}})}{JU_j({\textbf{x}})}\right|{\leqslant}Cd({\textbf{x}},{\textbf{y}})$ for a constant $C$ independent of $j$, and $|h(U_j {\textbf{x}})-h(U_j {\textbf{y}})| {\leqslant}C d(U_j {\textbf{x}},U_j {\textbf{y}}){\leqslant}D d({\textbf{x}},{\textbf{y}})$ (since $h$ is Lipschitz on the image of $U_j$). Thus, $$\begin{aligned} |h({\textbf{x}})-h({\textbf{y}})|& {\leqslant}\sum |JU_j({\textbf{x}})h(U_j {\textbf{x}})-JU_j({\textbf{y}})h(U_j {\textbf{y}})| \\& {\leqslant}\sum |JU_j({\textbf{x}})| \left|1-\frac{JU_j({\textbf{y}})}{JU_j({\textbf{x}})}\right| |h(U_j {\textbf{x}})| +\sum |JU_j({\textbf{y}})| |h(U_j {\textbf{x}})-h(U_j {\textbf{y}})| \\& {\leqslant}Cd({\textbf{x}},{\textbf{y}})\sum |JU_j({\textbf{x}})| + Dd({\textbf{x}},{\textbf{y}}) \sum |JU_j({\textbf{y}})|. \end{aligned}$$ It remains to prove that $\sum |JU_j({\textbf{x}})|$ is bounded. The bound on distortion gives $JU_j({\textbf{x}}) \asymp \operatorname{Leb}(\operatorname{Im}U_j)$, whence $\sum JU_j({\textbf{x}}){\leqslant}C\sum \operatorname{Leb}(\operatorname{Im}U_j)$, which is finite since every point of $I\times[1/2,1]$ is in the image of at most two maps $U_j$. We have proved that $h$ is Lipschitz on $S^1\times [X_2,1]$, except maybe on $\{\frac{s}{4^q}\}\times [X_2,1]$. As above, using another Markov partition, we exclude the possibility of discontinuities there. Thus, $h$ is Lipschitz on $S^1 \times [X_2,1]$. To prove that $h$ is Lipschitz on $S^1\times [X_k,1]$, we do exactly the same thing, except that we consider $[Y_{n+k},Y_n]$ instead of $J_n^+=[Y_{n+2},Y_n]$. As above, writing $U_1,U_2,\ldots$ for the inverse branches of $T^n$ defined on a set $[\frac{s}{4^{n+q}},\frac{s+1}{4^{n+q}}]\times [Y_{n+k},Y_n]$ and whose image is $K'=[\frac{i}{4^q},\frac{i+1}{4^q}] \times [X_k,1]=I\times [X_k,1]$, we show that $h({\textbf{x}})=\sum JU_j({\textbf{x}}) h(U_j{\textbf{x}})$ for ${\textbf{x}}\in K'$. In fact, for ${\textbf{x}}\in I\times [X_{l},X_{l-1}]$, we use the invariance of $h\operatorname{dLeb}$ under the map induced by $T$ on $S^1 \times [X_l,1]$. We conclude finally as above, using the fact that $h$ is Lipschitz on $S^1\times [1/2,1]$, which contains the images of the $U_j$. This concludes the proof, since every compact subset of $S^1\times(0,1]$ is contained in $S^1 \times [X_k,1]$ for large enough $k$. Limit theorems for Markov maps {#limit_Markov} ============================== We want to establish limit theorems for Birkhoff sums, of the form $\sum_{k=0}^{n-1} f(T^k x)$. We give in this section an abstract result, valid for a map that induces a Gibbs-Markov map on a subset of the space (which is the case of our skew product). Related limit theorems have been proved in [@gouezel:stable], but we will show here a slightly different result, which requires more control on the return time ${\varphi}$ but is more elementary, using Theorem \[thm\_probabiliste\_general\] proved in Appendix \[appendice:loi\_stable\] and inspired by results of Melbourne and Török ([@melbourne_torok]) for flows. An advantage of this new method is that, contrary to [@gouezel:stable], it can easily be extended to stable laws of index $1$. If $Z_0,\ldots,Z_{n-1},\ldots$ are independent identically distributed random variables with zero mean, the sums $\frac{1}{B_n} \sum_{k=0}^{n-1} Z_k$ (where $B_n$ is a real sequence) converge to a nontrivial limit in essentially three cases: if $Z_k \in L^2$, there is convergence to a normal law for $B_n=\sqrt{n}$. 
There is also convergence to a normal law, but with a different normalization, if $P(|Z_k|>x)=x^{-2}l(x)$ with $L(x):=2\int_1^x \frac{l(u)}{u}{\, {\rm d}}u$ unbounded and slowly varying (i.e. $L:(0,\infty) \to (0,\infty)$ satisfies $\forall a >0, \lim_{x\to \infty} L(ax)/L(x)=1$) – this is in particular true when $l$ itself is slowly varying. Finally, if $P(Z_k>x)=(c_1+o(1))x^{-p}L(x)$ and $P(Z_k<-x)=(c_2+o(1))x^{-p}L(x)$, where $L$ is slowly varying and $p\in (0,2)$, we have convergence (for a good choice of $B_n$) to a limit law called stable law. Moreover, these are the only cases where there is a convergence ([@feller:2]). In the dynamical setting, we will prove the same kind of limit theorems, still with three possible cases: $L^2$, normal nonstandard, and stable. The normalizations will moreover be the same as in the probabilistic setting. \[thm\_abstrait\_markov\] Let $T:X\to X$ be an ergodic transformation preserving a probability measure $m$. Assume that there exists a subset $Y$ of $X$ with $m(Y)>0$ such that the first return map $T_Y(x)=T^{{\varphi}(x)}(x)$ (where ${\varphi}(x)= \inf\{ n>0 {\ |\ }T^n(x) \in Y\}$) is Gibbs-Markov for $m_{|Y}$, a partition $\alpha$ of $Y$ such that ${\varphi}$ is constant on each element of $\alpha$, and a distance $d$ on $Y$. Let $f:X \to {\mathbb{R}}$ be an integrable map with $\int f=0$, such that $f_Y(y):=\sum_{n=0}^{{\varphi}(y)-1} f(T^n y)$ satisfies $$\label{condition_markov_abstrait} \sum_{a\in \alpha} m(a) D f_Y(a) <\infty$$ where $$D f_Y(a)=\inf\{ C >0 {\ |\ }\forall x,y\in a, |f_Y(x)-f_Y(y)| {\leqslant}C d(x,y)\}.$$ Set $M(y)=\max_{1{\leqslant}k {\leqslant}{\varphi}(y)} \left| \sum_{j=0}^{k-1} f\circ T^j(y) \right|$. Then: - Assume that $f_Y \in L^2$ and $M \in L^2$. Assume moreover that ${\varphi}$ satisfies one of the following hypotheses: - ${\varphi}\in L^2$. - $m({\varphi}> x)=x^{-p} L(x)$ where $L$ is slowly varying and $p \in (1,2]$. Then there exists $\sigma^2 {\geqslant}0$ such that $\frac{1}{\sqrt{n}} S_n f \to {\mathcal{N}}(0,\sigma^2)$. - Assume that $m(|f_Y| > x)=x^{-2} l(x)$, with $L(x):=2\int_1^x\frac{l(u)}{u}{\, {\rm d}}u$ unbounded and slowly varying. Assume moreover that $m(M > x) {\leqslant}C x^{-2} l(x)$, and $m({\varphi}>x)=(c+o(1)) x^{-2}l(x)$. Let $B_n\to \infty$ satisfy $n L(B_n)=B_n^2$. Then $\frac{1}{B_n} S_n f \to {\mathcal{N}}(0,1)$. - Assume that $m(f_Y>x)=(c_1+o(1))x^{-p}L(x)$ and $m(f_Y<-x)=(c_2+o(1))x^{-p} L(x)$ where $L$ is a slowly varying function, $p\in (1,2)$, and $c_1,c_2{\geqslant}0$ with $c_1+c_2>0$. Assume also that $m(M>x) {\leqslant}C x^{-p} L(x)$, and $m({\varphi}>x)=(c_3+o(1)) x^{-p}L(x)$. Let $B_n\to \infty$ satisfy $n L(B_n)=B_n^p$. Then $\frac{1}{B_n} S_n f \to Z$ where the random variable $Z$ has a characteristic function given by $$E(e^{itZ})=e^{-c|t|^p \left( 1-i\beta \operatorname{sgn}(t) \tan \left(\frac{p\pi}{2} \right) \right)}$$ with $c=(c_1+c_2) \Gamma(1-p) \cos \left(\frac{p\pi}{2} \right)$ and $\beta=\frac{c_1-c_2}{c_1+c_2}$. Note that $M(y){\leqslant}\sum_{j=0}^{{\varphi}(y)-1} |f(T^j y)|=|f|_Y(y)$. Thus, if the integrability hypotheses of the theorem are satisfied by $|f|_Y$ (which will often be the case), they are automatically satisfied by $M$. In the second case of the theorem, when $l$ itself is slowly varying, then $L$ is automatically slowly varying. The second case of the theorem is not the most general possible result, since one may have convergence to a normal law even when the function $l$ is not slowly varying (what really matters is that $L$ is slowly varying). 
The theorem can be extended without problem to this more general setting, but the result becomes more complicated to state. In the applications, the statement given in Theorem \[thm\_abstrait\_markov\] will be sufficient. The idea is to use Theorem \[thm\_probabiliste\_general\]: we have to check all its hypotheses. We will use the notations of this theorem, and in particular write $E_Y(u)=\frac{\int_Y u {\, {\rm d}}m}{m(Y)}$. We first treat the third case (stable law), using the results of [@aaronson_denker] (and the generalizations of [@gouezel:stable]). Let $s(x,y)$ be the separation time of $x$ and $y$ defined in , $\tau=1/\lambda$ and $d_\tau=\tau^s$ the corresponding metric. Since every iteration of $T_Y$ expands by at least $\lambda$, we get $d(x,y){\leqslant}C d_\tau(x,y)$. In particular, we can assume without loss of generality that $d=d_\tau$, which is the setting of [@aaronson_denker] and [@gouezel:stable]. Let $P$ be the transfer operator associated to $T_Y$ (i.e.defined by $\int u\cdot v\circ T_Y = \int P(u)\cdot v$), and $P_t(u)=P(e^{itf_Y} u)$. Let ${\mathcal{L}}$ be the space of bounded Lipschitz functions (i.e. such that there exists $C$ such that, $\forall a\in \alpha, \forall x,y\in a, |g(x)-g(y)|{\leqslant}C d(x,y)$). Theorem 5.1 of [@aaronson_denker] ensures that, for small enough $t$, $P_t$ acting on ${\mathcal{L}}$ has an eigenvalue $\lambda(t)=e^{-\frac{c}{m(Y)}|t|^p \left( 1-i\beta \operatorname{sgn}(t) \tan \left(\frac{p\pi}{2} \right) \right)L(|t|^{-1})(1+o(1))}$, the remaining part of its spectrum being contained in a disk of radius ${\leqslant}1-\delta<1$. In fact, this theorem requires that $Df_Y(a)$ is bounded, but [@gouezel:stable Theorem 3.8] shows that it remains true under the weaker assumption $\sum m(a)D f_Y(a)<\infty$. The slow variation of $L$ easily implies that $\lambda\left(\frac{t}{B_n} \right)^{\lfloor n m(Y) \rfloor} \to e^{-c|t|^p \left( 1-i\beta \operatorname{sgn}(t) \tan \left(\frac{p\pi}{2} \right) \right)}$, whence, for $g\in {\mathcal{L}}$, $$\label{converge_dans_L} E_Y\left(g e^{i\frac{t}{B_n} S^Y_{\lfloor n m(Y) \rfloor} f_Y}\right) \to E_Y(g)E(e^{itZ})$$ where the random variable $Z$ is as in the statement of the theorem (see [@aaronson_denker] or [@gouezel:stable] for more details). We can not apply this result to $g={\varphi}$, since ${\varphi}$ is not bounded. However, ${\varphi}$ is Lipschitz and integrable, whence $P{\varphi}\in {\mathcal{L}}$ ([@aaronson_denker Proposition 1.4]). Equation applied to $P{\varphi}$ gives $E_Y\left({\varphi}e^{i\frac{t}{B_n} S^Y_{\lfloor n m(Y) \rfloor} f_Y\circ T_Y }\right) \to E(e^{itZ})$, since $E_Y(P{\varphi})=E_Y({\varphi})=1$ by Kac’s Formula. Let $k(n)$ be a sequence such that $\lfloor k(n)m(Y) \rfloor = \lfloor n m(Y) \rfloor -1$. Since $k(n)\sim n$, the same arguments give in fact that $E_Y\left({\varphi}e^{i\frac{t}{B_n} S^Y_{\lfloor k(n) m(Y) \rfloor} f_Y\circ T_Y }\right) \to E(e^{itZ})$, i.e.  $E_Y\left({\varphi}e^{i\frac{t}{B_n} (S^Y_{\lfloor n m(Y) \rfloor} f_Y - f_Y)}\right) \to E(e^{itZ})$. The difference between this term and $E_Y\left({\varphi}e^{i\frac{t}{B_n} (S^Y_{\lfloor n m(Y) \rfloor} f_Y }\right)$ is bounded by $E_Y\left( {\varphi}\left|e^{-i\frac{t}{B_n} f_Y}-1\right| \right)$, which tends to $0$ by dominated convergence. Thus, $$\label{eq_presque_bonne} E_Y\left({\varphi}e^{i\frac{t}{B_n} S^Y_{\lfloor n m(Y) \rfloor} f_Y} \right) \to E(e^{itZ}).$$ This is . 
Finally, since $L$ is slowly varying, the equation $n L(B_n)=B_n^p$ implies that $\sup_{r{\leqslant}2n} \frac{B_r}{B_n}<\infty$, $\inf_{r{\geqslant}n} \frac{B_r}{B_n}>0$ (using for example [@feller:2 Corollary page 274]). Let ${\varepsilon}>0$, we bound $m(M {\geqslant}{\varepsilon}B_n)$. $$m(M {\geqslant}{\varepsilon}B_n) {\leqslant}C ({\varepsilon}B_n)^{-p} L( {\varepsilon}B_n) =C {\varepsilon}^{-p} B_n^{-p} L(B_n) \frac{L({\varepsilon}B_n)}{L(B_n)}.$$ But $B_n^{-p}L(B_n)=\frac{1}{n}$ by definition of $B_n$, and $\frac{L({\varepsilon}B_n)}{L(B_n)}$ tends to $1$ since $L$ is slowly varying. Thus, $m(M {\geqslant}{\varepsilon}B_n) {\leqslant}\frac{D}{n}$, which proves . Hypothesis \[hypothese\_3\] of Theorem \[thm\_probabiliste\_general\] is satisfied for $b=1$, according to the Birkhoff Theorem applied to ${\varphi}-E_Y({\varphi})$ (and because $T_Y$ is ergodic, which is a consequence of the ergodicity of $T$). Finally, the hypothesis on the distribution of ${\varphi}$ ensures, once again by [@aaronson_denker], that $\frac{S_{\lfloor nm(Y)\rfloor}^Y {\varphi}-n m(Y)E_Y({\varphi})}{B_n}$ converges in distribution. Thus, is satisfied. We can use Theorem \[thm\_probabiliste\_general\], and get that $\frac{S_n f}{B_n} \to Z$. The proof of the second case of Theorem \[thm\_abstrait\_markov\] is exactly the same, using [@aaronson_denker:central] instead of [@aaronson_denker] to show the convergence in distribution of $\frac{S_{\lfloor nm(Y)\rfloor} ^Y f_Y}{B_n}$ and $\frac{S_{\lfloor nm(Y) \rfloor} ^Y {\varphi}-n m(Y) E_Y({\varphi})}{B_n}$. In the first case ($f_Y\in L^2$), the proof is again identical when ${\varphi}\in L^2$, with $B_n=\sqrt{n}$, using [@guivarch-hardy] (or the remarks of [@aaronson_denker:central]). However, when $m({\varphi}>x)=x^{-p} L(x)$, we check in a different way the hypotheses \[hypothese\_3\] and \[hypothese\_4\] of Theorem \[thm\_probabiliste\_general\]. [@aaronson_denker] ensures that, if $B'_n$ is given by $$\label{definit_bn} nL(B'_n)=(B'_n)^p,$$ then $\frac{S_n^Y {\varphi}-n E_Y({\varphi})}{B'_n}$ converges in distribution. Moreover, [@gouezel:stable Lemma 3.4] proves that $Pf_Y \in {\mathcal{L}}$, and has a vanishing integral. As $P$ has a spectral gap on ${\mathcal{L}}$, $P^n f_Y \to 0$ exponentially fast. In particular, $\int f_Y\circ T_Y^n \cdot f_Y=\int (P^n f)\cdot f=O((1-\delta)^n)$ for some $0<\delta<1$. Thus, as $f_Y\in L^2$, [@vitesse_birkhoff Theorem 16] gives that, for every $b>1/2$, $\frac{1}{N^b}\sum_{k=0}^{N-1}f_Y(T_Y^k) \to 0$ almost everywhere when $N\to \infty$. In the natural extension, $\int f_Y\circ T_Y^{-n} \cdot f_Y=\int f_Y \cdot f_Y\circ T_Y^n$ decays also exponentially fast, whence the same argument gives that $\frac{1}{|N|^b}\sum_{k=0}^{N-1}f_Y(T_Y^k) \to 0$ when $N\to -\infty$. Thus, Hypothesis \[hypothese\_3\] of Theorem \[thm\_probabiliste\_general\] is satisfied for any $b>1/2$. Let $\kappa>0$ be very small. As $L$ is slowly varying, $L(B'_n)=O((B'_n)^\kappa)$, whence Equation gives $B'_n=O(n^{1/(p-\kappa)})$. Thus, if $b<\frac{p}{2}$, we have $B'_n=O(B_n^{1/b})$, which implies . Asymptotic behavior of $X_n$ {#section_estimee_Xn} ============================ We return to the study of the skew product . To prove limit theorems using Theorem \[thm\_abstrait\_markov\], we will need to estimate $m({\varphi}_Y>n)$, which is directly related to the speed of convergence of $X_n$ to $0$. 
This section will be devoted to the proof of the following theorem: \[estimee\_Xn\_L1\] We have $$\left(\frac{n}{\sqrt{\ln n}}\right)^{1/\operatorname{{\alpha_{\text{min}}}}} X_n \to \frac{1}{\left(2^{\operatorname{{\alpha_{\text{min}}}}} \operatorname{{\alpha_{\text{min}}}}^{3/2} \sqrt{\frac{\pi}{2\alpha''(x_0)}}\right)^{1/\operatorname{{\alpha_{\text{min}}}}}}$$ almost everywhere and in $L^1$. \[lemme\_asympt\_Xn\] We have $$E(e^{-(\alpha-\operatorname{{\alpha_{\text{min}}}})w})\sim \sqrt{\frac{\pi}{2\alpha''(x_0)}} \frac{1}{\sqrt{w}} \text{ when }w\to\infty.$$ Write $\beta=\alpha-\operatorname{{\alpha_{\text{min}}}}$, and $f(b)=\operatorname{Leb}\{ \omega {\ |\ }\beta(\omega) \in [0,b)\}$. In a neighborhood of $x_0$ (the unique point where $\alpha$ takes its minimal value $\operatorname{{\alpha_{\text{min}}}}$), $\alpha$ behaves like the parabola $\operatorname{{\alpha_{\text{min}}}}+\frac{\alpha''(x_0)}{2}(x-x_0)^2$, whence $f(b) \sim \sqrt{\frac{2}{\alpha''(x_0)}} \sqrt{b}$ when $b \to 0$. Writing $P_\beta$ for the distribution of $\beta$, an integration by parts gives $$\begin{aligned} E\left(e^{-(\alpha-\operatorname{{\alpha_{\text{min}}}})w}\right)&=\int_0^\infty e^{-b w} {\, {\rm d}}P_\beta(b) =w \int_0^\infty e^{-b w} f(b) {\, {\rm d}}b =\int_0^\infty e^{-u} f(u/w) {\, {\rm d}}u \\ & =\frac{1}{\sqrt{w}} \int_0^\infty e^{-u} \left(\sqrt{w}f(u/w)\right) {\, {\rm d}}u. \end{aligned}$$ But $e^{-u} \left(\sqrt{w}f(u/w)\right) \to e^{-u} \sqrt{\frac{2}{\alpha''(x_0)}}\sqrt{u}$ when $w\to \infty$. There exists a constant $E$ such that $f(u){\leqslant}E \sqrt{u}$ (this is clear in a neighborhood of $0$, and elsewhere since $f$ is bounded), whence $e^{-u} \left(\sqrt{w}f(u/w)\right) {\leqslant}E e^{-u}\sqrt{u}$ integrable. By dominated convergence, $$\int_0^\infty e^{-u} \left(\sqrt{w}f(u/w)\right) {\, {\rm d}}u \to \sqrt{\frac{2}{\alpha''(x_0)}} \int_0^\infty e^{-u} \sqrt{u}{\, {\rm d}}u =\sqrt{\frac{2}{\alpha''(x_0)}} \frac{\sqrt{\pi}}{2}.$$ As in Proposition \[estime\_croissance\_grossiere\_Xn\], we write $$\frac{1}{X_n(F\omega)^{\operatorname{{\alpha_{\text{min}}}}}}=\frac{1}{X_{n+1}(\omega)^{\operatorname{{\alpha_{\text{min}}}}}}-\operatorname{{\alpha_{\text{min}}}}2^{\operatorname{{\alpha_{\text{min}}}}} (2X_{n+1}(\omega))^{\alpha(\omega) -\operatorname{{\alpha_{\text{min}}}}} +O(X_{n+1}(\omega)^{2\alpha(\omega) -\operatorname{{\alpha_{\text{min}}}}}).$$ Proposition \[estime\_croissance\_grossiere\_Xn\] gives $$X_{n+1}(\omega)^{2\alpha(\omega)-\operatorname{{\alpha_{\text{min}}}}} {\leqslant}X_{n+1}(\omega)^{\operatorname{{\alpha_{\text{min}}}}} {\leqslant}\frac{C}{(n+1)^{\operatorname{{\alpha_{\text{min}}}}/\operatorname{{\alpha_{\text{max}}}}}} {\leqslant}\frac{C}{\sqrt{n+1}}$$ as $\operatorname{{\alpha_{\text{min}}}}/\operatorname{{\alpha_{\text{max}}}}{\geqslant}1/2$ by hypothesis. 
Thus, $$\frac{1}{X_{n+1}(\omega)^{\operatorname{{\alpha_{\text{min}}}}}}-\frac{1}{X_n(F\omega)^{\operatorname{{\alpha_{\text{min}}}}}}=2^{\operatorname{{\alpha_{\text{min}}}}} \operatorname{{\alpha_{\text{min}}}}(2X_{n+1}(\omega))^{\alpha-\operatorname{{\alpha_{\text{min}}}}} +O(1/\sqrt{n}).$$ Summing from $1$ to $n$, we get a constant $P$ (independent of $\omega$) such that $$\begin{aligned} \label{minore_1/X_n} \frac{1}{X_n(\omega)^{\operatorname{{\alpha_{\text{min}}}}}}{\geqslant}2^{\operatorname{{\alpha_{\text{min}}}}} \operatorname{{\alpha_{\text{min}}}}\left[\sum_{k=1}^{n} (2X_k(F^{n-k} \omega))^{\alpha(F^{n-k}\omega)-\operatorname{{\alpha_{\text{min}}}}} -P\sqrt{n} \right] \\ \label{majore_1/X_n} \frac{1}{X_n(\omega)^{\operatorname{{\alpha_{\text{min}}}}}}{\leqslant}2^{\operatorname{{\alpha_{\text{min}}}}} \operatorname{{\alpha_{\text{min}}}}\left[\sum_{k=1}^n (2X_k(F^{n-k}\omega))^{\alpha(F^{n-k}\omega)-\operatorname{{\alpha_{\text{min}}}}} +P\sqrt{n} \right] \end{aligned}$$ Equation and Proposition \[estime\_croissance\_grossiere\_Xn\] imply that $$\label{definit_An} \frac{\sqrt{\ln n}}{n} \frac{1}{2^{\operatorname{{\alpha_{\text{min}}}}} \operatorname{{\alpha_{\text{min}}}}X_n(\omega)^{\operatorname{{\alpha_{\text{min}}}}}} {\geqslant}\frac{\sqrt{\ln n}}{n}\sum_{k=1}^n \left(\frac{2C^{-1}}{k^{1/\operatorname{{\alpha_{\text{min}}}}}} \right)^{\alpha(F^{n-k} \omega)-\operatorname{{\alpha_{\text{min}}}}} -P\sqrt{\frac{\ln n}{n}}=: A_n(\omega).$$ We first study the convergence of $A_n$. The functions $\alpha$ and $\alpha \circ F^{n-k}$ have the same distribution since $F$ preserve Lebesgue measure. Thus, by Lemma \[lemme\_asympt\_Xn\], $$E\left(\left(\frac{2C^{-1}}{k^{1/\operatorname{{\alpha_{\text{min}}}}}}\right)^{\alpha\circ F^{n-k}-\operatorname{{\alpha_{\text{min}}}}} \right) \sim \sqrt{\frac{\pi}{2\alpha''(x_0)}} \frac{1}{\sqrt{\ln (k^{1/\operatorname{{\alpha_{\text{min}}}}})-\ln(2C^{-1})}} \sim \sqrt{\frac{\pi \operatorname{{\alpha_{\text{min}}}}}{2\alpha''(x_0)}}\frac{1}{\sqrt{\ln k}}.$$ Summing, we get that $$E(A_n)\to C_1:=\sqrt{\frac{\pi \operatorname{{\alpha_{\text{min}}}}}{2\alpha''(x_0)}},$$ since $\sum_{k=2}^n \frac{1}{\sqrt{\ln k}} \sim \frac{n}{\sqrt{\ln n}}$. We will need $L^p$ estimates, for $p{\geqslant}1$. To get them, we use a result of Françoise Pène, recalled in Appendix \[appendice:pene\]. Let us denote by ${\left\| g \right\|}$ the Lipschitz norm of a function $g:S^1 \to {\mathbb{R}}$. We define $f_k(\omega)=\left(\frac{2C^{-1}}{k^{1/\operatorname{{\alpha_{\text{min}}}}}} \right)^{\alpha(\omega)-\operatorname{{\alpha_{\text{min}}}}}$, and $g_k=f_k-E(f_k)$. Thus, $A_n=\frac{\sqrt{\ln n}}{n}\sum_{k=1}^n f_k \circ F^{n-k}-P\sqrt{\frac{\ln n}{n}}$. As $g'_k=\ln \left(\frac{2C^{-1}}{k^{1/\operatorname{{\alpha_{\text{min}}}}}} \right)\alpha' f_k$, there exists a constant $L$ such that, for $k{\leqslant}n$, ${\left\| g_k \right\|} {\leqslant}L \ln n$. As a consequence, Theorem \[thm\_borne\_Lp\_pene\] applied to $g_k/(L \ln n)$ gives $${\left\| A_n-E(A_n) \right\|}_p = \frac{\sqrt{\ln n}}{n} L\ln n {\left\| \sum_{k=1}^{n} g_k \circ F^{n-k} / (L \ln n) \right\|}_p {\leqslant}\frac{\sqrt{\ln n}}{n} L\ln n K_p \sqrt{n},$$ i.e. $${\left\| A_n-E(A_n) \right\|}_p {\leqslant}L_p \sqrt{\frac{\ln^3 n}{n}}.$$ This implies in particular that $A_n$ converges almost everywhere to $C_1$. 
Namely, if $\delta>0$, $$\operatorname{Leb}\{|A_n-E(A_n)|>\delta\} {\leqslant}\int \frac{|A_n-E(A_n)|^4}{\delta^4} {\leqslant}\frac{L_4}{\delta^4} \left(\frac{\ln^3 n}{n}\right)^{4/2}$$ which is summable, and $E(A_n)\to C_1$. We have $$\begin{aligned} A_n(\omega)& {\geqslant}\frac{\sqrt{\ln n}}{n} \left[\sum_{k=1}^n \left( \frac{2C^{-1}} {k^{1/\operatorname{{\alpha_{\text{min}}}}}}\right)^{\operatorname{{\alpha_{\text{max}}}}-\operatorname{{\alpha_{\text{min}}}}} - P\sqrt{n} \right] {\geqslant}\frac{\sqrt{\ln n}}{n} \bigl[ K n^{2-\operatorname{{\alpha_{\text{max}}}}/\operatorname{{\alpha_{\text{min}}}}} - P \sqrt{n}\bigr] \\& {\geqslant}K' \frac{\sqrt{\ln n}}{n} n^{2-\operatorname{{\alpha_{\text{max}}}}/\operatorname{{\alpha_{\text{min}}}}} \end{aligned}$$ since $\operatorname{{\alpha_{\text{max}}}}/\operatorname{{\alpha_{\text{min}}}}<3/2$. Thus, $${\left\| \frac{1}{A_n} \right\|}_\infty {\leqslant}K'' \frac{n^{\operatorname{{\alpha_{\text{max}}}}/\operatorname{{\alpha_{\text{min}}}}-1}}{\sqrt{\ln n}}.$$ Note that $E(A_n)$ tends to $C_1 \not=0$, whence $\frac{1}{E(A_n)}$ is bounded. Thus, $$\begin{aligned} {\left\| \frac{1}{A_n} - \frac{1}{E(A_n)} \right\|}_p & {\leqslant}{\left\| \frac{1}{A_n} \right\|}_\infty \frac{1}{E(A_n)} {\left\| A_n-E(A_n) \right\|}_p {\leqslant}K''' \frac{n^{\operatorname{{\alpha_{\text{max}}}}/\operatorname{{\alpha_{\text{min}}}}-1}}{\sqrt{\ln n}} L_p \sqrt{\frac{\ln^3 n}{n}} \\& = M_p \frac{\ln n}{n^\kappa} \end{aligned}$$ where $\kappa=\frac{3}{2}-\frac{\operatorname{{\alpha_{\text{max}}}}}{\operatorname{{\alpha_{\text{min}}}}}>0$. In particular, $\frac{1}{A_n}$ tends to $\frac{1}{C_1}$ in every $L^p$. Equation shows that $$\label{un_petit_label} \left(\frac{n}{\sqrt{\ln n}}\right)^{1/\operatorname{{\alpha_{\text{min}}}}} X_n {\leqslant}\frac{1} {(2^{\operatorname{{\alpha_{\text{min}}}}}\operatorname{{\alpha_{\text{min}}}}A_n )^{1/\operatorname{{\alpha_{\text{min}}}}}}.$$ The right hand side tends to $$C_2:=\frac{1}{\left(2^{\operatorname{{\alpha_{\text{min}}}}}\operatorname{{\alpha_{\text{min}}}}^{3/2} \sqrt{\frac{\pi}{2\alpha''(x_0)}}\right)^{1/\operatorname{{\alpha_{\text{min}}}}}}$$ in every $L^p$, and in particular in $L^1$. Thus, $$\label{eq_esperance} \varlimsup E\left(\left( \frac{n}{\sqrt{\ln n}}\right)^{1/\operatorname{{\alpha_{\text{min}}}}} X_n \right) {\leqslant}C_2.$$ Moreover, $A_n$ converges almost everywhere to $C_1$, whence yields that, almost everywhere, $$\label{eq_limsup} \varlimsup \left( \frac{n}{\sqrt{\ln n}}\right)^{1/\operatorname{{\alpha_{\text{min}}}}} X_n(\omega) {\leqslant}C_2.$$ Set $Q=\sup_n \left(\frac{1}{E(A_n)}\right)+1$, we estimate $\operatorname{Leb}\left\{ \frac{1}{A_n} {\geqslant}Q\right\}$. 
If $p{\geqslant}1$, $$\operatorname{Leb}\left\{ \frac{1}{A_n} {\geqslant}Q\right\} {\leqslant}\operatorname{Leb}\left\{ \left|\frac{1}{A_n} -\frac{1}{E(A_n)} \right| {\geqslant}1 \right\} {\leqslant}E\left( \left|\frac{1}{A_n} -\frac{1}{E(A_n)} \right| ^p \right) {\leqslant}\left( M_p \frac{\ln n}{n^\kappa} \right)^p.$$ In particular, choosing $p$ large enough gives $$\operatorname{Leb}\left\{ \frac{1}{A_n} {\geqslant}Q\right\}{\leqslant}\frac{M}{n^5}.$$ Setting $Q'=\frac{Q}{2^{\operatorname{{\alpha_{\text{min}}}}} \operatorname{{\alpha_{\text{min}}}}}$, thus yields that $$\operatorname{Leb}\left\{ X_n {\geqslant}\left(\frac{Q'\sqrt{\ln n}}{n}\right)^{1/\operatorname{{\alpha_{\text{min}}}}} \right\} {\leqslant}\frac{M}{n^5}.$$ Consequently, $U_n:=\left\{ \omega {\ |\ }\exists \sqrt{n} {\leqslant}k{\leqslant}n \text{ with } X_k(F^{n-k} \omega) {\geqslant}\left(\frac{Q'\sqrt{\ln k}}{k}\right)^{1/\operatorname{{\alpha_{\text{min}}}}} \right\}$ has a measure at most $\sum_{\sqrt{n}}^n \frac{M}{k^5} {\leqslant}\frac{M'}{n^2}$ (since $\operatorname{Leb}$ is invariant under $F^{n-k}$). Finally, Borel-Cantelli ensures that there is a full measure subset of $S^1$ on which $\omega \not \in U_n$ for large enough $n$. Set $$A'_n(\omega)=\frac{\sqrt{\ln n}}{n} \left[\sum_{k=1}^n \left(\frac{2(Q'\sqrt{\ln k})^{1/\operatorname{{\alpha_{\text{min}}}}}}{k^{1/\operatorname{{\alpha_{\text{min}}}}}}\right)^{\alpha(F^{n-k} \omega) -\operatorname{{\alpha_{\text{min}}}}} + (P+1)\sqrt{n}\right].$$ As for $A_n$, we show that $A'_n \to C_1$ in every $L^p$ and almost everywhere. Let $\omega$ be such that $\omega \not \in U_n$ for large enough $n$, and $A'_n(\omega) \to C_1$ (these properties are true almost everywhere). Then, for large enough $n$, Equation and the fact that $X_k(F^{n-k} \omega) {\leqslant}\left(\frac{Q'\sqrt{\ln k}}{k}\right)^{1/\operatorname{{\alpha_{\text{min}}}}}$ for $\sqrt{n}{\leqslant}k{\leqslant}n$, yield that $$\begin{aligned} \frac{1}{2^{\operatorname{{\alpha_{\text{min}}}}} \operatorname{{\alpha_{\text{min}}}}X_n(\omega)^{\operatorname{{\alpha_{\text{min}}}}}}& {\leqslant}\left[\sum_{k=1}^{\sqrt{n}} 1+ \sum_{k=\sqrt{n}}^n \left(\frac{2(Q'\sqrt{\ln k})^{1/\operatorname{{\alpha_{\text{min}}}}}}{k^{1/\operatorname{{\alpha_{\text{min}}}}}}\right)^{\alpha(F^{n-k}\omega) -\operatorname{{\alpha_{\text{min}}}}} +P \sqrt{n} \right] \\& {\leqslant}\frac{n}{\sqrt{\ln n}} A'_n(\omega)\sim \frac{n}{\sqrt{\ln n}} C_1. \end{aligned}$$ Thus, $$\label{eq_liminf} \varliminf \left( \frac{n}{\sqrt{\ln n}}\right)^{1/\operatorname{{\alpha_{\text{min}}}}} X_n(\omega) {\geqslant}C_2.$$ Equations and prove that $\left( \frac{n}{\sqrt{\ln n}}\right)^{1/\operatorname{{\alpha_{\text{min}}}}} X_n$ tends almost everywhere to $C_2$. We get the convergence in $L^1$ from the inequality and the following elementary lemma. Let $f_n$ be nonnegative functions on a probability space, with $f_n \to f$ almost everywhere, and $\varlimsup E(f_n) {\leqslant}E(f)<\infty$. Then $f_n \to f$ in $L^1$. Limit theorems {#section:limite} ============== Set $$\label{definit_A} A=\frac{1}{4\left( \operatorname{{\alpha_{\text{min}}}}^{3/2} \sqrt{\frac{\pi}{2\alpha''(x_0)}}\right)^{1/\operatorname{{\alpha_{\text{min}}}}}}\int_{S^1\times \{1/2\}} h\operatorname{dLeb},$$ where $h$ is the density of $m$ with respect to $\operatorname{Leb}$. In this section, we prove the following theorem: \[enonce\_theoreme\_limite\] Let $f$ be a Hölder function on $S^1\times [0,1]$, with $\int f{\, {\rm d}}m=0$. 
Write $c=\int_{S^1\times\{0\}} f \operatorname{dLeb}$. Then - If $\operatorname{{\alpha_{\text{min}}}}<1/2$, there exists $\sigma^2 {\geqslant}0$ such that $\frac{1}{\sqrt{n}} S_n f \to {\mathcal{N}}(0,\sigma^2)$. - If $\operatorname{{\alpha_{\text{min}}}}=1/2$ and $c\not=0$, then $\frac{S_n f}{\sqrt{ \frac{c^2 A }{4} n (\ln n)^2}} \to {\mathcal{N}}(0,1)$. - If $1/2<\operatorname{{\alpha_{\text{min}}}}<1$ and $c\not =0$, then $\frac{S_n f}{n^{\operatorname{{\alpha_{\text{min}}}}}\sqrt{\operatorname{{\alpha_{\text{min}}}}\ln n}} \to Z$, where the random variable $Z$ has a characteristic function given by $$E(e^{itZ}) =e^{- A |c|^{1/\operatorname{{\alpha_{\text{min}}}}} \Gamma(1-1/\operatorname{{\alpha_{\text{min}}}})\cos\left(\frac{\pi}{2\operatorname{{\alpha_{\text{min}}}}}\right) |t|^{1/\operatorname{{\alpha_{\text{min}}}}} \left( 1-i\operatorname{sgn}(ct) \tan \left(\frac{\pi}{2\operatorname{{\alpha_{\text{min}}}}} \right) \right)}$$ - If $1/2{\leqslant}\operatorname{{\alpha_{\text{min}}}}<1$ and $c=0$, assume also that there exists $\gamma >0$ such that $|f(\omega,x)-f(\omega,0)|{\leqslant}Cx^\gamma$, with $\gamma>\operatorname{{\alpha_{\text{max}}}}\left(1-\frac{1}{2\operatorname{{\alpha_{\text{min}}}}}\right)$. Then there exists $\sigma^2 {\geqslant}0$ such that $\frac{1}{\sqrt{n}} S_n f \to {\mathcal{N}}(0,\sigma^2)$. The random variable $Z$ in the third case has a stable distribution of exponent $1/\operatorname{{\alpha_{\text{min}}}}$ and parameters $A|c|^{1/\operatorname{{\alpha_{\text{min}}}}} \Gamma(1-1/\operatorname{{\alpha_{\text{min}}}})\cos\left(\frac{\pi}{2\operatorname{{\alpha_{\text{min}}}}}\right)$ and $\operatorname{sgn}(c)$. To prove this theorem, we will use Theorem \[thm\_abstrait\_markov\]. For this, we need a control of $m({\varphi}_Y>n)$ which comes from the asymptotic behavior of $X_n$ proved in Theorem \[estimee\_Xn\_L1\]. It will also be necessary to estimate $m(f_Y>x)$, through the study of the integrability of $f_Y$ (Lemmas \[L2\_am\_petit\] and \[lemme\_dans\_Lp\]). In the rest of this section, $f$ will be a Hölder function on $S^1\times [0,1]$, fixed once and for all. Recall that $f_Y(y)=\sum_{k=0}^{{\varphi}_Y(y)-1} f(T^k y)$, where ${\varphi}_Y$ is the first return time to $Y=S^1\times(1/2,1]$. Estimates on measures --------------------- \[controle\_mesure\_phi\] We have $$m({\varphi}_Y>n) \sim \left(\frac{\sqrt{\ln n}}{n}\right)^{1/\operatorname{{\alpha_{\text{min}}}}} A$$ where $A$ is given by . We have $$\begin{aligned} m({\varphi}_Y> n)& =\int_{S^1} \int_{1/2}^{Y_{n+1}(\omega)} h(\omega,u){\, {\rm d}}u {\, {\rm d}}\omega =\int_{S^1} \int_0^{X_{n}(F\omega)/2} h(\omega,1/2+u){\, {\rm d}}u {\, {\rm d}}\omega \\& =\int_{S^1} \frac{X_{n}(F\omega)}{2} h(\omega,1/2){\, {\rm d}}\omega +\int_{S^1} \int_0^{X_{n}(F\omega)/2} \bigl[h(\omega,1/2+u)-h(\omega,1/2) \bigr] {\, {\rm d}}u{\, {\rm d}}\omega \\& =I+II. \end{aligned}$$ As $\left(\frac{n}{\sqrt{\ln n}}\right)^{1/\operatorname{{\alpha_{\text{min}}}}} X_n(F\omega) \to \frac{1}{\left(2^{\operatorname{{\alpha_{\text{min}}}}} \operatorname{{\alpha_{\text{min}}}}^{3/2} \sqrt{\frac{\pi}{2\alpha''(x_0)}}\right)^{1/\operatorname{{\alpha_{\text{min}}}}}}$ in $L^1$ and almost everywhere (Theorem \[estimee\_Xn\_L1\]) and $h(\omega,1/2)$ is bounded, we get that $I\sim \left(\frac{\sqrt{\ln n}}{n}\right)^{1/\operatorname{{\alpha_{\text{min}}}}} A$. 
Moreover, for large enough $n$, $|h(\omega,1/2+u)-h(\omega,1/2)|{\leqslant}{\varepsilon}$, whence $II=o\left(\frac{\sqrt{\ln n}}{n}\right)^{1/\operatorname{{\alpha_{\text{min}}}}}$. \[L2\_am\_petit\] If $\operatorname{{\alpha_{\text{min}}}}<1/2$, then $f_Y \in L^2(Y, {{\rm d}}m)$. We have $$\begin{aligned} \int f_Y^2 {\, {\rm d}}m & {\leqslant}C\sum_n m({\varphi}_Y=n) n^2 =C \sum \bigl( m({\varphi}_Y>n-1)-m({\varphi}_Y>n)\bigr)n^2 \\& {\leqslant}C\sum m({\varphi}_Y>n) n \end{aligned}$$ which is summable since $m({\varphi}_Y>n)\sim A \left(\frac{\sqrt{\ln n}}{n}\right)^{1/\operatorname{{\alpha_{\text{min}}}}}$ with $1/\operatorname{{\alpha_{\text{min}}}}>2$. \[lemme\_dans\_Lp\] Assume that $\int_{S^1\times \{0\}} f=0$. Let $\operatorname{{\alpha_{\text{max}}}}>\gamma>0$ be such that $|f(\omega,x)-f(\omega,0)| {\leqslant}C x^\gamma$. If $1<p< \min\left( \frac{2}{\operatorname{{\alpha_{\text{min}}}}}, \frac{1}{\operatorname{{\alpha_{\text{min}}}}(1-\gamma/\operatorname{{\alpha_{\text{max}}}})} \right)$, then $f_Y \in L^p(Y,{{\rm d}}m)$. As $h$ is bounded on $Y$, it is sufficient to prove that $f_Y \in L^p(Y,\operatorname{dLeb})$. Assume first that $f \equiv 0$ on $S^1 \times \{0\}$. Then, if ${\textbf{x}}=(\omega,x)$ satisfies ${\varphi}_Y({\textbf{x}})=n$, we have $f_Y({\textbf{x}})=\sum_0^{n-1} f(T^k {\textbf{x}})$. If $k {\geqslant}1$, $T_\omega^k(x){\leqslant}X_{n-k}(F^k \omega) {\leqslant}\frac{C}{(n-k)^{1/\operatorname{{\alpha_{\text{max}}}}}}$, whence $|f(T^k {\textbf{x}})|{\leqslant}\frac{C}{(n-k)^{\gamma/\operatorname{{\alpha_{\text{max}}}}}}$, and a summation yields that $|f_Y({\textbf{x}})|{\leqslant}C n^{1-\gamma/\operatorname{{\alpha_{\text{max}}}}}$. Thus, $$\begin{aligned} \int |f_Y|^p & {\leqslant}C \sum m({\varphi}_Y=n) n^{p(1-\gamma/\operatorname{{\alpha_{\text{max}}}})} \\& {\leqslant}C \sum m({\varphi}_Y>n) n^{p(1-\gamma/\operatorname{{\alpha_{\text{max}}}})-1}. \end{aligned}$$ As $m({\varphi}_Y>n) \sim A \left(\frac{\sqrt{\ln n}}{n} \right)^{1/\operatorname{{\alpha_{\text{min}}}}}$, this last series is summable as soon as $$-\frac{1}{\operatorname{{\alpha_{\text{min}}}}}+p\left(1-\frac{\gamma}{\operatorname{{\alpha_{\text{max}}}}}\right)-1 <-1,$$ which is the case by assumption on $p$. Assume now that $f$ has a vanishing integral on $S^1$. Let $g(\omega,x)=f(\omega,0)$. The function $f-g$ vanishes on $S^1 \times \{0\}$, whence $f_Y-g_Y \in L^p$ according to the first part of this proof. Consequently, it is sufficient to prove that $g_Y \in L^p$. Write $\chi(\omega)=f(\omega,0)$ and $S_n \chi(\omega)=\sum_{k=0}^{n-1} \chi(F^k \omega)$: then $g_Y(\omega,x)=S_{{\varphi}_Y(\omega,x)} \chi(\omega)$. Let $M_n \chi(\omega)=\max_{k{\leqslant}n}|S_k \chi(\omega)|$. Let $\delta>0$, and $l=\frac{1+\delta}{\delta}$, so that $\frac{1}{l}+\frac{1}{1+\delta}=1$. We have $$\begin{aligned} \int |g_Y|^p & =\sum_{n=0}^{\infty} \int_{S^1} \int_{1/2+X_{n}(F\omega)/2}^{1/2+X_{n-1}(F\omega)/2} \bigl|S_n \chi(\omega)\bigr|^p {\, {\rm d}}u {\, {\rm d}}\omega \\& {\leqslant}\sum_{k=1}^{\infty} \int_{S^1} \int_{1/2+X_{2^k}(F\omega)/2}^{1/2+X_{2^{k-1}}(F\omega)/2} |M_{2^k} \chi(\omega)|^p {\, {\rm d}}u {\, {\rm d}}\omega \\& {\leqslant}\sum_{k=1}^\infty \int_{S^1} X_{2^{k-1}}(F\omega) |M_{2^k}\chi(\omega)|^p {\, {\rm d}}\omega {\leqslant}\sum_{k=1}^\infty {\left\| X_{2^{k-1}}\circ F \right\|}_{1+\delta} {\left\| M_{2^k}\chi \right\|}_{lp}^p, \end{aligned}$$ where the last inequality is Hölder inequality. 
If $\delta$ is small enough, $lp>2$, whence Corollary \[ineg\_max\] yields that ${\left\| M_{2^k}\chi \right\|}_{lp} {\leqslant}C k^{\frac{lp-1}{lp}}\sqrt{2^k}$. Moreover, $${\left\| X_{2^{k-1}}\circ F \right\|}_{1+\delta} ={\left\| X_{2^{k-1}} \right\|}_{1+\delta} {\leqslant}\left( \int X_{2^{k-1}} \right)^{1/(1+\delta)} \sim C \left( \frac{\sqrt{\ln(2^{k-1})}}{2^{k-1}} \right)^{\frac{1}{(1+\delta) \operatorname{{\alpha_{\text{min}}}}}}$$ by Theorem \[estimee\_Xn\_L1\]. Thus, $\int|g_Y|^p<\infty$ if $\frac{1}{(1+\delta)\operatorname{{\alpha_{\text{min}}}}} > \frac{p}{2}$, and it is possible to choose $\delta$ such that this inequality is true, since $\frac{1}{\operatorname{{\alpha_{\text{min}}}}}>\frac{p}{2}$ by hypothesis. Proof of Theorem \[enonce\_theoreme\_limite\] --------------------------------------------- To apply Theorem \[thm\_abstrait\_markov\], we first check the condition . Let $\theta$ be the Hölder exponent of $f$. We will work with the distance $d_{ \lambda^{-\theta}}= \lambda^{-\theta s(x,y)}$. For this distance, $T_Y$ is a Gibbs-Markov map. *Fact: if $f$ is $\theta$-Hölder on $S^1\times [0,1]$, then $$\sum m[A_{s,n}] D f_Y(A_{s,n}) <\infty.$$* Recall that $D f_Y(A_{s,n})$ (defined in Theorem \[thm\_abstrait\_markov\]) is the best Lipschitz constant of $f_Y$ on $A_{s,n}$, here for the distance $d_{ \lambda^{-\theta}}$. Take $(\omega_1,x_1)$ and $(\omega_2,x_2)\in A_{s,n}$ with for example $x_2{\geqslant}x_1$. This implies that $x_1\in J_n^+(\omega_2)$ and that, for $0{\leqslant}k{\leqslant}n$, $d(T^k(\omega_1,x_1),T^k(\omega_2,x_2)) {\leqslant}(1+D) |F^k \omega_1 -F^k \omega_2|$ (see the beginning of the proof of Proposition \[prop\_distortion\_bornee\]). Moreover, $d(T^k(\omega_2,x_1),T^k(\omega_2,x_2)) {\leqslant}d(T^n(\omega_2,x_1),T^n(\omega_2,x_2))$ (since, if $\omega$ is fixed, the map $T_{\alpha(\omega)}$ is expanding). Thus, for $0{\leqslant}k{\leqslant}n$, $$\begin{aligned} d(T^k(\omega_1,x_1),T^k(\omega_2,x_2))& {\leqslant}d(T^k(\omega_1,x_1),T^k(\omega_2,x_1)) +d(T^k(\omega_2,x_1),T^k(\omega_2,x_2)) \\& {\leqslant}(1+D)|F^k \omega_1 -F^k \omega_2| +d(T^n(\omega_2,x_1),T^n(\omega_2,x_2)) \\& {\leqslant}(1+D)|F^n \omega_1 -F^n \omega_2| +d(T^n(\omega_1,x_1),T^n(\omega_2,x_1)) \\& \hphantom{=\ } +d(T^n(\omega_1,x_1),T^n(\omega_2,x_2)) \\& {\leqslant}(1+D)|F^n \omega_1 -F^n \omega_2| +(1+D)|F^n \omega_1 -F^n \omega_2| \\& \hphantom{=\ } +d(T^n(\omega_1,x_1),T^n(\omega_2,x_2)) \\& {\leqslant}(3+2D)d(T^n(\omega_1,x_1),T^n(\omega_2,x_2)). \end{aligned}$$ We deduce that $$\begin{aligned} |f_Y(\omega_1,x_1)-f_Y(\omega_2,x_2)|& {\leqslant}\sum_{k=0}^{n-1} |f(T^k(\omega_1,x_1))-f(T^k(\omega_2,x_2))| \\& {\leqslant}\sum_{k=0}^{n-1}C d(T^k(\omega_1,x_1),T^k(\omega_2,x_2))^\theta \\& {\leqslant}C' n d(T^n(\omega_1,x_1),T^n(\omega_2,x_2))^\theta. \end{aligned}$$ As $T_Y$ is expanding for the distance $d'$ (defined in , and equivalent to $d$), we get $$d(T^n(\omega_1,x_1),T^n(\omega_2,x_2)) {\leqslant}C d_{\lambda^{-1}}(T^n(\omega_1,x_1),T^n(\omega_2,x_2))=C \lambda d_{\lambda^{-1}}((\omega_1,x_1),(\omega_2,x_2)),$$ whence $d(T^n(\omega_1,x_1),T^n(\omega_2,x_2))^\theta {\leqslant}C d_{\lambda^{-\theta}}((\omega_1,x_1),(\omega_2,x_2))$. Thus, $Df_Y(A_{s,n}) {\leqslant}C n$, and $$\sum m(A_{s,n}) D f_Y(A_{s,n}) {\leqslant}C \sum m({\varphi}_Y=n) n = C < +\infty,$$ by Kac’s Formula. In the case $\operatorname{{\alpha_{\text{min}}}}<1/2$, Lemma \[L2\_am\_petit\] gives that $f_Y \in L^2$. 
Moreover, $|f|_Y\in L^2$ for the same reason, and ${\varphi}\in L^2$ (since ${\varphi}=g_Y$ for $g\equiv 1$, whence Lemma \[L2\_am\_petit\] applies also). We have already checked the condition , so we can apply (the first case of) Theorem \[thm\_abstrait\_markov\]. This yields the central limit theorem for $f$. The second and third cases are analogous. Let us prove for example the third one, i.e. $1/2<\operatorname{{\alpha_{\text{min}}}}<1$ and $c\not=0$. Assume for example $c>0$. We estimate $m(f_Y>x)$. *Fact: $m(f_Y>x) \sim \left(\frac{c \sqrt{\ln x}}{x}\right)^{1/\operatorname{{\alpha_{\text{min}}}}} A$ and $m(f_Y<-x)=o\left(\frac{\sqrt{\ln x}}{x}\right)^{1/\operatorname{{\alpha_{\text{min}}}}}$.* We prove the estimate on $m(f_Y>x)$, the other one being similar. Let $g\equiv c$ on $S^1 \times [0,1]$. Then $g_Y=nc$ on $[{\varphi}_Y=n]$, which implies that $m(g_Y>nc)=m({\varphi}_Y>n)\sim \left(\frac{\sqrt{\ln n}}{n}\right)^{1/\operatorname{{\alpha_{\text{min}}}}} A$ by Lemma \[controle\_mesure\_phi\]. In the general case, consider $j=f-g$, and let us prove that $m(|j_Y|>x)=o\left(\frac{\sqrt{\ln x}}{x}\right)^{1/\operatorname{{\alpha_{\text{min}}}}}$. As $f_Y=g_Y+j_Y$, it will give $$m(g_Y>x(1+{\varepsilon})) -m(|j_Y|>x{\varepsilon}) {\leqslant}m(f_Y>x) {\leqslant}m(g_Y>x(1-{\varepsilon}))+m(|j_Y|>x{\varepsilon}),$$ which gives the conclusion. Let $\gamma>0$ with $\gamma<\min(\theta,\operatorname{{\alpha_{\text{max}}}})$ (where $\theta$ is the Hölder coefficient of $f$). Lemma \[lemme\_dans\_Lp\] gives that $j_Y\in L^p$ if $p<\min\left(\frac{2}{\operatorname{{\alpha_{\text{min}}}}},\frac{1}{\operatorname{{\alpha_{\text{min}}}}(1-\gamma/\operatorname{{\alpha_{\text{max}}}})} \right)$. We can in particular choose $p>1/\operatorname{{\alpha_{\text{min}}}}$. Then $m(|j_Y|>x) {\leqslant}\int \left(\frac{|j_Y|}{x}\right)^p =O(x^{-p})$, which concludes the proof of the fact. The same fact holds for ${\varphi}_Y$ and $|f|_Y$, with the same proof, whence we are in the third case of Theorem \[thm\_abstrait\_markov\]. This gives the desired result. Assume finally that $\frac{1}{2}{\leqslant}\operatorname{{\alpha_{\text{min}}}}<1$ and that $c=0$. Under the hypotheses of the theorem, we can apply Lemma \[lemme\_dans\_Lp\] with $p=2$, and get that $f_Y \in L^2$. The proof of this lemma shows in fact that the function $M$ (defined in Theorem \[thm\_abstrait\_markov\]) is also in $L^2$. Finally, Lemma \[controle\_mesure\_phi\] shows that $m[{\varphi}_Y>x] \sim \left(\frac{\sqrt{\ln x}}{x}\right)^{1/\operatorname{{\alpha_{\text{min}}}}}A$. We have checked all the hypotheses of the first case of Theorem \[thm\_abstrait\_markov\]. Induced maps and limit theorems {#appendice:loi_stable} =============================== The aim of this section is to prove very general results stating that, if a function satisfies a limit theorem for an induced map, it also satisfies one for the initial map. Similar theorems have been proved in [@gouezel:stable], by spectral methods. We will describe here a more elementary method, essentially due to Melbourne and Török for flows ([@melbourne_torok]). If $Y$ is a subset of a probability space $(X,m)$, $T:X\to X$, and $T_Y$ is the induced map on $Y$, we will write $S_n^Y g=\sum_{k=0}^{n-1} g\circ T_Y^k$: this is the Birkhoff sum of $g$, for the transformation $T_Y$. We will also write $E_Y(g)=\frac{\int_Y g}{m[Y]}$. Finally, for $t\in {\mathbb{R}}$, $\lfloor t \rfloor$ denotes the integer part of $t$. 
\[thm\_probabiliste\_general\] Let $T:X\to X$ be an ergodic endomorphism of a probability space $(X,m)$, and $f:X\to {\mathbb{R}}$ an integrable function with vanishing integral. Let $Y\subset X$ have positive measure. For $y\in Y$, write ${\varphi}(y)=\inf\{n>0 {\ |\ }T^n(y)\in Y\}$ and $f_Y(y)=\sum_{k=0}^{{\varphi}(y)-1} f(T^k y)$, and $M(y)=\max_{1{\leqslant}k{\leqslant}{\varphi}(y)} \left|\sum_{j=0}^{k-1} f(T^j y) \right|$. We assume the following properties: 1. There exists a sequence $B_n\to +\infty$, with $\sup_{r {\leqslant}2n} \frac{B_r}{B_n} < \infty$ and $\inf_{r {\geqslant}n} \frac{B_r}{B_n}>0$, such that $(f_Y, {\varphi})$ satisfies a mixing limit theorem for the normalization $B_n$: there exists a random variable $Z$ such that, for every $t\in {\mathbb{R}}$, $$\label{limite_mixing} E_Y\left({\varphi}e^{it \frac{S^Y_{\lfloor n m(Y) \rfloor} f_Y}{B_n}}\right) \to E_Y({\varphi}) E\left(e^{itZ}\right).$$ 2. For every ${\varepsilon}>0$, there exists $C$ such that, for any $n\in {\mathbb{N}}^*$, $$\label{majore_fY} m\{y\in Y {\ |\ }M(y) {\geqslant}{\varepsilon}B_n \} {\leqslant}\frac{C}{n}.$$ 3. \[hypothese\_3\] There exists $b>0$ such that, in the natural extension of $T_Y$, $\frac{1}{N^b} \sum_0^{N-1} f_Y(T_Y^k)$ tends almost everywhere to $0$ when $N \to \pm \infty$. 4. \[hypothese\_4\] For every ${\varepsilon}>0$, there exists $A>0$ and $N_0$ such that, for every $n{\geqslant}N_0$, $$\label{eq_hyp4} m\left\{ y\in Y {\ |\ }\left| \frac{S_n^Y {\varphi}- n E_Y({\varphi})}{B_n^{1/b}} \right| {\geqslant}A \right\} {\leqslant}{\varepsilon}.$$ Then the function $f$ satisfies also a limit theorem: $$E\left(e^{it \frac{S_n f}{B_n}}\right) \to E(e^{it Z}),$$ i.e. $\frac{S_n f}{B_n}$ tends in distribution to $Z$. The hypotheses of the theorem are tailor-made so that the following proof works, but they are in fact often satisfied in natural cases. Let us comment on these 4 hypotheses: 1. The convergence is very often satisfied when $f_Y$ satisfies a limit theorem. Namely, the martingale proofs or spectral proofs of limit theorems automatically give this kind of convergence. 2. If $Z_0,Z_1,\ldots$ are independent identically random variables such that $\frac{\sum_{0}^{n-1} Z_k}{B_n}$ converges in distribution to a nontrivial limit, then for all ${\varepsilon}>0$, there exists $C$ such that $P(|Z_0| {\geqslant}{\varepsilon}B_n) {\leqslant}\frac{C}{n}$: this is a consequence of the classification of the stable laws, see [@feller:2]. Here, we are not in the independent setting, and there is no such classification. However, the same kind of results holds very often: usually, it is not hard to check in practical cases that $m(|f_Y(x)| {\geqslant}{\varepsilon}B_n) {\leqslant}\frac{C}{n}$, since $f_Y$ satisfies a limit theorem by the first assumption. Set $|f|_Y(y)=\sum_{j=0}^{{\varphi}(y)-1} |f(T^j y)|$. As $|f|_Y$ and $f_Y$ have more or less the same distribution, $|f|_Y$ satisfies also often $$\label{majore_fY'} \tag{\ref{majore_fY}'} m\{ y\in Y {\ |\ }|f|_Y(y) {\geqslant}{\varepsilon}B_n \} {\leqslant}\frac{C}{n}.$$ Since $M {\leqslant}f_Y$, implies . Thus, it will often be sufficient to check . However, is sometimes strictly weaker than , because of cancellations, which is why we have stated the theorem with . 3. The natural extension is useful so that we can let $N$ tend to $-\infty$, and consider $T_Y^{-1}$ in the proof. Generally, Birkhoff’s Theorem yields that this assumption is satisfied for $b=1$. This is often sufficient. However, sometimes, it is important to have better estimates. 
It is then possible to use [@vitesse_birkhoff Theorem 16], for example: this theorem ensures that, if the correlations of $f_Y\in L^2$ decay at least as $O(1/n)$, then the hypothesis is satisfied for any $b>1/2$ (for $N \to -\infty$, use the fact that $\int f_Y \cdot f_Y \circ T_Y^n = \int f_Y\circ T_Y^{-n} \cdot f_Y$, and apply the result to $T_Y^{-1}$). 4. The fourth assumption is weaker than $$\label{eq_hyp4'} \tag{\ref{eq_hyp4}'} \exists B'_n=O(B_n^{1/b}) \text{ such that } \frac{S_n^Y {\varphi}- nE_Y({\varphi})}{B'_n}\text{ converges in distribution.}$$ Moreover, ${\varphi}$ is often simpler than $f_Y$. Since $f_Y$ satisfies a limit theorem (this is more or less the first hypothesis), this is also often the case of ${\varphi}$, which implies . Thus, – and hence – are satisfied quite generally. Without loss of generality, we can work in a tower, i.e. assume that $X=\{ (y,i) {\ |\ }y\in Y, i\in\{0,\ldots,{\varphi}(y)-1\} \}$ and that, for $i<{\varphi}(y)-1$, $T(y,i)=(y,i+1)$, while $T(y,{\varphi}(y)-1)=(T_Y(y),0)$. Namely, it is possible to build an extension of $X$ satisfying these properties, and it is equivalent to prove a limit theorem in $X$ or in this extension (see for example [@gouezel:stable Section 4.1]). Note that $E_Y({\varphi})=1/m(Y)$ by Kac’s Formula. Let $\pi$ be the projection from $X$ to $Y$, given by $\pi(y,i)=y$. In this proof, we will write $S_t f(x)$, even when $t$ is not an integer, for $S_{\lfloor t \rfloor }f(x)$. In the same way, $T^t$ should be understood as $T^{\lfloor t \rfloor}$. We also extend $B_n$ to ${\mathbb{R}}_+$, setting $B_t:=B_{\lfloor t \rfloor}$. As $T$ is ergodic, $T_Y$ is also ergodic ([@aaronson:book Proposition 1.5.2]). Birkhoff’s Theorem gives that $$\label{asymp_phin} S_n^Y {\varphi}= \frac{n}{m(Y)} +o(n)$$ almost everywhere on $Y$. For $y\in Y$ and $N\in {\mathbb{N}}$, let $n(y,N)$ be the greatest integer $n$ such that $S_n^Y {\varphi}(y) < N$. If $y$ is such that $S_n^Y {\varphi}(y)=\frac{n}{m(Y)}+o(n)$ (which is true almost everywhere), then $n(y,N)$ is finite for every $N$, and $\frac{n(y,N)}{m(Y)} \sim N$, i.e.$$\label{renouvellement_basique} \frac{n(y,N)}{N m(Y)} \to 1.$$ Since $\int_X e^{it (S_N^Y f_Y)\circ \pi }=\int_Y {\varphi}e^{it S_N^Y f_Y}$, yields that $$\label{Thm_limite_SY} \frac{(S^Y_{Nm(Y)} f_Y)\circ \pi}{B_N} \to Z$$ in distribution on $X$. The idea of the proof will be to see that $(S^Y_{Nm(Y)}f_Y)\circ \pi$ and $S_N f$ are close (this is not surprising, since one iteration of $T_Y$ corresponds roughly to $1/m(Y)$ iterations of $T$). This will give that $\frac{S_N f}{B_N}$ tends to $Z$. We write $$\begin{aligned} S_N f(y,i)= \left(S_N f(y,i)-S_N f(y,0) \right) &+ \left(S_N f(y,0)-S^Y_{n(y,N)}f_Y(y) \right) \\& +\left( S^Y_{n(y,N)}f_Y(y) - S^Y_{Nm(Y)}f_Y(y) \right) +S^Y_{ Nm(Y)}f_Y(y). \end{aligned}$$ The last term, equal to $\bigl(S^Y_{N m(Y)} f_Y\bigr)\circ \pi$, satisfies a limit theorem by . To conclude the proof, we will see that the three other terms, divided by $B_N$, tend to $0$ in probability. The second and third terms depend only on $y$. Thus, the following lemma will be useful to prove that they tend to $0$ on $X$: \[proba\_sur\_Y\_donne\_proba\_sur\_X\] Let $f_n$ be a sequence of functions on $Y$, tending to $0$ in probability on $Y$. Then $f_n \circ \pi$ tends to $0$ in probability on $X$. Take ${\varepsilon}>0$. As $f_n \to 0$ in probability, the measure of $E_n:=\{ y\in Y {\ |\ }|f_n(y)|{\geqslant}{\varepsilon}\}$ tends to $0$. 
As ${\varphi}\in L^1$, dominated convergence yields that $\int_{E_n} {\varphi}\to 0$, i.e. the measure of $\pi^{-1}(E_n)$ tends to $0$. But $\pi^{-1}(E_n)$ is exactly the set where $|f_n \circ \pi|{\geqslant}{\varepsilon}$. *Fact: $\frac{1}{B_N} \left(S_N f(y,i)-S_N f(y,0) \right)$ tends to $0$ in probability on $X$.* Set $V_N(y)=\sum_{i=0}^{{\varphi}(y)-1} |f \circ T^N(y,i)|$ on $Y$. Then ${\left\| V_N \right\|}_{L^1(Y)} ={\left\| f\circ T^N \right\|}_{L^1(X)} ={\left\| f \right\|}_{L^1(X)}$ since $T$ preserves the measure. Thus, $V_N/B_{N}$ tends to $0$ in $L^1(Y)$, and in probability. Lemma \[proba\_sur\_Y\_donne\_proba\_sur\_X\] yields that $\frac{1}{B_N} V_N\circ \pi$ tends to $0$ in probability on $X$. As $S_N f(y,i)-S_N f(y,0) =\sum_N^{N+i-1} f(T^k(y,0)) -\sum_0^{i-1} f(T^k(y,0))$, we get $|S_N f(y,i)-S_N f(y,0)| {\leqslant}V_N(y)+V_0(y)$. Thus, $\frac{1}{B_N} \left(S_N f(y,i)-S_N f(y,0) \right)$ is bounded by a function going to $0$ in probability. *Fact: $\frac{1}{B_N} \left(S_N f(y,0)-S_{n(y,N)}^Y f_Y(y)\right)$ tends to $0$ in probability on $X$.* By Lemma \[proba\_sur\_Y\_donne\_proba\_sur\_X\], it is sufficient to prove it on $Y$. We have $$\left|S_N f(y,0)-S_{n(y,N)}^Y f_Y(y)\right| = \left| \sum_{S_{n(y,N)}^Y {\varphi}(y)}^{N-1} f\circ T^k(y,0) \right| {\leqslant}M\left(T_Y^{n(y,N)}y\right).$$ Let $a>0$ be very small, we show that $m\left\{ y {\ |\ }M\left(T_Y^{n(y,N)}y\right) {\geqslant}a B_N \right\} \to 0$. Let ${\varepsilon}>0$. Let $C$ be such that $m( M(y) {\geqslant}a B_n) {\leqslant}\frac{C}{n}$, by . Set $\delta=\frac{{\varepsilon}}{2Cm(Y)}$. By , for large enough $N$, $$m\left\{\left|\frac{n(y,N)}{m(Y) N} - 1 \right| {\geqslant}\delta\right\} {\leqslant}{\varepsilon}.$$ When $\left|\frac{n(y,N)}{m(Y) N} - 1 \right| {\leqslant}\delta$, the fact that $M\left(T_Y^{n(y,N)}y\right) {\geqslant}aB_N$ implies that there exists $n \in [(1-\delta) m(Y)N, (1+\delta)m(Y)N]$ such that $M(T_Y^n y) {\geqslant}aB_N$. Thus, $$m\left\{ y {\ |\ }M\left(T_Y^{n(y,N)}y\right) {\geqslant}a B_N \right\} {\leqslant}{\varepsilon}+ \sum_{n=(1-\delta) m(Y)N}^{(1+\delta) m(Y)N} m\{M(T_Y^n y) {\geqslant}a B_N\}.$$ As $m$ is invariant by $T_Y$, we have $m\{M(T_Y^n y) {\geqslant}a B_N\} =m\{ M {\geqslant}a B_N\} {\leqslant}\frac{C}{N}$. Thus, $$m\left\{ y {\ |\ }M\left(T_Y^{n(y,N)}y\right) {\geqslant}a B_N \right\} {\leqslant}{\varepsilon}+ 2\delta m(Y)N \frac{C}{N} =2{\varepsilon}.$$ *Fact: $\frac{1}{B_N} \left(S^Y_{n(y,N)} f_Y- S^Y_{N m(Y)}f_Y\right)$ tends to $0$ in probability on $X$ when $N \to \infty$.* By Lemma \[proba\_sur\_Y\_donne\_proba\_sur\_X\], it is sufficient to prove it on $Y$. Without loss of generality, we can use the natural extension and assume that $T_Y$ is invertible. For $n<0$, write $S^Y_n f_Y=\sum_0^{|n|-1} f_Y\circ T_Y^{-j}$. Then, setting $\nu(y,N)=n(y,N)-N m(Y)$, $$\label{definit_nu} S^Y_{n(y,N)} f_Y(y)- S^Y_{N m(Y)}f_Y(y) =S^Y_{\nu(y,N)} f_Y \left(T^{N m(Y)}(y)\right).$$ If $A>0$ and $N\in {\mathbb{N}}$, as $E_Y({\varphi})=1/m(Y)$, we get $$\begin{gathered} \{y{\ |\ }\nu(y,N) {\geqslant}A B_N^{1/b}\} =\{ n(y,N) {\geqslant}A B_N^{1/b}+ N m(Y)\} =\{ S^Y_{A B_N^{1/b}+Nm(Y)}{\varphi}< N\} \\= \left \{ \frac{S^Y_{A B_N^{1/b}+N m(Y)}{\varphi}- (A B_N^{1/b}+ Nm(Y))E_Y({\varphi})}{\left(B_{AB_N^{1/b}+Nm(Y)}\right)^{1/b}} < -\frac{A}{m(Y)}\left(\frac{B_N}{B_{AB_N^{1/b}+Nm(Y)}}\right)^{1/b} \right\}. \end{gathered}$$ For some integer $k$, we have $N {\leqslant}2^k Nm(Y) {\leqslant}2^k (AB_{Nm(Y)}^{1/b}+Nm(Y))$. 
The assumption $\sup_{r{\leqslant}2n} \frac{B_r}{B_n} {\leqslant}C<\infty$ thus yields that $\frac{B_N}{B_{AB_{Nm(Y)}^{1/b}+Nm(Y)}} {\leqslant}C^k$. In particular, $$\{y{\ |\ }\nu(y,N) {\geqslant}A B_N^{1/b}\} \subset \left \{ \frac{S^Y_{A B_N^{1/b}+N m(Y)}{\varphi}- (A B_N^{1/b}+ Nm(Y))E_Y({\varphi})}{\left(B_{AB_N^{1/b}+Nm(Y)}\right)^{1/b}} < -\frac{AC^{k/b}}{m(Y)} \right\}.$$ Consequently, if $A$ is large enough, Assumption \[hypothese\_4\] yields that $m\{y{\ |\ }\nu(y,N) {\geqslant}A B_N^{1/b}\} {\leqslant}{\varepsilon}$ for large enough $N$. We handle in the same way the set of points where $\nu(y,N) {\leqslant}-A B_N^{1/b}$, using the assumption $\inf_{r{\geqslant}n}\frac{B_r}{B_n}>0$. We have thus proved: $$\label{convergence_nuy} \forall {\varepsilon}>0, \exists A>0, \exists N_0>0, \forall N {\geqslant}N_0,\ m\{ y{\ |\ }|\nu(y,N)| {\geqslant}A B_N^{1/b} \} {\leqslant}{\varepsilon}.$$ Set $W_N(y)=\frac{1}{B_N} S_{\nu(y,N)} f_Y \left(T_Y^{ N m(Y)}(y)\right)$, we will show that it tends to $0$ in distribution, which will conclude the proof, by . Take $a>0$, we show that $m(|W_N|>a) \to 0$ when $N\to \infty$. Let ${\varepsilon}>0$. Assumption \[hypothese\_3\] ensures that there exists ${\widetilde}{Y}$ with $m({\widetilde}{Y}){\geqslant}m(Y)-{\varepsilon}$ and $N_1$ such that $\frac{1}{|N|^b}|S_N^Y f_Y| {\leqslant}{\varepsilon}$ on ${\widetilde}{Y}$, for every $|N|{\geqslant}N_1$. Define $Y'_N=\{y\in Y {\ |\ }|\nu(y,N)|< N_1\}$ and $Y''_N=\{y \in Y {\ |\ }|\nu(y,N)|{\geqslant}N_1\}$. We estimate first the contribution of $Y'_N$. Set $\psi(y)=\sum_{-N_1}^{N_1-1} |f_Y\circ T_Y^j|$. Since $\psi$ is measurable, there exists a constant $C$ and a subset $Z$ of $Y$ with $m(Z){\geqslant}m(Y)-{\varepsilon}$ and $\psi {\leqslant}C$ on $Z$. Then, for $y\in Y'_N$, we have $|W_N(y)|{\leqslant}\frac{1}{B_N} \psi\left(T_Y^{N m(Y)} y\right)$. Set $Z_N=Y'_N \cap T_Y^{- N m(Y)}(Z)$: it satisfies $m(Z_N) {\geqslant}m(Y'_N)-{\varepsilon}$. On $Z_N$, we have $|W_N|{\leqslant}\frac{C}{B_N}$, whence, for large enough $N$, $|W_N|<a$ on $Z_N$. Thus, for large enough $N$, $$m\left\{y \in Y'_N {\ |\ }|W_N(y)|{\geqslant}a\right\} {\leqslant}m\left\{y\in Z_N {\ |\ }|W_N(y)|{\geqslant}a\right\}+{\varepsilon}={\varepsilon}.$$ We estimate then the contribution of $Y''_N$. Set ${\widetilde}{Y}''_N=Y''_N \cap T_Y^{-N m(Y)}({\widetilde}{Y})$, satisfying $m({\widetilde}{Y}''_N) {\geqslant}m(Y''_N)-{\varepsilon}$. Thus, $$m(|W_N|{\geqslant}a) {\leqslant}m\{y\in {\widetilde}{Y}''_N {\ |\ }|W_N(y)|{\geqslant}a\}+2{\varepsilon}.$$ On ${\widetilde}{Y}''_N$, $|\nu(y,N)|{\geqslant}N_1$, whence $\frac{1}{|\nu(y,N)|^b}\left|S^Y_{\nu(y,N)}f_Y \left(T_Y^{N m(Y)}y\right)\right| {\leqslant}{\varepsilon}$. Thus, $|W_N(y)|{\leqslant}{\varepsilon}\frac{|\nu(y,N)|^b}{B_N}={\varepsilon}\left(\frac{|\nu(y,N)|}{B_N^{1/b}}\right)^b$. Consequently, $$m(|W_N|{\geqslant}a) {\leqslant}m\left (\frac{|\nu(y,N)|}{B_N^{1/b}} {\geqslant}\left(\frac{a}{{\varepsilon}}\right)^{1/b} \right) +2{\varepsilon}.$$ Thus, if ${\varepsilon}$ is small enough, and $N$ large enough, yields that $m(|W_N|{\geqslant}a) {\leqslant}3{\varepsilon}$. The three facts we have just proved imply that $\frac{S_N f(y,i)}{B_N}-\frac{S_{Nm(Y)}^Y f_Y(y)}{B_N} \to 0$ in distribution on $X$. As $\frac{S_{Nm(Y)}^Y f_Y(y)}{B_N} \to Z$ in distribution on $X$, by , this concludes the proof. 
Multiple decorrelations and $L^p$-boundedness {#appendice:pene} ============================================= The following theorem has been useful in this paper: \[thm\_borne\_Lp\_pene\] Let $F: \omega \to 4\omega$ on the circle $S^1$. Then, for every $p \in [1,\infty)$, there exists a constant $K_p$ such that, for every $n\in {\mathbb{N}}$, for every $f_0,\ldots,f_{n-1}:S^1 \to {\mathbb{R}}$ bounded by $1$, of zero average and $1$-Lipschitz, $${\left\| \sum_{k=0}^{n-1} f_k \circ F^k \right\|}_p {\leqslant}K_p \sqrt{n}.$$ This result has essentially been proved by Françoise Pène, in a much broader context. Her proof depends on a property of multiple decorrelations, which is implied by the spectral gap of the transfer operator: Let ${\left\| f \right\|}$ be the Lipschitz norm of the function $f$ on the circle $S^1$. Then, for every $m,m'\in {\mathbb{N}}$, there exist $C>0$ and $\delta<1$ such that, for every $N\in {\mathbb{N}}$, for every increasing sequences $(k_1,\ldots,k_m)$ and $(l_1,\ldots,l_{m'})$, for every Lipschitz functions $G_1,\ldots,G_m,H_1,\ldots, H_{m'}$, $$\label{decorr_mult} \left|\operatorname{Cov}\left(\prod_{i=1}^m G_i \circ F^{k_i}, \prod_{j=1}^{m'} H_j\circ F^{N+l_j}\right)\right| {\leqslant}C \left(\prod_{i=1}^m {\left\| G_i \right\|}\right) \left(\prod_{j=1}^{m'} {\left\| H_j \right\|} \right) \delta^{N-k_m}.$$ Here $\operatorname{Cov}(u,v)=\int uv -\int u \int v$. Let ${\widehat}{F}$ be the transfer operator associated to $F$, and acting on Lipschitz functions. It is known that it admits a spectral gap and that its iterates are bounded, i.e. there exist constants $M>0$ and $\delta<1$ such that $\bigl\|{\widehat}{F}^n f\bigr\|{\leqslant}M {\left\| f \right\|}$, and $\bigl\|{\widehat}{F}^n f\bigr\|{\leqslant}M\delta^n {\left\| f \right\|}$ if $\int f=0$. We can assume that $N{\geqslant}k_m$ (otherwise, $\delta^{N-k_m}{\geqslant}1$, and the inequality becomes trivial). Then, writing ${\varphi}=\prod_{i=1}^m G_i \circ F^{k_i}$ and $\psi=\prod_{j=1}^{m'} H_j\circ F^{l_j}$, we get $$\begin{aligned} \left|\operatorname{Cov}({\varphi},\psi\circ F^N)\right| & =\left| \int \left({\varphi}-\int {\varphi}\right) \psi\circ F^N\right| =\left|\int {\widehat}{F}^N \left({\varphi}-\int {\varphi}\right) \psi \right| \\& {\leqslant}{\left\| {\widehat}{F}^N \left({\varphi}-\int {\varphi}\right) \right\|} {\left\| \psi \right\|}_\infty. \end{aligned}$$ But $$\begin{aligned} {\widehat}{F}^N({\varphi})& ={\widehat}{F}^N \left(\prod G_i^{k_i}\right) ={\widehat}{F}^{N-k_m} (G_m {\widehat}{F}^{k_m-k_{m-1}}(G_{m-1} {\widehat}{F}^{k_{m-1}-k_{m-2}}(\ldots {\widehat}{F}^{k_2-k_1} (G_1))\ldots) \\& =:{\widehat}{F}^{N-k_m}(\chi). \end{aligned}$$ As the iterates of ${\widehat}{F}$ are bounded on Lipschitz functions, we get a bound on the Lipschitz norm of $\chi$: ${\left\| \chi \right\|}{\leqslant}M^{m-1} \prod {\left\| G_i \right\|}$. Moreover, $\int \chi=\int {\varphi}$, whence $$\begin{aligned} {\left\| {\widehat}{F}^N \left({\varphi}-\int {\varphi}\right) \right\|} & ={\left\| {\widehat}{F}^{N-k_m} \left(\chi -\int \chi\right) \right\|} {\leqslant}M\delta^{N-k_m} {\left\| \chi-\int \chi \right\|} \\& {\leqslant}M\delta^{N-k_m} M^{m-1} \prod {\left\| G_i \right\|}. \end{aligned}$$ When $p$ is an even integer, Theorem \[thm\_borne\_Lp\_pene\] is then a consequence of [@pene:averaging Lemma 2.3.4]. The Hölder inequality gives the general case. \[remarque\_pene\] The same result holds for Hölder functions instead of Lipschitz functions, with the same proof. 
We will also need the following result: \[ineg\_max\_abstraite\] Let $T$ be a measure preserving transformation on a space $X$. Let $f:X\to {\mathbb{R}}$ and $p>2$ be such that $$\exists C>0, \forall n\in {\mathbb{N}}^*, {\left\| S_n f \right\|}_p {\leqslant}C \sqrt{n}.$$ Write $M_nf(x)=\sup_{1{\leqslant}k{\leqslant}n} |S_k f(x)|$. Then there exists a constant $K$ such that $$\forall n{\geqslant}2, {\left\| M_n f \right\|}_p {\leqslant}K (\ln n)^{\frac{p-1}{p}} \sqrt{n}.$$ Let $n\in {\mathbb{N}}^*$. Let $k<2^n$, and write its binary decomposition $k=\sum_{j=0}^{n-1} {\varepsilon}_j 2^j$, with ${\varepsilon}_j \in \{0,1\}$. Set $q_j=\sum_{l=j}^{n-1} {\varepsilon}_l 2^l$ (in particular, $q_0=k$ and $q_n=0$). Then $S_k f=\sum_{j=0}^{n-1} (S_{q_j}f-S_{q_{j+1}}f)$. Consequently, the convexity inequality $(a_0+\ldots +a_{n-1})^p {\leqslant}n^{p-1} (a_0^p+\ldots+a_{n-1})^p$ gives that $$|S_k f|^p {\leqslant}n^{p-1} \sum_{j=0}^{n-1} |S_{q_j}f-S_{q_{j+1}}f|^p.$$ Note that $q_{j+1}$ is of the form $\lambda 2^{j+1}$ with $0{\leqslant}\lambda {\leqslant}2^{n-j-1}-1$, and $q_j$ is equal to $q_{j+1}$ or $q_{j+1}+2^j$. Thus, $$|S_k f|^p {\leqslant}n^{p-1} \sum_{j=0}^{n-1} \left( \sum_{\lambda=0}^{2^{n-j-1} -1} \left|S_{\lambda 2^{j+1} +2^j}f -S_{\lambda 2^{j+1}}f \right|^p \right).$$ The right hand term is independent of $k$, and gives a bound on $|M_{2^n -1} f|^p$. Moreover, $$\int \left|S_{\lambda 2^{j+1} +2^j}f -S_{\lambda 2^{j+1}}f \right|^p =\int \left|S_{2^j} f\right|^p {\leqslant}C^p \sqrt{2^j}^{p}.$$ Thus, we get $$\int |M_{2^n -1} f|^p {\leqslant}n^{p-1} \sum_{j=0}^{n-1} 2^{n-j-1} C^p 2^{pj/2} {\leqslant}K n^{p-1} 2^n 2^{(\frac{p}{2}-1)n} = K n^{p-1} \sqrt{2^n}^p.$$ For times of the form $2^n-1$, this is a bound of the form ${\left\| M_t \right\|}_p {\leqslant}K (\ln t)^{\frac{p-1}{p}} \sqrt{t}$. To get the same estimate for an arbitrary time $t$, it is sufficient to choose $n$ with $2^{n-1} {\leqslant}t<2^n$, and to note that $M_t {\leqslant}M_{2^n -1}$. \[ineg\_max\] Let $F: \omega \to 4\omega$ on the circle $S^1$, let $\chi:S^1 \to {\mathbb{R}}$ be a Hölder function with $0$ average, and $p>2$. Write $M_n \chi(x)=\sup_{1{\leqslant}k{\leqslant}n} |S_k \chi(x)|$. Then there exists a constant $K$ such that $$\forall n{\geqslant}2, {\left\| M_n \chi \right\|}_p {\leqslant}K (\ln n)^{\frac{p-1}{p}} \sqrt{n}.$$ Theorem \[thm\_borne\_Lp\_pene\] (or rather the remark following it, for the Hölder case) shows that ${\left\| S_n \chi \right\|}_p {\leqslant}C\sqrt{n}$. Consequently, Theorem \[ineg\_max\_abstraite\] gives the conclusion. [^1]: Département de Mathématiques et Applications, École Normale Supérieure, 45 rue d’Ulm 75005 Paris (France). e-mail `Sebastien.Gouezel@ens.fr` [^2]: *keywords*: intermittency, countable Markov shift, central limit theorem, stable laws. *2000 Mathematics Subject Classification:* 37A50, 37C40, 60F05
{ "pile_set_name": "ArXiv" }
--- abstract: 'The kaons decays to the pairs of charged and neutral pions are considered in the framework of the non-relativistic quantum mechanics. The general expressions for the decay amplitudes to the two different channels accounting for the strong interaction between pions are obtained. The developed approach allows one to estimate the contribution of terms of any order in strong interaction and correctly takes into account the electromagnetic interaction between the pions in the final state.' author: - '**S.R.Gevorkyan [^1] [^2], A.V.Tarasov, O.O.Voskresenskaya[^3]**' title: '**Final state interaction in kaons decays.**' --- Joint Institute for Nuclear Research, 141980 Dubna, Russia PACS:11.30.Rd;13.20.Eb\ Keywords:Decays of K-mesons; two channels decays, relative momenta, pions scattering lengths\ Introduction ============ It has long been known  [@gribov58; @gribov61] that the K-mesons decays with a pions in the final state can give unique information on the pions s-wave scattering lengths $a_0,a_2$ , whose values are predicted by Chiral Perturbation Theory (ChPT) with high accuracy [@colangelo01].\ Recently the high quality data on $K^\pm\to \pi^\pm\pi^0\pi^0$ decays have been obtained by NA48/2 collaboration at CERN [@batley06]. The dependence of the decay rate on the invariant mass of neutral pions $M^2=(p_1+p_2)^2$ reveals a prominent anomaly (cusp) at the charged pions production threshold $M_c^2=4m_c^2$.\ As was explained in  [@cabibbo04; @cabibbo05] this anomaly is due to the possibility for the kaon to decay to three charged pions, which after charge exchange reaction $\pi^+\pi^-\to \pi^0\pi^0$ gives the observed neutral pions. This possibility is provided by mass difference of charged and neutral pions. The detail consideration of this decay using the technique of non-relativistic field theory [@colangelo06] or ChPT  [@gamiz07] supports the proposed picture.\ Nevertheless there are two challenges crucial in scattering lengths extraction from kaons decays. One needs a reliable way to estimate the contribution of higher order terms in strong interaction and calculates the electromagnetic interaction among the charged pions.\ These issues are very close connected with each other. Calculation of the electromagnetic interaction in every order of strong interaction [@bisseger09] doesn’t solve the problem of bound states (pionium atoms), as to take into account electromagnetic interaction leading to unstable bound states one needs expressions for decay amplitudes including the strong interaction between pions in all orders [@gevorkyan07]. The problem of correct accounting of the electromagnetic effects are also necessary in a wide class of decays with two pions in the final state as for instance $K_{e4}$ decay [@gevorkyan1; @gevorkyan2].\ The phenomenon of cusp in elastic scattering at the threshold relevant to inelastic channel is known for many years and was widely discussed in the framework of non-relativistic quantum mechanics [@wigner48; @breit57; @baz57].For the elastic process $\pi^0\pi^0\to\pi^0\pi^0$ this anomaly at the $\pi^+\pi^-$ threshold was firstly discussed in the framework of ChPT in  [@meisner97]. In the present work we consider the kaon decay to pion pairs with pions of different masses. Using the well known results of quantum mechanics we obtain the matrix elements for decay $K\to \pi\pi$ where the final pions consist from pions of different masses ($\pi^+\pi^-,\pi^0\pi^0$). 
Two channel decay ================= We are interested in two channel decay of kaon to the pion pairs in the final state, where the pions in the pair can be neutral or charged . The well examples are $K_L\to \pi\pi$ as well as $K^\pm\to\pi^+\pi^-e^\pm\mu$ ($ K_{e4}$ decay). In what follows all quantities relevant to the neutral pions pair ($\pi^0\pi^0$) are labeled by index “n”, whereas the charged pions pairs ($\pi^+\pi^-$) are labeled by index “c”.\ We do not consider here the electromagnetic interaction in the pair, the effect discussed in our previous work [@gevorkyan2]. Our main goal is to obtain the matrix elements of the kaon decay to the pion pair accounting for different masses of neutral and charged pions and the possibility of charged exchange reaction $\pi^+\pi^-\to \pi^0\pi^0$ and the elastic scattering of pions in the final state.\ The general form of matrix element for kaon decay to the final state with two charged or neutral pions can be written in the operator form: M\_c=\_c\^+(r)M\_0(r)d\^3r; M\_n=\^+\_n(r)M\_0(r)d\^3r The two component operator $M_0=\left (\begin{array}{c}M_c^{(0)}(r)\\M_n^{(0)}(r)\end{array}\right)$, where $M_c^{(0)}(r),M_n^{(0)}(r)$ are the matrix elements of kaon decay to noninteracting charged and neutral pions pairs, while $\Psi_c(r), \Psi_n(r)$ are the appropriate two component wave functions.\ These wave functions would satisfy to couple Shrődinger equations [^4] -\_c(r)+U\_[cc]{}\_c(r)+U\_[cn]{}\_n(r)&=&k\_c\^2\_c(r)-\_n(r)+U\_[nn]{}\_n(r)+U\_[nc]{}\_c(r)&=&k\_n\^2\_n(r) where $U_{ij}$ are the strong potentials describing elastic $cc\to cc; nn\to nn$ scattering and charge exchange reaction $cn\to cn$ . $k_c,k_n$ are the charge and neutral pions momenta in the appropriate center of mass system.\ According to the general principles of scattering theory the asymptotic behavior of the wave functions $\Psi_c(r) ,\Psi_n(r)$ can be written through the s-wave amplitudes $f_{cc}, f_{nn}, f_{cn}, f_{nc}$ in the following form: \_c(r)&=&( [c]{}\ 0 )+ ( [c]{}f\^\*\_[cc]{}\ f\^\*\_[nc]{} )\_n(r)&=&( [c]{}0\ )+ ( [c]{}f\^\*\_[cn]{}\ f\^\*\_[nn]{} ) The first columns in these expressions describe the noninteracting s-waves pions pairs, whereas the second columns correspond to the interacting charge and neutral pions pair in the far asymptotic of corresponding wave function.\ One can rewritten these equations through the elements of appropriate S-matrix  [@landau63]: S\_[cc]{}=1+2ik\_cf\_[cc]{}; S\_[nn]{}=1+2ik\_nf\_[nn]{}; S\_x=2if\_x Substituting these relations in the expressions (3) one immediately obtains: \^\*\_c(r)=( [c]{}i\ -i ) \^\*\_n(r)=( [c]{} -i\ i ) From the other hand the wave functions $\Psi_c(r)$ and $\Psi_n(r)$ can be constructed as the linear combination of two real solutions of equations (2) \^[(1)]{}=( [c]{}\_c\^[(1)]{}\ \_n\^[(1)]{} )\^[(2)]{}=( [c]{}\_c\^[(2)]{}\ \_n\^[(2)]{} ) with the standard boundary conditions[^5] $ \Psi^{(1)}(0)=\Psi^{(2)}(0)=0.$ Keeping this in mind we will look for the desired wave functions in the form: \^\*\_c(r)=A\_c\^[(1)]{}\^[(1)]{} +A\_c\^[(2)]{}\^[(2)]{}; \^\*\_n(r)=A\_n\^[(1)]{}\^[(1)]{} +A\_n\^[(2)]{}\^[(2)]{} where $A_c^{(1)},A_c^{(2)},A_n^{(1)},A_n^{(2)}$ are arbitrary complex numbers.\ Substituting the expressions (6),(7) in (1) one gets: M\_c&=&A\_c\^[(1)]{}(\_c\^[(1)]{}M\_c\^[(0)]{} +\_n\^[(1)]{}M\_n\^[(0)]{})d\^3r&+&A\_c\^[(2)]{}(\_c\^[(2)]{}M\_c\^[(0)]{} +\_n\^[(2)]{}M\_n\^[(0)]{})d\^3r =A\_c\^[(1)]{}I\_1+A\_c\^[(2)]{}I\_2M\_n&=&A\_n\^[(1)]{}(\_c\^[(1)]{}M\_c\^[(0)]{} 
+\_n\^[(1)]{}M\_n\^[(0)]{})d\^3r&+&A\_n\^[(2)]{}(\_c\^[(2)]{}M\_c\^[(0)]{} +\_n\^[(2)]{}M\_n\^[(0)]{})d\^3r =A\_n\^[(1)]{}I\_1+A\_n\^[(2)]{}I\_2 Making use that any real solution of equations (2) out of the potential range $(U_{ij}=0)$ can be taken in the form: (r)==(e\^[ikr+i(k)]{}-e\^[-ikr-i(k)]{}) we will look for the real solutions out of potential range as [^6]: \_c\^[(1)]{}(r)&=&; \_c\^[(2)]{}(r)= \_n\^[(1)]{}(r)&=&; \_n\^[(2)]{}(r)= In order to obtain the relations between the unknown coefficients[^7] in the above expressions let us at first compare the asymptotic behavior of the initial wave functions $\Psi_c(r)$ in (5) with the first raw in the parametrization (10): (g\_1e\^[ik\_cr]{}-g\_1\^\*e\^[-ik\_cr]{})&+&(g\_2e\^[ik\_cr]{}-g\_2\^\*e\^[-ik\_cr]{})= (e\^[-ik\_cr]{}-S\_[cc]{}e\^[ik\_cr]{})(h\_1e\^[ik\_nr]{}-h\_1\^\*e\^[-ik\_nr]{})&+&(h\_2e\^[ik\_nr]{}-h\_2\^\*e\^[-ik\_nr]{})= -S\_[cn]{}e\^[ik\_nr]{}Gathering the structures in front of the appropriate exponents and solving the system of obtained equations after a bit algebra we get: A\_c\^[(1)]{}&=&; A\_c\^[(2)]{}=-;H=g\_1\^\*h\_2\^\*-h\_1\^\*g\_2\^\*;S\_[cc]{}&=&; S\_[cn]{}= Carry out the same procedure for $\Psi(r)$ we obtain the relevant relations for the case of kaon decay to pair of neutral pions: A\_n\^[(1)]{}&=&-; A\_n\^[(2)]{}=-;S\_[nn]{}&=&-; S\_[nc]{}= In respect that due to T-invariance $S_{cn}=S_{nc}$ it can be checked that obtained relations satisfied the unitarity constraints: |S\_[nn]{}|\^2+|S\_[nc]{}|\^2=1; |S\_[cc]{}|\^2+|S\_[cn]{}|\^2=1 As has been seen from expression (9) the imaginary parts of functions $g_{1(2)}, h_{1(2)}$ are determined by appropriate phases.[^8] For instance, from first equation in (10): $$g_1=ge^{i\delta(k_c)}=g\cos{\delta(k_c)}+ig\sin{\delta(k_c)}= g\cos{\delta(k_c)}+ik_cg\frac{\sin{\delta(k_c)}}{k_c}$$ At considered low energy one can safely confined by linear term in phases dependence on momenta : g\_1&=&d\_c\^[(1)]{}+ik\_ca\_c\^[(1)]{};g\_2=d\_c\^[(2)]{}+ik\_ca\_c\^[(2)]{}h\_1&=&d\_n\^[(1)]{}+ik\_na\_n\^[(1)]{};h\_2=d\_n\^[(2)]{}+ik\_na\_n\^[(2)]{} Substituting these relations in expressions (12), (13) after cumbersome, but simple algebra we obtain the energy dependence of S-matrix elements in the two channel case[^9]: S\_[cc]{}&=&S\_[nn]{}&=&S\_[cn]{}&=&S\_[nc]{}= where : a\_[nn]{}&=&; a\_[cc]{}=;a\_x&=&= Now we are in the position to get the dependence of matrix elements (1) on pairs momenta $k_c,k_n$. Introducing the real combinations[^10]: M\_[0c]{}= M\_[0n]{}= and making use the expressions (8),(12), (13),(15) we obtain our final result : M\_c&=&M\_[0c]{}+ik\_nM\_[0n]{} M\_n=M\_[0n]{}+ik\_cM\_[0c]{}D&=&(1-ik\_ca\_[cc]{})(1-ik\_na\_[nn]{})+k\_nk\_ca\_x\^2 For applications it is more convenient to rewritten these relations through the amplitudes of elastic pion-pion scattering $f_{cc},f_{nn}$ and charge exchange $f_x$: M\_c&=&M\_[0c]{}(1+ik\_cf\_[cc]{})+ik\_nM\_[0n]{}f\_x; M\_n=M\_[0n]{}(1+ik\_nf\_[nn]{})+ik\_cM\_[0c]{}f\_xf\_[cc]{}&=&; f\_[nn]{}=;f\_x=;These relations expressing the decay matrix elements (1) through the amplitudes of pion-pion scattering are the main result of present work. Their application to $K\to 3\pi$ and $K^\pm\to \pi^+\pi^-e^{\pm}\nu$ decays allow us  [@gevorkyan07; @gevorkyan2] to take into account the electromagnetic interaction among the charged pions in the final state for any invariant mass of the pion pair.\ The first terms in the expansion (20) coincide with appropriate expressions in  [@cabibbo04; @cabibbo05], i.e. 
the $M_{0c}, M_{0n}$ introduced above (see eq. (18)) can be interpreted as so called “unperturbed” amplitudes introduced in  [@cabibbo04].\ The two channel task considered in the present work permits to estimate the accuracy of the scattering lengths values extracting from experimental data on kaons decays. Moreover obtained expressions allows one to correctly take into account the electromagnetic effects in the final state not only above the charged pions production threshold, but also for bound states. [@gevorkyan07; @gevorkyan2] We are grateful to V.D. Kekelidze who initiated and support this work and D.T. Madigozhin for many stimulating and useful discussions. [99]{} V.N. Gribov, Nucl. Phys. 5 (1958) 653. V.N. Gribov, ZhETF 41 (1961) 1221. G.Colangelo, J.Gasser, H.Leutwyler, Nucl. Phys. B 603 (2001) 125. J.R.Batley et al., Phys. Lett. B 633 (2006) 173. N. Cabibbo, Phys. Rev. Lett. 93 (2004) 121801. N. Cabibbo, G. Isidori, JHEP 0503 (2005) 021. G.Colangelo, J.Gasser, B.Kubis, A.Rusetsky, Phys. Lett. B 638 (2006) 187. E.Gamiz, J. Prades, I. Shiemi, Eur.Phys. J. C50 (2007) 405 M. Bisseger, A. Fuhrer , J. Gasser, B. Kubis, A. Rusetsky, Nucl. Phys. B 806 (2009) 178 S.Gevorkyan, A.Tarasov, O.Voskresenskaya, Phys.Lett. B649 (2007) 159 S.Gevorkyan, A. Sissakian, A.Tarasov, H.Torosyan, O.Voskresenskaya, arXiv: 0704.2675 \[hep-ph\]; To be published in Yad. Phys. 73 (2010) S.Gevorkyan, A. Sissakian, A.Tarasov, H.Torosyan, O.Voskresenskaya, arXiv: 0711.4618 \[hep-ph\]; To be published in Yad.Phys. 73 (2010) E.Wigner, Phys. Rev. 73 (1948) 1002. G. Breit, Phys. Rev. 107 (1957) 1612. A.Baz, ZhETF 33 (1957) 923. Ulf-G.Mei$\beta$ner, G.M$\ddot u$ller, S.Steininger, Phys. Lett. B 406 (1997) 154. L.Landau and I.Lifshitz, Quantum mechanics, FM,Moscow (1963) A.I. Baz, Ya.B. Zeldovich, A.M. Perelomov, “Scattering, reactions and decays in nonrelativistic quantum mechanics”, Nauka, Moscow, 1971. [^1]: Corresponding author: gevs@jinr.ru (S.Gevorkyan) [^2]: On leave of absence from Yerevan Physics Institute [^3]: On leave of absence from Siberian Physical Technical Institute [^4]: Throughout this paper we restricted by s-wave $\pi\pi$ scattering in the final state. [^5]: In terms of the wave function $\Phi(r)=\frac{\Psi(r)}{r}$ this condition requires the regularity at r=0. [^6]: We consider only the class of strong potentials with the sharp boundary. [^7]: These factors are functions of pions momenta $k_c,k_n$ [^8]: The phases are odd functions of relevant momenta $\delta(-k)=-\delta(k)$ . [^9]: The similar expressions are cited in the textbook [@baz71], but with wrong numerator in the inelastic case. [^10]: The integrals $I_{1,2}$ are real quantities.
--- abstract: 'We present an integrated microsimulation framework to estimate the pedestrian movement over time and space with limited data on directional counts. Using the activity-based approach, simulation can compute the overall demand and trajectory of each agent, which are in accordance with the available partial observations and are in response to the initial and evolving supply conditions and schedules. This simulation contains a chain of processes including: activities generation, decision point choices, and assignment. They are considered in an iteratively updating loop so that the simulation can dynamically correct its estimates of demand. A Markov chain is constructed for this loop. These considerations transform the problem into a convergence problem. A Metropolitan Hasting algorithm is then adapted to identify the optimal solution. This framework can be used to fill the lack of data or to model the reactions of demand to exogenous changes in the scenario. Finally, we present a case study on Montréal Central Station, on which we tested the developed framework and calibrated the models. We then applied it to a possible future scenario for the same station.' author: - 'Alexis Pibrac [^1]' - 'Bilal Farooq[^2]' bibliography: - 'PibracFarooq\_PedDynamics.bib' title: 'Integrated Microsimulation Framework for Dynamic Pedestrian Movement Estimation in Mobility Hub [^3]' --- Introduction ============ With the constant increase in the population of urban areas around the world, transportation and logistic is facing more organizational problems in order to deal with complex networks, mixing new technologies, and modern modes of transport. Never in the history, society has offered such a number of different possibilities, from the traditional individual modes (such as cars or bikes) to new concepts born in the growing market of sharing economy. Public transit systems such as metro, bus or tramway are now available in all sufficiently big cities. Moreover, these cities are well interconnected, thanks to various long distance modes of transportation. Thus rendering the current network of transportation facilities highly efficient as well as highly complex. Despite the improvements in transportation technologies and increasing demand, the mode that has always remained central is the walking mode. Mobility hubs (e.g. train stations, terminals, etc.) within which walking is the only mode, are the key connections in the dominantly prevalent inter-modal urban travel patterns. They have a high risk of overcrowdedness and thus playing more and more prominent role in the fluidity and efficiency of the whole transportation network.\ Despite significant advances in the individual level microscopic models to describe and reproduce pedestrians movement, the main limitation for simulations remains the lack of data for such situations. Indeed, with enough data, one particular case can be reproduced in a consistent manner (not exact but at least representative). But problems may arise when the information is incomplete; when it comes to validation (where additional data are needed for another time period); or extension of the scenario to future situations (where data are impossible to get).\ That is why here, we develop a novel framework for pedestrian dynamics in which the demand part is no more a static estimation directly obtained from the data. 
The demand which is technically a time dependent Origin-Destination matrix, will for sure be based on the available data, but will also be influenced by other kinds of information, such as the schedule of transportation systems, infrastructure in which the agents are moving, estimates of the transfer times for each trajectory, etc. By bringing in new processes and dynamic supply information, we aim to account for incomplete data when it comes to generating the exact demand using a microsimulation. Furthermore, we will be able to estimate changes in this demand induced by the changes in exogenous inputs. For instance if the design of a train station has changed, we need to adapt the departure time of each individual following what would be their reaction in real life. In such a case, the demand still depends on the observations already gathered, but the link between them is no more direct. Some of these observations may not be exactly satisfied, but adapted depending on the changes we made in the scenario. Thereby now that the demand description is adapted depending on the situations, we are making a step forward in terms of realism, when it comes to simulating non-existent scenarios i.e. testing potential future changes.\ Traditionally, the demand part of a scenario has been the starting point of a simulation–especially in case of pedestrian simulations [@Abdelghany2016; @sahaleh2012scenario]. For example, in the four-step model, after the generation and distribution steps, the demand is completely described. Then comes the modal choice and assignment that are using the so-called demand and models like discrete choice theory, model of transport modes, etc. that describe the behavior of each agent. In this approach, the simulation is divided into two phases: first we create the demand, and then we use it into successive behavioral models. In fact, we create an agent and its characteristics, then we describe its movement thanks to a description of its behavior. And this behavior is simulated with a chain of models that are successively going deeper in term of information (first only the mode of transport is chosen, then the global itinerary is computed etc. until we obtain the complete time dependent description of the movement). Here the demand is no more considered completely exogenous or known a priori, but dependent on other parts of the scenario, that can also be partial results of behavioral models. We can’t consider its generation into a separate phase. We have no longer a clear chain of objects to generate in a simple order thanks to deterministic models. But we have to find an equilibrium between all the different parts of the state, verifying all dependencies settled between them. The behaviors depend on the demand, for example the transfer times of each travel or the occupancy of each transport mode is directly influenced by the number of pedestrian in the station and their temporal distribution. And the demand depends on the behavioral simulation results, for example, the departure time is impacted by the time agents need to transfer or availability of modes.\ The problem of finding such an equilibrium is analogous to the one in Dynamic Traffic Assignment for vehicular traffic. However, due to the presence of a well-defined network and clear constraints, the search process for equilibrium in vehicular network is relatively trivial. 
Due to the complex movement of pedestrians and the high number of external factors that influence it (for example, arrival or departure time of a bus or a train is such an external factor that does not exist in vehicular traffic), the resolution for the case of pedestrian is of a higher complexity. To solve our equilibrium search problem, we are using a similar solution: a looping process running several times the same models until the convergence is reached. The classic behavior models of pedestrians simulations will be looped and computed as long as an equilibrium has not been found (see Figure \[markov\]) i.e. until the simulated demand is consistent with the state generated from the partially observed demand. The purpose here is to present a novel microsimulation framework that controls the generation of the demand, intended movement patterns, and assignment in order to search for the equilibrium. In next section we present the existing work, after which the core methodology is presented. The case study of Montréal Central Station is developed as an implementation of the proposed methodology. The results of base case and future scenario are discussed in details. In the end we present the conclusions and future direction. ![ \[markov\]Organization of the proposed framework.](generalF) Literature Review ================= Extensive research on various aspects of pedestrians simulation can be found in the literature. This has resulted in variety of tools to model the problem [@daamen2004modelling]. Past research has either focused on a specific operation within a train station [@zhang2008modeling] or the whole station [@sahaleh2012scenario]. The classical way to describe the pedestrian behavior is divided into three levels [@daamen2004modelling]: strategic, tactical and operational.\ The generation of OD matrix, that contains complete information about departure location, arrival location, and departure time for all agents, is a classical but tough problem in transportation research. It has been extensively studied in different contexts, e.g. for vehicles at urban area level for planning purposes [@national2012travel], as well as at a smaller spatial scale like ours. Various available datasets have been used, beginning with traffic counts on the network in order to directly generate the matrix [@cascetta1988unified] or more recently with a Bayesian resolution [@cheng2014bayesian]. The schedule can also be used for this step [@hanseler2015schedule]. These different information can be mixed in order to generate the matrix with a crucial time dependency [@ashok1996estimation]. Depending on the type of specific problem, a wide range of algorithms have been developed and tested, [@antoniou2014framework] provide an extensive literature review and propose a framework to compare them.\ The tactical level is the process that affects for each pedestrian their global route, depending on the OD matrix. In this step, we consider that all agents think in a graph-styled simplified network that represents the practical space and decision points. The classical formulation of this problem is the search of a Nash equilibrium [@wardrop1952road]. For vehicular simulation, the tactical level proceeds to the route choice of all agents [@bovy1990route]. Similar works have been developed for pedestrians [@hoogendoorn2004pedestrian]. However, in case of pedestrians we are of the view that it is behaviorally more consistent to consider this step as selection of decision points. 
The pedestrian choose their way through a succession of crucial decision points at a rather aggregate and abstract level. For example: which door, or coffee stand, etc. The process can be similar to the way finding algorithms for urban navigation that often use a graph representation of the network [@gaisbauer2008wayfinding]. Contrary to a continuous simulation of the trajectories where the space of possible solutions is also continuous, this level is characterized by discrete choices and so a finite number of possible configurations. The discrete choice theories have played a crucial role in the transportation research [@ben1985discrete] since they are related to different levels, such as modal choice [@hausman1978conditional]. Finally the proposed process strongly depends on a route cost function that should take into account the main phenomena, such as the travel time [@avineri2006impact] or even the perception of the facilities [@sisiopiku2003pedestrian].\ The operational level goes one step further in terms of precision. Using the high level paths generated from previous process, it computes trajectories of all agents. Variety of models have been developed in this context. The more efficient are often aggregate model, where agents are gathered in order to consider the whole crowd like a flow [@hughes2002continuum]. This kind of approach can also be solved with a Cell Transmission Model, that discretize the space into cells [@daganzo1994cell]. [@hanseler2014macroscopic] have developed the cell transmission based model for pedestrians. The main advantages of these approaches are a quick simulation/enumeration time and a relatively good aggregate level precision for real crowd despite overly simplified assumptions. But in our case, we are interested in precise results with information on each pedestrian. As all the other levels are individual level, we want to maintain the consistency and disaggregation at operation level as well. We are interested in a microscopic scale. Several models have been developed at micro-scale, such as the use of discrete choices to model the next step of pedestrians[@robin2009specification] or an analogy with physical forces called the social force model [@helbing1995social]. In these models, it is always possible to go deeper in description to have better precision. Some studies have developed even more complicated description of agents taking into account for example the social or natural effects such as the use of field of view [@turner2002encoding]. These kind of agent-based model are now efficient on complex networks [@batty2003agent] and bring depth to the analysis.\ Once the estimation of all trajectories have been done, our goal is to authenticate the previous departure time and to correct them if needed. In the literature, algorithms have been developed that include a choice in the departure time generation [@de2002real]. The clear advantage is that it coincides more easily with real traffic conditions. Other algorithms try to deal with a real time correction of OD matrices [@bierlaire2004efficient]. But the new problem we are facing is that previous state estimated by the simulation step doesn’t match any more with the new departure times. These simulations need to be recomputed. We now have a loop and need to find a convergence (Figure \[markov\]). This problem is known as the Dynamic Traffic Assignment [@peeta2001foundations]. 
Some recent works have proposed processes to solve this kind of problem [@nagel2012agent].\ We propose a stochastic approach to solve this convergence problem. The output of each process will no longer be deterministic, but subject to probabilities, as proposed in [@daganzo1977stochastic]. The outputs we now consider are probability distributions over the space of possible states. In such a case, a Bayesian resolution can be used [@maher1983inferences]. Specifically, we propose to consider the series of processes as a Markov Chain, using only the previous estimation of the state and giving back a new one, following a stochastic rule. Monte Carlo algorithms can therefore be used in order to identify the most probable states. One such algorithm, the Metropolis-Hastings algorithm [@hastings1970monte], has already been used for route choice set generation in a complex traffic network with high numbers of alternatives [@flotterod2011bayesian]. Methodology =========== Problem Statement ----------------- Given the infrastructure $I$, the schedule of all modes of transportation $C$ and the location of the different considered activities $A$, we are interested in estimating the state $S$ of the station that matches a set of incomplete observations $D$ as well as possible. A state $S$ contains the complete information of each pedestrian, i.e. their activity chain $A^i$, their start and end locations $(l_s^i,l_e^i)$, their starting time $t_{dep}^i$ and their exact trajectory $T^i:t\mapsto l$.\ $$(I,C,A,D) \mapsto S=(A^i,l_s^i,l_e^i,t_{dep}^i,T^i)_i$$ Indeed, if all this information were contained in $D$, the proposed framework would be obsolete. However, in reality $D$ is not sufficient to directly extract $S$: $D \subsetneq S$. Moreover, we may have to confront cases where $D$ was collected in a different scenario than the one in which $I$, $C$ and $A$ are defined. This happens when $(I,C,A)$ represents a non-existent scenario (for example possible perturbations of the reality or extensive changes that may occur in the future). $D$ always corresponds to a scenario that has already happened, i.e. the base case. In such a case, $D$ still brings a necessary amount of information, but it will not be directly considered as a set of constraints for $S$. A necessary level of abstraction has to be brought to these observations: for example, if a pedestrian $j$ is observed at a certain point of time and space (this information $D_j$ is contained in $D$), it will not necessarily be the case in $S$; the information could be transformed into $\overline{D_j}=$“pedestrian $j$ takes bus $b$”. In $S$, $\overline{D_j}$ can bring pedestrian $j$ to have another trajectory if bus $b$ has a different departure time, in which case $D_j$ itself is not satisfied. By calling $I_D$, $C_D$ and $A_D$ the respective infrastructure, schedule and activities of the scenario where $D$ was observed, we can write: $$(I=I_D,C=C_D,A=A_D) \Rightarrow D \subset S(I,C,A,D)$$ \[ovD\] We call $\overline{D}$ the abstract information of $D$. This set is defined, when $D\not\subset S(I,C,A,D)$, such that it verifies: $$\overline{D} \subset S(I_D,C_D,A_D,D)$$ $$\overline{D} \subset S(I,C,A,D)$$ In cases where $D\neq S$, one or more estimated states may represent $D$. These estimated states will only differ in the parts where we lack information. Note that we are assuming that there always exists at least one state in the search space that can verify all our constraints.
This assumption is reasonable in the case where the search space is well-defined and D has enough information. The goal of the simulation is to fill in the exact amount of information needed and thus choose one final state S\*. We can’t assure that there will be a unique state to which the convergence can bring us. Since it would mean that we have perfectly described all human phenomena that come into effect in the station. In fact we only want a representative of what could happen in reality, just a consistent case that allows us to understand the main phenomena in the station. We will be able to converge to several different and completely consistent solutions. But if we don’t bring enough constraints, this space of possible final solutions will be oversized, we need to restrain it enough to have usable results. This is why constraints such as schedule dependence and behavioral models’ consistency will be added in these cases where we don’t have enough data. Inputs ------ Different kind of inputs will be considered. The four main ones are the infrastructure, schedule, activity list, and observed data: $(I, C,A, D)$. - Infrastructure $I$: Spatial description of the infrastructure. Mainly composed of a CAD design model of the studied station, available facilities, and the main entrances. - Schedule $C$: List of all transportation modes, with their arrival or departure time, location and capacity. - Considered activities $A$: Description of all activities available, inside as well as outside the station, for considered agent. It should contain the type, location and possibly the time at which it is available. In case of mobility hubs, the prime activity is to go from one mode to another, so in this paper we will model a unique activity for every pedestrian. However, the proposed methodology can easily be extended to include full activity chain modeling. - Observations $D$: These data can be of various form. The less precise are aggregated counts on different point of the station, for example the number of people entering/exiting it per unit minute of the scenario. More precise data can be incorporated if they are available, for example observation of the exact time each pedestrian entered the station (or cross a specific point); information on the origin and destination of each travel; or even some local trajectories observed within the field of view of cameras in the station directly. A detailed discussion on the types of data commonly available on pedestrians in public spaces can be found in [@Farooq2016]. The different behaviorial models used in the simulation are also inputs: different results can be obtained depending on the accuracy of each model and their consistency with reality. As for the observation $D$, the models can be considered as constraints. Indeed there are constraints on the kind of behavior agents can have. The final state $S$ will have to verify these constraints to be consistent. We can bring more constraints with more restrictive models: where possible agents movement are more precisely defined. Simulation Processes -------------------- Here we use the chosen behavioral models, and the inputs $(I,C,A)$ in order to simulate pedestrian agents moving in the station and obtain a description of state $S$. The three levels of simulation (strategic, tactical and operational) are respectively implemented with the activities generation, decision points choice, and assignment models. ### Activities Generation {#gene} The generation phase aims to estimate the demand. 
At the end of this process we obtain a part of $S$: the number of pedestrians, the activity chain of each one, their start and end locations, and their starting time: $$(A^i,l_s^i,l_e^i,t_{dep}^i)_i$$ Since we only consider one type of activity (work), the model we choose here for the generation is Location Choice Model (LCM) that assign a destination for each pedestrian. This model, coupled with an estimation of the occupancy for every transportation mode and a description of the variability upon time, is sufficient to generate the demand. Estimating demand of a new scenario exactly corresponds to the calibration of location choice model. Thanks to the information contained in $D$, our framework corrects the demand until it is consistent with all parts of the scenario by calibrating this model. We can then use it for other scenarios, where at least one of $(I,C,A)$ is changed. Such calibrated model contains exactly $\overline{D}$ (see Section \[ovD\]). The information in $D$ is absorbed in the form of a model to have the abstraction necessary to be generic for several different scenarios. ### Decision Points Choice The decision point level generates a global movement pattern for each pedestrian depending on their origin and destination. In this phase, the station is viewed as a simplified network representing all different paths. At each node or decision point, a pedestrian is confronted to a choice scenario. Pedestrian chooses one of the possible direction towards the destination. At this level there is no description of time, and the pedestrian is not considering other agents or obstacle. But several kind of information can be brought, from a simple estimation of the different transfer times on each link to real information of perception: signs, sized of corridor, light etc. In our case we are using a basic model for the sake of simplification in the simulation i.e. shortest path model, but random utility based choice models can be used. ### Assignment Finally the assignment uses all information generated in the previous phases to compute the exact time-dependent trajectory of each involved agent. It depends on their global routes defined by decision points; on their interactions with other agents and obstacles; and on different personal characteristics that may change their behavior in order to represent the diversity. Here we used the social force model [@helbing1995social] in the simulations. Convergence ----------- After the three previous processes have been executed, a state is obtained. But, like traditional simulations, the demand was generated before the state of the station was estimated. This demand could not have used some crucial information on the station’s load such as transfer times or occupancy, because they were not observable yet. However, this demand may be in total adequacy with the obtained results. The first step is to observe this adequacy or not and measure what is not coherent. Then this information can be used in a correction process that will correct the previous estimation of the demand, now that more information are available. This correction process close a loop that we can be represented as a Markov process. We will then use the Simulated Annealing algorithm, a special case of Metropolis-Hasting sampler [@ross2013], to make it converge to the desired state. ### Corrections {#twoscenarios} The correction process is required to correct the estimated demand. 
As we saw in Section \[gene\], a set of rules formulated in a model and applied in the generation step results in obtaining the estimated demand for a scenario. The correction process has two kind of possible actions that directly define the kind of simulation direction we are interested in: - *Calibration of the demand.* First application of the simulation is to calibrate our model used in the generation step. This simulation is based on the available observation $D$ and generates the exact demand to satisfy it. It corresponds to the creation and calibration of $\overline{D}$ that absorbs all the information of $D$. - *Simulation of unknown scenario.* Second application supposes that the first one has already found the equilibrium point in order to calibrate the demand generation step. It means that $\overline{D}$ has been created. This second application uses it to simulate a new scenario without the availability of $D$. The correction process for the first application is used to correct the location choice model: based on the difference observed between the current estimated state and $D$, it changes its rules $\overline{D}$ to try a different search point. Concretely, in our case, the probabilities of LCM are changed. After the observations are made, a correction is chosen depending on the lack or excess of people going into each kind of location. This correction can be focused on a specific location trying to increase or reduce the number of agent interested on it, or can be a mix of different changes. At each application of the correction process, since the choice is random any correction can be applied, but the probabilities are made set such that the correction has better chance to correct the observed difference in a right manner.\ The correction process for the second application is simpler: the estimated state is analyzed and the abstract conditions of $\overline{D}$ (that are gathered in the generation step) are tested. If some are not satisfied, the behavior of corresponding agents are changed with a probability according to the correction they need. For example if some agents miss their bus or train (that they should take according to the generation model) their starting time is corrected. The same principle can be applied if certain occupancy of a transportation mode needs to be reached by adding or deleting agents in the simulation. ### Markov Process #### Construction of the chain. The correction we just defined closes the loop. When applied on an estimated state, it gives us a new potential state with some probability of being chosen. This loop can be considered as the transition of a Markov chain (see figure \[markov\]). In order to use the powerful properties of Markov chains, we have to prove that the one we defined is one. This is done here by proving the two properties: *irreducibly* and *aperiodicity*.\ These properties are easily proved in the case of chains applied in finite space of state. This is definitely not the case here: the probabilities in LCM, that define the current position of the state in search space, are continuous (from 0 to 1). The space is not finite, neither discrete. In the case of such continuous space of state, it is common to use distribution probability (on which the chain is applied) in order to find the same properties than discrete spaces. In our case, it is not possible since the transition probabilities are not obtained with a formal function that we can easily integrate or apply in a region of states. 
Our transition is a function we can compute on only one state at a time. And since it involves a whole simulation, we cannot apply it to a large number of states at each transition. \[consideration\] In order to prove the Markov chain properties, we use another kind of consideration, i.e. the location choice model can be continuous, but it is always applied to a finite number of pedestrians. There are infinitely many scenarios and demands that can be generated thanks to the LCM, but for any particular scenario $(I,C,A)$, there is a maximum number of pedestrians that can be generated. In such a case, the proportions of the LCM may be continuous, but since they are applied to a finite number of agents, their effect is discrete. More precisely, around each LCM configuration of parameters, there is a small interval in which all other configurations of parameters have the same effect in a scenario. All these sets of parameters correspond, in fact, to the same state. Finally, we find that there is a finite number of effectively different LCMs in the particular scenario we are considering for the simulation. We have to prove that there exists $N$ for which, from any set of parameters $P$, we can reach any other one $P'$ in $N$ iterations with a non-zero probability. A set has a finite number of parameters in our LCM, so we can write $P=(p^j)_{j\in [1,J]}$, $P'=(p'^j)_{j\in [1,J]}$. In each iteration, at least one of the parameters is changed. The amplitude of this change has a maximum, let us call it $A$. $p^j\in [0,1]$ can reach any other parameter $p'^j$ in at most $|p^j-p'^j|/A {\leqslant}1/A$ steps. The probability to jump from $p^j$ to $p'^j$ in $1/A$ steps is not zero since there is a finite number of values that can be reached: we just need to reach a value close enough to $p'^j$. There are $J$ parameters to change, and each has a non-zero probability to be changed into any other value in $1/A$ steps. Moreover, each has a non-zero probability to be chosen and changed at each iteration. It means that from any state $P$, we can reach any other $P'$ in $J\times 1/A$ steps with a non-zero probability: $$N=\frac{J}{A}$$ Corollary \[consideration\] ensures that two “very close” sets of parameters can have the same effect in the generation process for one particular scenario. In fact it ensures that around one state $P$ there is a small open set (not empty) of parameter configurations that define the same state. Since a change at any iteration has a maximum amplitude of $A$ and a minimum of $0$, the change made to a parameter can be so small that the new configuration of parameters is still in the open set of the same state. For any state, at any iteration, there is a non-zero probability that we stay in the same state. The period of any state therefore cannot be higher than 1: our chain is aperiodic. ### Search Algorithm A Markov Chain Monte Carlo (MCMC) simulation process can be used to sample from the developed Markov chain. In particular, the Simulated Annealing algorithm is used to converge to an optimal state. The transition of the process is already defined; the algorithm needs an objective function and a temperature to decide whether or not each new state will be kept. ### Objective Function The objective function drives the search towards the optimal state. This state has to be consistent with all the inputs we had: $(I,C,A,D)$ and the behavioral models. Even though we could integrate the consistency of the behavioral models into the objective function by implementing a rating system, we do not have to in our case. This can easily be done in future work.
For example, it is possible to integrate the comfort (or security) appreciation of each pedestrian following a behavioral model that measures the perceived comfort of every agent. Integrating it in the objective function would lead to states where people tend to choose their travel by maximizing their evaluation of comfort.\ The behavioral models are already used within the simulation processes, together with $I$ and $A$. The elements that the objective function should integrate to assure their impact on the simulation are therefore $D$ and $C$: conformity to the external observations and consistency with the schedule, respectively. - Observations $D$: We compare these observations with the exact same information taken from the estimated state. $D$ is a set of values $(D_i)$ that correspond to a list of observation functions $(O_i)$ applied to the real-life station $S_{rl}$. These functions can be, for example, the number of pedestrians going through a particular door between two points in time. $$D=(D_i)=(O_i(S_{rl}))$$ We evaluate these functions on the current estimated state $S$ to obtain $(s_i)$, the values to be compared with $D$. A rating is computed using the residual sum of squares: $$OF_1(S,D)=RSS((s_i),(O_i(S_{rl})))=\sum_i (O_i(S)-D_i)^2$$ - Schedule $C$: the coherence with the schedule measures the embarking and disembarking pattern for each train or any other mode of transport. It uses the list of pedestrians $(p_i^{(m)})$ taking each mode $m$. This list is obtained from the state $S$ thanks to the list of origins and destinations of each pedestrian (time and space are considered) and the list of arrivals and departures of each mode (time and platform also) in $C$. We assign a pedestrian going to or coming from a platform to a consistent bus or train.\ From this list of pedestrians we compute the arrival pattern $f(p_i^{(m)})$ of each mode for the state $S$. This pattern is compared to our embarking and disembarking pattern model $C_m$ (see Figure \[UNLDM\]) and a rating is computed with the residual sum of squares to measure the consistency: $$OF_2(S,C)=\prod_m RSS(f(p_i^{(m)}),C_m)$$ Finally, we may be unable to assign some pedestrians to a mode of transportation (origin or departure) if they are created before a mode arrives in the station or if they arrive too late to take the mode corresponding to their platform. We strongly penalize states with such incoherent observations. Denoting by $Y(S,C)$ the number of such incoherent pedestrians in state $S$ with the schedule $C$, we have: $$OF_3(S,C)=e^{-Y(S,C)}$$ The relative importance of the three functions can be set with two parameters $\alpha$ and $\beta$. We obtain the objective function: $$OF(S,C,D)=OF_1(S,D) \; OF_2(S,C)^\alpha \; OF_3(S,C)^\beta$$ ![\[UNLDM\]Unloading model for trains: the bottleneck model is used due to the form of the connecting stairs between platforms and main hall.](goulotCurve.png "fig:") ![\[UNLDM\]Unloading model for trains: the bottleneck model is used due to the form of the connecting stairs between platforms and main hall.](goulotMM.png "fig:") Implementation ============== Since the general algorithm, the various behavioral models, and the types of data are separate entities, we implement them in a way that each component can be independently and easily plugged in or replaced. We use an object-oriented paradigm and implement the framework in the Java programming language. The implementation is available upon direct request to the corresponding author. Figure \[imple\] shows the UML diagram of the framework. A minimal sketch of the resulting search loop is given below.
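To make the search loop concrete, the listing below gives a minimal, illustrative sketch in Java (the implementation language mentioned above). All type and method names (`Simulator`, `DemandParameters`, `perturb`, ...) are hypothetical placeholders rather than the actual API of our code or of MassMotion, and the example energy only mirrors the three terms of the objective function up to monotonic transformations, with assumed weights.

```java
import java.util.Random;

/** Illustrative simulated-annealing search over the demand parameters (all names are hypothetical). */
public final class DemandSearch {

    interface SimulationState {}
    interface DemandParameters { DemandParameters perturb(Random rng); }    // correction step
    interface Simulator { SimulationState run(DemandParameters demand); }   // generation + choices + assignment
    interface Objective { double energy(SimulationState state); }           // lower = better fit to D and C

    public static SimulationState search(Simulator simulator, Objective objective,
                                         DemandParameters initial, int iterations) {
        Random rng = new Random(42);
        DemandParameters current = initial;
        SimulationState currentState = simulator.run(current);
        double currentEnergy = objective.energy(currentState);
        SimulationState best = currentState;
        double bestEnergy = currentEnergy;
        double temperature = 1.0;

        for (int i = 0; i < iterations; i++) {
            DemandParameters candidate = current.perturb(rng);   // e.g. shift one LCM probability
            SimulationState state = simulator.run(candidate);    // re-run the whole simulation chain
            double energy = objective.energy(state);
            // Metropolis acceptance rule: always keep improvements, sometimes keep worse states.
            boolean accept = energy < currentEnergy
                    || rng.nextDouble() < Math.exp((currentEnergy - energy) / temperature);
            if (accept) {
                current = candidate;
                currentState = state;
                currentEnergy = energy;
                if (energy < bestEnergy) { best = state; bestEnergy = energy; }
            }
            temperature *= 0.99;                                 // geometric cooling schedule (assumed)
        }
        return best;
    }

    /** One possible energy combining the three terms of the objective function (weights are assumed). */
    static double exampleEnergy(double countResiduals, double scheduleResiduals, int incoherentPedestrians) {
        double alpha = 0.5, beta = 2.0;
        return Math.log(1 + countResiduals)                 // OF_1: fit to the sensor counts
                + alpha * Math.log(1 + scheduleResiduals)   // OF_2: fit to the (un)loading patterns
                + beta * incoherentPedestrians;             // OF_3: pedestrians that cannot be assigned to a mode
    }
}
```

Each evaluation of the energy requires one complete run of the three simulation processes on the full state (trajectories, portal counts, train loading patterns), which is why the number of iterations has to remain moderate and why the cooling schedule is kept simple.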
Different kinds of scenarios and types of states can be plugged into the corresponding classes. The implemented code can be used in many different scenarios and is able to accommodate various kinds of models. Please also note that commercial software called MassMotion by Oasis Software is used for running the *Assignment* process. ![ \[imple\]UML diagram of the framework.](UML) Case Study ========== As a case study we explored our framework on the Montréal Central Train Station, where 14 tracks are operated by several national and local railway companies. The station is also linked to two metro stations and 16 bus lines, encloses an active underground mall, and is directly connected to several buildings. It is an important part of the Montréal city centre: it is located downtown and is a central part of Montréal's Underground City, the largest indoor pedestrian network in the world. Presentation ------------ ### Simulation Setup We study a fixed part of the space and time of the scenario, i.e. we model the main hall of the central station (see Figure \[station\]). Pedestrians are able to enter and leave through different portals that model the entrances/exits at the boundaries of the station. These portals are the different corridors (1 to 8) arriving into the main hall of the station, as well as the stairs connecting the platforms just under the hall (RA to RG), where trains arrive. The time of day that we are interested in is when the station is the most crowded, i.e. the peak period. Since the afternoon peak period is more spread out, it is less intense (see Figure \[data\]); we have therefore chosen the morning peak period. Agent-based simulations are computationally very demanding and, because we are running several simulations in a single iteration, we began with a short window of time (i.e. the 15 minutes of highest demand) to minimize the computational time. Our simulation therefore concerns the period between 8:30am and 8:45am, the most crowded quarter of an hour, during which several trains arrive at and leave the station. ![\[station\] Representation of the station. Usable space is in light blue and obstacles in dark blue. Portals through which people can enter and leave the simulation are in green.](stationNumbered) The schedule $C$ gathers all train departures and arrivals of a normal day, including their capacity information. During the time window we are studying, several of them are unloading and others are taking passengers to the suburbs. ### Behavioral Models {#models} For this first simulation we selected basic behavioral models. Each of the three levels has to be implemented with one model: strategic, tactical and operational. At the strategic level, we should model the activity chain. Because we are simulating the morning peak hour, we assume that the main purpose of the trips is work. The only dimension to generate here is the location of this activity, which is why we use an activity location choice model. This is particularly consistent because the studied space (the hall of the central station) is small and does not host too many different activities. There are still some coffee shops and restaurants; in future simulations the model could integrate them and propose a full activity chain modeling.\ The tactical level, where we model decision points, is also impacted by the size of the station: when there are not too many different ways to go from one point to another, its importance is diminished. We choose the simplest model there is, the shortest path (a minimal sketch is given below).
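For completeness, here is a minimal sketch of this shortest-path computation on the decision-point graph; the graph representation and identifiers are hypothetical, and the real decision-point network of the station carries more information than plain walking distances.

```java
import java.util.*;

/** Illustrative Dijkstra search on the decision-point graph (structure and names are hypothetical). */
final class DecisionPointRouter {

    record Entry(int node, double dist) {}

    /** graph.get(a).get(b) is the walking distance (in metres) of the link between decision points a and b. */
    static List<Integer> shortestPath(Map<Integer, Map<Integer, Double>> graph, int origin, int destination) {
        Map<Integer, Double> dist = new HashMap<>();
        Map<Integer, Integer> previous = new HashMap<>();
        PriorityQueue<Entry> queue = new PriorityQueue<>(Comparator.comparingDouble(Entry::dist));
        dist.put(origin, 0.0);
        queue.add(new Entry(origin, 0.0));

        while (!queue.isEmpty()) {
            Entry current = queue.poll();
            if (current.node() == destination) break;
            if (current.dist() > dist.getOrDefault(current.node(), Double.MAX_VALUE)) continue; // stale entry
            for (Map.Entry<Integer, Double> link : graph.getOrDefault(current.node(), Map.of()).entrySet()) {
                double candidate = current.dist() + link.getValue();
                if (candidate < dist.getOrDefault(link.getKey(), Double.MAX_VALUE)) {
                    dist.put(link.getKey(), candidate);
                    previous.put(link.getKey(), current.node());
                    queue.add(new Entry(link.getKey(), candidate));
                }
            }
        }
        // Rebuild the sequence of decision points (returns only the destination if it is unreachable).
        LinkedList<Integer> path = new LinkedList<>();
        for (Integer node = destination; node != null; node = previous.get(node)) path.addFirst(node);
        return path;
    }
}
```

Link costs could equally be estimated transfer times or congestion-dependent walking times instead of pure distances, without changing the structure of the search.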
This choice is still particularly consistent: since everyone is going to work at that time, people mainly choose their trajectory so as to arrive as fast as possible. Finally, the operational level is very important in terms of realism. We used the social force model, which gives a good description of real behavior; the parameter values calibrated in [@sahaleh2012scenario] were used. ### Scenario Development As explained in Section \[twoscenarios\], two kinds of simulation are possible: the first uses observations $D$ of the existing situation so as to calibrate the demand generation model; the second uses the calibrated model to estimate the station's load in a scenario for which $D$ does not exist. For this case study we execute both kinds of simulation. First, thanks to the data we collected on the real scenario, we calibrate the behavioral models. Then they are used in a possible future scenario for which no data could be collected. Base Case Scenario ------------------ $I$, $C$ and $A$ have already been detailed, as well as the behavioral models. Only $D$ is needed to launch the simulation. ### Inputs We gathered observations by installing magnetic sensors on all entrances of the station's main hall, i.e. portals 1-8. Unfortunately, logistical issues barred us from measuring the flow at the access points to the platforms. We also did not have access to the occupancy data of the trains. The commercial sensors were provided by the manufacturer, Eco-Counter. Due to the data collection rate of these sensors, pedestrian counts were only recorded at 15-minute intervals during one typical day. Note that the pedestrian loading at the portals in the simulation was done at 1-minute intervals. For this purpose, the initial departures at entrance portals 1-8 were assigned based on a Poisson arrival process for every 15 minutes of sensor counts. The departures from portals connected to platforms were based on the arrival times of the trains and the unloading curve. ![\[data\]Data recorded on October 1, 2014, at portal 2. Pedestrians using the portal as an entrance are in green and those leaving through it are in blue.](data) ### Optimal Solution Search After 500 iterations, the simulation converged to an optimal solution. Figure \[c500\] shows the values of the objective function for each iteration and for the selected states. We can observe a gradual and steady progression towards search regions with better values, and thus the selected value constantly improved towards the optimal solution. The convergence is very slow: it took several hundred iterations to obtain an acceptable result. The reason is that, for this particular simulation, we begin with a particularly incoherent set of parameters for the location choice model. The goal here is to show the robustness of the method, i.e. that it converges, though slowly, to a proper solution. When the goal of a simulation is only to obtain consistent results, we can begin with a more coherent set of parameters, simply generated with the common sense of the analyst. The final solution is an estimation of the station's load with the trajectories of all pedestrians. For illustration purposes, a 3D representation of the pedestrian movement is shown in Figure \[screen\]. ![\[c500\]Left: $log(OF(S_i))$ for the state generated at each iteration $i$.
Right: $log(OF(S^*))$, the value of the objective function at step $i$, i.e. its value for the best known state.](curve2 "fig:") ![\[screen\]3D representation of a simulation. Two trains just arrived. The disembarking passengers mix with a continuous flow of pedestrians crossing the station.](screen) ### Validation In order to validate the results, we need real observations that have not been used in the convergence process. The problem is that the lack of data is exactly what we try to solve here. So, instead, we used real train occupancy information in order to validate the coherence with real life. Since people disembarking from trains can take several exits other than the main hall, we particularly compare the counts of pedestrians leaving through the ones we considered. In the simulation, we found an average of 760 people using these exits after the arrival of a train. In the real data we have an average of 850 people disembarking from the trains, which is higher. This difference can be explained by the fact that some exits from the platforms to the hall were not considered, so the flow in the simulation is limited to the principal exits only.\ In order to validate the convergence, we can also analyze to what extent the observed demand is satisfied by the solution. Figure \[table\] shows the fit of the optimal solution with the observed data. We can clearly see that most of the conditions are satisfied, with only one exception. According to the observations, more people should be leaving the hall through portal 4, but this error is less than 10 %.\ ![\[table\]Comparison between the real-life observations ($D_i$) and the same observations on the final state ($O_i(S_f)$) for the inflow and outflow of all portals of the station. The percentage of error is written when it is not 0%.](table2) Future Scenario --------------- After calibrating the simulation, we used it in a future scenario for which we did not have any real observations, but which is close enough to the base case. Thus the utilization of the base-case calibrated models is consistent. We simulated the same station, with the exact same facilities and people using it, but with an increase in the population by 50%. This is a possible and very realistic scenario if the infrastructure of the station is not updated in the near future. ### Inputs The inputs are almost the same: $I$, $A$ and $C$ are unchanged, as well as the behavioral models. Only $D$ is no longer used; the LCM is now simply used instead of being calibrated. The total number of agents involved in the station is multiplied by 1.5. ### Simulation Even if the LCM is now static, we still need to make the Markov Chain converge. The demand still has to find an equilibrium with the estimation of the station. For example, an estimation of the transfer time is used in the generation step so that, after using the LCM, a departure time is assigned to each pedestrian. Only after several iterations do these estimations become consistent with the scenario, so that the demand is generated in a consistent way. ### Results From the converged state, we can extract information on pedestrian trajectories over space and time. Figure \[flowR\] shows the principal paths used by pedestrians during the simulated time window. We can see which parts of the station are overcrowded and may present a risk of congestion. The results also provide information on each agent. For example, a criterion could be used to measure the safety or satisfaction of each agent.
The general OD matrix over time in the station can also be obtained from the pedestrian trajectories in the simulation. We represent it in Figure \[flowL\]. Each strip represents a flow from an origin (to which it is attached) to a destination, and its thickness is proportional to the number of agents using it. We can identify that the major flow is from portal 2 to portal 3. The two most loaded trains arrive at platforms B and G. Passengers arriving with the first one mainly go to portals 2 and 3, while those arriving with the second are oriented towards portals 5 and 7. Only one platform is considered as a destination by pedestrians, i.e. platform E. This is consistent, since the only train leaving during our simulation time window departs from this platform.\ The comparison between the base case and the possible future scenario gives a detailed picture of the evolution of the station. Figure \[density\] shows the densities over time in both scenarios. Densities are represented according to the standard [@fluin1971] and IATA (International Air Transport Association) level-of-service mappings. We can clearly see that an increase of the population in the station does not increase the measured densities linearly. With the 50% increase, the presence of higher densities explodes. Indeed, when serious congestion appears, pedestrians get blocked and stay longer in the station. This leads to even more pedestrians in the station and thus a higher danger of congestion; it is a vicious-circle phenomenon. We can also identify which parts of the station could require improvements: Figure \[congested\] shows the different intersections where high levels of congestion appear.\ ![\[flowR\] Densities of pedestrians on each path](flux) ![\[flowL\] Flow between each portal of the main hall.](circos) ![\[density\]Representation of densities over time in the station for base case (left) and future scenario (right).](densityLOS "fig:") ![\[density\]Representation of densities over time in the station for base case (left) and future scenario (right).](density150 "fig:") ![\[congested\]Spatial representation of densities for base case (right) and future scenario (left).](densities "fig:") ![\[congested\]Spatial representation of densities for base case (right) and future scenario (left).](densities150 "fig:") Discussion ---------- The first point that we would like to discuss is the 8.8% error with respect to the outflow observations at portal 4 (see Figure \[table\]). This error means that our model did not manage to send enough people to this portal. The Location Choice Model is responsible for this assignment. The error corresponds to a flaw of our model, which defined a type of attraction for each portal: city, metro, train, etc. The probability for pedestrians to choose one of these attractions was calibrated in the base scenario. Once an attraction was set for an agent, a destination was assigned by selecting the nearest portal that offers this attraction. The lack of people going to portal 4 means that the attraction we assigned to this portal put it in competition with other portals that were presumably closer to the major inflows. We can see in Figure \[flowL\] that pedestrians leaving through portal 4 were essentially coming from platform D, and the flow created by this platform is limited.\ The error means that the function of portal 4 was not properly assessed.
In order to improve the model, we can use random utility theory and define a utility function for the attractiveness of each portal based on its attributes. We could also simply extend the location choice model by adding a type of attraction just for portal 4. But we have to be careful with these options, since they introduce more parameters to calibrate in the model. The search algorithm will then face a more complex search space to explore, and more information will be needed for the algorithm to be able to determine an optimal solution.\
We can identify in Figure \[flowR\] another limitation of the implementation: some of the paths used are not consistent with reality. They cross an area of shops that is not attractive for pedestrians; in reality, people try to avoid it and mostly take the wider corridor just next to this area. This difference between the real observations and the simulation also comes from the model at the tactical level, i.e., the shortest path model. We observed that, in the simulation, pedestrians take the path through the shops because it is shorter, which is exactly what the model prescribes. But in real life, the choice of path is more complex than a simple shortest path choice. When it comes to choosing between the two directions, people tend to take the corridor because it seems more attractive. Such phenomena are not described by the model. In the future, we suggest the use of a random utility based decision points model, especially the dynamic mixed logit model, which fits this choice scenario very well. Such a model could describe human behaviors such as the impact of the perceived environment on the choice, the same person making successive decisions, and the correlation between these decisions. People may tend to be more attracted to bigger corridors that are indicated by direction signs and that present fewer obstacles on the sides (such as tables or shop advertising).\

Conclusion
==========

We presented an agent-based microsimulation framework for pedestrian movement in mobility hubs and public spaces. The problem is formulated as a Markov chain of activity generation, decision point choice, and assignment processes. Thanks to behavioral considerations of the demand and its dependence on public transit schedules, the resulting framework is truly dynamic and can compensate for the lack of complete observations. We propose an MCMC process that converges to an optimal solution depending on the behavioral models, infrastructure data, public transit schedules, and incomplete observed demand. As a result, the framework is able to predict the activities, location, start time, duration, and detailed trajectory of each individual pedestrian.\
A case study of Montréal Central Station has been implemented for the base case and a future scenario with a 50% increase in demand. The validation of the base case shows a good fit. We also observed several differences between the results and the data collected in the station. They were all explained by modeling choices: an overly simplistic description of the infrastructure and of the possible activities in the station; a calibrated location choice model that is not perfectly transferable; and a shortest path model at the tactical level that needs to be more representative of behavior and dynamic conditions. These are the dimensions where improvements can be made to the current implementation of the case study.
The general algorithm is implemented in such a way that these changes can be easily integrated.\
Finally, the proposed microsimulation framework has great potential and applicability. Once behaviorally richer models are implemented, the simulation will be able to show how the station would be affected if specific changes were made to the design or schedule, with a dynamic demand that effectively reacts to these changes. For example, a change in a train's departure time will force pedestrians to leave at a different time in order to keep their behavior coherent in the simulation. If the demand were not dynamic, these pedestrians' departure times could not be changed and we could observe absurd situations where the arrival time of a pedestrian is completely inconsistent with his/her train. Such a framework will be very useful for the network-level optimization of the schedules of various modes of transportation, in order to provide seamless connections between them according to the population's needs, while avoiding high densities that could lead to unstable situations.

Acknowledgements {#acknowledgements .unnumbered}
================

We would like to thank the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Fonds de recherche du Québec Nature et technologies (FRQNT) for funding this research. We would also like to thank the Société de transport de Montréal (STM), the Agence métropolitaine de transport (AMT), Oasis Software, and Eco-Counter for providing the critical support that made the research in this paper possible.

[^1]: Laboratory of Innovations in Transportation (LITrans), Department of Civil, Geotechnical, and Mining Engineering, Polytechnique Montréal, Montréal, Canada, Email: [alexis.pibrac@polymtl.ca]{}

[^2]: Laboratory of Innovations in Transportation (LITrans), Department of Civil, Geotechnical, and Mining Engineering, Polytechnique Montréal, Montréal, Canada, Email: [bilal.farooq@polymtl.ca]{}

[^3]: An extended abstract appeared in IATBR 2015
--- abstract: 'With the growing popularity of microblogging services such as Twitter in recent years, an increasing number of users are using these services in their daily lives. The huge volume of information generated by users raises new opportunities in various applications and areas. Inferring user interests plays a significant role in providing personalized recommendations on microblogging services, and also on third-party applications providing social logins via these services, especially in cold-start situations. In this survey, we review user modeling strategies with respect to inferring user interests from previous studies. To this end, we focus on four dimensions of inferring user interest profiles: (1) *data collection*, (2) *representation* of user interest profiles, (3) *construction and enhancement* of user interest profiles, and (4) the *evaluation* of the constructed profiles. Through this survey, we aim to provide an overview of state-of-the-art user modeling strategies for inferring user interest profiles on microblogging social networks with respect to the four dimensions. For each dimension, we review and summarize previous studies based on specified criteria. Finally, we discuss some challenges and opportunities for future work in this research domain.' author: - Guangyuan Piao - 'John G. Breslin' bibliography: - 'library.bib' date: 'Received: date / Accepted: date' title: 'Inferring User Interests in Microblogging Social Networks: A Survey' ---

Introduction {#intro}
============

Microblogging[^1] social networks such as Twitter[^2] and Facebook[^3] are being widely used in our daily lives. Twitter and Facebook have 328 million and 2 billion monthly active users[^4][^5], which shows the popularity of these services. The abundant information generated by users in OSNs creates new opportunities for inferring user interest profiles, which can be used for providing personalized recommendations to those users either on those OSNs or on third-party services allowing social login functionality[^6] from the same OSNs. Social login is a technology which allows visitors to a website to log in using their OSN accounts rather than having to register a new one[^7]. A recent survey showed that over 94% of 18-34 year olds have used social login via Twitter, Facebook, etc.[^8] With the continued widespread development of the social login functionality, inferring user interest profiles from their OSN activities plays a central role in many applications for providing personalized recommendations with the permission of those users, especially for cold-start users who have joined those services recently. In the literature, there have been many studies that focused on inferring user interest profiles with different purposes such as providing personalized recommendations with respect to news [@Abel2011g; @Gao:2011:ITU:2052138.2052335], research articles [@Bolting2015; @Nishioka:2016:PVT:2910896.2910898], and Points Of Interest (POI) [@Abel2012a]. Despite the popularity of inferring user interests in OSNs, there is a lack of an extensive review on user modeling strategies for inferring user interest profiles in OSNs. To our knowledge, only one related short survey [@Abdel-Hafez2013] has been formally published.
[@Abdel-Hafez2013] provided a general overview of user modeling in social media websites which includes all types of OSNs without focusing on a specific type. As a result, the details of user modeling techniques for microblogging websites were not presented in [@Abdel-Hafez2013]. For example, including OSNs such as Delicious[^9] and Flickr[^10] which are based on *folksonomies* (folks taxonomies) together with microblogging OSNs for a single survey presents some difficulties due to the volume of literature on *folksonomy*-based user modeling [e.g., @Hung2008; @Szomszor2008; @Abel2011c; @Mezghani:2012:UPM:2187980.2188230; @Carmagnola2008a to name a few]. In addition, the survey conducted by [@Abdel-Hafez2013] does not cover studies from recent years. In this survey, we focus in particular on user modeling strategies in microblogging OSNs in terms of several user modeling dimensions, and analyze over 50 studies including more recent ones (see Appendix \[appendix:works\] for details of the surveyed studies). There has been a varied set of terms used to denote inferring user interests in the literature, such as “user (interest) modeling/profiling/detection”, “inferring/modeling/predicting user interests”. User modeling/profiling, as a broad term, may refer to different meanings without a specific definition. A general definition of *user profiling* given by [@Zhou2012] is “the process of acquiring, extracting and representing the features of users”. Similarly, in [@Brusilovsky2007], the *user model* is defined in the context of adaptive systems as “a representation of information about an individual user that is essential for an adaptive system to provide the adaptation effect”. Based on a specific definition of what the *features* and *information* are in these definitions by [@Zhou2012] and [@Brusilovsky2007], the corresponding user models/profiles and the process of obtaining them might be different. [@Rich1979] along with [@Cohen1979] and [@Perrault1978], where the terms *user model* and *user modeling* can be traced back to, also pointed out the need for classifying your user model as it might refer to several different things without a proper definition. Three major dimensions were used in [@Rich1979] for classifying user models: - Are they models of a canonical user or are they models of individual users? - Are they constructed explicitly by the user themselves or are they abstracted by the system on the basis of the user’s behavior? - Do they contain short-term or long-term information? Explicit information denotes the information which requires direct input by users such as surveys or forms, which will impose an additional burden on the users. Figure \[explicit\] shows an example of collecting *explicit* information about user interests during sign up on Twitter for the first time. ![image](./explicit1.pdf){width="\textwidth"} Definition of User Modeling in This Survey ------------------------------------------ In the context of research on inferring user interests on OSNs, most studies have focused on exploiting *implicit* information such as the posts of users in order to infer user interest profiles. Based on the classification criteria from [@Rich1979], user models discussed in this survey are about individual users constructed implicitly based on their activities. For the third criterion used in [@Rich1979], there is no clear cut option as both short- and long-term information have been used in different user modeling strategies in the literature. 
In addition, user models can refer to various types of information relevant for each user in the domain of OSNs. For example, they might contain basic information such as age, gender, country, etc., or keywords that represent their interests. In this paper, we focus particularly on user models with respect to user interests. Although several terms such as “user model" and “user profile" have been used interchangeably in the literature, here we formally define these terms as follows: A *user model* is a (data) structure that is used to capture certain *characteristics* about an individual user, and a *user profile* is the actual representation in a given user model. The process of obtaining the user profile is called *user modeling*. Given this definition of a user model and the classification criteria from [@Rich1979], user model in this survey aims to capture user *interests* with respect to an *individual* user *implicitly* based on *long-term* or *short-term* knowledge via a user modeling strategy, to derive the interest profile of that user. Figure \[fig:2\] presents an overview of the modified user profile-based personalization process from [@Abdel-Hafez2013] and [@Gauch:2007:UPP:1768197.1768200]. We modified the process from [@Abdel-Hafez2013] in order to reflect different aspects of user modeling strategies proposed in previous studies in the context of OSNs in detail. For example, we focus on data collection from *user activities*, *social networks/communities* or *external data* of an OSN instead of *explicit* or *implicit* feedback as most previous studies have focused on exploiting *implicit* information for inferring user interests. The modified user profile-based personalization process consists of three main phases. The first step is collecting data which will be used for inferring user interests. Subsequently, user interest profiles are constructed based on the data collected. We use *primitive interests* [@Kapanipathi2014] to denote the interests directly extracted from the collected data. Those primitive interests can either be used as the final output of a profile constructor or can be further enhanced, e.g., based on background knowledge from Knowledge Bases (KBs) such as Wikipedia[^11]. The output of the profile constructor is user interest profiles represented based on a predefined representation of interest profiles, e.g., word-based user interest profiles. Finally, the constructed user profiles are evaluated, and can be used in specific applications such as recommender systems for personalized recommendations. ![image](./S3.pdf){width="\textwidth"} In this paper, we mainly discuss four dimensions of the user modeling process: (1) *data collection*, (2) *representation* of user interest profiles, (3) *profile construction and enhancement*, and (4) the *evaluation* of the constructed user interest profiles. In summary, the contribution of this paper is threefold. - First, we provide a detailed review of user modeling approaches on microblogging services in terms of the three phases in Figure \[fig:2\] with the following focuses: 1. *What information is used for inferring user interest profiles?* 2. *How are the user interest profiles represented?* 3. *How are the user interest profiles constructed?* 4. *How are the constructed user profiles evaluated?* - Second, we summarize the approaches with respect to these focuses based on specified criteria to be specified later on. 
- Finally, we discuss the challenges and opportunities based on the strengths and weaknesses of different approaches.

Table \[osns\]: Online Social Networks used for previous studies.

| **OSNs** (\# of studies) | **Examples** |
|---|---|
| Twitter (47) | [@Chen2010], [@Lu2012], [@Kapanipathi2014; @Kapanipathi2011], [@piao2016exploring; @Guangyuan2017; @Piao2017; @Piao2016b; @Piao2016d], [@Besel:2016:ISI:2851613.2851819; @Besel:2016:QSI:3015297.3015298], [@Abel2011g; @Abel2011e; @Abel2012a; @Abel2011d; @Abel:2013:TUM:2540128.2540558], [@Siehndel:2012:TUP:2887379.2887395], [@Michelson2010], [@Bhattacharya:2014:IUI:2645710.2645765], [@Orlandi2012], [@Hannon2012], [@Jiang2015], [@Budak2014], [@Faralli2015; @Faralli2017], [@Weng:2010:TFT:1718487.1718520], [@Zarrinkalam2015a; @Zarrinkalam2016], [@Narducci2013], [@Xu2011], [@GarciaEsparza:2013:CCT:2449396.2449402], [@Nishioka:2016:PVT:2910896.2910898; @Nishioka:2015:ITU:2809563.2809601], [@Gao:2011:ITU:2052138.2052335], [@Vu:2013:IMU:2505515.2507883], [@Phelan:2009:UTR:1639714.1639794], [@Penas2013], [@Sang:2015:PFT:2806416.2806470], [@Karatay2015a], [@Kanta2012], [@OBanion2012], [@Nechaev], [@Lim:2013:ICT:2491055.2491078], [@Bolting2015], [@AnilKumarTrikhaFattaneZarrinkalam], [@Spasojevic:2014:LLS:2623330.2623350], [@Jipmo2017] |
| Facebook (7) | [@Kang2016], [@Orlandi2012], [@Kapanipathi2011], [@Narducci2013], [@Bhargava:2015:UMU:2678025.2701365], [@Ahn:2012:IUI:2457524.2457681], [@Spasojevic:2014:LLS:2623330.2623350] |
| LinkedIn (2) | [@Kapanipathi2011], [@Spasojevic:2014:LLS:2623330.2623350] |
| Google+ (1) | [@Spasojevic:2014:LLS:2623330.2623350] |

Table \[osns\] provides a summary of OSNs used for the works discussed in this survey. As we can see from the table, Twitter has been widely used due to its popularity and its higher degree of openness. Other OSNs such as Facebook or LinkedIn[^12] need to gain the permission of users to access their data. Therefore, users have to be recruited for conducting an experiment, which results in fewer studies using these OSNs. In contrast to other studies, the study from Klout[^13], Inc. [@Spasojevic:2014:LLS:2623330.2623350], which is a social media platform that aggregates and analyzes data from multiple OSNs, leveraged all the OSNs listed in Table \[osns\]. As different design choices can be made for user modeling with different purposes, Table \[purpose\] provides an overview of the purpose of user modeling in each study. As we can see from the table, the majority of the previous studies have been conducted with the purpose of predicting user interests, followed by recommending different types of content such as news, URLs, publications, and tweets.
Table \[purpose\]: Purposes of user modeling in OSNs from previous studies.

| **Purpose** | **Examples** |
|---|---|
| Predicting user interests | [@Kapanipathi2014], [@Kang2016], [@Michelson2010], [@Budak2014], [@Bhattacharya:2014:IUI:2645710.2645765], [@Besel:2016:ISI:2851613.2851819; @Besel:2016:QSI:3015297.3015298], [@Orlandi2012], [@Narducci2013], [@Bhargava:2015:UMU:2678025.2701365], [@GarciaEsparza:2013:CCT:2449396.2449402], [@Vu:2013:IMU:2505515.2507883], [@Ahn:2012:IUI:2457524.2457681], [@Abel2011e], [@Zarrinkalam2016], [@Spasojevic:2014:LLS:2623330.2623350], [@Jipmo2017], [@Faralli2017], [@Jiang2015], [@Xu2011], [@Penas2013], [@Lim:2013:ICT:2491055.2491078] |
| News recommendations | [@Abel2011g], [@Gao:2011:ITU:2052138.2052335], [@Zarrinkalam2015a], [@Sang:2015:PFT:2806416.2806470], [@Kanta2012], [@OBanion2012] |
| URL recommendations | [@Chen2010], [@Abel2011d], [@piao2016exploring; @Piao2016a; @Guangyuan2017; @Piao2017; @Piao2016b; @Piao2016d] |
| Publication recommendations | [@Nishioka:2016:PVT:2910896.2910898], [@Bolting2015] |
| Tweet recommendations | [@Lu2012], [@Sang:2015:PFT:2806416.2806470], [@Karatay2015a], [@AnilKumarTrikhaFattaneZarrinkalam] |
| Researcher recommendations | [@Nishioka:2015:ITU:2809563.2809601] |
| POI recommendations | [@Abel2012a] |
| User recommendations and classifications | [@Faralli2015] |
| Concealing user interests | [@Nechaev] |

Table \[conceptual\_framework\]:

**Data Collection**

1. using user activities
2. using the social networks/communities of a user
3. using external data

**Representation of User Interest Profiles**

1. keyword profiles
2. concept profiles
3. multi-faceted profiles

**Construction and Enhancement of User Interest Profiles**

1.
profile construction with weighting schemes - heuristic approaches - probabilistic approaches 2. profile enhancement - leveraging hierarchical knowledge - leveraging graph-based knowledge - leveraging collective knowledge 3. temporal dynamics - constraint-based approaches - interest decay functions \ **Evaluation**\ [Table \[conceptual\_framework\] is a conceptual framework for discussing user modeling strategies proposed in the related work and to act as a “guide” to the rest of this survey.]{} The rest of this paper is organized as follows. In Section 2, we discuss what kind of information has been collected for inferring user interests. Section 3 introduces various representations of user interest profiles proposed in the literature. In Section 4, we review how user profiles have been constructed based on different dimensions such as considering the temporal dynamics of user interests. In Section 5, we discuss how those constructed user profiles have been evaluated in the literature. Finally, we conclude the paper with some discussions of opportunities and challenges with respect to user modeling on microblogging OSNs in Section 6. Data Collection =============== Overview -------- This section of the survey discusses the first stage of user modeling, which is the data collection. In the context of OSNs, there are various information sources for collecting data in order to infer user interest profiles such as user information including the tweets or profiles with respect to a user and information from that user’s social network. The information used for user modeling is important as it might directly affect later stages such as the representation and construction of user interest profiles, and the quality of final profiles. The discussion is carried out over the criteria of whether the information is collected from a *user’s activities* or the *social networks/communities* of that user from the target microblogging platform (where the target users come from) or *external data*. Given Twitter is the largest microblogging social networking platform and is the most used OSNs in the literature as depicted in Table \[osns\], here we mainly focus on inferring user interest profiles on Twitter. ### Using user activities A straightforward way of inferring user interests for a target user is leveraging information from the user’s activities in OSNs. Take Twitter as an example, a user can have different activities such as posting, re-tweeting, liking or replying to a tweet. Users can also describe themselves in their profiles or follow other people on Twitter which might reveal their interests. Therefore, we can leverage these user activities to infer user interests. This could be analyzing data from the posts, profiles or following activities of users. For instance, we can assume that a user is interested in `Microsoft` if the user mentions `Microsoft` frequently in the tweets or is following the Twitter account `@Microsoft`. However, inferring user interests from their activities such as posting tweets or re-tweeting requires users to be active, which is not always the case. For example, [@Gong2015] reported that a significant portion of Twitter users are *passive ones* who keep following other users in order to consume information on Twitter but who do not generate any content. 
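As a minimal sketch of this idea, the snippet below simply counts how often candidate topic terms (or account handles) are mentioned in a user's tweets and treats frequent mentions as evidence of interest; the tweets and the candidate list are made up for illustration, and a real system would of course use richer keyword or entity extraction.

```python
from collections import Counter
import re

def mention_counts(tweets, candidate_topics):
    """Count case-insensitive occurrences of candidate topic terms in a user's tweets."""
    counts = Counter({topic: 0 for topic in candidate_topics})
    for tweet in tweets:
        # '@Microsoft' and '#Build' become plain tokens ('microsoft', 'build')
        tokens = re.findall(r"\w+", tweet.lower())
        for topic in candidate_topics:
            counts[topic] += tokens.count(topic.lower())
    return counts

tweets = ["Microsoft announces a new Surface laptop",
          "Great keynote by @Microsoft today #Build",
          "Looking forward to the weekend"]
print(mention_counts(tweets, ["Microsoft", "Apple"]))
# Counter({'Microsoft': 2, 'Apple': 0})
```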
### Using the social networks/communities of a user Leveraging information from the social networks/communities of a user can be useful to infer user interest profiles, especially for *passive users* who have little activity but who keep following other users to receive information. In this case, the generated content such as the posts and the profiles of users in a user’s social network can be used for inferring that user’s interests. For example, if many followees of a user post tweets with respect to `Microsoft` frequently or belong to a common community related to `Microsoft`, we can assume that the user is interested in `Microsoft` as well. ### Using external data The ideal length of a post on any OSN ranges between 60 to 140 characters for better user engagement[^14]. Analyzing microblogging services such as Twitter is challenging due to their nature of generating short, noisy texts. Understanding those short messages plays a key role in user modeling in microblogging services. To this end, previous studies have investigated leveraging external data such as the content of embedded links/URLs in a tweet, in order to enrich the short text for a better understanding of it. [@Haewoon2010a] showed that most of the topics on Twitter are about news which could also be found in mainstream news sites. In this regard, some researchers have proposed linking microblogs to news articles and exploring the content of news articles in order to understand short texts in microblogging services better. Review ------ ### Using user activities {#Using information inside the platform} The posts generated by users are the most common source of information for inferring user interests. Take Twitter as an example, the tweets or retweets of users provide a great amount of data that might implicitly indicate what kinds of topics a user might be interested in. Therefore, using the post streams of target users for inferring user interest profiles has been widely studied in the literature regardless of the different manners for how user interests are represented. For instance, [@Kapanipathi2014] extracted Wikipedia entities from the tweet streams of users while [@Chen2010] extracted keywords from them. Inferring user interests based on users’ posts requires users to be active, i.e., continuously generating content. On the one hand, there is an increasing number of users leveraging OSNs to seek the information they need, e.g., one in three Web users look for medical information, and over half of surveyed users consume news in OSNs[^15] [@Sheth2016]. On the other hand, there is also a rise of passive users in OSNs. For example, two out of five Facebook users only browse information without active participation within the platform[^16] [@Besel:2016:ISI:2851613.2851819], and [@Gong2015] reported that a significant portion of Twitter users are *passive ones* who consume information on Twitter without generating any content. Therefore, it is also important to infer user interest profiles for those *passive users* in OSNs. Some studies pointed out that exploring posts for inferring user interests is computationally ineffective and unstable due to the changing interests of users [@Besel:2016:QSI:3015297.3015298; @Faralli2015; @Faralli2017; @Besel:2016:ISI:2851613.2851819; @Nechaev]. Instead of analyzing posts to infer user interests, these studies proposed using the *followeeship* information of users, which can infer more stable user interest profiles as the relationships of common users tend to be stable [@Myers2014]. 
In this line of work, *topical followees* that can be mapped to Wikipedia entities often need to be identified, e.g., identifying the followee account `@messi10stats` on Twitter as `wiki:Lionel_Messi`. One of the problems with these approaches based on topical followees is that only a small portion of users’ followees are topical ones. The authors from [@Faralli2015] and [@Guangyuan2017] both showed that, on average, only 12.7% and 10% of followees of users in their datasets can be linked to Wikipedia entities. Therefore, a lot of information from followees that do not have corresponding Wikipedia entities is missed. For example, based on the topical-followees approach we cannot infer any interests for a user who is following `@Alice` who has a biography as *“User Modeling and Recommender Systems researcher”*.\ \ **Pros and cons.** Analyzing user activities for inferring user interests collects data from users themselves which can reflect their interests better compared to inferring from their social networks which will be discussed later. However, it requires users actively generate content in order to infer their interests from their generated content such as tweets, retweets, and likes on Twitter. Although leveraging the *topical-followees* approach can be used for inferring user interests for passive users, the usage of followees’ information is limited. ### Using the social networks/communities of a user To cope with some problems such as inferring user interest profiles for passive users, information from social networks such as tweets from followees or followers or posts from Facebook friends can be utilized for inferring user interests for *passive users* as well as *active ones*. All aforementioned activities used for inferring a user’s interests can be analyzed with respect to a user’s social network as well for inferring that user’s interests. For instance, [@Chen2010] and [@Budak2014] explored the tweets of target users and their followees to infer user interests. Although using posts generated by users is of great potential for mining user interests, it also faces some challenges due to the short and noisy nature of microblogs. Compared to the aforementioned topical-followees approach, information from the social networks of users such as their followees can provide much more information. Returning to the example of inferring user interests for a user who is following `@Alice` in the previous subsection, we can infer this user is interested in `User Modeling` and `Recommender Systems` based on the biography of `@Alice` - *“User Modeling and Recommender Systems researcher”*. In [@Guangyuan2017], the authors proposed leveraging *biographies* of followees to extract entities instead of mapping followees to Wikipedia entities, and showed the improvement of inferred user interest profiles in the context of URL recommendations. *List membership*, which is a kind of “tagging” feature on Twitter, has been explored as well. A list membership is a topical list or community which can be generated by any user on Twitter, and the creator of the list can freely add other users to the topical list. For instance, a user `@Bob` might create a topical list named “Java” and add his followees who have been frequently tweeting about news on this topic. Therefore, if a user `@Alice` is following users who have been added into many topical lists related to the topic `Java`, it might suggest that `@Alice` is interested in this topic as well. 
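A minimal sketch of the list-membership idea just illustrated is given below; the followees' list names are toy data standing in for what would normally be retrieved from the Twitter API, and simple string counting stands in for a proper topic mapping.

```python
from collections import Counter

def interests_from_followee_lists(followee_list_memberships):
    """Aggregate the names of the Twitter lists that a user's followees were added to.

    `followee_list_memberships` maps a followee handle to the names of the lists
    containing that followee; frequent list names are taken as evidence of the
    target user's interests.
    """
    profile = Counter()
    for followee, list_names in followee_list_memberships.items():
        profile.update(name.lower() for name in list_names)
    return profile

# Toy data in the spirit of the @Alice / "Java" example above.
memberships = {
    "@dev_news":  ["Java", "software", "java-people"],
    "@jvm_guru":  ["Java", "programming"],
    "@chef_anna": ["cooking"],
}
print(interests_from_followee_lists(memberships).most_common(2))
# e.g. [('java', 2), ('software', 1)]
```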
[@Kim2010a] studied the usage of Twitter lists and confirmed that lists can serve as good groupings of Twitter users with respect to their characteristics based on a user study. Based on the study, the authors also suggested that the Twitter list can be a valuable information source in many application domains including recommendations. In this regard, several studies have exploited list memberships of followees to infer user interest profiles [@Hannon2012; @Bhattacharya:2014:IUI:2645710.2645765; @Piao2017]. User interests might be following global trends in some trends-aware applications such as news recommendations. To investigate it, [@Gao:2011:ITU:2052138.2052335] proposed interweaving global trends and personal user interests for user modeling. In addition to leveraging the tweets of a target user for inferring user interests, the authors constructed a trend profile based on all tweets in the dataset in a certain time period. Afterwards, the final user interest profile was built by combining the two profiles. The results showed that combined user interest profiles can improve the performance of news recommendations while the first profile based on personal tweets plays a more significant role in the combination.\ \ **Pros and cons.** On the one hand, a lot of data can be collected from the social networks of users, which is useful in the case of when inferring user interest profiles for passive users who do not generate much content but who keep following other users. On the other hand, it is difficult to distinguish the activities of a user’s followees that are relevant to the interests of that user. For example, the followees of a user can tweet a wide range of topics that they are interested in, and the user is not always interested in all those topics. ### Using external data One of the challenges of inferring user interests from OSNs is that the generated content is often short and noisy [@Bontcheva2014]. To better understand the short texts of microblogging services such as tweets, external information beyond the target platform has been explored on top of the information sources discussed in the previous subsections. For instance, [@Abel2011g; @Abel2011e; @Abel:2013:TUM:2540128.2540558] proposed linking tweets to news articles and extract the *primitive interests* of users based on their tweets as well as the content of related news articles. Several strategies were proposed in [@Abel2011e], which were later on developed as a Twitter-based User Modeling Service [TUMS, @Tao2012]. However, it requires maintaining up-to-date news streams from mainstream news providers such as CNN[^17] in order to link tweets to relevant news articles. Instead, [@Abel2011d] and [@Piao2016d] leveraged the content of the embedded URLs in tweets. [@Hannon2012] used a third-party service Listorious[^18], which is a service providing annotated tags of list memberships on Twitter, for inferring user interest profiles. Given a target user *u*, the authors construct *u*’s interest profile based on the tags of list memberships with respect to the user. With the popularity of different OSNs, users nowadays tend to have multiple OSN accounts across various platforms [@Liu2013b]. In this context, some of the previous studies have investigated exploiting user interest profiles from other OSNs for cross-system user modeling. For instance, [@Orlandi2012] and [@Kapanipathi2011] presented user modeling applications that can aggregate different user interest profiles from various OSNs. 
However, the evaluation of aggregated user interest profiles has not been provided. [@Abel2012a] investigated cross-system user modeling with respect to POI, and showed that the aggregation of Twitter and Flickr user data yields the best performance in terms of POI recommendations compared to modeling users separately based on a single platform. The result is in line with another study by them which aggregated user interest profiles on social tagging systems such as Delicious[^19], StumbleUpon[^20], and Flickr [@Abel2013]. The work from Klout [@Spasojevic:2014:LLS:2623330.2623350], which allows their users to add multiple OSN identities on their services, showed many insights on aggregating user information from multiple information sources in different OSNs for inferring user interests. The authors pointed out that using user-generated content (UGC) alone leads to a high precision but low recall for topic recommendations, and therefore, other information sources such as the ones from followees are needed. They also observed that the overlap of a user’s interests from different OSNs is very small, which shows that a user may not reveal all his/her interests on any single OSN alone due to the different characteristics of OSNs. Therefore, aggregating users’ information in different OSNs leads to a better understanding of their interests [@Spasojevic:2014:LLS:2623330.2623350].\ \ **Pros and cons.** Leveraging external data such as the content of embedded URLs in a tweet can provide a better understanding of short microblogs, and exploring information from other OSNs of users can reveal their interests better compared to exploring a single OSN. Nevertheless, analyzing external data requires an additional effort and it is not always available. In addition, external data can also have irrelevant content with respect to user interests and might introduce some noise. Summary and discussion ---------------------- In this section, we reviewed different information sources that have been used for collecting data in order to infer user interest profiles. Table \[datacollection\] summarizes information sources used for inferring user interest profiles in the literature. As we can see from Table \[datacollection\], user activities have been used widely for inferring user interest profiles in microblogging social networks in previous studies. Although there have been many information sources used for inferring user interests, the comparison of different data sources for inferring user interest profiles has been less explored. Some approaches have utilized different aspects of information of followees such as *topical followees, biographies*, or *list memberships* [e.g., @Besel:2016:ISI:2851613.2851819; @Besel:2016:QSI:3015297.3015298; @Hannon2012; @Bhattacharya:2014:IUI:2645710.2645765; @Guangyuan2017]. However, it has not been clearly shown in these studies if these approaches perform better than exploiting users’ posts. The usefulness of user interest profiles built from various information sources might be different depending on different applications. For instance, [@Chen2010] showed that user interest profiles based on the user’s own streams perform better than profiles based on followee streams in the context of URL recommendations on Twitter. However, those profiles based on followee streams might be more useful for recommending followees. In addition, combining different information sources have shown its efficiency in a few studies [e.g., @Abel2012a; @Piao2017]. 
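One simple way to combine such sources is a weighted aggregation of per-source interest profiles, sketched below; the interest weights and the per-source trust weights are illustrative assumptions, not values reported in the surveyed studies.

```python
def aggregate_profiles(profiles_by_source, source_weights):
    """Merge weighted interest profiles collected from several sources into one profile.

    `profiles_by_source` maps a source name to an {interest: weight} dictionary and
    `source_weights` encodes how much each source is trusted.
    """
    merged = {}
    for source, profile in profiles_by_source.items():
        trust = source_weights.get(source, 1.0)
        for interest, weight in profile.items():
            merged[interest] = merged.get(interest, 0.0) + trust * weight
    # normalise so that one user's interest weights sum to 1
    total = sum(merged.values()) or 1.0
    return {interest: weight / total for interest, weight in merged.items()}

profiles = {
    "twitter":  {"dbr:Apple_Inc.": 3.0, "dbr:Recommender_system": 1.0},
    "facebook": {"dbr:Apple_Inc.": 1.0, "dbr:Hiking": 2.0},
}
print(aggregate_profiles(profiles, {"twitter": 0.6, "facebook": 0.4}))
```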
However, how to combine different information sources for inferring user interests, and whether there is a synergistic effect on application performance by the combination might require more study. For instance, user interests extracted from different data sources can be either aggregated into a single user interest profile [e.g., @Orlandi2012; @Abel2012a] or remain as separate profiles [@Piao2017] to measure the preference score of a candidate item for recommendations. Also, combining different data sources has mainly been studied for aggregating user interests from multiple OSNs. Instead, combining different data sources inside the target platform might be useful for inferring user interests as well, e.g., combining extracted user interests from different information sources of followees and users. Representation of User Interest Profiles ======================================== Overview -------- In this section, we provide an overview of how user interest profiles have been represented in the different approaches. Here we first provide an overview of user representations for personalized information access that was introduced in [@Gauch2007], and *multi-faceted profiles* which have been proposed in several studies in the literature. We then carry out the review based on three different types of representations in the context of inferring user interest profiles in OSNs in the literature, which include (1) *keyword profiles*, (2) *concept profiles*, and (3) *multi-faceted profiles*. In [@Gauch2007], the authors defined three types of user representations for personalized information access: - keyword profiles; - concept profiles; - semantic network profiles. **Keyword profiles.** In this representation of user interest profiles, each *keyword* or a *group of keywords* can be used for representing a topic of interest. This approach was predominant in every adaptive information retrieval and filtering system and is still popular in these areas [@Brusilovsky2007]. When using each keyword for representing user interests, the importance of each word with respect to users can be measured using a defined weighting scheme such as TF$\cdot$IDF (Term Frequency $\cdot$ Inverse Document Frequency) from information retrieval [@Salton1986]. In the case of using groups of keywords for representing user interests, the user interest profiles can be represented as a probability distribution over some topics, and each topic is represented as a probability distribution over a number of words. The topics can be distilled using topic modeling approaches such as Latent Dirichlet Allocation (LDA) [@Blei2003], which is an unsupervised machine learning method to learn topics from a large set of documents. **Concept profiles.** Concept-based user profiles are represented as conceptual nodes (concepts) and their relationships, and the concepts usually come from a pre-existing knowledge base [@Gauch2007]. They can be useful for dealing with the problems that keyword profiles have. For example, WordNet [@Miller1995] groups related words together in concepts called *synsets*, which has been proved useful for dealing with *polysemy* in other domains. For example, [@Stefani] used WordNet synsets for representing user interests in order to provide personalized website access instead of using keywords as they are often not enough for describing someone’s interests. Another type of concept is *entities with URIs* (Uniform Resource Identifiers). 
For instance, this involves using `dbr:Apple_Inc.` to denote the company `Apple`, which is disambiguated based on the context of the word *apple* in a text such as tweet and linked to knowledge bases such as Wikipedia or DBpedia [@Auer2007]. DBpedia is the semantic representation of Wikipedia and it has become one of the most important and interlinked datasets on the Web of Data, which indicates a new generation of technologies responsible for the evolution of the current Web from a Web of interlinked documents to a Web of interlinked data [@Heath2011]. To facilitate reading, we use DBpedia concepts to denote concepts from Wikipedia or DBpedia. **Semantic network profiles.** This type of profile aims to address the polysemy problem of keyword-based profiles by using a weighted semantic network in which each node represents a specific word or a set of related words. This type of profile is similar to concept profiles in the sense of the representation of conceptual nodes and the relationships between them, despite the fact that the concepts in semantic network profiles are learned (modeled) as part of user profiles by collecting positive/negative feedback from users [@Gauch2007]. As most previous works have focused on implicitly constructing user interest profiles in microblogging services, this type of profile has not been used in the domain of user modeling in microblogging services. **Multi-faceted profiles.** Based on these representation strategies, user interest profiles can include different aspects of user interests such as interests inferred from their tweets, profiles or list memeberships. These different aspects of user interests can be combined to construct a single user interest profile or maintained separately as several user interest profiles for a target user. Although it is common to use a single representation with respect to a user interest profile, the *polyrepresentation theory* [@Ingwersen1994] based on a cognitive approach indicates that the overlaps between a variety of aspects or contexts with respect to a user within the information retrieval process can decrease the uncertainty and improve the performance of information retrieval. Based on this theory, [@White:2009:PUI:1571941.1572005] studied polyrepresentation of user interests in the context of a search engine. The authors combined five different aspects/contexts of a user for inferring user interests, and showed that polyrepresentation is viable for user interest modeling. Review ------ ### Keyword profiles Similar to other adaptive information retrieval and filtering systems, representing user interests using *keywords* or *groups of keywords* is popular in OSNs as well. For instance, [@Chen2010] and [@Bhattacharya:2014:IUI:2645710.2645765] represented user interest profiles by using vectors of weighted keywords from the tweets and the descriptions of list memberships of users, respectively. Despite the huge volume of information from UGC, extracting keywords from microblogs for inferring user interest profiles is challenging due to the nature of short and noisy messages [@Liao2012]. As an alternative approach, another special type of keyword such as *tags* and *hashtags*[^21] has been used for inferring user interest profiles. In contrast to the words mined from the short texts of microblogs, keywords from tags/hashtags might be more informative and categorical in nature. 
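For illustration, a hashtag-based keyword profile can be built with nothing more than a regular expression and a counter, as in the sketch below (the tweets are made up):

```python
from collections import Counter
import re

HASHTAG = re.compile(r"#(\w+)")

def hashtag_profile(tweets):
    """Build a simple keyword profile mapping each hashtag to its frequency."""
    profile = Counter()
    for tweet in tweets:
        profile.update(tag.lower() for tag in HASHTAG.findall(tweet))
    return profile

tweets = ["Watching the match tonight #Arsenal #PremierLeague",
          "What a goal! #arsenal",
          "New phone day #apple"]
print(hashtag_profile(tweets))
# Counter({'arsenal': 2, 'premierleague': 1, 'apple': 1})
```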
[@Abel2011g; @Abel2011d] investigated hashtag-based user interest profiles by extracting hashtags from the tweets of users, and [@Hannon2012] leveraged keywords from the tags of users’ list memberships for representing their interest profiles. Topics distilled from topic modeling approaches such as LDA are also popular for representing user interest profiles. A topic has associated words with their probabilities with respect to the topic. For example, an information technology-related topic can have some top associated words such as “google, twitter, apple, web”. [@Weng:2010:TFT:1718487.1718520] used LDA to distill 50 topics and represented each user as a probability distribution over these topics. In [@Abel2011e; @Abel2011g; @Abel:2013:TUM:2540128.2540558], the authors also used topics for representing user interests where those topics were extracted by ready-to-use NLP (Natural Language Processing) APIs such as OpenCalais[^22].\ \ **Pros and cons.** Keyword profiles are the simplest to build, and do not rely on external knowledge from a knowledge base. One of the drawbacks of the keyword-based user profiles is *polysemy*, i.e., a word may have multiple meanings which cannot be distinguished by using keyword-based representation. In addition, these keyword-based approaches lack semantic information and cannot capture relationships among these words, and the assumption of topic modeling approaches that a document has rich information is not the case for microblogs [@Zarrinkalam2015]. [@Spasojevic:2014:LLS:2623330.2623350] further pointed out that topic modeling approaches cannot provide a scalable solution for inferring topics for millions of users which include a great number of passive users. ### Concept profiles To address some problems of keyword-based approaches, researchers have proposed leveraging *concepts* from KBs such as DBpedia for representing user interests. One of the advantages of leveraging KBs is that we can exploit the background knowledge of these concepts to infer user interests which might not be captured if using keyword-based approaches. For instance, a big fan of the `Apple` company would be interested in any brand-new products from `Apple` even the names of these products have never been mentioned in the user’s primitive interests [@Lu2012]. Concepts from various types of KBs have been leveraged for different purposes of user modeling, such as the ones from simple concept taxonomies with respect to news [@Kang2016], domain-specific KBs such as STW[^23], ACM CCS, and Medical Subject Headings[^24] (MeSH) [@Nishioka:2016:PVT:2910896.2910898; @Nishioka:2015:ITU:2809563.2809601; @Bolting2015], and cross-domain KBs such as DBpedia [@Lu2012; @piao2016exploring; @Guangyuan2017; @Piao2017; @Faralli2015; @Piao2016b; @Piao2016d; @Abel2011g; @Abel2011d; @Abel2011e]. In the following, we discuss some details of the representation strategy using DBpedia concepts which have been the most widely used for representing user interest profiles. **Entity-based profiles.** This approach extracts entities from information sources such as a user’s tweets, and uses these entities to represent user interest profiles. Take the following real-word tweet as an example [@Michelson2010]:\ \ “*\#Arsenal winger Walcott: Becks is my England inspiration: http://tinyurl.com/37zyjsc*”,\ \ there are four entities such as `dbr:Arsenal_F.C.`, and `dbr:Theo_Walcott` within the tweet, which can be used for constructing entity-based user interest profiles. 
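A minimal sketch of this aggregation step is shown below. It assumes an upstream entity-linking step (for instance via one of the ready-to-use NLP APIs mentioned earlier) has already produced a list of DBpedia entities per tweet; the annotations here are toy data loosely based on the example tweet above, and the exact entity set is assumed for illustration.

```python
from collections import Counter

def entity_profile(annotated_tweets):
    """Aggregate DBpedia entities extracted from a user's tweets into a weighted profile.

    `annotated_tweets` contains one entity list per tweet, as produced by an upstream
    entity-linking step; the relative frequency of an entity is used as its weight.
    """
    counts = Counter()
    for entities in annotated_tweets:
        counts.update(entities)
    total = sum(counts.values())
    return {entity: count / total for entity, count in counts.items()}

annotated = [
    ["dbr:Arsenal_F.C.", "dbr:Theo_Walcott", "dbr:David_Beckham",
     "dbr:England_national_football_team"],
    ["dbr:Arsenal_F.C.", "dbr:Premier_League"],
]
print(entity_profile(annotated))  # dbr:Arsenal_F.C. receives the highest weight (2/6)
```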
However, this approach is difficult to infer more specific interests which might need to be represented by combining multiple related entities or interests that cannot be found in a knowledge base. To address this issue, some studies have proposed representing each topic of interest as a *conjunction of multiple entities*, which are correlated on Twitter in a certain timespan [@Zarrinkalam2015a; @Zarrinkalam2016]. These sets of entities for representing a topic of interest can be learned via unsupervised approaches in a similar manner to learning topics with topic modeling approaches for keyword-based profiles. **Category-based profiles.** An alternative approach is using DBpedia *categories*, which represents more general user interests compared to using DBpedia *entities*. Returning to the example in the previous paragraph, the categories of the mentioned entities in that tweet such as `dbr:Category:English_Football_League` can be used for representing the topic of interests instead of those entities. One can also choose the level or depth of categories in a KB for representing user interest profiles or use all categories related to primitive interests. The top-level DBpedia categories can refer to general ones such as `dbr:Category:Sports` and `dbr:Category:Health` compared to the categories in a lower level such as `dbr:Category:English_Football_League`. For example, [@Michelson2010] and [@Nechaev] used top-level categories to represent user interest profiles while other studies [@Faralli2017; @Kapanipathi2014; @Flati2014 etc.] used hierarchical categories to represent user interest profiles. Figure \[twixonomy\] shows an example of category-based representation of user interests based on extracted entities from followees’ account names, which is called *Twixonomy* [@Faralli2017]. ![image](./twixonomy.pdf){width="\textwidth"} **Hybrid representations.** Each aforementioned representation has its strengths and weaknesses. In terms of entity- or category-based representations, extracting entities with URIs is a fundamental step for constructing either *entity-* or *category-based* user interest profiles. However, the task of extracting entities is non-trivial [@Kapanipathi2014] due to the noisy, informal language of microblogs [@Ritter2011]. In addition, knowledge bases might be out-of-date for emerging concepts on microblogging services, and therefore cannot capture these concepts during the entity extraction process. To overcome the drawbacks of using a single interest format, *hybrid representations* based on various interest formats have been explored as well. Instead of using only entities or categories for representing user interests, hybrid approaches combine different interest formats for constructing user profiles [@piao2016exploring; @Guangyuan2017; @Piao2017; @Faralli2015; @Piao2016d; @Nishioka:2016:PVT:2910896.2910898; @OBanion2012]. For example, [@OBanion2012] used categories as well as entities to represent user interest profiles. [@Piao2016d; @Piao2016b] proposed a hybrid approach using both DBpedia entities and WordNet synsets for representing user interests in order to capture user interests that might be missed due to the problem with entity recognition in microblogs.\ \ **Pros and cons.** On the one hand, concept-based approaches present the semantics between concepts and can leverage background knowledge about concepts for propagating user interest profiles. 
On the other hand, these approaches rely on pre-existing or pre-constructed KBs which might be not always available in or lack of coverage with respect to some domains. ### Multi-faceted profiles Multi-faceted profiles model multiple aspects for a target user based on different information sources or using different representation strategies in order to derive a comprehensive view of that user. The assumption here is that different aspects of users may complement each other and improve the inferred user interest profiles. [@Hannon2012] proposed a multi-faceted user profile which includes user interests from target users, their followees, and followers. Figure \[multi\] shows an example from [@Hannon2012] for representing user interests, where user interests are represented based on the tags of list memberships of users, followees, or followers provided by a third-party service. The figure shows that user interests inferred from different aspects can complement each other and lead to a better understanding of a target user. However, they did not evaluate the effectiveness of multi-faceted profiles in the context of personalized recommendations and left it as a future work. ![image](./multi.jpg){width="\textwidth"} The authors in [@Lu2012] and [@Chen2010] both constructed two keyword-based user interest profiles for each user. In [@Chen2010], two keyword-based user interest profiles were built based on the tweets of users and those of their followees for recommending URLs on Twitter. The results in [@Chen2010] showed that using user interest profiles based on the tweets of users performs better than using those based on the tweets of their followees. [@Lu2012] proposed using DBpedia entities and the affinity of other users to construct two user interest profiles for recommending tweets on Twitter. For a given user, the first user profile was represented as a vector of DBpedia entities, which were extracted from the user’s tweets. Both of these studies did not investigate the synergistic effect of combining these two aspects compared to considering a single aspect of users. More recently, [@Piao2017] showed that leveraging concept-based profiles from the biographies and list memberships of followees can complement each other and improve the URL recommendation performance on Twitter.\ \ **Pros and cons.** Multi-faceted profiles provide a comprehensive view of a user with respect to his/her interests and can improve recommendation performance. On the other hand, multiple information sources have to be explored for constructing multi-faceted profiles. Summary and discussion ---------------------- In this section, we reviewed various ways of representing user interests such as using *keywords*, various types of *concepts*, and some multi-faceted approaches. Table \[representation\] shows a summary of different representations of user interests adopted by previous studies. Those different representations of user interests might work differently depending on the application where these user profiles are used. For example, we usually have to construct item profiles in the same way as constructing user interest profiles in order to measure the similarity between them for providing recommendations. The entity-based representation strategies for user interests might be appropriate for recommending items with long content, e.g., news or URL recommendations as the content of them is usually long. 
In contrast, these representation strategies might not work well for recommending items with short descriptions, such as tweets, due to the difficulty of extracting entities from them. For example, the low recall of entity extraction on Twitter has been reported in both [@Kapanipathi2014] and [@Piao2016d] using several state-of-the-art NLP APIs. In a recent study [@Manrique:2017:SDA:3106426.3109440], the authors also showed that no entities at all could be extracted from 30% of research article titles. Some hybrid approaches, such as combining word- and concept-based representations, might be useful in this case. In addition, different facets should be considered carefully when constructing multi-faceted profiles in the context of item recommendations. Each facet of a multi-faceted profile can have a different importance for the recommended items, and leveraging completely unrelated facets might introduce noise into the constructed profiles. For example, [@Piao2017] showed that different weights are required for different facets in order to achieve the best performance in URL recommendations on Twitter. [@Abel2013] showed that it is helpful to have sufficient overlap between different facets of multi-faceted profiles for tag recommendations in a cold-start setting. It is also worth noting that the structure of user interest profiles can be different even with the same user interest format. Taking a category-based user interest profile as an example, it can be a *vector*, a *taxonomy*, or a *graph*, depending on whether the hierarchical or more general relationships among categories are retained. Also, the final profile extracted from the same structure can differ. For instance, both user interest profiles proposed in [@Faralli2017] (see Figure \[twixonomy\]) and [@Kapanipathi2014] were represented as a *taxonomy* at first, but were used differently for the final representation of user interests. In [@Faralli2017], entities or categories at different levels were used separately as an interest vector for representing a user, e.g., using categories that were two hops away from the user’s primitive interests as the final interest profile. However, using a specific abstraction level of the category taxonomy for all users does not consider that different users might have different depths or expertise levels in terms of a topic of interest. In contrast, [@Kapanipathi2014] sorted all categories in the taxonomy of a user based on their weights for representing the user’s interest profile. The different usages of the category taxonomy indicate some opportunities and challenges. On the one hand, the taxonomy structure of user interests is flexible enough to extract different abstraction levels of user interests or an overview of them. On the other hand, it has not been investigated which type of user interest profile obtained from the taxonomy structure is better.

Construction and Enhancement of User Interest Profiles
=======================================================

Overview
--------

So far we have focused our discussion on collecting data from various sources for inferring user interests, and on different representations for interest profiles. In this section, we provide details on how user interest profiles of a certain representation can be constructed based on the collected data. The overview of the construction and enhancement of user interest profiles is organized along three criteria:

- profile construction with weighting schemes;

- profile enhancement;

- temporal dynamics of user interests.
Based on a defined representation of user interest profiles, a profile constructor aims to determine the weights of user interest formats such as words or concepts in user profiles with a certain *weighting scheme*. The weights of interest formats denote the importance of these interests with respect to a user. In Section \[Profile Construction\], we review different weighting schemes based on various information sources such as users’ posts or their followees. Primitive interest profiles, e.g., entity-based user profiles, can be further enhanced by using background knowledge from knowledge bases. For instance, this can be achieved by inferring category-based user interest profiles on top of the entities extracted from the collected data. Section \[Profile Enhancement\] describes the approaches leveraging knowledge bases for enhancing primitive interest profiles. User interests can change over time in OSNs. For instance, a user interest profile built during the last two weeks might be totally different from one built two years ago. In Section \[Temporal Dynamics of User Interests\], we look at whether or not the temporal dynamics of user interests have been considered when constructing user interest profiles, and if so, how they have been incorporated during the construction process.

Review
------

### Profile Construction with Weighting Schemes {#Profile Construction}

The output of a profile constructor is a primitive user interest profile represented by weighted interests based on a predefined representation. A *weighting scheme* is a function or process to determine the weights of user interests.

**Heuristic approaches.** A common and simple weighting scheme is using the frequency of an interest $i$ (e.g., a keyword or an entity) to denote the importance of $i$ with respect to a user *u*, which can be formulated as below when the data source is *u*’s posts:

$$TF_u(i) = frequency\mbox{ }of\mbox{ }i\mbox{ }in\mbox{ }u's\mbox{ }posts.$$

Despite its simplicity, this approach has been widely used in the literature, particularly in entity-based user interest representations [@Kapanipathi2014; @Abel2011e; @Tao2012]. Interests represented as concepts, such as entities extracted from tweets, might come with confidence scores, and these scores can be incorporated into a weighting scheme. For instance, [@Jiang2015] used TF with the confidence scores of extracted entities from tweets as their weighting scheme. One problem with TF is that common words or entities which appear frequently in many users’ interest profiles may not be important as user interests. TF$\cdot$IDF is another common weighting scheme to cope with this problem. The IDF score of $i$ with respect to a user *u* based on *u*’s tweets can be measured as below [@Chen2010]:

$$IDF_u(i) = log\left[\frac{\#\mbox{ }all\mbox{ }users}{\#\mbox{ }users\mbox{ }using\mbox{ }i\mbox{ }at\mbox{ }least\mbox{ }once}\right].$$

Instead of computing the IDF score of an interest over users, IDF has been applied in other ways as well. For example, [@Nishioka:2016:PVT:2910896.2910898] applied IDF with randomly retrieved tweets from the streaming API of Twitter, and [@Gao:2011:ITU:2052138.2052335] applied IDF to measure the specificity of an interest within a given period of time. It is worth noting that the IDF weighting can also be applied after the *profile enhancement* process [e.g., @Piao2016d; @Nishioka:2016:PVT:2910896.2910898]. More sophisticated approaches can be applied for weighting user interests.
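Before turning to these more sophisticated schemes, here is a minimal, self-contained Python sketch of the basic TF and TF$\cdot$IDF weighting defined above; it works on whitespace tokens for simplicity, whereas the entity-based studies cited above would apply the same counting to extracted entities. The user names and posts are toy data.

```python
import math
from collections import Counter

def tf(user_posts):
    """TF_u(i): frequency of interest i (here, a token) in u's posts."""
    return Counter(token for post in user_posts for token in post.split())

def idf(all_users_posts):
    """IDF(i) = log(#users / #users mentioning i at least once)."""
    n_users = len(all_users_posts)
    doc_freq = Counter()
    for posts in all_users_posts.values():
        doc_freq.update({t for post in posts for t in post.split()})
    return {t: math.log(n_users / df) for t, df in doc_freq.items()}

def tf_idf_profile(user, all_users_posts):
    """Weighted interest profile of `user` under the TF-IDF scheme."""
    idf_scores = idf(all_users_posts)
    return {t: f * idf_scores[t] for t, f in tf(all_users_posts[user]).items()}

posts = {
    "alice": ["liverpool won the match", "great match tonight"],
    "bob":   ["new deep learning paper", "great results"],
}
print(tf_idf_profile("alice", posts))
```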
In [@Vu:2013:IMU:2505515.2507883], the authors compared different weighting schemes such as TF$\cdot$IDF, TextRank [@mihalcea-tarau:2004:EMNLP], and TI-TextRank, which the authors proposed by combining TF$\cdot$IDF and TextRank. Based on a user study, the authors showed that TI-TextRank performs best for ranking keywords from the tweets of users. In the context of OSNs, specific approaches have to be devised for constructing user interest profiles by exploiting users’ social networks, such as their followees on Twitter [@Chen2010; @Lu2012]. To this end, several methods have been proposed. For example, [@Chen2010] first retrieved a set of *high-interest words* for followees as follows, in order to build a user profile based on followees’ tweets: First, keyword-based user interest profiles were created using the TF$\cdot$IDF weighting scheme based on the tweets of followees, which are called *self-profiles*. Next, for each *self-profile* of *u*’s followees, they picked all words that have been mentioned at least once, and selected the top 20% of words based on their occurrences. In addition, the words that are not in other followees’ profiles were removed. Subsequently, the weight of each word in the set of *high-interest words* was measured as below:

$$\begin{split} FTF_u(i) = \mbox{ }& \#\mbox{ }u's\mbox{ } followees\mbox{ } who\mbox{ } have\mbox{ } i\mbox{ }\\ \mbox{ }& as\mbox{ } one\mbox{ } of\mbox{ } their\mbox{ } high-interest\mbox{ } words. \end{split}$$

Approaches similar to $FTF_u(i)$ were adopted in [@Piao2017] and [@Bhattacharya:2014:IUI:2645710.2645765], but by exploring the list memberships of followees instead of their tweets for extracting user interests. An alternative approach for aggregating the weights of interests in the followees’ profiles is to normalize each followee’s profile and then aggregate those normalized weights for building user interest profiles [@Piao2017; @Spasojevic:2014:LLS:2623330.2623350]. In [@Piao2017], the authors showed that this simple alternative approach performs better than $FTF_u(i)$ for weighting entities extracted from the list memberships of followees when using inferred user interest profiles for URL recommendations on Twitter. These approaches assume that each followee is equally important when aggregating their interest profiles for building the user interest profile of a target user. However, some followees’ profiles can be more important than others with respect to the target user. In [@Karatay2015a], the authors incorporated the relative ranking scores within the social network into their weighting scheme to weight the entities of users.

**Probabilistic approaches.** The aforementioned approaches focus on interests such as entities appearing in users’ posts; however, not all entities related to a post explicitly appear in it. In this regard, some approaches extract interests such as entities by measuring the similarity between a post and an entity. For instance, [@Lu2012] and [@Narducci2013] used the Explicit Semantic Analysis (ESA) [@Gabrilovich] algorithm, which is designed to compute the similarity between texts, for obtaining the weights of entities for each tweet of a user. Those weights of entities were then aggregated for constructing entity-based primitive interests of users.
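ESA itself requires a Wikipedia-scale concept index, so the sketch below is only a crude stand-in for this kind of similarity-based entity weighting: it scores candidate entities for a tweet by token overlap (Jaccard similarity) between the tweet and a short textual description of each entity, and then aggregates the per-tweet scores into a primitive profile. The entity descriptions, threshold, and tweets are illustrative assumptions, not the actual ESA computation used in the cited studies.

```python
def jaccard(a, b):
    """Jaccard similarity between two token sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Short textual descriptions of candidate entities (stand-ins for the
# Wikipedia-based concept vectors that ESA would use).
entity_texts = {
    "Liverpool_F.C.": "liverpool football club premier league anfield",
    "Apple_Inc.": "apple iphone technology company cupertino",
}

def primitive_profile(tweets, threshold=0.05):
    """Aggregate per-tweet entity similarities into a primitive interest profile."""
    profile = {}
    for tweet in tweets:
        tweet_tokens = tweet.lower().split()
        for entity, text in entity_texts.items():
            score = jaccard(tweet_tokens, text.split())
            if score > threshold:  # keep only sufficiently similar entities
                profile[entity] = profile.get(entity, 0.0) + score
    return profile

print(primitive_profile(["liverpool looked great at anfield tonight",
                         "thinking about a new iphone"]))
```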
[@Ahn:2012:IUI:2457524.2457681] quantified the degree of an interest, i.e., a Facebook entity, based on two factors: (1) the familiarity with each social neighbor, and (2) the similarity between the topic distributions of a social content and an interest. *Social content* is the combined text of a post and the comments between users on it, and its topic distributions are obtained using LDA. The weights of user interests have also been learned in unsupervised ways in the literature. For instance, [@Weng:2010:TFT:1718487.1718520] treated the tweet history of each user as one big document, and used LDA to learn topic distributions for each user. [@AnilKumarTrikhaFattaneZarrinkalam] and [@Zarrinkalam2017] also used LDA to infer topic distributions for each user in time intervals, where a topic is a set of DBpedia entities. Similarly, user interest profiles were represented as topic vectors, where each topic is a set of temporally correlated entities on Twitter, in [@Zarrinkalam2015a]. To this end, an entity graph based on the temporal correlation between entities, as defined by the authors, was constructed, and the topics in a time interval were extracted using existing community detection algorithms such as the *Louvain* method [@Rotta:2011:MLS:1963190.1970376]. The Louvain method is a simple and efficient algorithm for community detection, and relies upon a heuristic for optimizing modularity, which quantifies the density of links inside communities as compared to links between communities. Subsequently, each topic $z$ was transformed into a set of weighted entities using the *degree centrality* of an entity in the topic (community). Finally, they obtained the weight of a topic based on the weight of an entity *c* with respect to the topic and the frequency of *c* in *u*’s tweets. [@Budak2014] proposed a probabilistic generative model to infer user interest profiles, which are represented as an interest probability distribution over ODP (Open Directory Project[^25]) categories. In their proposed approach, the authors considered three aspects: (1) the posts of a target user, (2) the activeness of the user, and (3) the influence of friends. They assumed that time is divided into fixed time steps, and transformed the problem into inferring the probability of a user being interested in each of the interests, given a social network that evolves over time, including posts and social network information. [@Sang:2015:PFT:2806416.2806470] also proposed a probabilistic framework for inferring user interest profiles. Differing from [@Budak2014], [@Sang:2015:PFT:2806416.2806470] assumed users have long- and short-term interest (topic) distributions. Long-term interests denote stable preferences of users, while short-term interests denote user preferences over short-term topics or events in OSNs. However, they did not consider users’ social networks. In contrast to the aforementioned approaches, which assume all tweets posted by users are related to their interests, [@Xu2011] proposed a modified author-topic model [@Rosen-Zvi:2004:AMA:1036843.1036902] for distinguishing interest-related and unrelated tweets when learning the topic distributions of users.

### Profile Enhancement {#Profile Enhancement}

One of the advantages of constructing primitive interest profiles using concepts such as entities is that they can be further enhanced with external knowledge to deliver the final interest profiles.
The approaches used in the literature for enhancing primitive user interests have mainly leveraged *hierarchical*, *graph-based*, or *collective* knowledge.

**Leveraging hierarchical knowledge**. One line of work for enhancing entity-based primitive interest profiles is to apply an adapted *spreading activation* [@Collins1975] function to a hierarchical knowledge base. For example, [@Kapanipathi2014] proposed representing user interest profiles as Wikipedia categories based on a hierarchical knowledge base, which is a refined Wikipedia category system built by the authors. The user interest profiles were then constructed using the hierarchical knowledge base in two steps. First, Wikipedia entities in users’ tweets were extracted as their primitive interests. Second, these entities were used as activated nodes for applying an adapted spreading activation function on the hierarchical knowledge base in order to infer weighted categories for representing user interest profiles. The spreading activation function proposed by [@Kapanipathi2014] can be applied in any case where a set of entities and a hierarchical knowledge base are available. Therefore, many studies that followed have adopted this function, but with different approaches for extracting entities or with different hierarchical knowledge bases [@Besel:2016:QSI:3015297.3015298; @Besel:2016:ISI:2851613.2851819; @Guangyuan2017; @Nishioka:2016:PVT:2910896.2910898; @Bolting2015]. For instance, [@Nishioka:2016:PVT:2910896.2910898] extracted entities and applied the spreading activation function on STW, which is a hierarchical knowledge base from the economics domain. [@Bolting2015] investigated several spreading activation functions, including the one proposed in [@Kapanipathi2014], with the ACM CCS concept taxonomy in the computer science domain. The results showed that using a basic spreading activation function provides the best user interest profiles compared to the other ones in the context of research article recommendations. [@Besel:2016:QSI:3015297.3015298; @Besel:2016:ISI:2851613.2851819] extracted entities by mapping followees’ Twitter accounts to Wikipedia entities, and used WiBi [@Flati2014] as their hierarchical knowledge base for applying the spreading activation function proposed in [@Kapanipathi2014]. Similarly, [@Faralli2015] also mapped followees’ Twitter accounts to Wikipedia entities, and used them as users’ primitive interests for propagation with WiBi. However, a simpler propagation strategy was adopted in [@Faralli2015]. In [@Faralli2017], the authors extended their previous work [@Faralli] and proposed a methodology to build *Twixonomy*, which is a Wikipedia category taxonomy. *Twixonomy* is built by using a graph pruning approach based on a variant of Edmonds’ optimal branching algorithm [@Edmonds]. The authors showed that the proposed approach can generate a more accurate taxonomy compared to the one proposed in [@Kapanipathi2014]. As we mentioned in Section \[Using information inside the platform\], one issue with these approaches mapping followees’ accounts to Wikipedia entities is that only a limited percentage of followees’ accounts can be mapped to corresponding entities. For example, [@Faralli2015] and [@Guangyuan2017] reported that only 12.7% and 10% of followees’ accounts could be mapped to Wikipedia entities, respectively.
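The adapted spreading activation functions discussed above differ in their details (e.g., the normalization and node priors used in [@Kapanipathi2014]); the Python sketch below only illustrates the core idea they share, namely propagating the weights of activated entities up a category hierarchy with a per-hop discount. The toy hierarchy, the discount factor, and the number of hops are illustrative assumptions rather than the settings of any particular study.

```python
from collections import defaultdict

# Toy entity -> parent-category hierarchy (a stand-in for a refined Wikipedia
# category system).
parents = {
    "Liverpool_F.C.": ["Category:Premier_League_clubs"],
    "Category:Premier_League_clubs": ["Category:Association_football"],
    "Category:Association_football": ["Category:Sports"],
}

def spread(activated_entities, hops=3, decay=0.5):
    """Propagate entity weights up the hierarchy, discounting by `decay` per hop."""
    scores = defaultdict(float, activated_entities)
    frontier = dict(activated_entities)
    for _ in range(hops):
        nxt = defaultdict(float)
        for node, weight in frontier.items():
            for parent in parents.get(node, []):
                nxt[parent] += weight * decay
        for category, weight in nxt.items():
            scores[category] += weight
        frontier = nxt
    # Keep only categories for the final category-based profile.
    return {n: s for n, s in scores.items() if n.startswith("Category:")}

# A single extracted entity as the user's primitive interest.
print(spread({"Liverpool_F.C.": 1.0}))
```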
To address this coverage issue, [@Guangyuan2017] considered the use of followees’ *biographies* for extracting entities, and applied two different propagation strategies: one is the spreading activation function from [@Kapanipathi2014], and the other is an interest propagation strategy exploring the DBpedia knowledge graph [@piao2016exploring], which will be discussed later on. Instead of using refined hierarchical knowledge from Wikipedia, some studies have explored other types of hierarchical knowledge bases as well. [@Kang2016] proposed mapping news categories to tweets for constructing user interest profiles. The authors leveraged news categories from two popular news portals in South Korea (Naver News[^26] and Nate News[^27]) to build their category taxonomy. This taxonomy consists of 8 main categories and 58 sub-categories, and each category contains all news articles belonging to it in the two news corpora. To assign categories to a tweet, each tweet and news category are first represented as a term vector where the weights of terms are calculated using TF$\cdot$IDF. As there might be a semantic gap between terms in social media and news portals, the authors leveraged Wikipedia to transform the term vectors of tweets and news categories into the same vector space. The top two news categories are assigned to each tweet based on the cosine similarity between their vectors, and the news categories of a user’s tweets are then aggregated to construct the final user interest profile. [@Jiang2015] leveraged external knowledge sources such as DBpedia, Freebase [@Bollacker2008], and Yago [@Suchanek2007a] for constructing a topic hierarchy tree, which is a hierarchical knowledge base consisting of over 1,000 topics distributed over 5 levels. However, the details of how the topic hierarchy tree was obtained were not discussed in their study. The topic hierarchy tree used in the Klout service is also bootstrapped using Freebase and Wikipedia; it consists of 3 levels with 15, around 700, and around 9,000 concepts in each level, respectively [@Spasojevic:2014:LLS:2623330.2623350]. In [@Bhargava:2015:UMU:2678025.2701365], the authors manually built a category taxonomy based on Facebook Page categories and the Yelp[^28] category list. The category taxonomy in [@Bhargava:2015:UMU:2678025.2701365] consists of three levels with 8, 58, and 137 categories in each level, respectively. The authors used features such as entities, hashtags, and document categories, which can be extracted from Facebook *likes* and UGC, as users’ primitive interests, and then measured the confidence of each concept in the category taxonomy based on these features using the Semantic Textual Similarity system [@Han2013].

**Leveraging graph-based knowledge**. Instead of leveraging hierarchical knowledge, many studies have leveraged graph-based knowledge for enhancing user profiles. For example, [@Michelson2010] exploited Wikipedia categories directly for propagating a user’s primitive interests. The authors summed the scores of a category which appeared at multiple depths in the category graph. Differing from exploring the categories of a specified depth [@Michelson2010], [@Siehndel:2012:TUP:2887379.2887395] represented user interest profiles using the 23 top-level categories of the root node `Category:Main_Topic_Classifications` in Wikipedia. The Wikipedia entities in users’ tweets were extracted as their *primitive interests*, and these entities were then propagated up to the 23 top-level categories with a discounting strategy during the propagation.
With the advent of large, cross-domain Knowledge Graphs (KGs) such as DBpedia, different approaches leveraging background knowledge from KGs have been investigated. A knowledge graph is a knowledge base which consists of an ontology and instances of the classes in the ontology [@Farber]. The difference between a hierarchical category taxonomy such as WiBi and a knowledge graph such as DBpedia is displayed in Figure \[fig:wibidbpedia\] [@Guangyuan2017]. As we can see from the figure, for an entity, DBpedia goes beyond categories and provides related entities via the entity’s properties/edges. Depending on the propagation strategy for the entities in a user’s primitive interests, different aspects of those entities, e.g., *related entities*, *categories*, or *classes*, can be leveraged for the propagation. For example, [@Penas2013] enriched categories in users’ primitive interests using similar categories defined by the `categorySameAs` relationship in DBpedia. [@Abel2012a] proposed using background knowledge from DBpedia for propagating user interest profiles with respect to POIs. The authors considered entities that were two hops away from a user’s primitive interests and that were related to places. However, this approach did not consider any discounting strategy for the weights of propagated user interests. In [@Orlandi2012], the authors leveraged DBpedia categories one hop away from the entities in a user’s primitive interests, using a discounting strategy for propagating user interests. Although [@Orlandi2012] leveraged DBpedia as the knowledge base instead of Wikipedia, they still exploited categories only, which means there is effectively no difference between using DBpedia and Wikipedia in this case. To investigate other aspects of DBpedia such as related entities and classes of primitive interests, [@piao2016exploring] studied three approaches: *category-*, *class-*, and *property-based* propagation strategies. This study found that exploiting categories and related entities via different properties of primitive interests provides the best performance compared to using the corresponding categories only, in the context of URL recommendations on Twitter. An alternative graph for propagating entity-based user interest profiles is the Wikipedia entity graph. Compared to the DBpedia graph, where the edges between two entities are predefined properties in an ontology, the edges in the Wikipedia entity graph denote mentions of other entities in a Wikipedia entity (article). [@Lu2012] exploited a Wikipedia entity graph to enhance entity-based primitive interests. Different from exploiting Wikipedia categories, the intuition behind this approach is that if a user is interested in `IPhone`, the user might be interested in other products from `Apple`, instead of being interested in other mobile phones in the same category such as `Smartphones`. To this end, the authors used the ESA algorithm to extract entities from the tweets of users as their primitive interests, and then expanded these entities using a random walk on the Wikipedia entity graph. In [@Jipmo2017], the authors assumed there is a set of interests $i \in I$, e.g., `Sports`, `Politics`, etc., for each of which the user modeling system needs to measure a corresponding weight.
After building a bag of entities based on the ones extracted from a user’s tweets, the relevance score of an interest $i$ is measured as below, which can be seen as a spreading activation approach with some constraints:

$$S_i^u = \sum_{a \in BOE_u} \frac{1}{\min\{dist(a, c) : c \in BOC_i\}}$$

where $BOE_u$ denotes the bag of entities extracted from $u$’s tweets, and $BOC_i$ denotes a set of categories containing the name of $i$ in their titles. For example, for an interest `sports`, $BOC_i$ consists of categories such as `Category:Sports by year`, `Category:Sports in France`, etc. $dist(a, c)$ refers to the length of the shortest directed path from $a$ to $c$ in the Wikipedia graph.

**Leveraging collective knowledge**. More recently, some studies proposed leveraging the collective knowledge contained in the large number of interest profiles of all users in a dataset, and enhancing a user profile with other related interests identified as frequent patterns in all profiles using frequent pattern mining (FPM). FPM was designed to find frequent patterns, i.e., itemsets or sets of items that frequently appear together in a transaction dataset. In the context of user modeling, previous studies have treated each user interest as an item, each interest profile as a transaction, and all user interest profiles as the transaction dataset [@Faralli2015; @AnilKumarTrikhaFattaneZarrinkalam]. [@AnilKumarTrikhaFattaneZarrinkalam] leveraged frequent pattern mining techniques to identify topic sets. Here, a topic set consists of topics that frequently appear together in user profiles. Afterwards, the other topics in the topic sets that contain topics from a user’s profile are added to that user’s profile as well. To take an example from [@AnilKumarTrikhaFattaneZarrinkalam], a topic set identified via FPM might consist of two topics $z_1$ and $z_2$, where $z_1=\{\texttt{Mixtape, Hip\_hop\_music, Rapping, Kanye\_West, Jay-Z, Remix}\}$ and $z_2=\{\texttt{Lady\_Gaga, Song, Album, Concert, Canadian\_Hot\_100}\}$. $z_1$ refers to the topic about hip hop music produced by the two American rappers `Jay-Z` and `Kanye_West`, while $z_2$ represents the topic about `Lady_Gaga`’s concert in Canada. As these two topics frequently appear together in user interest profiles, the users who are interested in $z_1$ might also be interested in $z_2$ even if $z_2$ is not in their primitive interests. In contrast to [@AnilKumarTrikhaFattaneZarrinkalam], [@Faralli2015] did not directly enhance user interest profiles with other interests that frequently occur together, but used FPM for user classification and recommendation. It is worth noting that both [@Faralli2015] and [@AnilKumarTrikhaFattaneZarrinkalam] used the FP-Growth algorithm [@Han2000] for frequent pattern mining in their studies.

### Temporal Dynamics of User Interests {#Temporal Dynamics of User Interests}

User interests in OSNs can change over time, and many studies have been conducted to investigate the temporal dynamics of user interests in OSNs. For example, [@Jiang2015] showed that the similarity of current user interest profiles with the profiles at the beginning of the observation period of their dataset is the lowest, while the similarity of current profiles with the ones built in the last month is the highest. Similarly, [@Abel2011g] showed that a user interest profile built in an earlier week differs more from the current profile compared to one built recently.
In order to incorporate the temporal dynamics of user interests into user modeling strategies, there are mainly two types of approaches: (1) *constraint-based* approaches, and (2) *interest decay functions*. **Constraint-based approaches.** Constraint-based approaches extract user interest profiles based on specified constraints, e.g., using a *temporal constraint* to build user interest profiles based on their tweets posted in the last two weeks or using an *item constraint* to construct user profiles based on the last 100 tweets of the users. For example, [@Abel2011g] investigated several temporal constraints such as *long-* and *short-term*, and *weekend* in their user modeling strategies on Twitter for a news recommender system. *Long-term* profiles extract user interests from entire historical tweets of users while *short-term* profiles extract user interests from tweets posted within the last two weeks. They showed that long-term entity-based profiles outperform short-term ones in the context of news recommendations. User interests can be different within different time frames such as during the week or on the weekends. The experimental results in [@Abel2011g] also showed that entity-based interest profiles based on their tweets posted on weekends can outperform long-term profiles for recommending news on weekends. Some interests of users such as professional interests are stable while other interests such as the ones related to a certain event can be temporary. A user modeling strategy can apply temporal dynamics selectively to different information sources based on their characteristics. This type of strategy has been adopted in practical user modeling systems such as the one in Klout [@Spasojevic:2014:LLS:2623330.2623350], in which a 90 day window is used for capturing the temporal dynamics of user interests for some temporal information sources, and an all-time window is used for more permanent sources such as professional interests. [@Nishioka:2016:PVT:2910896.2910898] compared both constraint-based approaches and interest decay functions for constructing user interest profiles on Twitter in the context of publication recommendations. Differing from the results in the domain of news [@Abel2011g], results from [@Nishioka:2016:PVT:2910896.2910898] showed that a constraint-based approach constructing user interest profiles within a certain period performs better than using an interest decay function in the context of publication recommendations. **Interest decay functions.** Constraint-based approaches include interests which meet predefined constraints, and exclude other interests completely. Instead of constructing user interest profiles in a certain period (e.g., short-term), or based on temporal patterns (e.g., weekends), interest decay functions aim at including all the interests of a user but decaying old ones. The intuition behind those interest decay functions is that a higher weight should be given to recent interests than old ones. A popular type of interest decay function applies exponential decay to user interests. For example, the interest decay function from [@Orlandi2012] is defined as follows: $$\label{eq:orlandi} x(t)=x_0 \cdot e^{-t/\beta}$$ \ Here, $x(t)$ is the decayed weight at time *t*, and $x_0$ denotes the initial weight (at time $t=0$). This interest decay function also has an initial time window (7 days), and the interests in the time window are not discounted. 
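A minimal Python sketch of an exponential interest decay of the form $x(t) = x_0 \cdot e^{-t/\beta}$ with an initial window in which weights are not discounted, as described above; the parameter values mirror the ones mentioned in the text, but the function itself is only an illustrative approximation of the original.

```python
import math

def decayed_weight(x0, age_days, beta=360.0, grace_days=7):
    """x(t) = x0 * exp(-t / beta); interests younger than `grace_days` keep x0."""
    if age_days <= grace_days:
        return x0
    return x0 * math.exp(-age_days / beta)

# How an interest with initial weight 1.0 fades with age (in days).
for age in (3, 30, 180, 720):
    print(age, round(decayed_weight(1.0, age), 3))
```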
The authors in [@Orlandi2012] set $\beta = 360$ days and $\beta = 120$ days for their experiments, and showed that using $\beta = 360$ days performs better than using $\beta = 120$ days in an evaluation based on a user study. We use `decay(Orlandi)` to denote this approach in this survey. A similar decay function was used in [@Bhargava:2015:UMU:2678025.2701365] and [@Nishioka:2016:PVT:2910896.2910898], where the weight at the last update was used instead of the initial weight [@Bhargava:2015:UMU:2678025.2701365]. In [@OBanion2012], the authors also used an exponential decay function: $x(t) = x_0 \cdot 0.9^d$, where *d* is the difference in days between the current date and the date that a concept was mentioned. [@Abel2011d] also proposed a time-sensitive interest decay function, which is denoted by `decay(Abel)` in this survey. The weight of an entity *e* with respect to a user *u* at a specific time is measured as below.

$$\label{eq:abel} w(e, time, T_{tweets, u, e}) = \sum_{t \in T_{tweets, u, e}} (1 - \frac{|time-time(t)|}{max_{time}-min_{time}})^d$$

where $T_{tweets, u, e}$ denotes the set of tweets mentioning *e* that have been posted by *u*, *time*(*t*) denotes the timestamp of a given tweet *t*, and $max_{time}$ and $min_{time}$ denote the highest (youngest) and lowest (oldest) timestamps of a tweet in $T_{tweets, u, e}$. In addition, the parameter *d* determines the influence of the temporal distance [$d=4$ in @Abel2011d]. In contrast to the aforementioned exponential decay functions, this approach incorporates the age of an entity *e* at the recommendation time, and the time span of *e* with respect to *u*. In order to compare different interest decay functions in the context of user modeling in OSNs, [@piao2016exploring] investigated three interest decay functions for constructing user interest profiles on Twitter, including `decay(Abel)` and `decay(Orlandi)`. The third one is a modified interest decay function from [@Ahmed2011], which was used in advertisement recommendations on web portals (namely, Yahoo![^29]). The modified interest decay function used in [@piao2016exploring] is defined as follows:

$$\label{eq:ahmed} w_{ik}^t = \mu_{2week}w_{ik}^{t, week} + \mu_{2month}w_{ik}^{t, month} + \mu_{all}w_{ik}^{t, all}$$

where $\mu_{2week} = \mu$, $\mu_{2month} = \mu^2$, and $\mu_{all} = \mu^3$, with $\mu = e^{-1}$. This decay function combines three levels of abstraction, where the decay of user interests in each abstraction is $\mu$ times that of the previous abstraction. We use `decay(Ahmed)` to denote this approach in this survey. [@piao2016exploring] conducted a comparative study of user interest profiles constructed based on the three aforementioned interest decay functions and profiles based on *short-* and *long-term* periods. Those interest profiles were then evaluated in the context of URL recommendations. The results showed that `decay(Ahmed)` and `decay(Orlandi)` have competitive performance in terms of URL recommendations, and perform better than `decay(Abel)` as well as the *short-* and *long-term* profiles which were constructed without any interest decay. In addition, the experimental results indicate that although the performance increases by giving a higher weight to recent user interests, it starts decreasing once the weight of recent interests is too high. That is, although applying a decay function that favours recent user interests increases the performance, the older history is still needed in order to achieve the best performance in the context of URL recommendations.
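For illustration, here is a small Python sketch of the time-sensitive `decay(Abel)` weighting defined above; timestamps are expressed in days and the example values are made up.

```python
def abel_weight(now, mention_times, d=4):
    """Time-sensitive weight of an entity e for user u, following the formula above.

    `mention_times` holds the timestamps (here in days) of u's tweets that
    mention e, and `now` is the time at which recommendations are generated.
    """
    lo, hi = min(mention_times), max(mention_times)
    span = (hi - lo) or 1.0  # avoid division by zero when e was mentioned only once
    return sum((1.0 - abs(now - t) / span) ** d for t in mention_times)

# An entity mentioned on days 1, 10 and 14, scored at day 14:
# recent mentions contribute much more than the old one.
print(abel_weight(now=14, mention_times=[1, 10, 14]))
```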
Instead of considering the temporal dynamics of user interests with respect to individual users, global trends in an OSN can be incorporated into a user modeling strategy. In [@Gao:2011:ITU:2052138.2052335], the authors combined user interests from tweets of a target user (user profiles) and of all users (trend profiles) for constructing user interest profiles. The TF weighting scheme is used for constructing user profiles. For trend profiles, they applied a time-sensitive TF$\cdot$IDF (t-TF$\cdot$IDF) weighting scheme to concepts: $$w_{t-TF \cdot IDF}(I_j, c) = w_{TF \cdot IDF}(I_j, c) \cdot (1-\hat{\sigma}(c))$$ \ where $w_{TF \cdot IDF}(I_j, c)$ denotes the TF$\cdot$IDF score of a concept *c* in a given time interval $I_j$, and $\hat{\sigma}(c)$ denotes the normalized standard deviation of timestamps of tweets that refer to *c*. [@Kanta2012] further incorporated location-aware trends into the trend-aware user modeling approach in [@Gao:2011:ITU:2052138.2052335] to improve the performance of inferred user interest profiles in the context of news recommendations. Summary and discussion ---------------------- This section reviewed a number of approaches for constructing and enhancing user interest profiles. Table \[construction\] summarizes the approaches discussed in this section in terms of the three dimensions: (1) weighting schemes for constructing primitive interests, (2) approaches for incorporating the temporal dynamics of user interests, and (3) profile enhancement methods. As we can see from the table, many studies have incorporated the temporal dynamics of user interests in their user modeling strategies. Among interest decay functions, exponential decay functions such as `decay(Orlandi)` have been adopted widely. When incorporating the temporal dynamics of user interests, it is important to choose constraint-based approaches or interest decay functions based on the purpose of user modeling. For instance, when using inferred user interest profiles for recommending items such as news or URLs in OSNs, interest decay functions perform better than constraint-based approaches such as short- and long-term profiles [@piao2016exploring]. However, the results from [@Nishioka:2016:PVT:2910896.2910898] indicate that a constraint-based approach based on a certain period for profiling outperforms the one applying exponential decay for building user profiles in the context of a publication recommender system. One possible explanation is that user interests change differently with respect to different domains. For example, user interests should be adapted to their recent interests for news or URL recommendations, however, user interests with respect to research may not. [@Jiang2015] also pointed out that users have two types of interests; (1) *stable interests* [which they call primary interests in @Jiang2015], and (2) secondary interests. The stable interests of a user are original preferences inherent to that user, such as programmers who like efficient algorithms or lawyers who like debate, etc. [@Jiang2015]. In contrast, secondary interests are temporary ones which closely follow hot topics or events in a specific timespan. This is in line with the user modeling strategy used in Klout [@Spasojevic:2014:LLS:2623330.2623350], which applies a short-term window for capturing user interests that are temporary and uses a long-term window for more stable user interests. Different types of knowledge from various knowledge bases have been leveraged for enhancing the primitive interests of users. 
The diversity of KBs and the different structures of hierarchical KBs also indicate the complexity of representing knowledge in KBs. Table \[topicTree\] summarizes the differences between hierarchical KBs used in the literature. For instance, the constructed Wikipedia category taxonomy in [@Kapanipathi2014] consists of 15 levels with 802,194 categories, while the topic hierarchy tree built by [@Jiang2015] consists of 5 levels with over 1,000 topics. The topic hierarchy tree used in Klout has 3 levels, which consist of 15 main categories, around 700 sub-categories, and around 9,000 entities [@Spasojevic:2014:LLS:2623330.2623350]. A concept taxonomy built manually by referring to external websites such as news portals or Facebook Page categories has less complexity compared to a taxonomy based on KBs such as Wikipedia. For example, the category taxonomy built based on news portals [@Kang2016] has 8 main categories and 58 sub-categories. The one built based on Facebook and Yelp categories [@Bhargava:2015:UMU:2678025.2701365] also has 8 and 58 categories for the top two levels, with an additional 137 categories in its third level. We can observe that the hierarchical knowledge bases used in practice, or built based on taxonomies used in practice, tend to have a small number of levels (2-5). Applying a spreading activation function, even the same one, to these different taxonomies might yield different results. There is a lack of comparison of different hierarchical knowledge bases and their effect in the context of inferring user interest profiles. Furthermore, although some studies investigated the comparison between using different KBs such as Wikipedia categories and the DBpedia graph, there has been no comparative study on exploiting the Wikipedia entity graph [@Lu2012], categories in other KBs such as ODP, and the DBpedia graph. In addition, despite the fact that different KBs might be useful in different domains [@Nguyen], enhancing user interests based on other KBs such as Wikidata [@Vrandecic2014] or BabelNet [@NavigliPonzetto:12aij] has not been fully explored.

  **Study**                                **\# Levels**   **\# Topics**   **Details**
  ---------------------------------------- --------------- --------------- ------------------------------------------------------
  [@Kapanipathi2014]                       15              802,194         N/A
  [@Jiang2015]                             5               $\sim$1,000     N/A
  [@Spasojevic:2014:LLS:2623330.2623350]   3               $\sim$10,000    15 $\rightarrow$ $\sim$700 $\rightarrow$ $\sim$9,000
  [@Kang2016]                              2               66              8 $\rightarrow$ 58
  [@Bhargava:2015:UMU:2678025.2701365]     3               203             8 $\rightarrow$ 58 $\rightarrow$ 137

Evaluation Approaches
=====================

Overview
--------

In this section, we describe the evaluation approaches used for assessing the different user interest profiles generated by different user modeling strategies in the literature. User modeling is one of the main building blocks in many adaptive systems such as recommender systems. Many previous studies on the evaluation of adaptive systems suggested that it is important to evaluate different blocks separately in order to identify the problems in the adaptive systems [@Paramythis2010; @Brusilovsky2001]. [@Gena2007] provided a list of methods for evaluating adaptive systems, some of which can be used for evaluating the quality of the user modeling component as well. These evaluation methods include (1) *questionnaires*, (2) *interviews*, and (3) *logging use*.

**Questionnaires.** Questionnaires consist of pre-defined questions, which can be in different styles such as scalar, multi-choice, or ranked [@Gena2007].
In our context, this approach can be used for collecting users’ explicit feedback about their interest profiles for evaluation. To this end, this approach requires recruiting users who agree to have interest profiles built from their OSN accounts for the experiment. At the end of the experiment, these users can provide feedback on the user interest profiles constructed by different user modeling strategies.

**Interviews.** The second approach is used to collect users’ opinions, experiences, preferences, and behavior motivations [@Gena2007] with respect to adaptive systems. Interviews can be used after building users’ interest profiles to gather their opinions, such as satisfaction and perceived accuracy, about the inferred user interest profiles. Compared to questionnaires, interviews are more flexible but more difficult to administer. Therefore, this method has not been exploited for evaluating user modeling strategies in the literature.

**Extrinsic evaluation (Logging use).** This approach uses the actions of users in the context of adaptive systems for evaluation, e.g., whether a user liked a recommended item in a recommender system. This can be considered an extrinsic way of evaluating user interest profiles in terms of the performance of the applications where these profiles are applied. For example, one common approach is using constructed user interest profiles as an input to a recommender system, and adopting some well-established evaluation metrics of recommender systems for measuring the quality of user interest profiles indirectly. Manual analysis is sometimes used together with other evaluation approaches. In this case, the authors present some examples of user interest profiles built for several users (e.g., some representative users on Twitter such as *Barack Obama*), and discuss the quality of the profiles with respect to these users.

Review
------

### Evaluation based on Questionnaires

A common approach for evaluating constructed user interest profiles is based on a user study with questionnaires. For example, [@Narducci2013] evaluated user interest profiles built for 51 users from Facebook and Twitter based on their feedback on two aspects, *transparency* and *serendipity*, using a 6-point discrete rating scale. The first aspect aims to evaluate to what extent the keywords in the profile reflect personal interests, and the second one aims to measure to what extent the profile contains unexpected interesting topics. Similarly, [@Kapanipathi2014] recruited 37 users and built category-based user interest profiles based on their tweets on Twitter. Afterwards, the 37 users provided explicit feedback, e.g., Yes/Maybe/No, with respect to the categories in those profiles. Similar approaches have been used in [@Bhattacharya:2014:IUI:2645710.2645765], [@Besel:2016:ISI:2851613.2851819; @Besel:2016:QSI:3015297.3015298], [@Budak2014], and [@Orlandi2012]. However, instead of recruiting volunteers for an experiment, the authors in [@Budak2014] first inferred user interest profiles for 500 randomly chosen users on Twitter, and emailed them using the email addresses in their profiles to get feedback about their inferred interests. Instead of using the feedback from target users on inferred user interest profiles, [@Kang2016] and [@Michelson2010] labeled user interests themselves or used recruited annotators. Explicit feedback can also be obtained in a system where user interest profiles can be modified by users.
For example, [@GarciaEsparza:2013:CCT:2449396.2449402] implemented a stream filtering system where users are represented based on 18 defined categories such as `Music` and `Sports`. For evaluation, the authors asked each participant to give explicit feedback on their profiles by deleting or adding categories that they felt were incorrect or missing. In contrast to obtaining explicit feedback on inferred user interest profiles, a user study can be conducted on the performance of a specific application where those inferred user interest profiles play an important role. For example, [@Chen2010] conducted a user study with respect to a URL recommender system on Twitter, which is based on the inferred user interest profiles. Therefore, instead of directly giving feedback on the constructed user interest profiles, the users participating in the study were given URL recommendations, and they marked each URL as one of their interests or not. Similarly, [@Nishioka:2016:PVT:2910896.2910898] obtained explicit feedback from users on publication recommendations based on their interest profiles. These user studies can also be considered as extrinsic evaluation, which we will discuss in the next section, as they are not evaluating user interest profiles directly.\ \ **Pros and cons.** Evaluation approaches based on the explicit feedback of profiled users with respect to their interest profiles would arguably be the most direct and accurate way for evaluating those profiles. However, this also requires recruiting volunteers and imposes an extra burden for users, and therefore limits the number of participants for evaluation [e.g., 37 users were recruited for evaluation in @Kapanipathi2014]. ### Extrinsic Evaluation To evaluate the quality of inferred user interest profiles without imposing an extra burden on users, offline evaluation in terms of the performance of a specific application has been used. In this case, user interest profiles are used as an input to an application such as a news recommender system where these profiles play an important role. Afterwards, different profiles created by different user modeling strategies are compared in terms of the recommendation performance using each profile. The recommendation performance can be evaluated by well-established evaluation metrics for recommender systems such as *mean reciprocal rank* (MRR) which denotes at which rank the first item relevant to the user occurs on average, *success at rank N* (S@N), which stands for the mean probability that a relevant item occurs within the top-N recommendations, and well-known *precision* and *recall*. For a complete list of evaluation metrics and their details we refer the reader to [@Bellogn] and [@Herlocker:2004:ECF:963770.963772] respectively. For instance, [@Abel2011g] evaluated three different user modeling strategies in terms of S@N and MRR in the context of news recommendations, and [@Spasojevic:2014:LLS:2623330.2623350] evaluated their user modeling strategy in terms of precision and recall in the context of topic recommendations on Klout. Similarly, [@Sang:2015:PFT:2806416.2806470] also evaluated user interest profiles in terms of news recommendations in addition to tweet recommendations. [@piao2016exploring; @Guangyuan2017; @Piao2017; @Piao2016b; @Piao2016d] evaluated different user modeling strategies in the context of URL recommendations on Twitter where the set of ground truth URLs is those shared by users on Twitter in the last two weeks. 
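As a minimal illustration of the ranking metrics mentioned above, the Python sketch below computes MRR and S@N from per-user recommendation lists and ground-truth items; the data values are toy examples rather than results from any of the cited studies.

```python
def mrr(ranked_lists, relevant):
    """Mean reciprocal rank of the first relevant item per user."""
    reciprocal_ranks = []
    for user, ranking in ranked_lists.items():
        hits = [rank for rank, item in enumerate(ranking, 1) if item in relevant[user]]
        reciprocal_ranks.append(1.0 / hits[0] if hits else 0.0)
    return sum(reciprocal_ranks) / len(reciprocal_ranks)

def success_at_n(ranked_lists, relevant, n=10):
    """S@N: fraction of users with at least one relevant item in the top-N."""
    hits = sum(
        1 for user, ranking in ranked_lists.items()
        if any(item in relevant[user] for item in ranking[:n])
    )
    return hits / len(ranked_lists)

# Toy recommendation lists and ground-truth items (e.g., URLs shared by users).
recs = {"u1": ["a", "b", "c"], "u2": ["d", "e", "f"]}
truth = {"u1": {"b"}, "u2": {"x"}}
print(mrr(recs, truth), success_at_n(recs, truth, n=3))
```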
In [@Faralli2015], the authors evaluated user interest profiles in terms of user classification and recommendation. For the classification task, the user interest profiles were used for classifying each user into the appropriate class, e.g., Starbucks fan. For the recommendation task, the authors evaluated the performance of leveraging different hierarchical levels of interests with respect to interest recommendations using itemset mining. In contrast to previous studies, which have focused on inferring user interest profiles, [@Nechaev] focused on users’ privacy and evaluated different followee-suggestion strategies for concealing user interests which can be inferred from users’ activities in OSNs by state-of-the-art user modeling strategies.

**Pros and cons.** Extrinsic evaluation provides an offline setting for evaluating inferred user interest profiles. Therefore, it facilitates the evaluation process of different user modeling strategies, as these strategies are evaluated based on a collected dataset (or logs). However, this approach does not directly evaluate the inferred user interest profiles, and lacks the opinions of users with respect to the inferred interest profiles.

There are other evaluation approaches used in some studies besides the aforementioned two methods. For example, [@Abel2011e] compared the number of distinct entities and topics in user interest profiles for evaluating the news-based enrichment of tweets. In [@Faralli2017], the authors ran two experiments to evaluate their approach of building interest taxonomies. First, they compared their approach against other approaches proposed for constructing user interest taxonomies, using gold standard taxonomies. Second, they provided samples of generated user interest profiles, and compared the Wikipedia categories inferred for several users based on different user modeling strategies. Similarly, [@Xu2011] evaluated their topic modeling approach by comparing it against other topic modeling methods in terms of *perplexity*, and then discussed some user interest profiles produced by different approaches. User interest profiles have also been used for specific applications such as followee, tweet, and news recommendations [@Weng:2010:TFT:1718487.1718520; @Chen2012b; @Hong:2013:CMM:2433396.2433467; @Phelan:2009:UTR:1639714.1639794], where the user modeling strategies were not evaluated or compared to other alternatives.
Summary and discussion
----------------------

  **Evaluation approach**                **Examples**
  -------------------------------------- ------------------------------------------------------------------------------------------
  Questionnaires (user studies)          [@Kapanipathi2014], [@Kang2016], [@Michelson2010], [@Budak2014], [@Bhattacharya:2014:IUI:2645710.2645765], [@Besel:2016:ISI:2851613.2851819; @Besel:2016:QSI:3015297.3015298], [@Orlandi2012], [@Narducci2013], [@Bhargava:2015:UMU:2678025.2701365], [@GarciaEsparza:2013:CCT:2449396.2449402], [@Vu:2013:IMU:2505515.2507883], [@Ahn:2012:IUI:2457524.2457681], [@Chen2010], [@Nishioka:2016:PVT:2910896.2910898]
  Extrinsic evaluation (logging use)     [@Abel2011d; @Abel2011g; @Abel2012a; @Abel2011e], [@Chen2010], [@Zarrinkalam2015a], [@Sang:2015:PFT:2806416.2806470], [@Kanta2012], [@OBanion2012], [@piao2016exploring; @Guangyuan2017; @Piao2017; @Piao2016b; @Piao2016d], [@Lu2012], [@Gao:2011:ITU:2052138.2052335], [@Karatay2015a], [@AnilKumarTrikhaFattaneZarrinkalam], [@Nishioka:2015:ITU:2809563.2809601], [@Bolting2015], [@Zarrinkalam2016], [@Ahn:2012:IUI:2457524.2457681], [@Spasojevic:2014:LLS:2623330.2623350], [@Jipmo2017], [@Faralli2015], [@Nechaev]

  : Evaluation approaches for constructed user interest profiles.[]{data-label="evaluation"}

In this section, we reviewed different evaluation approaches that have been used in the literature for evaluating constructed user interest profiles. Table \[evaluation\] provides a summary of previous studies in terms of evaluation methods. Evaluating user interest profiles based on a user study is important for understanding different aspects of user interests, e.g., the abstraction levels of user interests. For example, [@Orlandi:2013:CCI:2568488.2568810] studied the specificity of user interests and evaluated it based on a user study, which showed that users prefer to give higher scores to non-specific entities. However, the extra effort of recruiting users and gaining feedback from them is time consuming, and limits the scale of users for evaluation. Evaluation in terms of the performance of a specific application has the advantage of its offline setting and of using a relatively larger number of users compared to a user study. Both evaluation approaches can be used in an appropriate way for designing and evaluating user modeling strategies. For example, based on a user study on the specificity of user interests [@Orlandi:2013:CCI:2568488.2568810], we can design ways to incorporate users’ preferences regarding non-specific entities into a user modeling strategy, and evaluate the strategy at a large scale in offline settings based on a collected dataset such as one from Twitter. One of the challenges of offline evaluation in terms of the performance of a specific application is the lack of benchmarks that are freely available [@Faralli2015]. Despite the openness of some microblogging services such as Twitter, it is time consuming to collect all the data used in different user modeling approaches, e.g., tweets, list memberships, and the biographies of followees/followers, in addition to the information about users.
In addition, different datasets with different user sizes might produce different results even using the same user modeling strategies for comparison. It is also important to evaluate different user interest profiles in the context of different applications beyond a specific one. For example, in [@Manrique:2017:SDA:3106426.3109440], the authors showed that user interest profiles based on different user modeling strategies perform differently in the context of recommending articles based only on titles, abstracts, and full texts. Although the study [@Manrique:2017:SDA:3106426.3109440] is in the context of research article recommendations, it is highly likely that different user interest profiles from microblogging services will have different levels of performance based on the applications in which these profiles are applied. Conclusions and Future Directions ================================= In previous sections, we reviewed the state-of-the-art approaches used in different user modeling stages for inferring user interest profiles, which is beneficial both for researchers who are interested in user modeling in the social networks domain as well as those researchers in some other domains. It is also useful for third-party application providers who aim to utilize user interest profiles via social login functionalities in terms of providing personalized services for their users. In this final section, we conclude this paper in Section \[conclusions\] with respect to the four dimensions of inferring user interest profiles: (1) data collection, (2) representations of user interest profiles, (3) construction and enhancement of user interest profiles, and (4) the evaluation of the constructed profiles. In Section \[fd\], we first review what progress has been made to date since [@Abdel-Hafez2013], and then outline some opportunities and challenges for inferring user interests on microblogging social networks which we envision can inspire future directions in this research field. Conclusions ----------- To sum up, user activities such as the tweets posted by users are the most widely used information source for inferring user interests. However, many recent studies have started exploring other information sources such as the social networks of users as an alternative to user activities as the passive usage of OSNs is on the rise. Regarding the representations of user interest profiles, a clear tendency of leveraging concepts such as DBpedia entities or categories can be observed given their advantages of using background knowledge about those concepts from a KB. In addition to leveraging the hierarchical or graph-based knowledge of a KB for enriching user interests, several recent studies also have shown the effectiveness of leveraging collective knowledge for enriching user interest profiles [@Faralli2015; @AnilKumarTrikhaFattaneZarrinkalam]. With respect to incorporating the temporal dynamics of user interests, there is no single best method for inferring user interests with different purposes. Instead, one should choose constraint-based or interest decay functions based on the application needs, and the characteristics of items. For evaluating user interest profiles, both questionnaires and extrinsic evaluation strategies have been adopted at comparable levels of popularity. 
Future Directions {#fd}
-----------------

In [@Abdel-Hafez2013], the authors proposed three future directions with respect to user modeling in OSNs, which call for (1) more dynamicity, (2) more enrichment, and (3) more comprehensiveness. On the one hand, we observe that there have been many efforts towards the second direction. These efforts include leveraging the collective knowledge powered by all users [@Faralli2015; @AnilKumarTrikhaFattaneZarrinkalam] for enriching the interest profiles of each user, and the comparison between different KBs for enriching user interests [@Guangyuan2017]. On the other hand, little progress has been made on the first and third directions proposed by [@Abdel-Hafez2013]. For example, [@Abdel-Hafez2013] proposed incorporating more dynamicity into user interest profiles under assumptions such as that different topics might decay at different speeds, and that a user's interests can carry different weights in different contexts. On top of the directions proposed by [@Abdel-Hafez2013] and the recent studies we reviewed in this paper, we further propose several future directions which are related to:

- mining user interests;
- multi-faceted user interests;
- comprehensive user modeling;
- evaluation of user modeling strategies.

**Mining user interests.** To better infer user interests, researchers have proposed various approaches such as enriching short content, filtering noise in UGC, and exploring social networks. Many studies have adopted traditional weighting schemes from information retrieval such as TF or TF$\cdot$IDF to somehow filter the noise in UGC for mining user interests. However, some studies have shown that incorporating some special characteristics of the services (e.g., temporal dynamics, short content) into the design of a weighting scheme can improve the quality of user interest profiles. For example, TI-TextRank, which combines TF$\cdot$IDF and TextRank, performs better than either of them on its own as a weighting scheme for user modeling on Twitter. In this regard, more weighting schemes adapted towards microblogging services should be investigated, e.g., combining different weighting schemes used in the literature. Furthermore, mining interest-related items from data sources such as posts [e.g., @Xu2011] can be useful, as microblogging services have multiple usages such as information seeking, sharing and social networking [@Java2007]. In addition, more sophisticated approaches for understanding the semantics of UGC are required. For example, for those approaches that rely on extracted entities for inferring user interest profiles, extracting entities from microblogs is a fundamental step which is challenging by itself. Only a few studies have considered the uncertainty (confidence) of the extracted entities, which we think might impact the overall quality of the primitive interests of users as well as the enhanced ones. Moreover, most approaches have extracted explicitly mentioned entities based on NLP APIs such as Tag.Me[^30], Aylien[^31], OpenCalais, etc. However, there can be many entities implicitly mentioned in tweets. In [@Perera2016], the authors showed that over 20% of mentions of movies are implicit references, e.g., a tweet referring to the movie *Gravity*: “ISRO sends probe to Mars for less money than it takes Hollywood to make a movie about it”. This shows that advanced methods for extracting entities, such as the one proposed in [@Perera2016], have great potential to improve the quality of user modeling.
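To illustrate how the confidence of extracted entities could be taken into account, the minimal sketch below aggregates entity annotations into a primitive interest profile, weighting each mention by the confidence score returned by an entity-linking tool. The input format, the threshold, and the linear weighting are illustrative assumptions; they do not correspond to a specific API or to any particular study surveyed here.

```python
from collections import defaultdict

def confidence_weighted_profile(annotations, min_confidence=0.1):
    """Build a primitive interest profile from (tweet_id, entity, confidence)
    annotations, discounting uncertain extractions instead of treating all
    extracted entities equally."""
    profile = defaultdict(float)
    for _tweet_id, entity, confidence in annotations:
        if confidence >= min_confidence:       # drop very uncertain extractions
            profile[entity] += confidence      # weight each mention by its confidence
    total = sum(profile.values())
    return {e: w / total for e, w in profile.items()} if total else {}

# Toy example: the uncertain "dbpedia:Gravity_(film)" mention contributes less.
annotations = [
    ("t1", "dbpedia:Mars", 0.9),
    ("t1", "dbpedia:Gravity_(film)", 0.35),
    ("t2", "dbpedia:ISRO", 0.8),
]
print(confidence_weighted_profile(annotations))
```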
Also, considering the context of a microblog might be useful when extracting entities, instead of just considering a single microblog of a user in isolation. The context might refer to some previous microblogs posted by the user, or other microblogs with the same hashtag in the microblogging service. For example, [@Shen:2013:LNE:2487575.2487686] showed that the quality of entity extraction can be improved by incorporating user interests as contextual information. Furthermore, promising results from recent studies [@Faralli2015; @AnilKumarTrikhaFattaneZarrinkalam] indicate that leveraging collective knowledge via frequent pattern mining approaches is also effective in inferring implicit user interests.

**Multi-faceted user interests.** There exist various aspects/views of users based on different dimensions of user modeling such as the data source, representation level, and temporal dynamics of user interests. Although many studies represent an individual user using a single user interest profile, we believe that multi-faceted user interest profiles should be given more attention, as some previous studies have also shown their effectiveness compared to a single model. It is not necessary to maintain several user interest profiles for a single user; instead, a single model can be built with relevant information from different aspects, and a view/aspect generated for the user based on the information needs of different applications. GeniUS [@Gao2012d] is a good example in this regard, which is a user modeling library that stores concept-based user interest profiles using the RDF[^32] format (a W3C recommendation) with widely used ontologies such as FOAF [@Brickley2012], SIOC[^33], and WI[^34]. In GeniUS, user interest profiles are represented as DBpedia entities and enriched by background knowledge such as the type (domain) of an entity from DBpedia. Therefore, the constructed profile is flexible enough to retrieve its sub-profiles with respect to specific domains (e.g., `Music`), which is useful for recommending domain-specific items. The idea is that, for example, only a user’s music-related interest profile is needed in the context of music recommendations. The results in [@Gao2012d] indicate that domain-specific profiles clearly outperform whole user profiles for domain-specific tweet recommendations across six different domains. Although GeniUS only considers different views of users in terms of topical domains, the same idea can be extended to other views. For instance, different user profiles can be extracted dynamically with different approaches for incorporating temporal dynamics, e.g., retrieving short-term profiles for recommending tweets during an event, which might be more useful compared to using long-term profiles. Also, multiple user interest profiles in terms of representation level using different interest formats have been used in other domains such as personal assistants [@Guha2015], which can be useful for user modeling in microblogging services as well. In [@Guha2015], several user interest profiles based on different representations such as keywords and Freebase entities were constructed.

**Comprehensive user modeling.** In the previous survey on user modeling [@Abdel-Hafez2013], the authors also suggested that more comprehensive user modeling strategies should be investigated by considering different dimensions of user modeling together. Many of the previous studies have ignored some of the dimensions such as temporal dynamics [e.g., @Phelan:2009:UTR:1639714.1639794].
Investigating the synergistic effect of different dimensions is important for developing better user modeling strategies, which is crucial for the performance of applications. To this end, several research questions should be answered, such as “which combinations of different approaches in each dimension can provide the best user interest profiles” or “does a dimension really matter in the context of the combination for providing the best performance?”. For example, [@Piao2016d] showed that a rich representation of user interests (using WordNet synsets and DBpedia entities) and enriching short content with the text of embedded URLs are the most important factors, followed by temporal dynamics, in the context of URL recommendations on Twitter. However, enhancing user interest profiles has little effect when we have a rich representation or enriched content of microblogs. Similar results have been observed in the context of inferring research interests of users based on their publications [@Manrique:2017:SDA:3106426.3109440]. The results in [@Manrique:2017:SDA:3106426.3109440] indicate that enhancing primitive interests can improve the performance when only short texts (e.g., titles) are available, but not when longer texts (e.g., full texts of publications) are available. We believe that these studies are good starting points for some future works, e.g., using different user interest profiles for different data sources instead of using a single representation of an individual user for the combination. In addition, other user modeling dimensions which have been proposed in other domains can be considered in the social media domain as well. For example, a *scrutable* user model proposed in the context of teaching, which aims to give users the right and the ability to access and control their user profiles [@Holden1999; @Carmagnola2011; @Kay2006], can be a promising dimension to be incorporated into user modeling strategies in OSNs and merits further investigation and evaluation.

**Evaluation of user modeling strategies.** As we mentioned in Section 5.3, the lack of common benchmarks and datasets hinders comparison with other approaches, which has led several studies to compare directly against results reported in previous studies [@Faralli2015]. This does not constitute a fair comparison, due to differences between datasets in terms of platforms as well as user sizes. However, building such benchmarks is also challenging due to the regulations of microblogging services such as Twitter[^35], and the differences in data sources used in each study. Another possible direction is providing all proposed approaches as user modeling libraries that are publicly available, in the same way as GeniUS and TUMS[^36], so that other researchers can easily reuse the approaches proposed in previous studies for comparison. It is also important to evaluate inferred user interest profiles in terms of multiple tasks or different settings to understand the strengths and weaknesses of different user interest profiles. For instance, [@Nishioka:2015:ITU:2809563.2809601] showed that considering the temporal dynamics of user interests has a positive influence on a computer science dataset but not on a medicine dataset. [@Manrique:2017:SDA:3106426.3109440] showed that different user modeling strategies work differently for different types of texts that are available in the context of research article recommendations.
In this regard, evaluating the performance of different user modeling strategies based on different datasets or settings can provide a clear understanding of when to use what types of user profiles, which is important for researchers in different domains as well as third-party application providers with different types of content to be personalized. A recent work by [@Tommasso2018] provides a user interests dataset which is useful in this context. It includes half a million Twitter users with an average of 90 multi-domain preferences per user on music, books, etc., where those preferences are extracted from multiple platforms based on the messages of those Twitter users who also use Spotify[^37], Goodreads[^38], etc. Finally, previous studies have adopted accuracy and ranking metrics such as precision, recall, and MRR for the extrinsic evaluation of inferred user interest profiles. However, non-accuracy metrics such as serendipity, novelty, and diversity have received increasing attention in recommender systems [@Bellogn; @Kaminskas:2016:DSN:3028254.2926720]. Therefore, it is worth investigating the effect of different user modeling strategies and their inferred interest profiles in the context of recommender systems in terms of those non-accuracy metrics.

Acknowledgments {#acknowledgments .unnumbered}
===============

This publication has emanated from research conducted with the financial support of Science Foundation Ireland (SFI) under Grant Number SFI/12/RC/2289 (Insight Centre for Data Analytics). Thanks to the anonymous reviewers and the editor for their constructive feedback to improve this work.

Author Biographies {#author-biographies .unnumbered}
==================

**Guangyuan Piao** (<https://parklize.github.io>) is a Ph.D. student at the Insight Centre for Data Analytics (formerly DERI) at the National University of Ireland Galway. He received his B.Sc. in Computer Science from Jilin University, China, and his M.Eng. degree in Information and Industrial Engineering from Yonsei University, South Korea. His main research interests include User Modeling, Recommender Systems, and Knowledge Graphs. His current research focuses on semantics-aware user modeling and recommender systems leveraging knowledge graphs and latent semantics.\
**John G. Breslin** ([www.johnbreslin.com](www.johnbreslin.com)) is a Senior Lecturer in Electrical and Electronic Engineering at the College of Science and Engineering at the National University of Ireland Galway, where he is Director of the TechInnovate / AgInnovate programmes. John has taught electronic engineering, computer science, innovation and entrepreneurship topics during the past two decades. He is also a Co-Principal Investigator at the Insight Centre for Data Analytics, and a Funded Investigator at Confirm Smart Manufacturing and VistaMilk. He has written 190 peer-reviewed academic publications (h-index of 37, 5500 citations, best paper awards from DL4KGS, SEMANTiCS, ICEGOV, ESWC, PELS), and co-authored the books “The Social Semantic Web” and “Social Semantic Web Mining”. He co-created the SIOC framework, implemented in hundreds of applications (by Yahoo, Boeing, Vodafone, etc.) on at least 65,000 websites with 35 million data instances.

The List of Surveyed Works {#appendix:works}
==========================

Search Strategy
---------------

In order to draw up a list of search terms, basic terms are first extracted from the primary articles that are retrieved. After that, other search terms are obtained iteratively based on the keywords that were used interchangeably within the retrieved articles.
Overall, the final list of terms used for searching articles is presented in Table \[terms\]. These search terms (ST) are used for constructing sophisticated search strings. For example, a search string can be constructed as ST1 AND ST3, where ST1 is a compound term formed from Term1 and Term2 (e.g., inferring user interests). Initial searches with these search terms for titles and abstracts from electronic databases can obtain many relevant articles but may not be sufficient [@Kitchenham2004]. In this regard, additional article candidates are obtained by checking the reference lists of relevant primary studies, and by searching relevant journals and conference proceedings. [@Abdel-Hafez2013] provided a review of user modeling in social media websites in 2013, which includes some approaches with respect to inferring user interests in the context of microblogging social networks. In addition to those approaches mentioned in [@Abdel-Hafez2013], we also review recent user modeling approaches for inferring user interests.

        Term1                             Term2
  ----- --------------------------------- --------------------------------
  ST1   inferring, modeling, predicting   (user) interests
  ST2   user (interest)                   modeling, profiling, detection
  ST3
  ----- --------------------------------- --------------------------------

  : Search terms used in the search strategy of this survey.[]{data-label="terms"}

Selection Criteria
------------------

In order to assess and select relevant articles from primary studies, inclusion and exclusion criteria should be defined based on the research questions [@Kitchenham2004]. The inclusion criteria are as follows:

1. Published in English from 2004.
2. Studies on microblogging social networks.
3. Focus on user modeling strategies for inferring user interest profiles.

On the other hand, exclusion criteria can be defined as follows:

1. Studies that were not peer-reviewed or published.
2. Studies related to user modeling but not focusing on microblogging social networks.
3. Studies related to user modeling, but not focusing on inferring user interests.

Finally, inclusion or exclusion decisions are made for the fully obtained articles, and only those papers that meet our criteria are selected. As a result, 51 articles are selected in this survey. These articles are distributed from 2010 to 2018, and the majority of them were published in conferences or workshops such as WI, UMAP, CIKM, and ECIR.

Surveyed Studies
----------------

The surveyed 51 works are retrieved from different journals, conferences, and workshops, mainly in the user modeling, recommender systems, and Web related fields as follows:

1.  Journals

    - ACM SIGAPP Applied Computing Review: [@Besel:2016:QSI:3015297.3015298]
    - Web Semantics: Science, Services and Agents on the World Wide Web: [@Faralli2017]
    - Social Network Analysis and Mining: [@Faralli2015]
    - Information Systems: [@Kang2016]
    - Procedia Computer Science: [@Jiang2015]

2.
    Conference proceedings

    - **WI** (IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology): [@Zarrinkalam2015a; @Xu2011; @Gao:2011:ITU:2052138.2052335; @Penas2013; @Ahn:2012:IUI:2457524.2457681]
    - **UMAP** (Conference on User Modeling Adaptation and Personalization): [@Abel2011g; @Hannon2012; @Narducci2013]
    - **CIKM** (ACM International Conference on Information and Knowledge Management): [@Vu:2013:IMU:2505515.2507883; @Piao2016b; @Sang:2015:PFT:2806416.2806470]
    - **ECIR** (European Conference on Information Retrieval): [@Zarrinkalam2016; @Guangyuan2017; @AnilKumarTrikhaFattaneZarrinkalam]
    - **ISWC** (International Semantic Web Conference): [@Siehndel:2012:TUP:2887379.2887395; @Abel2011e]
    - **IUI** (International Conference on Intelligent User Interfaces): [@Bhargava:2015:UMU:2678025.2701365; @GarciaEsparza:2013:CCT:2449396.2449402]
    - **RecSys** (ACM Conference on Recommender Systems): [@Bhattacharya:2014:IUI:2645710.2645765; @Phelan:2009:UTR:1639714.1639794]
    - **SEMANTiCS** (International Conference on Semantic Systems): [@piao2016exploring; @Orlandi2012]
    - **HT** (ACM Conference on Hypertext and Social Media): [@Piao2017]
    - **SIGIR** (International ACM Conference on Research and Development in Information Retrieval): [@Chen2010]
    - **AAAI** (AAAI Conference on Artificial Intelligence): [@Lu2012]
    - **KDD** (Knowledge Discovery and Data Mining): [@Spasojevic:2014:LLS:2623330.2623350]
    - **IJCAI** (International Joint Conference on Artificial Intelligence): [@Abel:2013:TUM:2540128.2540558]
    - **ICWE** (International Conference on Web Engineering): [@Abel2012a]
    - **WebSci** (International Web Science Conference): [@Abel2011d]
    - **ESWC** (Extended Semantic Web Conference): [@Kapanipathi2014]
    - **EKAW** (International Conference on Knowledge Engineering and Knowledge Management): [@Piao2016d]
    - **ICSC** (IEEE International Conference on Semantic Computing): [@Bolting2015]
    - **SAC** (ACM Symposium on Applied Computing): [@Besel:2016:ISI:2851613.2851819]
    - **WSDM** (ACM International Conference on Web Search and Data Mining): [@Weng:2010:TFT:1718487.1718520]
    - **JCDL** (Joint Conference on Digital Libraries): [@Nishioka:2016:PVT:2910896.2910898]
    - **i-KNOW** (International Conference on Knowledge Technologies and Data-driven Business): [@Nishioka:2015:ITU:2809563.2809601]
    - **SPIM** (International Conference on Semantic Personalized Information Management: Retrieval and Recommendation): [@Kapanipathi2011]
    - **OpenSym** (International Symposium on Open Collaboration): [@Lim:2013:ICT:2491055.2491078]
    - **ADMA** (Advanced Data Mining and Applications): [@Jipmo2017]

3.  Workshop proceedings

    - **AND** (Workshop on Analytics for Noisy Unstructured Text Data): [@Michelson2010]
    - **Micropost** (Workshop on Making Sense of Microposts): [@Karatay2015a]
    - **SMAP** (Workshop on Semantic and Social Media Adaptation and Personalization): [@Kanta2012]
    - **RSWeb** (Workshop on Recommender Systems and the Social Web): [@OBanion2012]
    - **BlackMirror** (Workshop on Re-coding Black Mirror): [@Nechaev]

4.  Others

    - Tech Report: [@Budak2014].
[^1]: <https://en.wikipedia.org/wiki/Microblogging>
[^2]: <https://twitter.com/>
[^3]: <https://www.facebook.com/>
[^4]: <https://www.omnicoreagency.com/twitter-statistics/>
[^5]: <https://www.omnicoreagency.com/facebook-statistics/>
[^6]: <https://en.wikipedia.org/wiki/Social_login>
[^7]: <https://hbr.org/2011/10/social-login-offers-new-roi-fr>
[^8]: <http://www.gigya.com/blog/why-millennials-demand-social-login/>
[^9]: <https://del.icio.us/>
[^10]: <https://www.flickr.com/>
[^11]: [www.wikipedia.org](www.wikipedia.org)
[^12]: <https://www.linkedin.com/>
[^13]: <https://klout.com/>
[^14]: <https://goo.gl/j97H1R>
[^15]: <http://bit.ly/pewsnsnews>
[^16]: <http://www.corporate-eye.com/main/facebooks-growing-problem-passive-users/>
[^17]: <http://edition.cnn.com/>
[^18]: <http://listorious.com>, not available at the time of writing.
[^19]: <https://www.delicious.com>
[^20]: <https://www.stumbleupon.com>
[^21]: <https://en.wikipedia.org/wiki/Hashtag>
[^22]: <http://www.opencalais.com/>
[^23]: <http://zbw.eu/stw>
[^24]: <https://www.nlm.nih.gov/mesh/>
[^25]: <https://en.wikipedia.org/wiki/DMOZ>
[^26]: <http://news.naver.com/>
[^27]: <http://news.nate.com//>
[^28]: <https://www.yelp.com/>
[^29]: <https://yahoo.com/>
[^30]: <https://tagme.d4science.org/tagme/>
[^31]: <https://aylien.com/>
[^32]: <https://www.w3.org/RDF/>
[^33]: <http://sioc-project.org/>
[^34]: <http://smiy.sourceforge.net/wi/spec/weightedinterests.html>
[^35]: Twitter restricts developers from sharing the content of tweets, see <https://developer.twitter.com/en/developer-terms/agreement-and-policy>.
[^36]: Both GeniUS and TUMS are available at <http://www.wis.ewi.tudelft.nl/tweetum/>
[^37]: <https://www.spotify.com>
[^38]: <https://www.goodreads.com/>
{ "pile_set_name": "ArXiv" }
--- abstract: 'The dynamics of two penetrating superfluids exhibit an intriguing variety of nonlinear effects. Using two distinguishable components of a Bose-Einstein condensate, we investigate the counterflow of two superfluids in a narrow channel. We present the first experimental observation of trains of dark-bright solitons generated by the counterflow. Our observations are theoretically interpreted by three-dimensional numerical simulations for the coupled Gross-Pitaevskii (GP) equations and the analysis of a jump in the two relatively flowing components’ densities. Counterflow induced modulational instability for this miscible system is identified as the central process in the dynamics.' author: - 'C.' - 'J.J.' - 'P.' - 'M. A.' title: 'Generation of dark-bright soliton trains in superfluid-superfluid counterflow' --- Nonlinear structures in dilute-gas Bose-Einstein condensates (BECs) have been the focus of intense research efforts, deepening our understanding of quantum dynamics and providing intriguing parallels between atomic physics, condensed matter and optical systems. For superfluids that are confined in a narrow channel, one of the most prominent phenomena of nonlinear behavior is the existence of solitons in which a tendency to disperse is counterbalanced by the nonlinearities of the system. In single-component BECs, dark and bright solitons, forming local density suppressions and local bumps in the density, resp., have attracted great interest [@Kevrekidis2009]. In two-component BECs, the dynamics are even richer as a new degree of freedom, the relative flow between the two components, is possible. In this Letter, we investigate novel dynamics of superfluid-superfluid counterflow, which is in contrast to the extensively studied counterflow of a superfluid and normal fluid in liquid helium [@Donnelly1991]. Previous theoretical analysis has demonstrated that spatially uniform, counterflowing superfluids exhibit modulational instability (MI) when the relative speed exceeds a critical value [@Law2001]. Modulational instability is characterized by a rapid growth of long wavelength, small amplitude perturbations to a carrier wave into large amplitude modulations. The growth is due to the nonlinearity in the system [@Zakharov2009]. Our experiments and analysis reveal that by carefully tuning the relative speed slightly above the critical value, we can enhance large amplitude density modulations at the overlap interface between two nonlinearly coupled BEC components while mitigating the effects of MI in the slowly varying background regions. A dark-bright soliton train then results. In two previous experiments, individual dark-bright solitons were engineered in two stationary components using a wavefunction engineering technique [@Anderson2001; @Becker2008]. In our experiment we find that trains of dark-bright solitons can occur quite naturally in superfluid counterflow. This novel method of generating dark-bright solitons turns out to be robust and repeatable. In single-component, attractive BECs the formation of a bright soliton train from an initial density jump has been predicted [@Kamchatnov2003]. However, both condensate collapse and the effects of MI in the density background must be avoided, placing restrictions on the confinement geometry and diluteness of the single-component condensate. 
In contrast, the properties of counterflow in miscible, two-component BECs, as we show, enable the observation of trains consisting of ten or more dark-bright solitons in BECs with a large number of atoms. We also note that modulated soliton trains have been studied extensively in single-component, modulationally stable repulsive BECs where supersonic flow supports the generation of dispersive shock waves [@Dutton2001]. The dark-bright soliton train we study in this work occurs when one of the system’s sound speeds becomes complex so that the standard definition of supersonic flow does not apply. Our experiments are conducted with BECs confined in a single-beam optical dipole trap [@EPAPS]. We start with an initially perfectly overlapped mixture of atoms in the $|F,m_{F}\rangle$ = $|1,1\rangle$ and $|2,2\rangle$ hyperfine states of $^{87}$Rb, with a total of about 450000 atoms. The scattering lengths for the two states used in our experiment are estimated to be $a_{11}=100.40$ a.u. and $a_{22} \approx a_{12} = 98.98$ a.u. [@verhaar_predicting_2009]. Here $a_{11}$ and $a_{22}$ denote the single species scattering length for the $|1,1\rangle$ and $|2,2\rangle$ state, respectively, and $a_{12}$ is the interspecies scattering length. Mean field theory predicts that a mixture is miscible if $a_{12}<\sqrt{a_{11}\cdot a_{22}}$ [@Timmermans1998; @Ao1998; @Pu1998]. Therefore our system is predicted to be weakly miscible. In contrast, previous studies of two-component binary $^{87}$Rb BECs concentrated mostly on the states which are immiscible [@Hall1998; @Mertes2007], with the notable exception of Weld et al. [@Weld2009]. When the overlapped mixture is allowed to evolve in the trap, we observe no phase separation over the experimental timescale of several seconds. This is in agreement with the predicted miscibility of the two components and is demonstrated in Fig. \[miscibility\](a-c). The upper cloud of each image throughout this work shows the atoms in the $|2,2\rangle$ state at a time 7 ms after a sudden turn-off of the optical trap (and, where applicable, of any applied magnetic gradients), while the lower cloud, taken during the same experimental run, shows the atoms in the $|1,1\rangle$ state after 8 ms of expansion [@EPAPS]. During their in-trap evolution, these clouds are overlapped in the vertical direction. The dominant effect of the time evolution in Fig. \[miscibility\](a-c) is a slow decay of the atom number over time. For single component BECs, we have measured an exponential BEC lifetime of over 50 sec for the $|1,1\rangle$ state and 14 sec for the $|2,2\rangle$ state in our dipole trap. Motion induced by changes of mean field pressure during the decay may be responsible for a small scale roughness of both components which becomes visible after several seconds (Fig. \[miscibility\]c).

The situation changes when a small magnetic gradient is applied along the long axis of the trap. Due to Zeeman shifts, the gradient leads to a force in opposite directions for each component, or equivalently to a differential shift between the harmonic potentials along the long axis of the trap. This causes the two components to accelerate in opposite directions and induces counterflow. In all images where a magnetic gradient is applied, the gradient is chosen such that the $|2,2\rangle$ state is pulled to the right and the $|1,1\rangle$ to the left. An example is shown in Fig.
\[miscibility\]d where a gradient leading to a calculated differential trap shift of 60 $\mu$m was applied for 9 sec, leading to nearly complete demixing of the two components.

In the following we investigate the dynamics induced by small gradients and show how they can be exploited to create dark-bright soliton trains. In Fig. \[shocks\] an initially overlapped mixture of 30% of the atoms in the $|2,2\rangle$ state and 70% in the $|1,1\rangle$ state is used. A small magnetic gradient in the axial direction is linearly ramped on over a timescale of 1 sec, leading to a calculated trap separation for the two species of only about three microns. After the end of this ramp, the gradient is held constant. In the subsequent evolution, individual stripes break off from the left edge of the $|2,2\rangle$ component, and perfectly aligned dark notches appear in the $|1,1\rangle$ component (Fig. \[shocks\](a)). The predominantly uniform widths of the observed stripes and notches, their long lifetime of several seconds in the absence of a magnetic gradient, as well as their dynamics resembling individual stable entities (see Fig. \[solitondrift\] and below) are strong experimental indications that the observed features are indeed dark-bright solitons. By reducing the initial number of atoms in the component forming the bright soliton, we have also been able to reliably produce one individual dark-bright soliton and observe its oscillation in trap [@Middelkamp2010], similar to the dynamics observed in [@Becker2008]. The observed soliton formation is reproduced by three-dimensional (3D) numerical simulations of the two-component Gross-Pitaevskii (GP) equations (Fig. \[shocks\](b-e)) [@EPAPS]. Parameters used for the GP equations are the experimental values. These values lead to dynamics that closely match the experiment, as shown in Fig. \[shocks\](a-c) with a moderate time delay. Our numerical calculations suggest that the time delay may be due to uncertainties in the estimated magnetic field gradient induced trap shifts. The experimentally invoked free expansion directly before imaging the condensate was not performed in the numerical simulations. Numerical results for the quantum mechanical phases of the two wavefunctions describing the components are shown in Fig. \[shocks\](e). The nearly linear phase behavior on the right (at $x \gtrsim 50~\mu m$) indicates a smooth counterflow of the two components. In the soliton region, the phase jumps across the dark solitons as well as the phase gradients in the bright component vary slightly, so that the dark-bright solitons are moving relative to one another, which eventually leads to dark-bright soliton interactions, see [@EPAPS]. The soliton train formation can be qualitatively understood by appealing to the hydrodynamic formulation of the mean-field, coupled GP equations in (1+1) dimensions $$\begin{aligned} \label{eq:1} (\rho_j)_t + (\rho_j u_j)_z &= 0 \\ \nonumber (u_j)_t + \left (\frac{1}{2} u_j^2 + \rho_j + \sigma_j \rho_{3-j} \right )_z &= \frac{1}{4} \left [\frac{(\rho_j)_{zz}}{\rho_j} - \frac{(\rho_j)_z^2}{2 \rho_j^2} \right ]_z ,\end{aligned}$$ here given in non-dimensional form with $\sigma_j = a_{12}/a_{jj}$, $\rho_j$ and $u_j$, $j=1,2$ the density and phase gradient (superfluid velocity) of the $j^\textrm{th}$ component, respectively.
Equation (\[eq:1\]) models the dynamics of a highly elongated cigar shaped trap ($\omega_x \sim \omega_y \gg \omega_z$ where $\omega_x$ ($\omega_y$) is the transverse trap frequency in the horizontal (vertical) plane and $\omega_z$ is the axial trap frequency) with axial confinement neglected [@Kevrekidis2009]. Distance is in units of the transverse harmonic oscillator length $\sqrt{\hbar/(m \omega_x)}$ ($m$ is the particle mass). Time is in units of $1/\omega_x$ and the 3D densities are approximated by the harmonic oscillator ground state via $\rho_j(z,t) \exp(-x^2-\frac{\omega_y}{\omega_x} y^2)/(2\pi a_{jj} a_0^2)$. By considering small perturbations proportional to $e^{i(\kappa z - \omega t)}$ for uniform counterflow with densities $\rho_j$ and velocities $u_1 = -v/2$, $u_2 = v/2$, Ref. [@Law2001] demonstrated modulational instability ($\textrm{Im}\, \omega(\kappa) > 0$) for $v$ larger than a critical velocity ${v_\textrm{cr}}$ with a maximum growth rate $\textrm{Im} \, \omega_\textrm{max}$ and associated wavenumber $\kappa_\textrm{max}$. We have repeated the calculation and find the additional result $$\label{eq:2} \sqrt{\rho_1(1 - \sigma_1 \sigma_2)} \le {v_\textrm{cr}}\le 2 \sqrt{\rho_1 (1 - \sqrt{\sigma_1 \sigma_2})} , ~ \rho_1 \ge \rho_2,$$ the lower bound being valid for small $\rho_2/\rho_1$ and the upper bound applicable for $\rho_2 \sim \rho_1$. The scattering lengths of the binary system considered here give $0.119 \sqrt{\rho_1} \le {v_\textrm{cr}}\le 0.168 \sqrt{\rho_1}$. Typical densities for the experiments in Fig. \[shocks\] give $\rho_1 + \rho_2 = 4.3$, $\rho_2/\rho_1 = 0.3$ leading to ${v_\textrm{cr}}= 0.25$ ($\approx 0.22$ mm/s). Figure \[shocks\] shows that the dark-bright soliton train forms at the overlap interface of the two components while approximately maintaining constant total density. We model this by numerically solving eq. (\[eq:1\]) for an initial jump in density that maintains $\rho_1 + \rho_2 = 4.3$ (dotted curves in Fig. \[theory\](a,b)) with a uniform counterflow: $u_1 = -v/2$ and $u_2 = v/2$. For subcritical cases $0 \le v < {v_\textrm{cr}}$, the evolution consists of an expanding rarefaction wave with weak oscillations on the right edge (Fig. \[theory\](a); solid line for $v = 0$, dashed line for $v = 0.17$) and corresponding scaled relative speeds $|u_1 - u_2|/\sqrt{\rho_1}$ (Fig. \[theory\](c) solid, dashed) below critical (in Fig. \[theory\](c,d), the bounds (\[eq:2\]) on the critical velocities are indicated by the dotted lines). When the initial relative speed is supercritical, a dark-bright soliton train forms at the initial jump (Fig. \[theory\](b), $v = 0.32$). The relative speed within some regions of the soliton train significantly exceeds ${v_\textrm{cr}}$ as shown in Fig. \[theory\](d), suggesting that counterflow induced MI has the effect of enhancing soliton formation. Because the initial relative speed $v$ was taken just slightly above ${v_\textrm{cr}}$, the maximum growth rate $\textrm{Im} \, \omega_\textrm{max} = 0.0077$ and associated wavenumber $\kappa_\textrm{max} = 0.13$ for unstable perturbations to the uniform state in the far field are small (Figs. \[theory\](e,f)). Therefore, MI in the background counterflow far from the jump does not develop appreciable magnitude over the timescale of soliton train formation, in contrast to the dynamics with $v \gg {v_\textrm{cr}}$ that we investigate in [@Hoefer2010].
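As a numerical cross-check of the bounds in (\[eq:2\]), the short script below evaluates them for the scattering lengths and densities quoted above. The scripting itself is only an illustration; the input numbers are those given in the text.

```python
import math

# Scattering lengths (a.u.) quoted in the text.
a11, a22, a12 = 100.40, 98.98, 98.98
sigma1, sigma2 = a12 / a11, a12 / a22

# Coefficients of the bounds in Eq. (2): c_low*sqrt(rho1) <= v_cr <= c_up*sqrt(rho1).
c_low = math.sqrt(1 - sigma1 * sigma2)
c_up = 2 * math.sqrt(1 - math.sqrt(sigma1 * sigma2))
print(f"{c_low:.3f} sqrt(rho1) <= v_cr <= {c_up:.3f} sqrt(rho1)")   # ~0.119 and ~0.168

# Densities used for the jump problem: rho1 + rho2 = 4.3, rho2/rho1 = 0.3.
rho1 = 4.3 / 1.3
print(f"{c_low*math.sqrt(rho1):.2f} <= v_cr <= {c_up*math.sqrt(rho1):.2f}")
# The value v_cr = 0.25 quoted in the text lies between these two bounds.
```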
This MI assisted soliton formation technique allows us to create dark-bright solitons in a well-controlled and repeatable manner, as is evidenced by the fact that all images of Fig. \[shocks\]a form a very consistent sequence even though they were taken during different runs of the experiment. In addition to repeatability, future studies may also require a long lifetime of the solitons. In single component BECs, achieving long lifetimes of dark solitons has proven difficult as they are subject to a transverse instability [@Dutton2001; @Anderson2001]. Only recently have dark soliton lifetimes of up to 2.8 sec been achieved [@Becker2008]. It has been conjectured [@Busch2001] and numerically confirmed [@Musslimani2001] that dark-bright solitons are more stable to transverse perturbations than dark solitons. Experimentally, we indeed observe long lifetimes of several seconds for the dark-bright solitons after the magnetic gradient is turned off. The solitons act as individual entities and can move through the BEC, maintaining their shape for a relatively long time. We demonstrate this by starting from a situation as in Fig. \[shocks\](a) at 1.5 sec, where a train of solitons has been created after the application of an axial magnetic gradient. When the gradient is subsequently turned off, the dark-bright solitons move through the BEC while approximately maintaining their narrow widths (Fig. \[solitondrift\]). The bright and dark part of each individual soliton remain aligned relative to each other, but any regularity in the spacing between solitons is lost. The number of visible solitons decreases over time, but even after 2.5 sec several solitons are still visible, as in Fig. \[solitondrift\](d). Simulations [@EPAPS] suggest that soliton interactions may be the cause of this decay. In Fig. \[solitondrift\](c), little diffuse cloudlets of atoms in the $|2,2\rangle$ state are visible in addition to some solitons, and corresponding small suppressions of the density in the $|1,1\rangle$ component can be detected. We interpret these features as the decay products of dark-bright solitons, marking the end of their life cycle.

In conclusion, we have observed dark-bright soliton trains in the counterflow of two miscible superfluids. The soliton train is formed due to relative motion above the critical value for modulational instability. By inducing relative speeds slightly above critical, we can avoid the onset of MI throughout the superfluids over the time scales of soliton train formation. Together with the long lifetime of the observed dark-bright solitons, this opens the door to future experiments with these interesting coherent nonlinear structures. While the dynamics considered in this Letter are effectively one-dimensional, a very recent theoretical analysis has shown that superfluid counterflow in higher dimensions can lead to binary quantum turbulence, providing another example of the exceptional dynamical richness of the two-component system [@Takeuchi2010].

Acknowledgments
---------------

P.E. acknowledges financial support from NSF and ARO. M.A.H. acknowledges financial support from NSF under DMS-0803074, DMS-1008973 and a Faculty Research and Professional Development grant from NCSU. The authors thank the anonymous referees for beneficial suggestions.

P. G. Kevrekidis, D. Frantzeskakis, and R. Carretero-Gonzalez, *Emergent Nonlinear Phenomena in Bose-Einstein Condensates: Theory and Experiment* (Springer, Berlin Heidelberg, 2009).
See, e.g., R. J. Donnelly, *Quantized vortices in helium II* (Cambridge University Press, Cambridge, 1991).
C. K. Law *et al.*, Phys. Rev. A [**63**]{}, 063612 (2001).
V. Zakharov and L. Ostrovsky, Physica D [**238**]{}, 540 (2009).
B. Anderson *et al.*, Phys. Rev. Lett. [**86**]{}, 2926 (2001).
C. Becker *et al.*, Nature Physics [**4**]{}, 496 (2008).
A. M. Kamchatnov *et al.*, Phys. Lett. A [**319**]{}, 406 (2003).
Z. Dutton *et al.*, Science [**293**]{}, 663 (2001).
A. M. Kamchatnov, A. Gammal, and R. A. Kraenkel, Phys. Rev. A [**69**]{}, 063605 (2004).
M. A. Hoefer *et al.*, Phys. Rev. A [**74**]{}, 023623 (2006).
R. Meppelink *et al.*, Phys. Rev. A [**80**]{}, 043606 (2009).
See EPAPS for experimental and numerical details, as well as a movie of the numerical simulations. For more information on EPAPS, see http://www.aip.org/pubservs/epaps.html.
B. J. Verhaar, E. G. M. van Kempen, and S. J. J. M. F. Kokkelmans, Phys. Rev. A [**79**]{}, 032711 (2009).
S. J. J. M. F. Kokkelmans, personal communication (2010).
E. Timmermans, Phys. Rev. Lett. [**81**]{}, 5718 (1998).
P. Ao and S. T. Chui, Phys. Rev. A [**58**]{}, 4836 (1998).
H. Pu and N. P. Bigelow, Phys. Rev. Lett. [**80**]{}, 1130 (1998).
D. S. Hall *et al.*, Phys. Rev. Lett. [**81**]{}, 1539 (1998).
K. M. Mertes *et al.*, Phys. Rev. Lett. [**99**]{}, 190402 (2007).
D. M. Weld *et al.*, Phys. Rev. Lett. [**103**]{}, 245301 (2009).
S. Middelkamp *et al.*, Physics Letters A (2010), doi:10.1016/j.physleta.2010.11.025.
M. A. Hoefer *et al.*, arXiv:1007.4947 \[cond-mat.quant-gas\].
T. Busch and J. R. Anglin, Phys. Rev. Lett. [**87**]{}, 010401 (2001).
Z. H. Musslimani and J. Yang, Optics Letters [**26**]{}, 1981 (2001).
H. Takeuchi, S. Ishino, and M. Tsubota, Phys. Rev. Lett. [**105**]{}, 205301 (2010).
{ "pile_set_name": "ArXiv" }
---
abstract: |
    Using heat kernel estimates, we prove the pathwise uniqueness for strong solutions of an irregular stochastic differential equation driven by a family of Markov processes, whose generator is a non-local and non-symmetric Lévy type operator. Due to the extra term $1_{[0,\sigma(X_{s-},z)]}(r)$ in the multiplicative noise, we need to derive some new regularity results for the generator and use a trick of mixing $L_1$ and $L_2$-estimates by Kurtz and Protter [@Ku-Po].

    [[**Keywords and Phrases:**]{} Heat kernel estimates, non-local operator, irregular SDE, pathwise uniqueness]{}
address:
- |
    Longjie Xie: School of Mathematics and Statistics, Jiangsu Normal University, Xuzhou, Jiangsu 221000, P.R.China\
    Email: xlj.98@whu.edu.cn
- |
    Lihu Xu: 1. Department of Mathematics, Faculty of Science and Technology, University of Macau, Av. Padre Tomás Pereira, Taipa Macau, China. 2. UM Zhuhai Research Institute, Zhuhai, 519080, China\
    Email: lihuxu@umac.mo, xulihu2007@gmail.com
author:
- '[Longjie Xie]{} and [Lihu Xu]{}'
title: Irregular Stochastic differential equations driven by a family of Markov processes
---

[^1]

Introduction
============

Nowadays, much attention has been paid to non-local operators and their corresponding pure jump processes, as these processes are more realistic models for many practical applications. Consider the following non-local and non-symmetric Lévy type operator: $$\begin{aligned} \sL f(x):=\sL_\nu^\sigma f(x)+b(x)\cdot\nabla f(x),\quad\forall f\in C^\infty_0(\mR^d), \label{oper}\end{aligned}$$ where $b(x)$ is a measurable function and $$\begin{aligned} \sL^\sigma_{\nu} f(x):=\int_{\mR^d}\big[f(x+z)-f(x)-1_{\{|z|\leq 1\}}z\cdot\nabla f(x)\big]\sigma(x,z)\nu(\dif z).\end{aligned}$$ Here, $\nu$ is a Lévy measure on $\mR^d$ satisfying $$\int_{\mR^d\setminus \{0\}}(|z|^2\wedge 1)\nu(\dif z)<\infty,$$ and $\sigma:\mR^d\times\mR^d\rightarrow\mR$ is measurable. The operator $\sL$ is a non-local version of the classical second order elliptic operator in non-divergence form and has been intensely studied in the last decade by people in the community of analysis and PDEs, see [@E-I-K; @Ko] and the references therein. From the probability point of view, it is known via the martingale method (see [@M-P]) that under certain assumptions on $\nu, \sigma$ and $b$, there exists a Markov process $X_t$ with $\sL$ as its generator, and the measure $\nu$ describes the jumps of $X_t$. It is natural to ask whether one can construct $X_t$ via Itô's calculus so that we can have another look at $\sL$ from the viewpoint of stochastic differential equations (SDEs). However, the classical SDE driven by a pure jump Lévy process is not very suitable (see more discussions in [@Xie Section 1]). Its connection to SDEs was found only recently. To specify the SDE that we are going to study, let $\cN(\dif z,\dif r,\dif t)$ be a Poisson random measure on $\mR^d\times[0,\infty)\times[0,\infty)$ with intensity measure $\nu(\dif z)\dif r\dif t$, and $\tilde \cN(\dif z,\dif r,\dif t):=\cN(\dif z,\dif r,\dif t)-\nu(\dif z)\dif r\dif t$ is the compensated Poisson random measure. Then, the Markov process $X_t$ corresponding to $\sL$ should satisfy the following SDE: $$\begin{aligned} \dif X_t&=\int_0^{\infty}\!\!\!\!\int_{|z|\leq 1}\!\!1_{[0,\sigma(X_{s-},z)]}(r)z\tilde \cN(\dif z,\dif r,\dif t)\no\\ &\quad+\int_0^{\infty}\!\!\!\!\int_{|z|> 1}\!\!1_{[0,\sigma(X_{s-},z)]}(r)z \cN(\dif z,\dif r,\dif t)+b(X_t)\dif t, \ \ \ \ X_0=x\in\mR^d.
\label{sde2}\end{aligned}$$ In fact, noticing that for a function $f$ and any $r>0$, $$\begin{aligned} f\big(x+1_{[0,\sigma(x,z)]}(r)z\big)-f(x)=1_{[0,\sigma(x,z)]}(r)\big[f(x+z)-f(x)\big],\end{aligned}$$ an application of Itô’s formula shows that the generator of the solution to SDE (\[sde2\]) is given exactly by (\[oper\]). Note that the driven noise is a Markov process but not necessarily of Lévy type [@Kurz2]. This makes such kinds of SDEs more interesting and worthy of study. Under the conditions that $b$ is bounded and globally Lipschitz continuous, and $\sigma$ is bounded with $$\begin{aligned} \int_{\mR^d}|\sigma(x,z)-\sigma(y,z)|\cdot|z|\nu(\dif z)\leq C_1|x-y|, \ \ \ \ \forall \ \ x, y \in \mR^d, \label{kur}\end{aligned}$$ and some other assumptions, Kurtz [@Kurz2] showed the existence and uniqueness of a strong solution to SDE (\[sde2\]), see also [@Im-Wi; @Ku-Po] for related results and applications. Our aim in this paper is to prove that SDE (\[sde2\]) admits a unique strong solution under some weak assumptions on the coefficients $\sigma$ and $b$ as well as the jump measure $\nu$, from which we can see the regularization effect of such noises on deterministic systems. Irregular SDEs driven by pure jump noises have been extensively studied in the past several decades. Note that when $d=1$ and $L_t$ is a symmetric $\alpha$-stable process with $\alpha<1$, Tanaka, Tsuchiya and Watanabe [@Ta-Ts-Wa] showed that if $b$ is bounded and $\beta$-Hölder continuous with $\alpha+\beta<1$, SDE $$\begin{aligned} \dif X_t=\dif L_t+b(X_t)\dif t,\quad X_0=x\in\mR^d \label{levy}\end{aligned}$$ may not have a pathwise unique strong solution. When $\alpha\geq 1$ and $b$ is bounded and $\beta$-Hölder continuous with $\beta>1-\frac{\alpha}{2}$, it was proved by Priola [@Pri] that there exists a unique strong solution $X_t(x)$ to SDE (\[levy\]) for each $x\in\mR^d$. Recently, Zhang [@Zh00] obtained the pathwise uniqueness for SDE (\[levy\]) when $\alpha>1$ and $b$ is a bounded function in some local Sobolev space. See also [@B-B-C; @Ch-So-Zh; @Pri2] for related results. We also would like to mention the paper [@M-X] where SDEs driven by multiplicative Lévy noise with Lipschitz diffusion coefficient and Hölder drift were considered. For the study of irregular SDEs driven by Brownian motion, we refer readers to [@Fa-Lu-Th; @Fe-Fl-2; @Fl-Gu-Pr; @Kr-Ro; @M-N-P-Z; @W; @Wa; @XZ; @Zh3; @Zh1]. Let us compare our results with the literature above. To prove the uniqueness of the strong solution, we shall follow a well-known strategy [@Ch-So-Zh; @Kr-Ro; @Pri; @Zh1], which is to derive a new SDE with better coefficients by Zvonkin’s transformation and get the uniqueness of the original equation from the new one. The crucial point of this approach is to study the regularity of the transformation equations, which vary with different SDEs. There are several new aspects that we would like to stress for SDE (\[sde2\]), as follows. First of all, our main tool for studying the transformation equation (see below) is the heat kernel (also called the fundamental solution) of the operator $\sL_\nu^\sigma$; this seems to be the first time that heat kernel estimates are used to study the pathwise uniqueness of irregular SDEs, see [@C-K-K; @K-S] for the study of the weak uniqueness of SDEs with Lévy noise by using heat kernels.
Secondly, all the above works are for singular SDEs driven by Brownian motions or additive Lévy noises; in the latter case, one only needs to study the operator $\sL_0$ defined by $$\sL_0 f(x):=\int_{\mR^d}\Big[f(x+z)-f(x)-1_{\{|z|\leq 1\}}z\cdot\nabla f(x)\Big]\nu(\dif z),\quad\forall f\in C_0^\infty(\mR^d).$$ The analysis relies on the nice properties of $\sL_0$ and the $C^2$ smoothing effect of its semigroup. However, we study multiplicative noise, and the semigroup generated by $\sL_{\nu}^\sigma$ only has $C^{\alpha+\beta}$ regularity with $\alpha+\beta<2$ (see Remark \[ree\]); hence we need to use more delicate analysis and interpolation theorems to fit this weaker regularity into the framework of Zvonkin’s argument. We mention that in [@Xie], the first author considered the same SDE in the critical case $\alpha=1$ with $b$ in Hölder spaces; here we shall consider $\alpha\in(1,2)$ but with a more irregular drift term $b$ in fractional Sobolev spaces, and the proof in this paper is more involved. Lastly, when proving the Krylov-type estimate and performing the Zvonkin transformation, we need to solve a semi-linear elliptic equation and the resolvent equation of $\sL_\nu^\sigma$ in the framework of Sobolev spaces. Because a well-developed elliptic equation theory as in [@Kr-Ro; @Zh3; @Zh1] is not available for $\sL_\nu^\sigma$, we derive a generalized Itô formula for Hölder functions and solve the corresponding integral equation in Sobolev spaces. Another novelty in our analysis is the technique for handling the extra term $1_{[0,\sigma(X_{s-},z)]}(r)$ when we prove the pathwise uniqueness in the last section. The usual $L_2$-estimate in the known literature is not applicable. Fortunately, we can use a trick of mixing $L_1$- and $L_2$-estimates as a replacement [@Kurz2; @Ku-Po]. Due to the irregularity of $b$ and $\sigma$, applying this trick is much more involved than in [@Kurz2]. Finally, we mention that studying the unique strong solution of SDE (\[sde2\]) with irregular coefficients not only has its own interest but also helps to better understand the nonlocal operator $\sL$ ([@Im-Wi]). Another motivation for studying SDE (\[sde2\]) is the special noise. As mentioned above, the driven noise is a Markov process but not necessarily of Lévy type. This has been found very useful in applications; for instance, Markov type noise plays a crucial role as the control when proving Freidlin-Wentzell type large deviations for Lévy type SDEs via the weak convergence approach [@BDM11; @BCD13; @ZhZh15]. The organization of the paper is as follows. Section 2 gives the main result with some comments and comparisons with the known literature. Sections 3 and 4 are both preparation sections, the former for some estimates of the heat kernel of $\sL_\nu^\sigma$ and the latter for the regularity of the corresponding semigroup $\cT_t$. Krylov’s estimate and Zvonkin’s transformation are studied in the 5th section and applied in the last section to prove the strong uniqueness of SDE (\[sde2\]). Throughout this paper, we use the following convention: $C$ with or without subscripts will denote a positive constant, whose value may change in different places, and whose dependence on parameters can be traced from calculations. We gratefully thank Professors Rengming Song and Xicheng Zhang for very helpful discussions.
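Although the arguments below are purely analytic, it may help intuition to see how a solution of SDE (\[sde2\]) could be simulated in principle: the mark $r$ acts as a thinning variable, so a candidate jump $z$ at state $x$ is accepted exactly when $r\leq\sigma(x,z)$. The following is a minimal one-dimensional sketch under illustrative assumptions (a truncated symmetric $\alpha$-stable-like Lévy measure, toy choices of $\sigma$ and $b$, Euler stepping between jumps); by the symmetry assumption (\[s2\]) the compensator of the truncated small jumps vanishes. It is not part of the results of this paper.

```python
import math
import random

# Toy 1D model: nu(dz) = |z|^{-1-alpha} dz, sigma(x,z) in [k0, k1], Lipschitz drift b.
alpha, eps = 1.5, 0.01                            # stability index and small-jump cutoff
k1 = 2.0                                          # global upper bound for sigma
sigma = lambda x, z: 1.0 + 0.5 * math.sin(x)      # symmetric in z, so compensator of
b = lambda x: -x                                  # truncated small jumps is zero

def simulate(x0, T, dt=1e-3, rng=random.Random(0)):
    """Euler scheme between jumps plus thinning of candidate jumps.

    Candidate jumps with |z| > eps arrive with rate k1 * nu({|z| > eps}) and
    are accepted with probability sigma(x, z) / k1 (the mark r in the SDE)."""
    rate = k1 * 2 * eps ** (-alpha) / alpha       # k1 * integral_{|z|>eps} nu(dz)
    x, t = x0, 0.0
    next_jump = rng.expovariate(rate)
    while t < T:
        if next_jump <= t + dt:
            x += b(x) * (next_jump - t)           # drift up to the jump time
            u = 1.0 - rng.random()                # uniform in (0, 1]
            z = eps * u ** (-1.0 / alpha)         # |z| sampled from the tail of nu
            z *= rng.choice((-1.0, 1.0))          # symmetric sign
            if rng.uniform(0.0, k1) <= sigma(x, z):   # thinning: accept if r <= sigma(x,z)
                x += z
            t = next_jump
            next_jump = t + rng.expovariate(rate)
        else:
            x += b(x) * dt
            t += dt
    return x

print(simulate(x0=1.0, T=1.0))
```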
Main result
===========

We assume that for all $x\in\mR^d$, $$\begin{aligned} \sigma(x,z)=\sigma(x,-z),\quad\forall z\in\mR^d, \label{s2}\end{aligned}$$ and that there exists a function $\tilde\kappa$ such that $$\begin{aligned} \nu(\dif z)=\frac{\tilde\kappa(z)}{|z|^{d+\alpha}}\dif z, \quad \tilde\kappa(z)=\tilde\kappa(-z),\quad \kappa_0\leq \tilde\kappa(z)\leq \kappa_1,\label{nu}\end{aligned}$$ with $\alpha\in(1,2)$ and $\kappa_0, \kappa_1$ two positive constants. The symmetry in $z$ of $\sigma$ and $\tilde\kappa$ is a common assumption in the literature; see [@Ca-Si]. As a result, we can also write $\sL_\nu^\sigma$ as $$\begin{aligned} \sL^\kappa_{\alpha}\varphi(x)&=\text{p.v.}\int_{\mR^d}\big[\varphi(x+z)-\varphi(x)\big]\sigma(x,z)\nu(\dif z)\no\\ &=\text{p.v.}\int_{\mR^d}\big[\varphi(x+z)-\varphi(x)\big]\frac{\kappa(x,z)}{|z|^{d+\alpha}}\dif z,\label{op}\end{aligned}$$ where $$\begin{aligned} \kappa(x,z)=\sigma(x,z)\tilde\kappa(z). \label{kappa}\end{aligned}$$ The operator $\sL^\kappa_{\alpha}$ is a non-local and non-symmetric operator, which can be seen as a generalization of the fractional Laplacian with variable coefficients. For brevity, we set $B_n:=\{x\in\mR^d:|x|\leq n\}$. Our main result is: \[main\] Let (\[s2\]) hold and the Lévy measure $\nu$ satisfy (\[nu\]). Suppose that for any $n\in\mN$: 1. There exists a function $\zeta\in L^{q}(B_n)$ with $q>d/\alpha$, such that for almost all $x,y\in B_n$, $$\begin{aligned} \int_{\mR^d}|\sigma(x,z)-\sigma(y,z)|(|z|\wedge1)\nu(\dif z)\leq |x-y|\Big(\zeta(x)+\zeta(y)\Big),\label{a1}\end{aligned}$$ and for some constants $k^n_0, k^n_1>0$, $\beta\in(0,1)$ and $C_n>0$, $$\begin{aligned} k^n_0\leq \sigma(x,z)\leq k^n_1,\quad|\sigma(x,z)-\sigma(y,z)|\leq C_n|x-y|^{\beta},\quad \forall x,y\in B_n,\,\,\forall z\in\mR^d.\label{ho}\end{aligned}$$ 2. For some $\theta\in (1-\frac{\alpha}{2},1)$ and $p>2d/\alpha$, $$\begin{aligned} \int_{B_n}\!\int_{B_n}\frac{|b(x)-b(y)|^p}{|x-y|^{d+\theta p}}\dif x\dif y<+\infty,\end{aligned}$$ and it holds $$\begin{aligned} \sup_{x\in B_n}|b(x)|<\infty.\end{aligned}$$ Then, for each $x\in\mR^d$, there exists a stopping time $\varsigma(x)$ (called the explosion time) and a unique strong solution $X_t(x)$ to SDE (\[sde2\]) such that $$\begin{aligned} \lim_{t\uparrow\varsigma(x)}X_t(x)=\infty,\quad a.s.. \label{xt}\end{aligned}$$ Let us make some comments on the assumptions and give an example for our result, with a comparison to the known literature. (1). It is clear that the assumption (\[a1\]) is a generalization of condition (\[kur\]) in [@Kurz2]. For a very interesting example of $\sigma$, we can take $$\sigma(x,z)=K(z)+\tilde\sigma(x)|z|^{\gamma} \ \ {\rm for} \ \ |z| \le 1, \ \ \ \ \ \sigma(x,z)=K(z)+\tilde\sigma(x) \ \ {\rm for} \ \ |z|>1,$$ with $0<K_1\leq K(z)\leq K_2$, $\gamma>\alpha-1$ and $\nabla\tilde\sigma\in L^q_{loc}(\mR^d)$ with $q>d/\alpha$, where $\nabla$ denotes the weak derivative. Since we assume $\alpha>1$, our theorem can cover the regime $q \in (d/\alpha, d]$. However, for the following SDE driven by multiplicative Brownian motion [@Zh3]: $$\dif X_t=\sigma(X_t)\dif W_t+b(X_t)\dif t,\quad X_0=x\in\mR^d,$$ one has to assume that $\nabla\sigma\in L^q(\mR^d)$ with $q>d$. Here, the key point is that $\sigma$ appears in the term $1_{[0,\sigma(X_{s-},z)]}(r)$. When $\nabla\tilde\sigma\in L^q_{loc}(\mR^d)$ with $q>d$, we can also have the Hölder continuity in (\[ho\]) by the Sobolev embedding theorem. (2).
For interesting examples of an irregular drift coefficient $b$, we can take $b(x)=1_A(x)$ for a measurable set $A \subset \mR^d$, see [@Zh00 Remark 1.2] for details. (3). The conditions (\[s2\]), (\[nu\]) and (\[ho\]) are assumed so that we can use the results obtained in [@Ch-Zh]. Under (\[s2\]), (\[nu\]) and the global assumptions $$\begin{aligned} 0<\tilde k_0\leq \sigma(x,z)\leq \tilde k_1,\,\,\,|\sigma(x,z)-\sigma(y,z)|\leq C_0|x-y|^{\beta},\quad \forall x,y\in \mR^d,\,\,\forall z\in\mR^d, \label{s1}\end{aligned}$$ it was proved that there exists a unique fundamental solution $p(t,x,y)$ for $\sL^\kappa_\alpha$, see [@Ch-Zh Theorem 1.1]. Here, we only need the local boundedness and the local Hölder continuity of $\sigma$ in (\[ho\]) thanks to the stopping time technique. Furthermore, we shall prove better regularity properties of $p(t,x,y)$ (see Theorem \[seen\]) than those obtained in [@Ch-Zh], which seem to be new and of independent interest.

Heat kernel estimates
=====================

We briefly recall the construction of the heat kernel $p(t,x,y)$ for the operator $\sL^\kappa_{\alpha}$ in [@Ch-Zh], from which we derive some important estimates of $p(t,x,y)$ (see Theorem \[seen\] below) for further use in the next sections. From now on, we assume that (\[s2\]), (\[nu\]) and (\[s1\]) always hold. First of all, in view of (\[s2\]), (\[nu\]) and (\[op\]), we can also write $$\begin{aligned} \sL^\kappa_{\alpha} \varphi(x)=\frac{1}{2}\int_{\mR^d}\delta_\varphi(x,z)\frac{\kappa(x,z)}{|z|^{d+\alpha}}\dif z,\quad\forall \varphi\in C^\infty_0(\mR^d), \label{opp}\end{aligned}$$ where $$\delta_\varphi(x,z):=\varphi(x+z)+\varphi(x-z)-2\varphi(x).$$ In order to reflect the dependence of $\kappa$ on $x$, we shall also use $\sL_\alpha^{\kappa,x}$ instead of $\sL_\alpha^\kappa$. To shorten the notation, we set for $\gamma, \beta\in \mR$, $$\varrho_\gamma^\beta(t,x):=t^{\frac{\gamma}{\alpha}}\big(|x|^{\beta}\wedge 1\big)\big(|x|+t^{1/\alpha}\big)^{-d-\alpha}.$$ The following 3-P type inequalities shall be used below from time to time. (i). If $\gamma_1+\beta_1>0$ and $\gamma_2+\beta_2>0$, then there exists a constant $C_1>0$ such that for all $t\geq 0$ and $x,y\in\mR^d$, $$\begin{aligned} &\int_0^t\!\!\!\int_{\mR^d}\varrho_{\gamma_1}^{\beta_1}(t-s,x-z)\varrho_{\gamma_2}^{\beta_2}(s,z-y)\dif z\dif s\no\\ &\leq C_1\Big(\varrho^{0}_{\gamma_1+\gamma_2+\beta_1+\beta_2}+\varrho_{\gamma_1+\gamma_2+\beta_2}^{\beta_1}+ \varrho^{\beta_2}_{\gamma_1+\gamma_2+\beta_1}\Big)(t,x-y). \label{3p}\end{aligned}$$ (ii). For all $\beta_1,\beta_2\in[0,\alpha]$ and $\gamma_1,\gamma_2\in\mR$, there exists a constant $C_2>0$ such that for any $t\geq 0$ and $x\in\mR^d$, $$\begin{aligned} &\int_{\mR^d}\varrho_{\gamma_1}^{\beta_1}(t,x-z)\varrho_{\gamma_2}^{\beta_2}(t,z)\dif z\no\\ &\leq C_2\Big(\varrho^{0}_{\gamma_1+\gamma_2+\beta_1+\beta_2-\alpha}+\varrho_{\gamma_1+\gamma_2+\beta_2-\alpha}^{\beta_1}+ \varrho^{\beta_2}_{\gamma_1+\gamma_2+\beta_1-\alpha}\Big)(t,x). \label{3p1}\end{aligned}$$ The first inequality is given by [@Ch-Zh Lemma 2.1 (iii)], while the second one can be proved entirely by the same arguments as [@Ch-Zh Lemma 2.1 (ii)]; the details are omitted. Let $p_{\alpha}(t,x)$ denote the heat kernel of the operator $\Delta^{\frac{\alpha}{2}}$ (or equivalently, the transition density of a $d$-dimensional symmetric $\alpha$-stable process).
It is well known that there exists a constant $C_0$ such that $$\begin{aligned} C_0^{-1}\varrho_\alpha^0(t,x)\leq p_{\alpha}(t,x)\leq C_0\varrho_\alpha^0(t,x), \label{k0}\end{aligned}$$ and for every $k\in\mN$, it holds for some $C_k$ that $$\begin{aligned} |\nabla^k p_{\alpha}(t,x)|\leq C_k\varrho_{\alpha-k}^0(t,x). \label{kk}\end{aligned}$$ Set for $z\in\mR^d$, $$\begin{aligned} \delta_{p_\alpha}(t,x;z):=p_\alpha(t,x+z)+p_\alpha(t,x-z)-2p_\alpha(t,x).\end{aligned}$$ It was shown by [@Ch-Zh Lemma 2.2] that that there exists a constant $C>0$ such that $$\begin{aligned} |\delta_{p_\alpha}(t,x;z)|\leq C\Big((t^{-\frac{2}{\alpha}}|z|^2)\wedge 1\Big)\Big(\varrho^0_\alpha(t,x\pm z)+\varrho^0_\alpha(t,x)\Big).\label{de}\end{aligned}$$ With this estimate in hand and following the same ideas as in the proof of [@Ch-Zh Theorem 2.4], we can derive the fractional derivative estimate of $p_{\alpha}(t,x)$. For completeness, we sketch the details here. \[fe\] For any $0<\gamma<2$, there exists a constant $C_\gamma$ such that $$\begin{aligned} |\Delta^{\frac{\gamma}{2}}p_{\alpha}(t,x)|\leq C_\gamma\varrho^0_{\alpha-\gamma}(t,x). \label{e1}\end{aligned}$$ We may assume that $t\leq 1$, since the general case follows by the Chapman-Kolmogorov equation. By the definition of fractional Laplacian and as (\[opp\]), we can write for $0<\gamma<2$, $$\begin{aligned} \Delta^{\frac{\gamma}{2}}p_{\alpha}(t,x)=\frac{c_{d,\gamma}}{2}\int_{\mR^d}\delta_{p_\alpha}(t,x;z)\frac{1}{|z|^{d+\gamma}}\dif z.\label{22}\end{aligned}$$ Consequently, we have by (\[de\]) $$\begin{aligned} |\Delta^{\frac{\gamma}{2}}p_\alpha(t,x)|&\leq C_{d,\gamma}\int_{\mR^d}|\delta_{p_\alpha}(t,x;z)|\cdot|z|^{-d-\gamma}\dif z\\ &\leq C_1\varrho^0_\alpha(t,x)\int_{\mR^d}\Big((t^{-\frac{2}{\alpha}}|z|^2)\wedge 1\Big)|z|^{-d-\gamma}\dif z\\ &\quad+C_1\!\!\int_{\mR^d}\Big((t^{-\frac{2}{\alpha}}|z|^2)\wedge 1\Big)\varrho^0_\alpha(t,x\pm z)|z|^{-d-\gamma}\dif z=:I_1+I_2.\end{aligned}$$ For $I_1$, by the assumption that $\gamma<2$, one can check easily that $$\begin{aligned} I_1&=C_1\varrho^0_{\alpha-2}(t,x)\!\int_{|z|\leq t^{1/\alpha}}|z|^{2-d-\gamma}\dif z+C_1\varrho^0_{\alpha}(t,x)\!\int_{|z|> t^{1/\alpha}}|z|^{-d-\gamma}\dif z\leq C_2\varrho^0_{\alpha-\gamma}(t,x).\end{aligned}$$ As for the second term, similarly we write $$\begin{aligned} I_2&=C_1t^{-\frac{2}{\alpha}}\!\!\int_{|z|\leq t^{1/\alpha}}\varrho^0_\alpha(t,x\pm z)|z|^{2-d-\gamma}\dif z+C_1\!\!\int_{|z|> t^{1/\alpha}}\varrho^0_\alpha(t,x\pm z)|z|^{-d-\gamma}\dif z=:I_{21}+I_{22}.\end{aligned}$$ We further control $I_{21}$ by $$\begin{aligned} I_{21}&\leq C_3\varrho^0_{\alpha-2}(t,x)\!\!\int_{|z|\leq t^{1/\alpha}}|z|^{2-d-\gamma}\dif z\leq C_4\varrho^0_{\alpha-\gamma}(t,x).\end{aligned}$$ For $I_{22}$, if $|x|\leq 2t^{1/\alpha}$, then $$\begin{aligned} I_{22}&\leq C_3t^{-\frac{d}{\alpha}}\!\int_{|z|> t^{1/\alpha}}|z|^{-d-\gamma}\dif z\leq C_3t^{-\frac{d+\gamma}{\alpha}}\leq C_4\varrho^0_{\alpha-\gamma}(t,x).\end{aligned}$$ If $|x|> 2t^{1/\alpha}$, we can deduce that $$\begin{aligned} I_{22}&\leq C_1\left(\int_{\frac{|x|}{2}>|z|> t^{1/\alpha}}+\int_{|z|> \frac{|x|}{2}}\right)\varrho^0_\alpha(t,x\pm z)|z|^{-d-\gamma}\dif z\\ &\leq C_2t\int_{\frac{|x|}{2}>|z|> t^{1/\alpha}}\big(|x\pm z|+t^{1/\alpha}\big)^{-d-\alpha}|z|^{-d-\gamma}\dif z+C_2|x|^{-d-\gamma}\int_{|z|> \frac{|x|}{2}} \varrho^0_\alpha(t,x\pm z)\dif z\\ &\leq C_3\varrho^0_{\alpha}(t,x)\int_{|z|> t^{1/\alpha}}|z|^{-d-\gamma}\dif z+C_3|x|^{-d-\gamma}\leq C_4\varrho^0_{\alpha-\gamma}(t,x).\end{aligned}$$ Combing the above computations, we 
get (\[e1\]). Now we fix $y\in\mR^d$ and consider the freezing operator $$\sL_{\alpha}^{\kappa,y}f(x):=\text{p.v.}\int_{\mR^d}[f(x+z)-f(x)]\frac{\kappa(y,z)}{|z|^{d+\alpha}}\dif z,$$ where $\kappa$ is given by (\[kappa\]). It is known that there exists a symmetric $\alpha$-stable-like process corresponding to $\sL_{\alpha}^{\kappa,y}$. Let $p_y(t,x)$ be the heat kernel of operator $\sL_{\alpha}^{\kappa,y}$. Since $\kappa$ is uniformly bounded, it follows from [@Ch-Ku Theorem 1.1] that for some constant $C_0$ independent of $y$, $$\begin{aligned} C_0^{-1}\varrho_\alpha^0(t,x)\leq p_{y}(t,x)\leq C_0\varrho_\alpha^0(t,x). \label{p0}\end{aligned}$$ Moreover, if we set $$\hat\kappa(y,z):=\kappa(y,z)-\frac{\tilde k_0\kappa_0}{2},$$ where $\tilde k_0, \kappa_0$ are the constants in (\[s1\]) and (\[nu\]), respectively, and let $\hat p_y(t,x)$ be the heat kernel of operator $\sL_{\alpha}^{\hat\kappa,y}$, then by the construction of Lévy processes, we can write $$\begin{aligned} p_y(t,x)=\int_{\mR^d}p_{\alpha}(\tfrac{\tilde k_0\kappa_0}{2}t,x-z)\hat p_y(t,z)\dif z, \label{p00}\end{aligned}$$ see also [@Ch-Zh (2.23)]. The advantage of (\[p00\]) is that we can derive certain estimates for $p_y(t,x)$ by using properties of $p_\alpha(t,x)$. As an easy result, we have the following fractional derivative estimate of $p_y(t,x)$ and the Hölder continuity of $\nabla p_y(t,x)$. Here and below, both operators $\Delta^{\frac{\gamma}{2}}$ and $\nabla$ act with respect to the variable $x$. For any $0<\gamma<2$, it holds that $$\begin{aligned} |\Delta^{\frac{\gamma}{2}}p_{y}(t,x)|\leq C_\gamma\varrho^0_{\alpha-\gamma}(t,x), \label{e2}\end{aligned}$$ and for any $\vartheta\in(0,1)$, $t>0$ and all $x,x',y\in\mR^d$, $$\begin{aligned} |\nabla p_y(t,x)-\nabla p_y(t,x')|\leq C_\vartheta|x-x'|^\vartheta\varrho_{\alpha-1-\vartheta}^0(t,\tilde x), \label{p0h}\end{aligned}$$ where $C_\gamma, C_\vartheta$ are positive constants independent of $y$, and $\tilde x$ is the one of the two points $x$ and $x'$ which is nearer to the origin. It is enough to prove the estimates with $t\in(0,1)$. The first assertion can be verified by using Fubini's theorem, (\[k0\]), (\[e1\]), (\[p0\]), (\[p00\]), (\[3p1\]) and easy computations. As for the second inequality, without loss of generality, we may assume that $|x|\leq |x'|$. In view of (\[kk\]), we know that when $|x-x'|\geq t^{\frac{1}{\alpha}}/2$, $$\begin{aligned} |\nabla p_\alpha(t,x)-\nabla p_\alpha(t,x')|&\leq C_1|x-x'|^\vartheta\Big(\varrho_{\alpha-1-\vartheta}^0(t,x)+\varrho_{\alpha-1-\vartheta}^0(t,x')\Big)\\ &\leq C_1|x-x'|^\vartheta\varrho_{\alpha-1-\vartheta}^0(t,x).\end{aligned}$$ While for $|x-x'|< t^{\frac{1}{\alpha}}/2$, we have by the mean value theorem that for some $\varepsilon\in[0,1]$, $$\begin{aligned} |\nabla p_\alpha(t,x)-\nabla p_\alpha(t,x')|&\leq C_2|x-x'|\varrho_{\alpha-2}^0\big(t,x+\varepsilon(x'-x)\big)\\ &\leq C_2|x-x'|\varrho_{\alpha-2}^0(t,x)\leq C_2|x-x'|^\vartheta\varrho_{\alpha-1-\vartheta}^0(t,x).\end{aligned}$$ The desired estimate (\[p0h\]) then follows by (\[p00\]), (\[p0\]) and (\[3p1\]). Let $\beta$ be the Hölder index in (\[s1\]). Below, we always suppose that $\alpha+\beta<2$. This is assumed just to simplify the proof and is in fact not a restriction at all. Indeed, since we also assumed that $\sigma$ is bounded, (\[s1\]) still holds true for any $\beta'<\beta$. Hence, it is enough to study the pathwise uniqueness of SDE (\[sde2\]) when $\beta<2-\alpha$. We show the following estimate.
\[as\] Under (\[s1\]), we have for $\gamma\in(0,2-\alpha)$ and all $x\in\mR^d$, $$\begin{aligned} \bigg|\int_{\mR^d}\Delta^{\frac{\alpha+\gamma}{2}}p_y(t,x-y)\dif y\bigg|\leq C_{d,\alpha,\gamma}t^{\frac{\beta-\gamma}{\alpha}-1},\label{00}\end{aligned}$$ and for any $\vartheta\in(0,1)$ and $x, x'\in\mR^d$, $$\begin{aligned} \bigg|\int_{\mR^d}\Big[\nabla p_y(t,x-y)-\nabla p_y(t,x'-y)\Big]\dif y\bigg|\leq C_{d,\vartheta}|x-x'|^\vartheta t^{\frac{\beta-\vartheta-1}{\alpha}},\label{000}\end{aligned}$$ where $C_{d,\alpha,\gamma}, C_{d,\vartheta}$ are positive constants. Since $\hat p_y(t,x)$ is a density function of Markov process, we have for any $\xi\in\mR^d$, $$\int_{\mR^d}\hat p_\xi(t,x-y)\dif y=1.$$ Combing this with (\[22\]), (\[p00\]) and using the Fubini’s theorem, it is easily checked that $$\begin{aligned} \int_{\mR^d}\!\!\int_{\mR^d}\delta_{p_\xi}(t,x-y;z)\frac{c_{d,\alpha,\gamma}}{|z|^{d+\alpha+\gamma}}\dif z\dif y&=\int_{\mR^d}\Delta^{\frac{\alpha+\gamma}{2}}p_\xi(t,x-y)\dif y\\ &=\int_{\mR^d}\!\!\int_{\mR^d}\Delta^{\frac{\alpha+\gamma}{2}}p_\alpha(\tfrac{\tilde k_0\kappa_0}{2}t,x-y-z)\hat p_{\xi}(t,z)\dif z\dif y\\ &\!\!\!\stackrel{y+z=\tilde z}{=}\int_{\mR^d}\Delta^{\frac{\alpha+\gamma}{2}}p_\alpha(\tfrac{\tilde k_0\kappa_0}{2}t,x-z)\dif z=0,\quad \forall \xi\in\mR^d.\end{aligned}$$ As a result, we can write $$\begin{aligned} \sE_1:=\int_{\mR^d}\Delta^{\frac{\alpha+\gamma}{2}}p_y(t,x-y)\dif y&=\int_{\mR^d}\!\!\int_{\mR^d}\delta_{p_y}(t,x-y;z)\frac{c_{d,\alpha,\gamma}}{|z|^{d+\alpha+\gamma}}\dif z\dif y\no\\ &=\int_{\mR^d}\!\!\int_{\mR^d}\big[\delta_{p_y}(t,x-y;z)-\delta_{p_\xi}(t,x-y;z)\big]\Big|_{\xi=x}\frac{c_{d,\alpha,\gamma}}{|z|^{d+\alpha+\gamma}}\dif z\dif y.\label{ii}\end{aligned}$$ By the proof of [@Ch-Zh Theorem 2.5], we know that for any $0<\gamma'<\alpha$, there exists a $C_{\gamma'}$ such that $$\begin{aligned} \big[\delta_{p_y}(t,x-y;z)-\delta_{p_\xi}(t,x-y;z)\big]\Big|_{\xi=x}&\!\leq C_{\gamma'}\big(|x-y|^\beta\wedge 1\big)\Big((t^{-\frac{2}{\alpha}}|z|^2)\wedge 1\Big)\\ &\times\Big(\big(\varrho^0_\alpha+\varrho^{\gamma'}_{\alpha-\gamma'}\big)(t,x-y\pm z)+\big(\varrho^0_\alpha+\varrho^{\gamma'}_{\alpha-\gamma'}\big)(t,x-y)\Big).\end{aligned}$$ Taking this into (\[ii\]), choosing $\gamma'$ such that $\alpha+\gamma+\gamma'<2$ and arguing entirely the same as in the proof of Lemma \[fe\], we find that $$\begin{aligned} \sE_1\leq C_{d,\alpha,\gamma}\!\int_{\mR^d}\varrho^{\beta}_{-\gamma}(t,x-y)\dif y\leq C_{d,\alpha,\gamma}t^{\frac{\beta-\gamma}{\alpha}-1}.\end{aligned}$$ Hence, (\[00\]) is true. We proceed to prove (\[000\]). Using (\[p00\]) again, we write $$\begin{aligned} \nabla p_y(t,x-y)-\nabla p_{y}(t,x'-y)=\int_{\mR^d}\sK_{\nabla p_{\alpha}}(t;x,x';y,z)\hat p_y(t,z)\dif z,\end{aligned}$$ where $$\sK_{\nabla p_{\alpha}}(t;x,x';y,z):=\nabla p_{\alpha}\big(\tfrac{\tilde k_0\kappa_0}{2}t,x-y-z\big)-\nabla p_{\alpha}\big(\tfrac{\tilde k_0\kappa_0}{2}t,x'-y-z\big).$$ Let $\tilde x$ be the one of the two points $x$ and $x'$ which is nearer to $y+z$. 
Then, we know from the proof of (\[p0h\]) that for any $\vartheta\in(0,1)$, $$|\sK_{\nabla p_{\alpha}}(t;x,x';y,z)|\leq C_\vartheta|x-x'|^\vartheta\varrho_{\alpha-1-\vartheta}^0\big(t,\tilde x-y-z\big).$$ We may argue as in (\[ii\]) to deduce that $$\begin{aligned} \sE_2&:=\int_{\mR^d}\Big[\nabla p_y(t,x-y)-\nabla p_y(t,x'-y)\Big]\dif y\\ &=\int_{\mR^d}\!\int_{\mR^d}\sK_{\nabla p_{\alpha}}(t;x,x';y,z)\big[\hat p_y(t,z)-\hat p_{\xi}(t,z)\big]\Big|_{\xi=\tilde x}\dif z\dif y.\end{aligned}$$ Thanks to [@Ch-Zh Theorem 2.5], we know that for any $0<\gamma'<\alpha$, there exists a $C_{\gamma'}$ such that $$\begin{aligned} \big[\hat p_y(t,z)-\hat p_{\xi}(t,z)\big]\Big|_{\xi_=\tilde x}\leq C_{\gamma'}\big(|\tilde x-y|^{\beta}\wedge1\big)\Big(\varrho^{0}_{\alpha}+\varrho^{\gamma'}_{\alpha-\gamma'}\Big)(t,z),\end{aligned}$$ which yields by (\[3p1\]) that $$\begin{aligned} \sE_2&\leq C_\vartheta|x-x'|^\vartheta\int_{\mR^d}\!\int_{\mR^d}\varrho^0_{\alpha-1-\vartheta}(t,\tilde x-y-z)\Big(\varrho^{0}_{\alpha}+\varrho^{\gamma'}_{\alpha-\gamma'}\Big)(t,z)\dif z\cdot\big(|\tilde x-y|^{\beta}\wedge1\big)\dif y\\ &\leq C_\vartheta|x-x'|^\vartheta\int_{\mR^d}\Big(\varrho^{\beta}_{\alpha-1-\vartheta}+\varrho^{\gamma'+\beta}_{\alpha-1-\vartheta-\gamma'}\Big)\big(t,\tilde x-y\big)\dif y\leq C_\vartheta|x-x'|^\vartheta t^{\frac{\beta-\vartheta-1}{\alpha}}.\end{aligned}$$ The proof is finished. Now, the Levi’s parametrix method suggests that the fundamental solution $p(t,x,y)$ of $\sL^{\kappa,x}_{\alpha}$ should be of the form $$\begin{aligned} p(t,x,y)=p_0(t,x,y)+\int_0^t\!\!\!\int_{\mR^d}p_0(t-s,x,z)q(s,z,y)\dif z\dif s, \label{heat}\end{aligned}$$ where $p_0(t,x,y):=p_y(t,x-y)$ and $q(t,x,y)$ satisfies the integral equation $$q(t,x,y)=q_0(t,x,y)+\int_0^t\!\!\!\int_{\mR^d}q_0(t-s,x,z)q(s,z,y)\dif z\dif s$$ with $$q_0(t,x,y):=\big(\sL^{\kappa,x}_{\alpha}-\sL_{\alpha}^{\kappa,y}\big)p_0(t,x,y).$$ The following lemma collects some estimates that we shall use below, whose proof can be found in [@Ch-Zh]. The following statements hold: 1. ([@Ch-Zh Theorem 3.1]) There exist constants $C_1, C_2$ such that for all $t\geq 0$ and $x,y\in\mR^d$, $$\begin{aligned} |q(t,x,y)|\leq C_1\big(\varrho_0^{\beta}+\varrho_\beta^0\big)(t,x-y), \label{q1}\end{aligned}$$ and for any $\gamma<\beta$, $t\geq 0$ and every $x,x',y\in\mR^d$, $$\begin{aligned} |q(t,x,y)-q(t,x',y)|\leq C_2\Big(|x-x'|^{\beta-\gamma}\wedge 1\Big)\Big(\big(\varrho^0_{\gamma}&+\varrho^{\beta}_{\gamma-\beta}\big)(t,x-y)\no\\ &\quad+\big(\varrho^0_{\gamma}+\varrho^{\beta}_{\gamma-\beta}\big)(t,x'-y)\Big), \label{q2}\end{aligned}$$ where $\beta$ is the constant in (\[ho\]). 2. ([@Ch-Zh Theorem 1.1]) It hold for all $t>0$ and $x\in\mR^d$, $$\begin{aligned} \int_{\mR^d}p(t,x,y)\dif y=1, \label{11}\end{aligned}$$ and there exists a constant $C_3$ such that $$\begin{aligned} |\nabla p(t,x,y)|\leq C_3\varrho^0_{\alpha-1}(t,x-y). \label{na}\end{aligned}$$ In [@Ch-Zh], it was also shown that for a constant $C>0$, $$|\Delta^{\frac{\alpha}{2}}p(t,x,y)|\leq C\varrho^0_{0}(t,x-y),$$ in which the main point is to handle the singularity caused by the integral with respect to $s$. To study the strong solution of equation , we need to prove more delicate estimates and (\[ph\]) as below, the proof is quite involved. \[seen\] Suppose (\[s1\]) holds true. Then, there exist constants $C_{d,\alpha,\gamma}, C_\vartheta>0$ such that for any $0\leq \gamma<\beta$, $$\begin{aligned} |\Delta^{\frac{\alpha+\gamma}{2}}p(t,x,y)|\leq C_{d,\alpha,\gamma}\varrho^0_{-\gamma}(t,x-y). 
\label{es}\end{aligned}$$ and for any $\vartheta\in(0,\alpha+\beta-1)$, $t>0$ and all $x,x',y\in\mR^d$, $$\begin{aligned} |\nabla p(t,x,y)-\nabla p(t,x',y)|\leq C_\vartheta|x-x'|^\vartheta\varrho_{\alpha-1-\vartheta}^0(t,\tilde x-y), \label{ph}\end{aligned}$$ where $\tilde x$ is the one of the two points $x$ and $x'$ which is nearer to $y$. Still, we only consider the case when $t\leq 1$. For brevity, we set $$\sS(t,x,y):=\int_0^t\!\!\!\int_{\mR^d}p_0(t-s,x,z)q(s,z,y)\dif z\dif s.$$ By Fubini’s theorem, we can write $$\begin{aligned} \big|\Delta^{\frac{\alpha+\gamma}{2}}\sS(t,x,y)\big|&\leq \Bigg|\!\int_{\frac{t}{2}}^t\!\!\!\int_{\mR^d}\Delta^{\frac{\alpha+\gamma}{2}}p_0(t-s,x,z)\Big(q(s,z,y)-q(s,x,y)\Big)\dif z\dif s\Bigg|\\ &\quad+\Bigg|\!\int_{\frac{t}{2}}^t\!\!\!\int_{\mR^d}\Delta^{\frac{\alpha+\gamma}{2}}p_0(t-s,x,z)\dif zq(s,x,y)\dif s\Bigg|\\ &\quad+\Bigg|\!\int^{\frac{t}{2}}_0\!\!\!\int_{\mR^d}\Delta^{\frac{\alpha+\gamma}{2}}p_0(t-s,x,z)q(s,z,y)\dif z\dif s\Bigg|\\ &=:\sC_1(t,x,y)+\sC_2(t,x,y)+\sC_3(t,x,y).\end{aligned}$$ For $\gamma<\beta$, we choose a $\gamma'>0$ such that $\gamma+\gamma'<\beta$, and by (\[e2\]), (\[q2\]) and (\[3p\]), we have $$\begin{aligned} \sC_1(t,x,y)&\leq C_1\!\!\int_{\frac{t}{2}}^t\!\!\!\int_{\mR^d}\varrho^{\beta-\gamma'}_{-\gamma}(t-s,x-z)\Big(\varrho^0_{\gamma'}+\varrho^{\beta}_{\gamma'-\beta}\Big)(s,z-y)\dif z\dif s\\ &\quad+C_1\!\!\int_{\frac{t}{2}}^t\!\!\!\int_{\mR^d}\varrho^{\beta-\gamma'}_{-\gamma}(t-s,x-z)\dif z\Big(\varrho^0_{\gamma'}+\varrho^{\beta}_{\gamma'-\beta}\Big)(s,x-y)\dif s\\ &\leq C_1\!\!\int_{0}^t\!\!\!\int_{\mR^d}\varrho^{\beta-\gamma'}_{-\gamma}(t-s,x-z)\Big(\varrho^0_{\gamma'}+\varrho^{\beta}_{\gamma'-\beta}\Big)(s,z-y)\dif z\dif s\\ &\quad+C_2\!\!\int_{\frac{t}{2}}^t(t-s)^{\frac{\beta-\gamma-\gamma'}{\alpha}-1}\Big(\varrho^0_{\gamma'}+\varrho^{\beta}_{\gamma'-\beta}\Big)(s,x-y)\dif s\\ &\leq C_3\Big(\varrho^{0}_{\beta-\gamma}+\varrho^{\beta-\gamma'}_{\gamma'-\gamma}+\varrho^{\beta}_{-\gamma}\Big)(t,x-y)\\ &\quad+C_3\!\!\int_{\frac{t}{2}}^t(t-s)^{\frac{\beta-\gamma-\gamma'}{\alpha}-1}\Big(\varrho^0_{\gamma'}+\varrho^{\beta}_{\gamma'-\beta}\Big)(s,x-y)\dif s\leq C_4\varrho^0_{-\gamma}(t,x-y).\end{aligned}$$ Thanks to (\[00\]) and taken into account of (\[q1\]), it holds $$\begin{aligned} \sC_2(t,x,y)\leq C_1\!\!\int^t_{\frac{t}{2}}(t-s)^{\frac{\beta-\gamma}{\alpha}-1}\Big(\varrho_0^{\beta}+\varrho_\beta^0\Big)(s,x-y)\dif s\leq C_2\varrho^0_{-\gamma}(t,x-y).\end{aligned}$$ Finally, we have by (\[e2\]), (\[q1\]) and (\[3p\]) that for any $\gamma{'\!'}>0$, $$\begin{aligned} \sC_3(t,x,y)&\leq C_1\!\!\int^{\frac{t}{2}}_0\!\!\!\int_{\mR^d}\varrho^0_{-\gamma}(t-s,x-z)\Big(\varrho_0^{\beta}+\varrho_\beta^0\Big)(s,z-y)\dif z\dif s\\ &\leq C_2t^{-\frac{\gamma+\gamma{'\!'}}{\alpha}}\!\int^{t}_0\!\!\!\int_{\mR^d}\varrho^0_{\gamma{'\!'}}(t-s,x-z)\Big(\varrho_0^{\beta}+\varrho_\beta^0\Big)(s,z-y)\dif z\dif s\leq C_3\varrho^0_{-\gamma}(t,x-y).\end{aligned}$$ Based on the above estimates, we thus get (\[es\]) by (\[e2\]) and (\[heat\]). Next, we proceed to prove (\[ph\]). 
As in the proof of Lemma \[as\], we set $$\sK_{\nabla p_0}(t;x,x';y):=\nabla p_0(t,x,y)-\nabla p_0(t,x',y).$$ Then, estimate (\[p0h\]) yields that for any $\vartheta\in(0,1)$, $$|\sK_{\nabla p_0}(t;x,x';y)|\leq C_\vartheta|x-x'|^\vartheta\varrho_{\alpha-1-\vartheta}^0(t,\tilde x-y).$$ As above, we can write $$\begin{aligned} \big|\nabla\sS(t,x,y)-\nabla\sS(t,x',y)\big|&\leq \Bigg|\int_{\frac{t}{2}}^t\!\!\!\int_{\mR^d}\sK_{\nabla p_0}(t-s;x,x';z)\Big(q(s,z,y)-q(s,\tilde x,y)\Big)\dif z\dif s\Bigg|\\ &\quad+\Bigg|\int_{\frac{t}{2}}^t\!\!\!\int_{\mR^d}\sK_{\nabla p_0}(t-s;x,x';z)\dif zq(s,\tilde x,y)\dif s\Bigg|\\ &\quad+\Bigg|\!\int^{\frac{t}{2}}_0\!\!\!\int_{\mR^d}\sK_{\nabla p_0}(t-s;x,x';z)q(s,z,y)\dif z\dif s\Bigg|\\ &=:\sD_1(t,x,x',y)+\sD_2(t,x,x',y)+\sD_3(t,x,x',y).\end{aligned}$$ For $\vartheta<\alpha+\beta-1$, choose a $\gamma'$ such that $\vartheta+\gamma'<\alpha+\beta-1$, we follow the same procedure as in the estimate of $\sC_1(t,x,y)$ to derive that $$\begin{aligned} \sD_1(t,x,x',y)&\leq C_\vartheta|x-x'|^\vartheta\!\!\int_{\frac{t}{2}}^t\!\!\!\int_{\mR^d}\varrho^{\beta-\gamma'}_{\alpha-1-\vartheta}(t-s,\tilde x-z)\Big(\varrho^0_{\gamma'}+\varrho^{\beta}_{\gamma'-\beta}\Big)(s,z-y)\dif z\dif s\\ &\quad+C_\vartheta|x-x'|^\vartheta\!\!\int_{\frac{t}{2}}^t\!\!\!\int_{\mR^d}\varrho^{\beta-\gamma'}_{\alpha-1-\vartheta}(t-s,\tilde x-z)\dif z\Big(\varrho^0_{\gamma'}+\varrho^{\beta}_{\gamma'-\beta}\Big)(s,x-y)\dif s\\ &\leq C_\vartheta|x-x'|^\vartheta\varrho_{\alpha-1-\vartheta}^0(t,\tilde x-y).\end{aligned}$$ As for $\sD_2(t,x,x',y)$ and $\sD_3(t,x,x',y)$, we may use (\[000\]) and (\[p0h\]) respectively, and argue the same way as estimating $\sC_2(t,x,y)$ and $\sC_3(t,x,y)$ to get that $$\sD_2(t,x,x',y)+\sD_3(t,x,x',y)\leq C_\vartheta|x-x'|^\vartheta\varrho_{\alpha-1-\vartheta}^0(t,\tilde x-y),$$ which in turn yields (\[ph\]). Smoothing properties of the semigroup ===================================== Let $\cT_t$ be the semigroup corresponding to $\sL^\kappa_{\alpha}$, that is, $$\cT_tf(x):=\int_{\mR^d}p(t,x,y)f(y)\dif y,\quad\forall f\in\cB_b(\mR^d).$$ We shall use heat kernel estimates obtained in the last section to derive some space regularities of $\cT_t$, which has its own independent interest and will be used to study Krylov-type estimate and Zvonkin’s transformation in the next section. Let us first introduce some notations. Let $p\geq1$ and $\|\cdot\|_p$ denote the norm in $L^p(\mR^d)$. For $0<\gamma<2$, define the Bessel potential space $\mH^{\gamma}_p:=\mH^{\gamma}_p(\mR^d)$ by $$\mH^{\gamma}_p(\mR^d):=\big\{f\in L^p(\mR^d): \Delta^{\frac{\gamma}{2}}f\in L^p(\mR^d)\big\}$$ with norm $$\|f\|_{\gamma,p}:=\|f\|_p+\|\Delta^{\frac{\gamma}{2}}f\|_p.$$ In fact, this space can also be defined to be the complete space of $C^{\infty}_0(\mR^d)$ under the norm $$\|\sF^{-1}\big((1+|\cdot|^{\gamma})(\sF f)\big)\|_p<\infty,\quad \forall f\in C^{\infty}_0(\mR^d),$$ where $\sF$ (resp. $\sF^{-1}$) denotes the Fourier transform (resp. the Fourier inverse transform). By Sobolev’s embedding theorem, if $\gamma-\frac{d}{p}>0$ is not an integer, then ([@Tri p. 
206, (16)]) $$\begin{aligned} \mH^{\gamma}_p\hookrightarrow C^{\gamma-\frac{d}{p}}_b(\mR^d),\label{emb}\end{aligned}$$ where for some $\gamma>0$, $C^{\gamma}_b(\mR^d)$ is the usual Hölder space with norm $$\|f\|_{C^{\gamma}_b}:=\sum_{i=1}^{[\gamma]}\|\nabla^i f(x)\|_\infty+\big[\nabla^{[\gamma]}f\big]_{\gamma-[\gamma]},$$ here, $[\gamma]$ denotes the integer part of $\gamma$, and for a function $f$ on $\mR^d$ and $\vartheta\in(0,1)$, $$\begin{aligned} [f]_{\vartheta}:=\sup_{x\neq y}\frac{|f(x)-f(y)|}{|x-y|^{\vartheta}}. \label{hol}\end{aligned}$$ Noticing that for $n\in \mN$, $\mH^{n}_p$ is just the usual Sobolev space with equivalent norm ([@St p. 135, Theorem 3]) $$\|f\|_{n,p}=\|f\|_p+\|\nabla^nf\|_{p},$$ here and below, $\nabla$ denotes the weak derivative of $f$. While for $0<\gamma\neq$ integer, the fractional Sobolev space $\mW^{\gamma}_p$ is defined by $$\begin{aligned} \|f\|_{\mW^{\gamma}_p}:=\|f\|_p+\sum_{k=0}^{[\gamma]}\Bigg(\int_{\mR^d}\!\!\int_{\mR^d}\frac{|\nabla^kf(x)-\nabla^kf(y)|^p}{|x-y|^{d+(\gamma-[\gamma])p}}\dif x\dif y\Bigg)^{1/p}<\infty.\label{fs}\end{aligned}$$ The relation between $\mH^{\gamma}_p$ and $\mW^{\gamma}_p$ is that (cf. [@Tri p. 190]): for $\gamma>0$, $\eps\in(0,\gamma)$ and $p>1$, $$\begin{aligned} \mH^{\gamma+\eps}_p\hookrightarrow\mW^{\gamma}_p\hookrightarrow\mH^{\gamma-\eps}_p.\label{re}\end{aligned}$$ Moreover, the following relationship can be found in [@Tri p. 185]: for $p>1$, $\gamma_1\neq\gamma_2$ and $\vartheta\in(0,1)$, $$\begin{aligned} [\mH^{\gamma_1}_p,\mH^{\gamma_2}_p]_\vartheta=\mH^{\gamma_1+\vartheta(\gamma_2-\gamma_1)}_p, \label{fw}\end{aligned}$$ where $[A, B]_{\vartheta}$ denotes the complex interpolation space between two Banach space $A$ and $B$. Recall the following complex interpolation result ([@Tri p. 59, Theorem (a)]). \[inter\] Let $A_i\subseteq B_i$, $i=0,1$ be Banach spaces and $\sT: A_i\rightarrow B_i$, $i=0,1$ be a bounded linear operator. For any $\theta\in(0,1)$, we have $$\|\sT\|_{A_{\theta}\rightarrow B_{\theta}}\leq \|\sT\|_{A_0\rightarrow B_0}^{1-\theta}\|\sT\|_{A_1\rightarrow B_1}^{\theta},$$ where $A_{\theta}:=[A_0, A_1]_{\theta}$, $B_{\theta}:=[B_0, B_1]_{\theta}$, and $\|\sT\|_{A_{\theta}\rightarrow B_{\theta}}$ denotes the operator norm of $\sT$ mapping $A_{\theta}$ to $B_{\theta}$. Given a locally integrable function $f$ on $\mR^d$, the Hardy-Littlewood maximal function of $f$ is defined by $$\cM f(x):=\sup_{0<r<\infty}\frac{1}{|B_r|}\int_{B_r}|f(x+y)|\dif y,$$ where $|B_r|$ denotes the Lebesgue measure of ball $B_r$. The following well known result can be found in [@St p. 5, Theorem 1] and [@Zh3]. \(i) For $p\in(1,\infty]$ and all $f\in L^p(\mR^d)$, there exists a constant $C_{d,p}>0$ such that $$\begin{aligned} \|\cM f\|_p\leq C_{d,p}\|f\|_p. \label{mf}\end{aligned}$$ (ii) For every $f\in \mH^1_p$, there is a constant $C_d>0$ such that for a.e. $x,y\in\mR^d$, $$\begin{aligned} |f(x)-f(y)|\leq C_{d}|x-y|\Big(\cM|\nabla f|(x)+\cM|\nabla f|(y)\Big). \label{w11}\end{aligned}$$ Now, we proceed to study the regularities of the semigroup $\cT_t$. 
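Before doing so, let us illustrate the fractional Sobolev space (\[fs\]) with the indicator drift mentioned in the comments after Theorem \[main\]; the following is only a rough sketch, under the additional assumption (not required in [@Zh00 Remark 1.2]) that $A\subseteq\mR^d$ is a bounded domain with $C^1$ boundary. Writing $d_A(x):={\rm dist}(x,\partial A)$ and noting that $|1_A(x)-1_A(y)|=1$ only if exactly one of $x,y$ lies in $A$, we have $$\begin{aligned} \int_{\mR^d}\!\!\int_{\mR^d}\frac{|1_A(x)-1_A(y)|^p}{|x-y|^{d+\theta p}}\dif x\dif y =2\int_{A}\!\int_{A^c}\frac{\dif y\dif x}{|x-y|^{d+\theta p}} \leq C_{d,\theta,p}\int_{A}d_A(x)^{-\theta p}\dif x<\infty,\quad\text{whenever }\theta p<1,\end{aligned}$$ since $A^c\subseteq\{y:|x-y|\geq d_A(x)\}$ and $|\{x\in A: d_A(x)<s\}|\leq Cs$ for small $s$. Hence $1_A\in\mW^{\theta}_p$ for every $\theta p<1$; we refer to [@Zh00 Remark 1.2] for the precise conditions under which such irregular drifts fall within the scope of Theorem \[main\].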
There exist constants $C_1, C_2$ such that for any $\vartheta\in(0,1)$, $$\begin{aligned} \|\nabla \cT_tf\|_{\infty}\leq C_{1}t^{\frac{\vartheta-1}{\alpha}}[f]_\vartheta ,\quad \forall \ f \in C_0^\infty(\mR^d), \label{33}\end{aligned}$$ and for $\vartheta\in(0,1)$, $\vartheta'\in(0,\alpha+\beta-1)$, $$\begin{aligned} [\nabla \cT_tf]_{\vartheta'}\leq C_{2}t^{\frac{\vartheta-\vartheta'-1}{\alpha}}[f]_\vartheta,\quad \forall \ f \in C_0^\infty(\mR^d),\label{34}\end{aligned}$$ where $[\cdot]$ is defined by (\[hol\]). Using (\[11\]), we find that $$\int_{\mR^d}\nabla p(t,x,y)\dif y=0.$$ Thus, we can write $$\begin{aligned} \nabla \cT_tf(x)=\int_{\mR^d}\nabla p(t,x,y)\big[f(y)-f(x)\big]\dif y. \label{cha}\end{aligned}$$ Thus, by (\[na\]), it is easy to find that for any $\vartheta\in(0,1)$, $$\|\nabla \cT_tf\|_{\infty}\leq C_1t^{-\frac{1}{\alpha}} [f]_\vartheta\!\!\int_{\mR^d}\frac{|x-y|^\vartheta}{(|x-y|+t^{1/\alpha})^{d+\alpha}}\dif y\leq C_1t^{\frac{\vartheta-1}{\alpha}}[f]_\vartheta.$$ To prove (\[34\]), we write $$\begin{aligned} \nabla \cT_tf(x)-\nabla \cT_tf(x')&=\int_{\mR^d}\Big(\nabla p(t,x,y)-\nabla p(t,x',y)\Big)\big(f(y)-f(\tilde x)\big)\dif y,\end{aligned}$$ where $\tilde x$ is the one of the two points $x$ and $x'$ which is nearer to $y$. Taking into account of (\[ph\]) we arrive that for $0<\vartheta'<\alpha+\beta-1$, $$\begin{aligned} \nabla \cT_tf(x)-\nabla \cT_tf(x')&\leq C_2|x-x'|^{\vartheta'}[f]_\vartheta\!\int_{\mR^d}\varrho_{\alpha-1-\vartheta'}^\vartheta(t,\tilde x-y)\dif y\no\\ &\leq C_2|x-x'|^{\vartheta'}t^{\frac{\vartheta-\vartheta'-1}{\alpha}}[f]_\vartheta,\end{aligned}$$ which in turn implies the desired result. Let $\theta\in(0,1)$ and $\gamma+\theta<\alpha+\beta$ hold, then for every $p>1$, $$\begin{aligned} \|\cT_tf\|_{\gamma+\theta,p}\leq C_{\gamma,p}t^{-\gamma/\alpha}\|f\|_{\theta,p}, \ \ \ \ \forall \ f \in \mH^{\theta}_p, \label{tt}\end{aligned}$$ where $C_{\gamma,p}>0$ is a constant. Thanks to a standard approximation argument, we only need to prove the estimate for $f\in C^{\infty}_0(\mR^d)$. Observe $$\begin{aligned} \|T_tf\|^p_{p}&=\int_{\mR^d}\Bigg(\int_{\mR^d}p(t,x,y)f(y)\dif y\Bigg)^p\dif x\\ &=\int_{\mR^d}\Bigg(\int_{\mR^d}p^{1/q}(t,x,y)p^{1/p}(t,x,y)f(y)\dif y\Bigg)^p\dif x\\ &\leq \int_{\mR^d}\Bigg(\int_{\mR^d}p(t,x,y)\dif y\Bigg)^{p/q}\Bigg(\int_{\mR^d}p(t,x,y)f^p(y)\dif y\Bigg)\dif x,\end{aligned}$$ thus $$\|\cT_tf\|_{p}\leq \|f\|_p.$$ Hence, $\cT_t$ is a contraction semigroup and we may assume $t<1$ below. 
By Fubini’s theorem and , for $\beta_1<\alpha+\beta$ we have $$\begin{aligned} \Delta^{\frac{\beta_{1}}{2}} \cT_tf(x)&=\int_{\mR^d}\Delta^{\frac{\beta_{1}}{2}}p(t,x,y)f(y)\dif y\\ &\leq C_1t^{-\beta_1/\alpha}\int_{\mR^d}\varrho_{\alpha}^0(t,x-y)f(y)\dif y,\end{aligned}$$ which implies that for $\beta_1<\alpha+\beta$, $$\|\cT_tf\|_{\beta_1,p}\leq C_1t^{-\beta_1/\alpha}\|f\|_p.$$ Let $\beta_2$ be such that $1+\beta_2<\alpha+\beta$, as in (\[cha\]), we write $$\begin{aligned} \Delta^{\frac{1+\beta_2}{2}} \cT_tf(x)&=\int_{\mR^d}\Delta^{\frac{1+\beta_2}{2}}p(t,x,y)\big[f(y)-f(x)\big]\dif y.\end{aligned}$$ Then, it follows by (\[es\]) and (\[w11\]) that $$\begin{aligned} |\Delta^{\frac{1+\beta_2}{2}} \cT_tf(x)|&\leq \int_{\mR^d}|\Delta^{\frac{1+\beta_2}{2}}p(t,x,y)|\cdot|x-y|\Big(\cM|\nabla f|(x)+\cM|\nabla f|(y)\Big)\dif y\\ &\leq C\!\int_{\mR^d}\varrho_{\alpha-1-\beta_2}^0(t,x-y)\cdot|x-y|\Big(\cM|\nabla f|(x)+\cM|\nabla f|(y)\Big)\dif y\\ &=C\cM|\nabla f|(x)\!\int_{\mR^d}\varrho_{\alpha-1-\beta_2}^0(t,y)|y|\dif y+C\!\int_{\mR^d}\varrho_{\alpha-1-\beta_2}^0(t,y)|y|\cdot\cM|\nabla f|(x-y)\dif y.\end{aligned}$$ Since $\alpha>1$, one can check easily that $$\begin{aligned} \int_{\mR^d}\varrho_{\alpha-1-\beta_2}^0(t,y)|y|\dif y= \left(\int_{|y| \le t^{1/\alpha}}+\int_{|y|>t^{1/\alpha}}\right)\varrho_{\alpha-1-\beta_2}^0(t,y)|y|\dif y\leq t^{-\frac{\beta_2}{\alpha}},\end{aligned}$$ which, together with (\[mf\]) and Minkovski’s inequality, yields $$\begin{aligned} \|\Delta^{\frac{1+\beta_2}{2}} \cT_tf\|_p&\leq Ct^{-\frac{\beta_2}{\alpha}}\|\cM|\nabla f|\|_p+C\bigg\|\int_{\mR^d}\varrho_{\alpha-1-\beta_2}^0(t,y)|y|\cM|\nabla f|(x-y)\dif y\bigg\|_p\\ &\leq Ct^{-\frac{\beta_2}{\alpha}}\|\nabla f\|_p+C\!\!\int_{\mR^d}\varrho_{\alpha-1-\beta_2}^0(t,y)|y|\cdot\|\cM|\nabla f|(x-y)\|_p\dif y\\ &\leq Ct^{-\frac{\beta_2}{\alpha}}\|\nabla f\|_p.\end{aligned}$$ Hence, we get $$\|\cT_tf\|_{1+\beta_2,p}\leq Ct^{-\frac{\beta_2}{\alpha}}\|f\|_{1,p}.$$ By the interpolation (\[fw\]), for $\theta\in(0,1)$ $$\mH^{\theta}_p=[L^p,\mH^{1}_p]_{\theta},\quad\mH^{\beta_1+(1+\beta_2-\beta_1)\theta}_p=[\mH^{\beta_1}_p,\mH^{1+\beta_2}_p]_{\theta},$$ and Lemma \[inter\], we can derive that for $\gamma=(1-\theta)\beta_1+\theta\beta_2<\alpha+\beta-\theta$, $$\|\cT_tf\|_{\gamma+\theta,p}\leq C t^{-\frac{\gamma}{\alpha}}\|f\|_{\theta,p}.$$ The proof is finished. \[ree\] The proof in [@Pri] and [@Zh00] relies heavily on the symmetry of $\Delta^{\frac{\alpha}{2}}$ and the smooth properties at least up to second order of its heat kernel $p_{\alpha}(t,x)$. However, the operator $\sL_{\alpha}^\kappa$ considered here is non-symmetric and its heat kernel has no more regularity than ‘$\alpha+\beta$’-order, as we have seen in Lemma \[seen\]. Krylov-type estimate and Zvonkin’s transformation ================================================= This section consists of two parts, one is to obtain a Krylov-type estimate for the strong solution of SDE , while the other is to transforms SDE (\[sde2\]) into a new one with better drift coefficient by Zvonkin’s transformation. The regularity of the semigroup $\cT_t$ obtained in the last section will play an important role in the two subsections below. Krylov’s estimate ----------------- Let $X_t(x)$ be a strong solution to SDE (\[sde2\]). Usually, the Itô’s formula is performed for functions $f\in C^2_b(\mR^d)$. However, this is too strong for our latter use. Notice that by (\[s1\]), $\sL_{\alpha}^\kappa f$ is meaningful for any $f\in C^{\gamma}_b(\mR^d)$ as long as $\gamma>\alpha$. 
Indeed, we have by (\[nu\]) that $$\begin{aligned} \sL^\kappa_{\alpha}f(x)&\leq C_{d,\alpha}\!\int_{|z|\leq 1}\!\!\int_0^1\!\big|\nabla f(x+rz)-\nabla f(x)\big|\dif r\frac{\dif z}{|z|^{d+\alpha-1}}+C_{d,\alpha}\|f\|_{\infty}\\ &\leq C_{d,\alpha}\!\int_{|z|\leq 1}\frac{\dif z}{|z|^{d+\alpha-\gamma}}\|f\|_{\gamma}+C_{d,\alpha}\|f\|_{\infty}<\infty.\end{aligned}$$ We first show that Itô’s formula holds for $f(X_t)$ when $f\in C^{\gamma}_b(\mR^d)$ with $\gamma>\alpha$. \[ito\] Suppose (\[nu\]) and (\[s1\]) hold. Let $X_t$ satisfy (\[sde2\]) and $f\in C^{\gamma}_b(\mR^d)$ with $\gamma>\alpha$. Then, we have $$\begin{aligned} f(X_t)-f(x)-\int_0^t\!\sL f(X_s)\dif s=\int_0^t\!\!\!\int_0^{\infty}\!\!\!\!\int_{\mR^d}\big[f\big(X_{s-}+1_{[0,\sigma(X_{s-},z)]}(r)z\big)-f(X_s)\big]\tilde \cN(\dif z\times\dif r\times \dif s),\end{aligned}$$ where $\sL=\sL_\alpha^\kappa+b\cdot\nabla$. Let $\rho\in C^{\infty}_0(\mR^d)$ such that $\int_{\mR^d}\rho(x)\dif x=1$. Define $\rho_n(x):=n^d\rho(nx)$, and $$\begin{aligned} f_n(x):=\int_{\mR^d}f(y)\rho_n(x-y)\dif y.\label{mo}\end{aligned}$$ Hence, we have $f_n\in C^2_b(\mR^d)$ with $\|f_n\|_{C^\gamma_b}\leq \|f\|_{C^\gamma_b}$, and $\|f_n-f\|_{C^{\gamma'}_b}\rightarrow0$ for every $\gamma'<\gamma$. By using Itô’s formula for $f_n(X_t)$, we get $$\begin{aligned} f_n(X_t)-f_n(x)-\int_0^t\!\sL f_n(X_s)\dif s=\int_0^t\!\!\!\int_0^{\infty}\!\!\!\!\int_{\mR^d}\big[f_n\big(X_{s-}+1_{[0,\sigma(X_{s-},z)]}(r)z\big)-f_n(X_{s-})\big]\tilde \cN(\dif z\times\dif r\times \dif s).\end{aligned}$$ Now we are going to pass the limits on the both sides of the above equality. It is easy to see that for every $\omega$ and $x\in\mR^d$, $$f_n(X_t)-f_n(x)\rightarrow f(X_t)-f(x),\quad \text{as}\,\,n\rightarrow\infty.$$ Since $$\begin{aligned} |f_n(x+z)-f_n(x)-z\cdot\nabla f_n(x)|\leq C|z|^{\gamma}\|f_n\|_{C^\gamma_b}\leq C|z|^{\gamma}\|f\|_{C^\gamma_b},\end{aligned}$$ we can get by dominated convergence theorem that for every $\omega$, $$\int_0^t\!\sL f_n(X_s)\dif s\rightarrow\int_0^t\!\sL f(X_s)\dif s,\quad \text{as}\,\,n\rightarrow\infty.$$ Finally, by the isometry formula, we have $$\begin{aligned} &\mE\bigg|\!\int_0^t\!\!\!\int_0^{\infty}\!\!\!\!\int_{\mR^d}\Big[f_n\big(X_{s-}+1_{[0,\sigma(X_{s-},z)]}(r)z\big)-f_n(X_s)\\ &\qquad-f\big(X_{s-}+1_{[0,\sigma(X_{s-},z)]}(r)z\big)+f(X_{s-})\Big]\tilde \cN(\dif z\times\dif r\times \dif s)\bigg|^2\\ &=\mE\int_0^t\!\!\!\int_{\mR^d}\!\!\int_0^{\infty}1_{[0,\sigma(X_s,z)]}(r)\big|f_n(X_s+z)-f_n(X_s) -f(X_s+z)+f(X_s)\big|^2\dif r\nu(\dif z)\dif s\\ &\leq C\!\!\int_0^t\!\!\!\int_{\mR^d}\mE\big|f_n(X_s+z)-f_n(X_s) -f(X_s+z)+f(X_s)\big|^2\nu(\dif z)\dif s\rightarrow0,\quad \text{as}\,\,n\rightarrow\infty,\end{aligned}$$ where in the last step we have used the fact that $\sigma$ is bounded, $\|f_n\|_{C^\gamma_b}\leq \|f\|_{C^\gamma_b}$ and the dominated convergence theorem again. The proof is finished. We need the following results about the semi-linear elliptic PDE to prove the Krylov’s estimate. \[p1\] Let $\lambda,\bar k>0$. 
For any $f\in C_b^\infty(\mR^d)$, there exists a unique classical solution $u\in C_b^{\alpha+\theta}(\mR^d)$ with $\theta<\beta$ to equation $$\begin{aligned} \lambda u-\sL^\kappa_\alpha u-\bar k|\nabla u|=f, \label{pide3}\end{aligned}$$ which also satisfies the following integral equation: $$\begin{aligned} u(x)=\int_0^\infty\!\e^{-\lambda t}\cT_t\big(\bar k|\nabla u|+f\big)(x)\dif t.\label{u}\end{aligned}$$ Moreover, for $\lambda$ big enough, we have for any $1<\gamma<\alpha$, $$\begin{aligned} \|u\|_{\gamma,p}\leq C\|f\|_p.\label{ess}\end{aligned}$$ Let us first construct the solution of via Picard’s iteration argument. Set $u_0\equiv0$ and for $n\in \mN$, define $u_n$ recursively by $$u_n(x):=\int_0^\infty\!\e^{-\lambda t}\cT_t\big(\bar k|\nabla u_{n-1}|+f\big)(x)\dif t.$$ In view of (\[na\]), it is easy to check that $u_1\in C^1_b(\mR^d)$, and $u_2$ is thus well defined, and so on. We further write that $$\begin{aligned} \nabla u_1(x)-\nabla u_1(y)&=\int_0^\infty\!\e^{-\lambda t}\big[\nabla \cT_tf(x)-\nabla \cT_tf(y)\big]\dif t\\ &=\left(\int_0^{|x-y|^{\alpha}}+\int^{\infty}_{|x-y|^{\alpha}}\right)\e^{-\lambda t}\big[\nabla \cT_tf(x)-\nabla \cT_tf(y)\big]\dif t=:I_1+I_2.\end{aligned}$$ For $I_1$, we have by (\[33\]) that for $0<\vartheta<2-\alpha$, $$\begin{aligned} I_1\leq C\!\!\int_0^{|x-y|^{\alpha}}\!t^{\frac{\vartheta-1}{\alpha}}\dif t \cdot[f]_\vartheta\leq C|x-y|^{\alpha+\vartheta-1}[f]_\vartheta.\end{aligned}$$ Meanwhile, as a direct result of (\[34\]), we have for $\vartheta\in(0,\beta)$ and $\alpha+\vartheta-1<\vartheta'<\alpha+\beta-1$, $$\begin{aligned} I_2\leq C|x-y|^{\vartheta'} [f]_\vartheta\!\int^{\infty}_{|x-y|^{\alpha}}t^{\frac{\vartheta-\vartheta'-1}{\alpha}}\dif t\leq C|x-y|^{\alpha+\vartheta-1} [f]_\vartheta.\end{aligned}$$ Consequently, we get that $u_1\in C_b^{\alpha+\vartheta}(\mR^d)$ if $f\in C_b^{\vartheta}(\mR^d)$. Noticing that $u_1\in C_b^{\alpha+\vartheta}(\mR^d)$ implies that $|\nabla u_1|\in C_b^{\vartheta}(\mR^d)$ since $$\big||\nabla u_1|(x)-|\nabla u_1|(y)\big|\leq |\nabla u_1(x)-\nabla u_1(y)|.$$ Repeating the above argument, we have for every $n\in\mN$ and $\vartheta\in(0,\beta)$, $$u_n\in C_b^{\alpha+\vartheta}(\mR^d).$$ Moreover, since $$\begin{aligned} u_n(x)-u_m(x)&=\int_0^\infty\!\e^{-\lambda t}\cT_t\big(\bar k|\nabla u_{n-1}|-k|\nabla u_{m-1}|\big)(x)\dif t\\ &\leq \bar k\!\!\int_0^\infty\!\e^{-\lambda t}\cT_t\big|\nabla u_{n-1}-\nabla u_{m-1}\big|(x)\dif t,\end{aligned}$$ we further have that for $\vartheta'\!'$ with $\vartheta<\vartheta'\!'<\alpha+\vartheta-1$, $$\begin{aligned} \|u_n-u_m\|_{C_b^{\alpha+\vartheta}}&\leq \bar k\!\!\int_0^\infty\!\e^{-\lambda t}\|\cT_t\big(|\nabla u_{n-1}-\nabla u_{m-1}|\big)\|_{C_b^{\alpha+\vartheta}}\dif t\\ &\leq C_{\bar k}\!\!\int_0^\infty\!\e^{-\lambda t}\Big(1+t^{\frac{\vartheta-1}{\alpha}}+t^{\frac{\vartheta'\!'-\vartheta}{\alpha}-1}\Big)\dif t\cdot\|u_{n-1}-u_{m-1}\|_{C_b^{1+\vartheta'\!'}}\\ &\leq C_{\bar k}\lambda^{-\frac{\vartheta'\!'-\vartheta}{\alpha}}\|u_{n-1}-u_{m-1}\|_{C_b^{\alpha+\vartheta}},\end{aligned}$$ where we have also used the fact that $\lambda>1$. This means that $u_n$ is Cauchy sequence in $C_b^{\alpha+\vartheta}(\mR^d)$. Thus, there exists a limit function $u\in C_b^{\alpha+\vartheta}(\mR^d)$ with $\vartheta\in(0,\beta)$ satisfying (\[u\]). 
Since by [@Ch-Zh (1.7)], for every $f\in C^{\vartheta}_b(\mR^d)$, $$\p t\cT_tf(x)=\sL^\kappa_{\alpha}\cT_tf(x),$$ we have by integral by part, $$\begin{aligned} \sL^\kappa_{\alpha}u(x)&=\int_0^\infty\!\e^{-\lambda t}\sL^\kappa_{\alpha}\cT_t\big(\bar k|\nabla u|+f\big)(x)\dif t\\ &=\int_0^\infty\!\e^{-\lambda t}\p t\cT_t\big(\bar k|\nabla u|+f\big)(x)\dif t\\ &=-\bar k|\nabla u|(x)-f(x)+\lambda u(x),\end{aligned}$$ which means that $u$ satisfies PIDE (\[pide3\]). Moreover, we have by (\[tt\]) that $$\begin{aligned} \|u\|_{\gamma,p}&\leq \int_0^\infty\!\e^{-\lambda t}\|\cT_t\big(\bar k|\nabla u|+f\big)\|_{\gamma,p}\dif t\\ &\leq C_{\bar k}\!\!\int_0^\infty\!\e^{-\lambda t}t^{-\frac{\gamma}{\alpha}}\dif t\Big(\|\nabla u\|_{p}+\|f\|_{p}\Big)\\ &\leq C_{\bar k}\lambda^{\frac{\gamma}{\alpha}-1}\Big(\|u\|_{1,p}+\|f\|_{p}\Big).\end{aligned}$$ Choosing $\lambda$ big enough such that $C_{\bar k}\lambda^{\frac{\gamma}{\alpha}-1}<1$, we can get (\[ess\]). The whole proof is finished. Now, we can give the Krylov’s estimate for strong solutions of SDE (\[sde2\]). Let $X_t$ be a strong solution of SDE (\[sde2\]). Then, for any $T>0$, there exist a constant $C_T$ such that for any $f\in L^p(\mR^d)$ with $p>d/\alpha$, we have $$\begin{aligned} \mE\Bigg(\int_0^T\!\!f(X_s)\dif s\Bigg)\leq C_T \|f\|_p. \label{kry1}\end{aligned}$$ We first suppose that $f\in C^{\infty}_0(\mR^{d})$. By Lemma \[p1\], there exists a solution $u\in C_b^{\alpha+\vartheta}(\mR^d)$ with $0<\vartheta<\beta$ to equation (\[pide3\]), which is given by (\[u\]). According to Lemma \[ito\], we can use Itô’s formula to get for any $t>0$, $$\begin{aligned} u(X_t)&=u(x)+\int_0^t\!\!\sL^\kappa_{\alpha}u(X_s)+b\cdot\nabla u(X_s)\dif s\\ &\quad+\int_0^t\!\!\!\int_0^{\infty}\!\!\!\!\int_{\mR^d}\big[u\big(X_{s-}+1_{[0,\sigma(X_{s-},z)]}(r)z\big)-u(X_{s-})\big]\tilde \cN(\dif z\times\dif r\times \dif s).\end{aligned}$$ By (\[pide3\]) and take $\bar k$ big enough such that $\bar k>\|b\|_{\infty}$, we have for any $T>0$, $$\begin{aligned} \int_0^T\!\!\sL^\kappa_{\alpha}u(X_s)+b\cdot\nabla u(X_s)\dif s&=\lambda\int_0^T\!u(X_s)\dif s+\int_0^T\!\!\Big(b\cdot\nabla u(X_s)-\bar k|\nabla u|(X_s)\Big)\dif s-\int_0^T\!\!f(X_s)\dif s\\ &\leq \lambda\int_0^T\!u(X_s)\dif s-\int_0^T\!\!f(X_s)\dif s.\end{aligned}$$ Consequently, it follows that $$\begin{aligned} \mE\Bigg(\int_0^T\!\!f(X_s)\dif s\Bigg)\leq \lambda\mE\Bigg(\int_0^T\!|u(X_s)|\dif s\Bigg)+2\|u\|_{\infty}\leq C_{\lambda,T}\|u\|_{\infty}.\end{aligned}$$ Since $p>d/\alpha$, there exists a $\gamma<\alpha$ such that $p>d/\gamma$. Now, using (\[ess\]), we conclude that $$\begin{aligned} \mE\Bigg(\int_0^T\!\!f(X_s)\dif s\Bigg)\leq C_{\lambda,T}\|u\|_{\infty}\leq C_{\lambda,T}\|u\|_{\gamma,p}\leq C_{\lambda,T}\|f\|_{p},\end{aligned}$$ where the second inequality is due to the Sobolev embedding (\[emb\]). By a standard density argument as in [@Zh00], we get the desired result for general $f\in L^p(\mR^d)$. Zvonkin’s transformation ------------------------ Now, we follow the idea in [@Kr-Ro; @Zh1; @Zh00] to transform SDE (\[sde2\]) into a new one with better coefficients. Unlike [@Kr-Ro; @Zh1; @Zh00], we do not have well developed elliptic equation theory to solve the following equation in Bessel potential space (or Sobolev space): $$\begin{aligned} \lambda u(x)-\sL^\kappa_{\alpha} u(x)-b\cdot\nabla u(x)=b(x). \label{pide1}\end{aligned}$$ The point is that the operator $\sL^\kappa_{\alpha}$ is non-symmetric and has no comparison with operator $\Delta^{\frac{\alpha}{2}}$. 
To be more precise, even if we know $\Delta^{\frac{\alpha}{2}}u\in L^p(\mR^d)$ for some $p>1$, it is not easy to claim that $\sL^\kappa_{\alpha}u\in L^p(\mR^d)$, and vice versa. The authors in [@Kr-Ro; @Zh1] dealt with the classical local second-order differential operator $$\begin{aligned} \sL_2:=\frac{1}{2}\sum_{i,j=1}^da_{ij}(x)\frac{\p^2}{\p{x_i}\p{x_j}},\end{aligned}$$ which has been well studied; in particular, it is clear that if $\big(a_{ij}(x)\big)$ is bounded, then $\Delta u\in L^p(\mR^d)$ implies $\sL_2 u\in L^p(\mR^d)$ for every $p>1$. Meanwhile, [@Zh00] only needs to handle the symmetric operator $\Delta^{\frac{\alpha}{2}}$. Therefore, some new arguments are required in our case. First, we note that the elliptic equation can still be solved in the framework of Hölder spaces. The following result can be proved as in Lemma \[p1\]; we omit the details here. \[int\] Assume that for some $\vartheta\in(0,\beta)$, $b\in C^\vartheta_b(\mR^d)$. Then, there exists a classical solution $u\in C_b^{\alpha+\vartheta}(\mR^d)$ to (\[pide1\]). Meanwhile, $u$ also satisfies the integral equation: $$\begin{aligned} u(x)=\int_0^\infty\!\e^{-\lambda t}\cT_t\big(b\cdot\nabla u+b\big)(x)\dif t. \label{g1}\end{aligned}$$ Although we cannot solve the elliptic equation (\[pide1\]) in the Bessel potential space (or Sobolev space), we can solve the integral equation (\[g1\]) in this framework thanks to the Bessel regularity of $\cT_t$ obtained in Section 4. \[equ\] Let $1<\gamma<\alpha$. Suppose that for some $p>\frac{d}{\gamma}$ and $\theta\in(0,1)$ with $\theta>1-\gamma+\frac{d}{p}$, $$b\in L^{\infty}(\mR^d)\cap\mW^{\theta}_p.$$ Then, for $\lambda$ big enough there exists a function $u\in \mH^{\gamma+\theta}_p$ satisfying the integral equation (\[g1\]). Moreover, we have $$\begin{aligned} \|u\|_{\gamma+\theta,p}\leq C_1\|b\|_{\mW^\theta_p} \label{esa}\end{aligned}$$ and $$\begin{aligned} \|u\|_{\infty}+\|\nabla u\|_{\infty}\leq \frac{1}{2}. \label{es2}\end{aligned}$$ We only show the a priori estimate (\[esa\]); the existence of solutions then follows by the standard continuity method. Since $1<\gamma<\alpha$, we may choose $\eps\in(0,\alpha-\gamma)$; then we have by (\[tt\]) and (\[re\]) that $$\begin{aligned} \|u\|_{\gamma+\theta,p}&\leq \int_0^\infty\!\e^{-\lambda t}\|\cT_{t}(b\cdot\nabla u+b)\|_{\gamma+\theta,p}\dif t\\ &\leq C_1\!\int_0^\infty\!\e^{-\lambda t}t^{-\frac{\gamma+\eps}{\alpha}}\dif t\cdot\Big(\|b\cdot\nabla u\|_{\theta-\eps,p}+\|b\|_{\theta-\eps,p}\Big)\\ &\leq C_1\lambda^{\frac{\gamma+\eps}{\alpha}-1}\cdot\Big(\|b\cdot\nabla u\|_{\mW^\theta_p}+\|b\|_{\mW^\theta_p}\Big).\end{aligned}$$ In view of (\[fs\]), (\[emb\]), and thanks to the condition that $\gamma>1$ and $\gamma+\theta-1-\tfrac{d}{p}>0$, we find that $$\begin{aligned} \|b\cdot\nabla u\|_{\mW^\theta_p}&\leq \|b\|_\infty\|\nabla u\|_p+\|b\|_{\mW^\theta_p}\|\nabla u\|_{\infty}+\|b\|_{\infty}\|\nabla u\|_{\mW^\theta_p}\\ &\leq \|b\|_\infty\|u\|_{1,p}+\|b\|_{\mW^\theta_p}\|u\|_{\gamma+\theta,p}+ \|b\|_{\infty}\|u\|_{\mW^{1+\theta}_p}\\ &\leq \Big(\|b\|_{\mW^\theta_p}+\|b\|_{\infty}\Big)\|u\|_{\gamma+\theta,p},\end{aligned}$$ where we also used (\[re\]) in the last inequality.
It then follows that $$\|u\|_{\gamma+\theta,p}\leq C_1\lambda^{\frac{\gamma+\eps}{\alpha}-1}\Big(\|b\|_{\mW^\theta_p}+\|b\|_{\infty}\Big)\|u\|_{\gamma+\theta,p}+C_1\lambda^{\frac{\gamma+\eps}{\alpha}-1}\|b\|_{\mW^\theta_p}.$$ Hence, we can choose $\lambda_1$ big enough such that $$C_1\lambda^{\frac{\gamma+\eps}{\alpha}-1}_1\Big(\|b\|_{\mW^\theta_p}+\|b\|_{\infty}\Big)<\frac{1}{2},$$ which means (\[esa\]) is true. Moreover, we can take $\lambda\geq \lambda_1$ such that $$\lambda^{\frac{\gamma+\eps}{\alpha}-1}\|b\|_{\mW^\theta_p}<\frac{1}{8},$$ and then get $$\|u\|_{\infty}+\|\nabla u\|_{\infty}\leq2\|u\|_{\gamma+\theta,p}\leq\frac{1}{2}.$$ The proof is finished. In the sequel, we assume that $b\in L^{\infty}(\mR^d)\cap\mW^{\theta}_p$ with $$\begin{aligned} \theta\in(1-\frac{\alpha}{2},1),\quad p>\frac{2d}{\alpha}. \label{index}\end{aligned}$$ Notice that we can always choose a $1<\gamma<\alpha$ such that $$\theta>1-\gamma+\frac{d}{p}\quad\text{and}\quad p>\frac{d}{\gamma}.$$ Hence, according to Theorem \[equ\], for $\lambda$ big enough we can get a function $u\in \mH^{\gamma+\theta}_p$ satisfying the integral equation (\[g1\]). Define $$\Phi(x):=x+u(x).$$ In view of (\[es2\]), we also have $$\frac{1}{2}|x-y|\leq\big|\Phi(x)-\Phi(y)\big|\leq \frac{3}{2}|x-y|,$$ which implies that the map $x\rightarrow\Phi(x)$ forms a $C^1$-diffeomorphism and $$\begin{aligned} \frac{1}{2}\leq \|\nabla\Phi\|_{\infty},\|\nabla\Phi^{-1}\|_{\infty}\leq 2, \label{upd}\end{aligned}$$ where $\Phi^{-1}(\cdot)$ is the inverse function of $\Phi(\cdot)$. We prove the following Zvonkin’s transformation. \[zvon\] Let $\Phi(x)$ be defined as above and $X_t$ solve SDE (\[sde2\]). Then, $Y_t:=\Phi(X_t)$ satisfies the following SDE: $$\begin{aligned} Y_t&=\Phi(x)+\int_0^t\tilde b(Y_s)\dif s+\int_0^t\!\!\!\int_0^{\infty}\!\!\!\!\int_{|z|\leq 1}\tilde g(Y_{s-},z)1_{[0,\tilde\sigma(Y_{s-},z)]}(r)\tilde \cN(\dif z\times\dif r\times \dif s)\no\\ &\quad+\int_0^t\!\!\!\int_0^{\infty}\!\!\!\!\int_{|z|> 1}\tilde g(Y_{s-},z)1_{[0,\tilde\sigma(Y_{s-},z)]}(r)\cN(\dif z\times\dif r\times \dif s), \label{sde3}\end{aligned}$$ where $$\begin{aligned} \tilde b(x)=\lambda u\big(\Phi^{-1}(x)\big)-\int_{|z|>1}\!\big[u\big(\Phi^{-1}(x)+z\big)-u\big(\Phi^{-1}(x)\big)\big]\sigma\big(\Phi^{-1}(x),z\big)\nu(\dif z)\end{aligned}$$ and $$\begin{aligned} \tilde g(x,z):=\Phi\big(\Phi^{-1}(x)+z\big)-x,\quad \tilde\sigma(x,z):=\sigma\big(\Phi^{-1}(x),z\big).\end{aligned}$$ Let $b_n$ be the mollifying approximation of $b$ defined as in (\[mo\]). Then, it is obvious that $$\begin{aligned} \|b_n\|_{\infty}\leq \|b\|_{\infty},\quad \|b_n-b\|_{\mW^{\vartheta}_p}\rightarrow0,\,\,\text{as}\,\,n\rightarrow\infty. \label{bn}\end{aligned}$$ Meanwhile, $b_n\in C_b^\vartheta(\mR^d)$ for any $\vartheta\in(0,\beta)$ and we may assume $$\begin{aligned} b_n\rightarrow b,\,\,a.e.,\,\,\text{as}\,\,n\rightarrow\infty. \label{bnn}\end{aligned}$$ Let $u_n\in C_b^{\alpha+\vartheta}(\mR^d)$ be the classical solution to the elliptic equation (\[pide1\]) with $b$ replaced by $b_n$. According to Lemma \[int\], we know that $u_n$ also satisfies the integral equation $$\begin{aligned} u_n(x)=\int_0^\infty\!\e^{-\lambda t}\cT_t\big(b_n\cdot\nabla u_n+b_n\big)(x)\dif t.\end{aligned}$$ We proceed to show that $$\begin{aligned} \|u_n-u\|_{\gamma+\vartheta,p}\rightarrow0,\,\,\text{as}\,\,n\rightarrow\infty. 
\label{unn}\end{aligned}$$ In fact, write $$\begin{aligned} u_n(x)-u(x)=\int_0^\infty\!\e^{-\lambda t}\cT_t\Big(b_n(\nabla u_n-\nabla u)+(b_n-b)\cdot\nabla u+(b_n-b)\Big)(x)\dif t.\end{aligned}$$ As in the proof of Theorem \[equ\], we have $$\begin{aligned} \|u_n-u\|_{\gamma+\theta,p}\leq C_1\lambda^{\frac{\gamma+\eps}{\alpha}-1}\cdot\Big(\|b_n(\nabla u_n-\nabla u)\|_{\mW^\theta_p}+\|(b_n-b)\cdot\nabla u\|_{\mW^\theta_p}+\|b_n-b\|_{\mW^\theta_p}\Big).\end{aligned}$$ At the same time, we can also get by (\[bn\]) that $$\begin{aligned} \|b_n(\nabla u_n-\nabla u)\|_{\mW^\theta_p}\leq \Big(\|b_n\|_{\mW^\theta_p}+\|b_n\|_{\infty}\Big)\|u_n-u\|_{\gamma+\theta,p}\leq \Big(\|b\|_{\mW^\theta_p}+\|b\|_{\infty}\Big)\|u_n-u\|_{\gamma+\theta,p}.\end{aligned}$$ Hence, $$\begin{aligned} \|u_n-u\|_{\gamma+\theta,p}&\leq C_\lambda\Big(\|(b_n-b)\cdot\nabla u\|_{\mW^\theta_p}+\|b_n-b\|_{\mW^\theta_p}\Big)\\ &\leq C_\lambda\|b_n-b\|_{\mW^\theta_p}+C_\lambda\!\!\int_{\mR^d}\!\!\int_{\mR^d}\frac{|\nabla u(x)-\nabla u(y)|^p}{|x-y|^{d+\vartheta p}}|b_n(y)-b(y)|\dif x\dif y\rightarrow0,\,\,\text{as}\,\,n\rightarrow\infty,\end{aligned}$$ where we have used (\[bn\]), (\[bnn\]) and the dominated convergence theorem. Now, we define $$\Phi_n(x):=x+u_n(x).$$ By Lemma \[ito\] and recalling that $u_n$ satisfies (\[pide1\]), we can use Itô’s formula for $u_n$ to get $$\begin{aligned} u_n(X_t)&=u_n(x)+\int_0^t\!\Big(\lambda u_n+\big(b-b_n\big)\cdot\nabla u_n-b_n\Big)(X_s)\dif s\\ &\quad-\int_0^t\!\!\!\int_0^{\infty}\!\!\!\int_{|z|>1}\big[u_n\big(X_s+1_{[0,\sigma(X_s,z)]}(v)z\big)-u_n(X_s)\big]\nu(\dif z)\dif r\dif s\\ &\quad+\int_0^t\!\!\!\int_0^{\infty}\!\!\!\!\int_{|z|> 1}\big[u_n\big(X_{s-}+1_{[0,\sigma(X_{s-},z)]}(v)z\big)-u_n(X_{s-})\big] \cN(\dif z\times\dif r\times \dif s)\\ &\quad+\int_0^t\!\!\!\int_0^{\infty}\!\!\!\!\int_{|z|\leq 1}\big[u_n\big(X_{s-}+1_{[0,\sigma(X_{s-},z)]}(v)z\big)-u_n(X_{s-})\big]\tilde \cN(\dif z\times\dif r\times \dif s).\end{aligned}$$ Adding this with SDE and noticing that $$\Phi_n\big(x+y\big)-\Phi_n(x)=u_n(x+y)-u_n(x)+y,$$ $$\begin{aligned} \Phi_n\big(X_{s-}+1_{[0,\sigma(X_{s-},z)]}(r)z\big)-\Phi_n(X_{s-})=1_{[0,\sigma(X_{s-},z)]}(r)\left[\Phi_n\big(X_{s-}+z\big)-\Phi_n(X_{s-})\right],\\end{aligned}$$ we obtain $$\begin{aligned} Y^n_t&:=\Phi_n(X_t)=\Phi_n(x)+\int_0^t\!\lambda u_n(X_s)\dif s+\int_0^t\!\Big(\big(b-b_n\big)\cdot\nabla u_n+\big(b-b_n\big)\Big)(X_s)\dif s\\ &\quad-\int_0^t\!\!\!\int_0^{\infty}\!\!\!\int_{|z|>1}\big[u_n\big(X_s+1_{[0,\sigma(X_s,z)]}(r)z\big)-u_n(X_s)\big]\nu(\dif z)\dif r\dif s\\ &\quad+\int_0^t\!\!\!\int_0^{\infty}\!\!\!\!\int_{|z|> 1}\big[\Phi_n\big(X_{s-}+1_{[0,\sigma(X_{s-},z)]}(r)z\big)-\Phi_n(X_{s-})\big] \cN(\dif z\times\dif r\times \dif s)\\ &\quad+\int_0^t\!\!\!\int_0^{\infty}\!\!\!\!\int_{|z|\leq 1}\big[\Phi_n\big(X_{s-}+1_{[0,\sigma(X_{s-},z)]}(r)z\big)-\Phi_n(X_{s-})\big]\tilde \cN(\dif z\times\dif r\times \dif s)\\ &=\Phi_n(x)+\int_0^t\!\lambda u_n(X_s)\dif s+\int_0^t\!\Big(\big(b-b_n\big)\cdot\nabla u_n+\big(b-b_n\big)\Big)(X_s)\dif s\\ &\quad-\int_0^t\!\!\!\int_{|z|>1}\big[u_n(X_s+z)-u_n(X_s)\big]\sigma(X_s,z)\nu(\dif z)\dif s\\ &\quad+\int_0^t\!\!\!\int_0^{\infty}\!\!\!\!\int_{|z|> 1}1_{[0,\sigma(X_{s-},z)]}(r)\big[\Phi_n(X_{s-}+z)-\Phi_n(X_{s-})\big] \cN(\dif z\times\dif r\times \dif s)\\ &\quad+\int_0^t\!\!\!\int_0^{\infty}\!\!\!\!\int_{|z|\leq 1}1_{[0,\sigma(X_{s-},z)]}(r)\big[\Phi_n(X_{s-}+z)-\Phi_n(X_{s-})\big]\tilde \cN(\dif z\times\dif r\times \dif s)\\ &=:\Phi_n(x)+\cI_1+\cI_2+\cI_3+\cI_4.\end{aligned}$$ Now we are going to take limits 
for the above equality. First of all, it is easy to see that $$\lim_{n\rightarrow\infty}Y^n_t=\Phi(X_t)=Y_t.$$ By the dominated convergence theorem and (\[unn\]), we also have $$\begin{aligned} \cI_2+\cI_3\rightarrow&-\!\int_0^t\!\!\!\int_{|z|>1}\big[u(X_s+z)-u(X_s)\big]\sigma(X_s,z)\nu(\dif z)\dif s\\ &+\int_0^t\!\!\!\int_0^{\infty}\!\!\!\!\int_{|z|> 1}1_{[0,\sigma(X_{s-},z)]}(r)\big[\Phi(X_{s-}+z)-\Phi(X_{s-})\big] \cN(\dif z\times\dif r\times \dif s).\end{aligned}$$ As for $\cI_4$, it follows from (\[upd\]), (\[unn\]) and the dominated convergence theorem that $$\begin{aligned} &\mE\bigg|\int_0^t\!\!\!\int_0^{\infty}\!\!\!\int_{|z|\leq1}\!\!1_{[0,\sigma(X_{s-},z)]}(r)\Big[\Phi^n(X_{s-}+z)-\Phi^n(X_{s-})\\ &\qquad\qquad\qquad\qquad\qquad-\Phi(X_{s-}+z)+\Phi(X_{s-})\Big]\tilde{\cN}(\dif z\times\dif r\times \dif s)\bigg|^2\\ &=\mE\int_0^t\!\!\!\int_0^{\infty}\!\!\!\int_{|z|\leq1}\!\!1_{[0,\sigma(X_s,z)]}(r)\Big|\Phi^n_s(X_s+z)-\Phi^n_s(X_s)-\Phi_s(X_s+z)+\Phi_s(X_s)\Big|^2\nu(\dif z)\dif r\dif s\\ &=\mE\int_0^t\!\!\!\int_{|z|\leq1}\Big|\Phi^n_s(X_s+z)-\Phi^n_s(X_s)-\Phi_s(X_s+z)+\Phi_s(X_s)\Big|^2\sigma(X_s,z)\nu(\dif z)\dif s\\ &\rightarrow0,\quad\text{as}\,\, n\rightarrow\infty.\end{aligned}$$ Finally, Krylov’s estimate (\[kry1\]) yields that $$\mE\Bigg(\!\int_0^tb(X_s)-b_n(X_s)\dif s\Bigg)\leq C\|b-b_n\|_p\rightarrow0,$$ which in turn implies by (\[unn\]) that $$\begin{aligned} \cI_1&\rightarrow \lambda\!\!\int_0^t u(X_s)\dif s,\quad\text{as}\,\, n\rightarrow\infty.\end{aligned}$$ Combing the above calculations, and noticing that $X_s=\Phi^{-1}(Y_s)$, we get the desired result. At the end of this section, we collect some properties of the new coefficients. For a function $f$ on $\mR^d$, set $$\cJ_zf(x):=f(x+z)-f(x).$$ Suppose that: 1. The global condition (\[s1\]) holds true and (\[a1\]) is satisfied for almost all $x,y\in\mR^d$ with $\zeta\in L^q(\mR^d)$, $q>d/\alpha$. 2. For $\theta, p$ satisfying (\[index\]), $$b\in L^{\infty}(\mR^d)\cap\mW^{\theta}_p.$$ Then, we have: Under [**(H$\sigma'$)**]{}-[**(Hb$'$)**]{}, there exist constants $C_1, C_2$ such that for a.e. $x,y\in\mR^d$, $$\begin{aligned} |\tilde b(x)-\tilde b(y)|\leq C_1|x-y|\cdot\Big(1+\zeta\big(\Phi^{-1}(x)\big)+\zeta\big(\Phi^{-1}(y)\big)\Big)\label{b}\end{aligned}$$ and $$\begin{aligned} |\tilde g(x,z)-\tilde g(y,z)|\leq C_2|x-y|\cdot\Big(\cM|\nabla\cJ_zu|(\Phi^{-1}(x))+\cM|\nabla\cJ_zu|(\Phi^{-1}(y))\Big).\label{g}\end{aligned}$$ Moreover, for any $p>1$ and $\gamma\in(1,2)$, it holds for all $f\in\mH^{\gamma}_p$ that $$\begin{aligned} \|\cJ_zf\|_{1,p}\leq C_{p,d,\gamma}|z|^{\gamma-1}\|f\|_{\gamma,p},\label{jz}\end{aligned}$$ where $C_{p,d,\gamma}$ is a positive constant. Recall the definition of $\tilde b$ and $\tilde g$ in Lemma \[zvon\]. Since $\sigma$ is bounded and thanks to (\[es2\]), (\[upd\]), (\[a1\]), we get $$\begin{aligned} |\tilde b(x)-\tilde b(y)|&\leq \lambda\big|u\big(\Phi^{-1}(x)\big)-u\big(\Phi^{-1}(y)\big)\big|+\int_{|z|>1}\!\big|u\big(\Phi^{-1}(x)+z\big)-u\big(\Phi^{-1}(y)+z\big)\big|\nu(\dif z)\\ &\quad+\int_{|z|>1}\!\big|u\big(\Phi^{-1}(x)\big)-u\big(\Phi^{-1}(y)\big)\big|\nu(\dif z)+\int_{|z|>1}\!\big|\sigma\big(\Phi^{-1}(x),z\big)-\sigma\big(\Phi^{-1}(y),z\big)\big|\nu(\dif z)\\ &\leq C_\lambda|x-y|+C_0|x-y|\Big(\zeta\big(\Phi^{-1}(x)\big)+\zeta\big(\Phi^{-1}(y)\big)\Big),\end{aligned}$$ which gives (\[b\]). 
By (\[w11\]), further have $$\begin{aligned} |\tilde g(x,z)-\tilde g(y,z)|&=\big|\Phi\big(\Phi^{-1}(x)+z\big)-\Phi\big(\Phi^{-1}(x)\big)-\Phi\big(\Phi^{-1}(y)+z\big)+\Phi\big(\Phi^{-1}(y)\big)\big|\\ &=\big|u\big(\Phi^{-1}(x)+z\big)-u\big(\Phi^{-1}(x)\big)-u\big(\Phi^{-1}(y)+z\big)+u\big(\Phi^{-1}(y)\big)\big|\\ &=\big|\big(\cJ_zu\big)(\Phi^{-1}(x))-\big(\cJ_zu\big)(\Phi^{-1}(y))\big|\\ &\leq C_2|x-y|\cdot\Big(\cM|\nabla\cJ_zu|(\Phi^{-1}(x))+\cM|\nabla\cJ_zu|(\Phi^{-1}(y))\Big).\end{aligned}$$ As for (\[jz\]), it was proved by [@Zh00 Lemma 2.3]. Proof of the main result ======================== Now, we are ready to give the proof of our main result. Comparing with the usual SDEs driven by Brownian motion or pure jump Lévy processes, for SDEs of the form (\[sde2\]) or (\[sde3\]), some new tricks are needed to handle the term $1_{[0,\sigma(X_s,z)]}(r)$, as we shall see below. The point is that we have to use $L^1$-estimate as well as $L^2$-estimate to deduce the pathwise uniqueness. The proof will consist of two steps.\ [**Step 1:**]{} We assume that [**(H$\sigma'$)**]{}-[**(Hb$'$)**]{} hold. It was shown in [@M-P Proposition 3] that under these conditions, there exists a martingale solution to operator $\sL$. Meanwhile, it is known that the martingale solution for $\sL$ is equivalent to the weak solution to SDE (\[sde2\]), see [@Kurz2 Lemma 2.1]. Thus, the existence and uniqueness of weak solution hold for SDE (\[sde2\]). Thus, it suffices to show the pathwise uniqueness. Let $X_t$ and $\hat X_t$ be two strong solutions for SDE (\[sde2\]) both starting from $x\in\mR^d$, and set $$Y_t:=\Phi(X_t),\quad \hat Y_t:=\Phi(\hat X_t).$$ Since the uniqueness if a local property, as the argument in [@Ik-Wa Theorem IV. 9.1] and [@Zh00], we only need to prove by Lemma \[zvon\] that $$\begin{aligned} Z_t\equiv0,\quad\forall t\geq 0, \label{66}\end{aligned}$$ where $Z_t$ is given by $$\begin{aligned} Z_{t\wedge\tau_1}&=\int_0^{t\wedge\tau_1}\!\big[\tilde b(Y_s)-\tilde b(\hat Y_s)\big]\dif s +\!\int_0^{t\wedge\tau_1}\!\!\!\!\int_0^{\infty}\!\!\!\!\int_{|z|\leq 1}\Big[\tilde g(Y_{s-},z)1_{[0,\tilde\sigma(Y_{s-},z)]}(r)\\ &\quad\qquad\quad\qquad\quad\qquad\quad-\tilde g(\hat Y_{s-}, z)1_{[0,\tilde\sigma(\hat Y_{s-},z)]}(r)\Big]\tilde \cN(\dif z\times\dif r\times \dif s)=:\cJ^{t\wedge\tau_1}_1+\cJ^{t\wedge\tau_1}_2.\end{aligned}$$ Set $$A_1(t):=\int_0^t\Big(1+\zeta(X_s)+\zeta(\hat X_s)\Big)\dif s,$$ then following by an approximation argument as in [@Zh3; @Zh00], it is easy to see by (\[b\]) that for almost all $\omega$ and every stopping time $\eta$, $$\begin{aligned} \sup_{t\in[0,\eta]}\left|\cJ^{t\wedge\tau_1}_1\right|\leq C_1\!\int_0^{\tau_1\wedge\eta}|Z_s|\cdot\Big(1+\zeta(X_s)+\zeta(\hat X_s)\Big)\dif s=C_1\!\int_0^{\tau_1\wedge\eta}|Z_s| \dif A_1(s).\end{aligned}$$ As for the second term, write $$\begin{aligned} \cJ_2^{t\wedge\tau_1}&=\int_0^{t\wedge\tau_1}\!\!\!\!\int_0^{\infty}\!\!\!\!\int_{|z|\leq 1}1_{[0,\tilde\sigma(Y_{s-},z)\wedge\tilde\sigma(\hat Y_{s-},z)]}(r)\Big[\tilde g(Y_{s-},z)1_{[0,\tilde\sigma(Y_{s-},z)]}(r)\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\tilde g(\hat Y_{s-}, z)1_{[0,\tilde\sigma(\hat Y_{s-},z)]}(r)\Big]\tilde \cN(\dif z\times\dif r\times \dif s)\\ &\quad+\int_0^{t\wedge\tau_1}\!\!\!\!\int_0^{\infty}\!\!\!\!\int_{|z|\leq 1}1_{[\tilde\sigma(Y_{s-},z)\vee\tilde\sigma(\hat Y_{s-},z),\infty]}(r)\Big[\tilde g(Y_{s-},z)1_{[0,\tilde\sigma(Y_{s-},z)]}(r)\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\tilde g(\hat Y_{s-}, z)1_{[0,\tilde\sigma(\hat Y_{s-},z)]}(r)\Big]\tilde 
\cN(\dif z\times\dif r\times \dif s)\\ &\quad+\int_0^{t\wedge\tau_1}\!\!\!\!\int_0^{\infty}\!\!\!\!\int_{|z|\leq 1}1_{[\tilde\sigma(Y_{s-},z)\wedge\tilde\sigma(\hat Y_{s-},z),\tilde\sigma(Y_{s-},z)\vee\tilde\sigma(\hat Y_{s-},z)]}(r)\Big[\tilde g(Y_{s-},z)1_{[0,\tilde\sigma(Y_{s-},z)]}(r)\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad-\tilde g(\hat Y_{s-}, z)1_{[0,\tilde\sigma(\hat Y_{s-},z)]}(r)\Big]\tilde \cN(\dif z\times\dif r\times \dif s)\\ &=:\cJ_{21}^{t\wedge\tau_1}+\cJ_{22}^{t\wedge\tau_1}+\cJ_{23}^{t\wedge\tau_1}.\end{aligned}$$ We proceed to estimate each component. First, for $\cJ_{21}^{t\wedge\tau_1}$, we use the Doob’s $L^2$-maximal inequality to deduce that for any stopping time $\eta$, $$\begin{aligned} \mE\left[\sup_{t\in[0,\eta]}|\cJ_{21}^{t\wedge\tau_1}|\right]&\leq \mE\Bigg(\int_0^{\tau_1\wedge\eta}\!\!\!\!\int_0^{\infty}\!\!\!\!\int_{|z|\leq 1}1_{[0,\tilde\sigma(Y_s,z)\wedge\tilde\sigma(\hat Y_s,z)]}(r)\big|\tilde g(Y_s,z)-\tilde g(\hat Y_s, z)\big|^2\dif r\nu(\dif z) \dif s\Bigg)^{\frac{1}{2}}\\ &=\mE\Bigg(\int_0^{\tau_1\wedge\eta}\!\!\!\!\int_{|z|\leq 1}\big[\tilde\sigma(Y_s,z)\wedge\tilde\sigma(\hat Y_s,z)\big]\cdot\big|\tilde g(Y_s,z)-\tilde g(\hat Y_s, z)\big|^2\nu(\dif z) \dif s\Bigg)^{\frac{1}{2}}.\end{aligned}$$ Thus, if we set $$A_2(t):=\int_0^t\!\!\!\int_{|z|\leq 1}\!\Big(\cM|\nabla\cJ_zu|(X_s)+\cM|\nabla\cJ_zu|(\hat X_s)\Big)^2\nu(\dif z) \dif s,$$ we further have by the fact that $\tilde\sigma$ is bounded and (\[g\]) that $$\begin{aligned} \mE\left[\sup_{t\in[0,\eta]}|\cJ_{21}^{t\wedge\tau_1}|\right]&\leq C_2\mE\Bigg(\int_0^{\tau_1\wedge\eta}|Z_s|^2\!\int_{|z|\leq 1}\!\Big(\cM|\nabla\cJ_zu|(X_s)+\cM|\nabla\cJ_zu|(\hat X_s)\Big)^2\nu(\dif z) \dif s\Bigg)^{\frac{1}{2}}\\ &=C_2\mE\Bigg(\int_0^{\tau_1\wedge\eta}|Z_s|^2\dif A_2(s)\Bigg)^{\frac{1}{2}}.\end{aligned}$$ Next, it is easy to see that for any $t\geq 0$, $$\cJ_{22}^{t\wedge\tau_1}\equiv0.$$ Finally, we use the $L^1$-estimate (see [@Kurz3 P$_{174}$] or [@Kurz2 P$_{157}$]) to control the third term by $$\begin{aligned} \mE\left[\sup_{t\in[0,\eta]}|\cJ_{23}^{t\wedge\tau_1}|\right]&\leq 2\mE\!\int_0^{\tau_1\wedge\eta}\!\!\!\!\int_0^{\infty}\!\!\!\!\int_{|z|\leq 1}1_{[\tilde\sigma(Y_s,z)\wedge\tilde\sigma(\hat Y_s,z),\tilde\sigma(Y_s,z)\vee\tilde\sigma(\hat Y_s,z)]}(r)\\ &\qquad\qquad\qquad\times\big|\tilde g(Y_s,z)1_{[0,\tilde\sigma(Y_s,z)]}(r)-\tilde g(\hat Y_s, z)1_{[0,\tilde\sigma(\hat Y_s,z)]}(r)\big|\nu(\dif z)\dif r \dif s\\ &\leq2\mE\!\int_0^{\tau_1\wedge\eta}\!\!\!\int_{|z|\leq 1}|\tilde\sigma(Y_s,z)-\tilde\sigma(\hat Y_s,z)|\cdot\Big(|\tilde g(Y_s,z)|+|\tilde g(\hat Y_s,z)|\Big)\nu(\dif z)\dif s.\end{aligned}$$ Since $$|\tilde g(x,z)|=\big|\Phi\big(\Phi^{-1}(x)+z\big)-\Phi\big(\Phi^{-1}(x)\big)\big|\leq \frac{3}{2}|z|,$$ and taking into account of (\[a1\]), we get $$\begin{aligned} \mE\left[\sup_{t\in[0,\eta]}|\cJ_{23}^{t\wedge\tau_1}|\right]&\leq C_3\mE\!\int_0^{\tau_1\wedge\eta}\!\!\!\int_{|z|\leq 1}|\tilde\sigma(Y_s,z)-\tilde\sigma(\hat Y_s,z)|\cdot|z|\nu(\dif z)\dif s\\ &\leq C_3\mE\!\int_0^{\tau_1\wedge\eta}|Z_s|\Big(\zeta(X_s)+\zeta(\hat X_s)\Big)\dif s\leq C_3\mE\!\int_0^{\tau_1\wedge\eta}|Z_s|\dif A_1(s).\end{aligned}$$ Combing the above computations, and set $$A(t):=A_1(t)+A_2(t),$$ we arrive at that for any stopping time $\eta$, there exists a constant $C_0$ such that $$\begin{aligned} \mE\left[\sup_{t\in[0,\eta]}|Z_{t\wedge\tau_1}|\right]\leq C_0\mE\!\int_0^{\tau_1\wedge\eta}|Z_s|\dif A(s)+C_0\mE\Bigg(\int_0^{\tau_1\wedge\eta}|Z_s|^2\dif A(s)\Bigg)^{\frac{1}{2}}. 
\label{ee}\end{aligned}$$ By our assumption that $\zeta\in L^{q}(\mR^d)$ with $q>d/\alpha$ and the Krylov estimate (\[kry1\]), we find that $$\mE A_1(t)\leq t+C\|\zeta\|_q<\infty.$$ Meanwhile, since $p>2d/\alpha$, using Fubini's theorem, the Krylov estimate, Minkowski's inequality and taking (\[mf\]) into account, we can get $$\begin{aligned} \mE A_2(t)&=\int_{|z|\leq 1}\!\mE\!\int_0^t\Big(\cM|\nabla\cJ_zu|(X_s)+\cM|\nabla\cJ_zu|(\hat X_s)\Big)^2\dif s\nu(\dif z)\\ &\leq C\!\int_{|z|\leq 1}\!\|(\cM|\nabla\cJ_zu|)^2\|_{p/2}\nu(\dif z)= C\!\int_{|z|\leq 1}\!\|\cM|\nabla\cJ_zu|\|_{p}^2\nu(\dif z)\\ &\leq C\!\int_{|z|\leq 1}\|\nabla\cJ_zu\|_{p}^2\nu(\dif z)\leq C\!\int_{|z|\leq 1}\|\cJ_zu\|_{1,p}^2\nu(\dif z).\end{aligned}$$ Recall our assumption that $\theta>1-\tfrac{\alpha}{2}$. Hence, we can choose a $\gamma<\alpha$ such that $$2(\gamma+\theta-1)>\alpha.$$ Consequently, it follows from (\[jz\]) and (\[nu\]) that $$\begin{aligned} \mE A_2(t)\leq C\|u\|_{\gamma+\theta,p}^2\!\int_{|z|\leq 1}|z|^{2(\gamma+\theta-1)}\nu(\dif z)<\infty.\end{aligned}$$ Therefore, $t\mapsto A(t)$ is a continuous strictly increasing process. Define for $t\geq 0$ the stopping time $$\eta_t:=\inf\{s\geq 0: A(s)\geq t\}.$$ Then, it is clear that $\eta_t$ is the inverse of $t\mapsto A(t)$. Since $A(t)\geq t$, we further have $\eta_t\leq t$. Substituting $\eta_t$ into (\[ee\]), we have, by a change of variables, $$\begin{aligned} &\mE\left[\sup_{s\in[0,t]}|Z_{\tau_1\wedge\eta_s}|\right]=\mE\left[\sup_{s\in[0,\eta_t]}|Z_{s\wedge\tau_1}|\right]\\ &\leq C_0\mE\!\int_0^{\tau_1\wedge\eta_t}|Z_s|\dif A(s)+C_0\mE\Bigg(\int_0^{\tau_1\wedge\eta_t}|Z_s|^2\dif A(s)\Bigg)^{\frac{1}{2}}\\ &\leq C_0\mE\!\int_0^{\eta_t}|Z_{\tau_1\wedge s}|\dif A(s)+C_0\mE\Bigg(\int_0^{\eta_t}|Z_{\tau_1\wedge s}|^2\dif A(s)\Bigg)^{\frac{1}{2}}\\ &=C_0\mE\!\int_0^{t}|Z_{\tau_1\wedge\eta_s}|\dif s+C_0\mE\Bigg(\int_0^{t}|Z_{\tau_1\wedge\eta_s}|^2\dif s\Bigg)^{\frac{1}{2}}\leq C_0\big(t+\sqrt{t}\big)\mE\left[\sup_{s\in[0,t]}|Z_{\tau_1\wedge\eta_s}|\right].\end{aligned}$$ Now, taking $t_0$ small enough such that $$C_0\big(t_0+\sqrt{t_0}\big)<1,$$ it holds that for almost all $\omega$, $$\sup_{s\in[0,\eta_{t_{0}}]}|Z_{s\wedge\tau_1}|=\sup_{s\in[0,t_0]}|Z_{\tau_1\wedge\eta_s}|=0.$$ In particular, $$Z_{\eta_{t_0}\wedge\tau_1}=0,\quad a.s..$$ Using the same arguments as above and writing, for $t>t_0$, $$\begin{aligned} Z_{t\wedge\tau_1}&=\int_{\eta_{t_0}\wedge\tau_1}^{t\wedge\tau_1}\!\big[\tilde b(Y_s)-\tilde b(\hat Y_s)\big]\dif s+\int_{\eta_{t_0}\wedge\tau_1}^{t\wedge\tau_1}\!\!\int_0^{\infty}\!\!\!\!\int_{|z|\leq 1}\Big[\tilde g(Y_{s-},z)1_{[0,\tilde\sigma(Y_{s-},z)]}(r)\\ &\quad\qquad\quad\qquad\quad\qquad\quad\qquad\quad\qquad-\tilde g(\hat Y_{s-}, z)1_{[0,\tilde\sigma(\hat Y_{s-},z)]}(r)\Big]\tilde \cN(\dif z\times\dif r\times \dif s),\end{aligned}$$ we can get $$\begin{aligned} &\mE\left[\sup_{s\in[t_0,t]}|Z_{\tau_1\wedge\eta_s}|\right]=\mE\left[\sup_{s\in[\eta_{t_0},\eta_t]}|Z_{s\wedge\tau_1}|\right]\\ &\leq C_0\mE\!\int_{\eta_{t_0}\wedge\tau_1}^{\tau_1\wedge\eta_t}|Z_s|\dif A(s)+C_0\mE\Bigg(\int_{\eta_{t_0}\wedge\tau_1}^{\tau_1\wedge\eta_t}|Z_s|^2\dif A(s)\Bigg)^{\frac{1}{2}}\\ &\leq C_0\mE\!\int_{t_0}^{t}|Z_{\tau_1\wedge\eta_s}|\dif s+C_0\mE\Bigg(\int_{t_0}^{t}|Z_{\tau_1\wedge\eta_s}|^2\dif s\Bigg)^{\frac{1}{2}}\\ &\leq C_0\Big[(t-t_0)+\sqrt{(t-t_0)}\Big]\mE\left[\sup_{s\in[t_0,t]}|Z_{\tau_1\wedge\eta_s}|\right].\end{aligned}$$ Hence, for almost all $\omega$, $$\sup_{s\in[0,\eta_{2t_{0}}]}|Z_{s\wedge\tau_1}|=0.$$ Repeating the above arguments, we obtain that for any
$k>0$, $$\sup_{s\in[0,\eta_{kt_{0}}]}|Z_{s\wedge\tau_1}|=0.$$ Noticing that $\eta_t$ is also strictly increasing, we have for all $t\geq 0$, $$Z_{t\wedge\tau_1}=0,\quad a.s..$$ Thus, (\[66\]) is proven.\ [**Step 2:**]{} Assume now that $\sigma$ and $b$ satisfy [**(H$\sigma$)**]{}-[**(Hb)**]{}. For each $n\in\mN$, let $\chi_n(x)\in[0,1]$ be a nonnegative smooth function in $\mR^d$ with $\chi_n(x)=1$ for all $x\in B_n$ and $\chi_n(x)=0$ for all $x\notin B_{n+1}$. Let $$b_n(x):=\chi_n(x)b(x),\quad \zeta_n(x):=\chi_{n+1}(x)\zeta(x)+\cM|\nabla\chi_{n+1}|(x),$$ and $$\sigma_n(x,z):=1+\chi_{n+1}(x)\sigma(x,z)\big(|z|\wedge 1\big)+\Big(1-\chi_n(x)\big(|z|\wedge 1\big)\Big)\left(1+\sup_{x\in B_{n+2}}|\sigma(x,z)|\right)\mI_{d\times d}.$$ Then one can easily check that (\[s1\]) holds and that $b_n$ satisfies [**(Hb$'$)**]{}. Meanwhile, $$\begin{aligned} &\int_{\mR^d}|\sigma_n(x,z)-\sigma_n(y,z)|(|z|\wedge1)\nu(\dif z)\leq C\!\!\int_{\mR^d}|\chi_{n+1}(x)-\chi_{n+1}(y)|\big(|z|^2\wedge1\big)\nu(\dif z)\\ &\quad\leq C\!\!\int_{\mR^d}|\sigma(x,z)-\sigma(y,z)|(|z|\wedge1)\nu(\dif z)\Big(\chi_{n+1}(x)\wedge\chi_{n+1}(y)\Big)\\ &\quad\leq C|x-y|\Big(\cM|\nabla\chi_{n+1}|(x)+\cM|\nabla\chi_{n+1}|(y)\Big)+C|x-y|\Big(\chi_{n+1}(x)\zeta(x)+\chi_{n+1}(y)\zeta(y)\Big).\end{aligned}$$ It is obvious that $\zeta_n\in L^q(\mR^d)$ with $q>d/\alpha$, hence [**(H$\sigma'$)**]{} is also true. Therefore, for each $x\in\mR^d$, there exists a unique strong solution $X_t^n(x)$ to SDE (\[sde2\]) with coefficients $\sigma_n$ and $b_n$. For $n\geq k$, define $$\varsigma_{n,k}(x):=\inf\{t\geq0: |X_t^n(x)|\geq k\}\wedge n.$$ By the uniqueness of the strong solution, we have $$\mP\Big(X_t^n(x)=X_t^k(x),\,\forall t\in[0,\varsigma_{n,k}(x))\Big)=1,$$ which implies that for $n\geq k$, $$\varsigma_{k,k}(x)\leq \varsigma_{n,k}(x)\leq \varsigma_{n,n}(x),\quad a.s..$$ Hence, if we let $\varsigma_k(x):=\varsigma_{k,k}(x)$, then $\varsigma_k(x)$ is an increasing sequence of stopping times and for $n\geq k$, $$\mP\Big(X_t^n(x)=X_t^k(x),\,\forall t\in[0,\varsigma_{k}(x))\Big)=1.$$ Now, for each $k\in\mN$, we can define $X_t(x):=X^k_t(x)$ for $t<\varsigma_k(x)$ and $\varsigma(x):=\lim_{k\rightarrow\infty}\varsigma_k(x)$. It is clear that $X_t(x)$ is the unique strong solution to SDE (\[sde2\]) up to the explosion time $\varsigma(x)$ and (\[xt\]) holds. [10]{} Budhiraja A., Dupuis P. and Maroulas V.: Variational representations for continuous time processes. Ann. Inst. Henri Poincaré Probab. Stat., [**[47]{}**]{} (2011), 725–747. Budhiraja A., Chen J. and Dupuis P.: Large deviations for stochastic partial differential equations driven by a Poisson random measure. Stochastic Process. Appl., [**[123]{}**]{} (2013), 523–560. Bass R. F., Burdzy K. and Chen Z.: Stochastic differential equations driven by stable processes for which pathwise uniqueness fails. Stoch. Proc. Appl., [**[111]{}**]{} (2004), 1–15. Caffarelli L. and Silvestre L.: The Evans-Krylov theorem for nonlocal fully nonlinear equations. Ann. Math., [**174**]{} (2011), 1163–1187. Chen Z., Kim P. and Kumagai T.: Global heat kernel estimates for symmetric jump processes. Trans. Amer. Math. Soc., [**363**]{} (2011), 5021–5055. Chen Z. and Kumagai T.: Heat kernel estimates for stable-like processes on $d$-sets. Stochastic Process. Appl., [**[108]{}**]{} (2003), 27–62. Chen Z., Song R. and Zhang X.: Stochastic flows for Lévy processes with Hölder drift. http://arxiv.org/abs/1501.04758. Chen Z. and Zhang X.: Heat kernel and analyticity of non-symmetric jump diffusion semigroups. Prob.
Theory and Related Fields, [**156**]{} (2015), 1–46. Eidelman S. D., Ivasyshen S. D. and Kochubei A. N.: Analytic Methods in the Theory of Differential and Pseudo-differential Equations of Parabolic Type. Birkhauser, Basel (2004). Fang S., Luo D. and Thalmaierb A.: Stochastic differential equations with coefficients in Sobolev spaces. J. Funct. Anal., [**259**]{} (2010), 1129–1168. Fedrizzi E. and Flandoli F.: Hölder Flow and Differentiability for SDEs with Nonregular Drift. Sto. Ana. and App., [**[31]{}**]{} (2013), 708–736. Flandoli F., Gubinelli M. and Priola E.: Well-posedness of the transport equation by stochastic perturbation. Invent. Math., [**180**]{}(1) (2010), 1–53. Ikeda N. and Watanabe S.: Stochastic Differential Equations and Diffusion Processes, 2nd edition. North-Holland, Kodansha, 1989. Imkellera P. and Willrich N.: Solutions of martingale problems for Lévy-type operators with discontinuous coefficients and related SDEs. Sto. Pro. App., [**[126]{}**]{} (2016), 703–734. Kim P. and Song R.: Stable process with singular drift. Stoch. Proc. Appl., [**124**]{} (2014), 2479–2516. Kochubei A. N.: Parabolic pseudodifferential equations, hypersingular integrals and Markov processes. Math. USSR Izv., [**33**]{} (1989), 233–259. \[translation from Izv. Akad. Nauk SSSR Ser. Mat., [**52**]{} (1988), 909–934.\] Krylov N. V. and Röckner M.: Strong solutions of stochastic equations with singular time dependent drift. Probab. Theory Related Fields, [**[131]{}**]{}(2) (2005), 154–196. Kurtz T. G.: Martingale Problems, Particles and Filters. http://www.math.wisc.edu/$\sim$kurtz/Lectures/ill06pst.pdf. Kurtz T. G.: Equivalence of Stochastic Equations and Martingale Problems. Stochastic Analysis, (2010), 113–130. Kurtz T. G. and Protter P. E.: Weak convergence of stochastic integrals and differential equations. II. Infinite-dimensional case. In Probabilistic models for nonlinear partial differential equations (Montecatini Terme, 1995), volume 1627 of Lecture Notes in Math., pages 197–285. Springer, Berlin, 1996. Menoukeu P. O., Meyer B. T., Nilssen T., Proske F. and Zhang T.: A variational approach to the construction and Malliavin differentiability of strong solutions of SDEs. Math. Ann., [**[357]{}**]{} (2013), 761–799. Mikulevcius R. and Pragarauskas H.: On the Cauchy problem for integro-differential operators in Sobolev classes and the martingale problem. J. Diff. Eq., [**256**]{} (2014), 1581–1626. Mikulevcius R. and Xu F.: On the rate of converge of strong Euler approximation for SDEs driven by Levy processes. https://arxiv.org/pdf/1608.02303.pdf. Priola E.: Pathwise uniqueness for singular SDEs driven by stable processes. Osaka Journal of Mathematics, [**49**]{} (2012), 421–447. Priola E.: Stochastic flow for SDEs with jumps and irregular drift term. http://arXiv:1405.2575v1. Stein E. M.: Singular integrals and differentiability properties of functions. Princeton Mathematical Series 30, Princeton University Press, Princeton, NJ, 1970. Tanaka H., Tsuchiya M. and Watanabe S.: Perturbation of drift-type for Lévy processes. J. Math. Kyoto Univ., [**14**]{} (1974), 73–92. Triebel H.: Interpolation Theory, Function Spaces, Differeential Operators. North-Holland Publishing Company, Amsterdam, 1978. Wang F. Y.: Gradient Estimates and Applications for SDEs in Hilbert Space with Multiplicative Noise and Dini Continuous Drift. J. Diff. Eq., [**[3]{}**]{} (2016), 2792–2829. Wang F. Y.: Integrability Conditions for SDEs and Semi-Linear SPDEs. http://arxiv.org/pdf/1510.02183.pdf. 
Xie L.: Singular SDEs with critical non-local and non-symmetric Lévy type generator. Stoch. Proc. App., (2017), http://dx.doi.org/10.1016/j.spa.2017.03.014. Xie L. and Zhang X.: Sobolev differentiable flows of SDEs with local Sobolev and super-linear growth coefficients. Ann. Prob., [**44**]{}(6) (2016), 3661–3687. Zhai J. and Zhang T.: Large deviations for 2-D stochastic Navier-Stokes equations driven by multiplicative Lévy noises. Bernoulli, [**21**]{} (2015), 2351–2392. Zhang X.: Strong solutions of SDEs with singular drift and Sobolev diffusion coefficients. Stoch. Proc. Appl., [**[115]{}**]{} (2005), 1805–1818. Zhang X.: Stochastic homeomorphism flows of SDEs with singular drifts and Sobolev diffusion coefficients. Electron. J. Probab., [**[16]{}**]{} (2011), 1096–1116. Zhang X.: Stochastic differential equations with Sobolev drifts and driven by $\alpha$-stable processes. Ann. Inst. H. Poincaré Probab. Statist., [**49**]{} (2013), 915–931. [^1]: The first author is supported by the Project Funded by PAPD of Jiangsu Higher Education Institutions. The second author is partially supported by grants NSFC No. 11571390, Macao S.A.R. FDCT/030/2016/A1 and University of Macau MYRG2016-00025-FST.
{ "pile_set_name": "ArXiv" }
--- abstract: 'We present a calculation of the Sun’s motion through the Milky Way Galaxy over the last 500 million years. The integration is based upon estimates of the Sun’s current position and speed from measurements with [*Hipparcos*]{} and upon a realistic model for the Galactic gravitational potential. We estimate the times of the Sun’s past spiral arm crossings for a range in assumed values of the spiral pattern angular speed. We find that for a difference between the mean solar and pattern speed of $\Omega_\odot - \Omega_p = 11.9 \pm 0.7$ km s$^{-1}$ kpc$^{-1}$ the Sun has traversed four spiral arms at times that appear to correspond well with long duration cold periods on Earth. This supports the idea that extended exposure to the higher cosmic ray flux associated with spiral arms can lead to increased cloud cover and long ice age epochs on Earth.' author: - 'D. R. Gies and J. W. Helsel' title: 'Ice Age Epochs and the Sun’s Path Through the Galaxy' --- Introduction ============ Since its birth the Sun has made about 20 cycles around the Galaxy, and during this time the Sun has made many passages through the spiral arms of the disk. There is a growing interest in determining how these passages may have affected Earth’s environment. Shaviv (2002, 2003) makes a persuasive argument that there is a correlation between extended cold periods on Earth and Earth’s exposure to a varying cosmic ray flux (CRF). Shaviv proposes that the CRF varies as the Sun moves through Galactic spiral arms, regions with enhanced star formation and supernova rates that create more intense exposure to cosmic rays. The CRF experienced by Earth may affect the atmospheric ionization rate and, in turn, the formation of charged aerosols that promote cloud condensation nuclei [@har01; @eic02]. @mar00 show that there is a close correlation between the CRF and low altitude cloud cover over a 15 year time span. Thus, we might expect that extended periods of high CRF lead to increased cloud cover and surface cooling that result in long term (Myr) ice ages. Spiral arm transits may affect Earth in other ways as well. @yeg04 suggest that during some spiral passages the Earth may encounter interstellar clouds of sufficient density to alter the chemistry of the upper atmosphere and trigger an ice age of relatively long duration. The higher stellar density in the arms may more effectively perturb the Oort cloud of comets and lead to a greater chance of large impacts on Earth, and this combined with the possible lethal effects of nearby supernova explosions could cause mass extinctions during during passages through the spiral arms [@lei98]. On the other hand, the record of terrestrial impact craters suggests a variation on a time scale shorter than the interarm crossing time, but possibly related to the Sun’s oscillations above and below the disk plane [@sto98]. A comparison of the geological record of temperature variations with estimates of the Sun’s position relative to the spiral arms of the Galaxy is difficult for a number of reasons. First, our location within the disk makes it hard to discern the spiral structure of the Galaxy, particularly in more distant regions. Nevertheless, there is now good evidence that a four-arm spiral pattern is successful in explaining the emissions from the star-forming complexes of the Galaxy [@rus03]. 
Second, the angular rotation speed of the Galactic spiral pattern is still poorly known, with estimates ranging from 11.5 [@gor78] to 30 km s$^{-1}$ kpc$^{-1}$ [@fer01] (see reviews in @sha03, [@bis03], and @mar04). Finally, the Sun’s orbit in the Galaxy is not circular, and we need to account for the Sun’s variation in distance from Galactic center and in orbital speed to make an accurate estimate of the Sun’s position in the past. Here we present such a calculation of the Sun’s path through the Galaxy over the last 500 Myr. It is based upon the Sun’s current motion relative to the local standard of rest as determined from parallaxes and proper motions from the [*Hipparcos Satellite*]{} [@dea98] and on a realistic model of the Galactic gravitational potential [@deb98]. We discuss how the spiral pattern speed is critical to the estimates of the times of passage through the spiral arms, and we show a plausible example that is consistent with the occurrence of ice ages during spiral arm crossings. Integration of the Sun’s Motion =============================== An integration of the Sun’s motion was made using a cylindrical coordinate system for the Galaxy of $(R, \phi, Z)$. We first determined the position and resolved velocity components of the Sun in this system using the velocity of the Sun with respect to the local standard of rest [@dea98] and the Sun’s position relative to the plane [@hol97]. We then performed integrations backward in time using a fourth-order Runge-Kutta method and a model for the Galactic potential from @deb98. We adopted the model (\#2) from @deb98 that uses a Galactocentric distance of $R_o=8.0$ kpc and a disk stellar density exponential scale length of $R_{d\star}=2.4$ kpc. This model has a circular velocity at $R_o =8.0$ kpc of 217.4 km s$^{-1}$. We used time steps of 0.01 Myr over a time span of 500 Myr. Note that the model potential is axisymmetric and does not account for the minor variations in the field near spiral arms. We also ignore accelerations due to encounters with giant molecular clouds, since their effect is small over periods less than 1 Gyr (at least in a statistical sense; @jen92). The full set of coordinates as a function of time is not included here, but interested readers can obtain the digital data from our web site[^1]. The Sun’s journey in cylindrical coordinates is illustrated in Figure 1. The top panel shows the temporal variation in distance from Galactic center, and we see the radial oscillation that is expected from the “epicycle approximation” for nearly circular orbits [@bin87]. The period is 170 Myr and the corresponding frequency is 36.9 km s$^{-1}$ kpc$^{-1}$, which is close to the expected value of $36.7\pm2.4$ km s$^{-1}$ kpc$^{-1}$ based upon the local Oort constants [@fea97]. The middle panel shows the advance in azimuthal position with the orbit (small departures from linearity reflect speed variations that conserve angular momentum). The Sun has completed just over two circuits of the Galaxy over this time span. The lower panel shows the oscillations above and below the Galactic plane. The period is approximately 63.6 Myr, but there are cycle to cycle variations caused by the varying radial density in the model. This period $P$ is approximately related to the mid-plane density at the average radius, $\rho = (26.43~{\rm Myr}/P)^2$ $M_\odot$ pc$^{-3}$ [@bin87]. 
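As a quick numerical check of this relation, the snippet below uses only the two numbers quoted above; it is illustrative and not part of the original analysis.

```python
# rho = (26.43 Myr / P)^2 in units of M_sun per cubic parsec
P_z = 63.6                       # vertical oscillation period in Myr
rho = (26.43 / P_z) ** 2
print(f"implied mid-plane density: {rho:.2f} M_sun/pc^3")   # -> 0.17
```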
The period for our model of the solar motion corresponds to a mid-plane density of 0.17 $M_\odot$ pc$^{-3}$, which is close to current estimates of the Oort limit of $0.15\pm 0.01$ $M_\odot$ pc$^{-3}$ [@sto98]. Thus, while the estimates of motion of the Sun in the $Z$ direction are secure for the recent past, probable errors in the period of approximately $7\%$ may accumulate to as much as half a cycle error in the timing of the oscillations 500 Myr ago. The errors in the estimates of the Sun’s current Galactic motions [@dea98] have only a minor impact on these trajectories. For example, the error in the $V$ component of motion amounts to a difference of only $3^\circ$ in $\phi$ over this 500 Myr time span. We next consider the motion of the Sun in the plane of the Galaxy relative to the spiral arm pattern. The disk of the Galaxy from the solar circle out-wards appears to display a four-arm spiral structure as seen in the emission of atomic hydrogen [@bli83] and molecular CO [@dam01] and in the distribution of star forming regions [@rus03]. We show in Figure 2 the appearance of the Galactic spiral arm patterns based on the model of @wai92 but with some revisions introduced by @cor03[^2]. This representation is very similar to the pattern adopted by @rus03. We have rescaled the pattern from a solar Galactocentric radius of 8.5 kpc to a value of 8.0 kpc for consistency with our model of Galactic potential from @deb98. Each arm is plotted with an assumed width of 0.75 kpc [@wai92] and each is named in accordance with the scheme of @rus03. The dotted line through the center of the Galaxy indicates the current location of the central bar according to @bis03. The pattern speed of the bar may be similar to that of the arms [@iba95] or it may be faster than that of the arms [@bis03], in which case the bar – arm relative orientation will be different in the past. The placement of the Sun’s trajectory in this diagram depends critically on the relative angular pattern speeds of the Sun and the spiral arms. The mean advance in azimuth in our model of the Sun’s motion corresponds to a solar angular motion of $\Omega_\odot = 26.3$ km s$^{-1}$ kpc$^{-1}$. If the difference in the solar and spiral arm pattern speeds, $\Omega_\odot-\Omega_p$, is greater than zero, then the Sun overtakes the spiral pattern and progresses in a clockwise direction in our depiction of the Galactic plane. Unfortunately, the spiral pattern speed is not well established and may in fact be different in the inner and outer parts of the Galaxy [@sha03]. Several recent studies [@ama97; @bis03; @mar04] advocate a spiral pattern speed of $\Omega_p=20\pm5$ km s$^{-1}$ kpc$^{-1}$, and we show in Figure 2 the Sun’s trajectory projected onto the plane for this value ($\Omega_\odot-\Omega_p = 6.3$ km s$^{-1}$ kpc$^{-1}$). Diamonds along the Sun’s track indicate its placement at intervals of 100 Myr. We see that for this assumed pattern speed the Sun has passed through only two arms over the last 500 Myr. However, if we assume a lower but still acceptable pattern speed of $\Omega_p=14.4$ km s$^{-1}$ kpc$^{-1}$ (shown in Fig. 3 for $\Omega_\odot-\Omega_p = 11.9$ km s$^{-1}$ kpc$^{-1}$), then the Sun has crossed four spiral arms in the past 500 Myr and has nearly completed a full rotation ahead of the spiral pattern. Thus, the choice of the spiral pattern speed dramatically influences any conclusions about the number and timing of Sun’s passages through the spiral arms over this time interval. 
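The sensitivity to the assumed pattern speed can be illustrated with a back-of-envelope estimate. The sketch below treats the four arms as equally spaced in azimuth at the solar radius and the Sun's drift relative to the pattern as uniform at $\Omega_\odot-\Omega_p$; it ignores the epicyclic motion and the actual arm geometry, so it is only a rough consistency check, not the calculation performed in this work.

```python
import numpy as np

KMS_PER_KPC_TO_RAD_PER_MYR = 1.0227e-3   # 1 km/s/kpc expressed in rad/Myr

def mean_interarm_time(delta_omega, n_arms=4):
    """Mean time (Myr) between arm crossings for equally spaced arms,
    given delta_omega = Omega_sun - Omega_p in km/s/kpc."""
    return (2.0 * np.pi / n_arms) / (delta_omega * KMS_PER_KPC_TO_RAD_PER_MYR)

for d_omega in (6.3, 11.9):
    dt = mean_interarm_time(d_omega)
    print(f"Omega_sun - Omega_p = {d_omega:4.1f} km/s/kpc:"
          f" ~{dt:3.0f} Myr between crossings,"
          f" ~{500.0 / dt:.1f} crossings in 500 Myr")
```

For the two relative speeds considered above this gives roughly 240 and 130 Myr between crossings, i.e. about two and four crossings in 500 Myr, consistent with the trajectories shown in Figures 2 and 3.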
The duration of a coherent spiral pattern is an open question, but there is some evidence that long-lived spiral patterns may be more prevalent in galaxies with a central bar. For example, numerical simulations of the evolution of barred spirals by @rau99 suggest that spiral patterns may last several gigayears. Their work suggests that the shortest time scale for the appearance or disappearance of a spiral arm is about 1 Gyr. Therefore, it is reasonable to assume that the present day spiral structure has probably been more or less intact over the last 500 Myr (at least in the region of the solar circle). Discussion ========== @sha03 argues that the Earth has experienced four large scale cycles in the CRF over the last 500 Myr (with similar cycle times back to 1 Gyr before the present). Shaviv shows that the CRF exposure ages of iron meteorites indicate a periodicity of $143\pm10$ Myr in the CRF rate. Since the cosmic ray production is related to supernovae and since Type II supernovae will be more prevalent in the young star forming regions of the spiral arms, Shaviv suggests that the periodicity corresponds to the mean time between arm crossings (so that Earth has made four arm crossings over the last 500 Myr). @sha03 and @sav03 show how the epochs of enhanced CRF are associated with cold periods on Earth. The geological record of climate-sensitive sedimentary layers (glacial deposits) and the paleolatitudinal distribution of ice rafted debris [@fra92; @cro99] indicate that the Earth has experienced periods of extended cold (“icehouses”) and hot temperatures (“greenhouses”) lasting tens of million years [@fra92]. The long periods of cold may be punctuated by much more rapid episodes of ice age advances and declines [@imb92]. The climate variations indicated by the geological evidence of glaciation are confirmed by measurements of ancient tropical sea temperatures through oxygen isotope levels in biochemical sediments [@vei00]. All of these studies lead to a generally coherent picture in which four periods of extended cold have occurred over the last 500 Myr, and the midpoints of these ice age epochs (IAE) are summarized in Table 1 [see @sha03]. The icehouse times according to @fra92 are indicated by the thick line segments in each of Figures 1, 2, and 3. If these IAE do correspond to the Sun’s passages through spiral arms, then it is worthwhile considering what spiral pattern speeds lead to crossing times during ice ages. We calculated the crossing times for a grid of assumed values of $\Omega_\odot - \Omega_p$ and found the value that minimized the $\chi^2_\nu$ residuals of the differences between the crossing times and IAE. There are two major error sources in the estimation of the timing differences. First, the calculated arm crossing times depend sensitively on the placement of the spiral arms, and we made a comparison between the crossing times for our adopted model and that of @rus03 to estimate the timing error related to uncertainties in the position of the spiral arms (approximately $\pm8$ Myr except in the case of the crossing of the Scutum–Crux arm on the far side of the Galaxy where the difference is $\approx 40$ Myr). Secondly, there are errors associated with the estimated mid-times of the IAE, and we used the scatter between the various estimates in columns 2 – 5 of Table 1 to set this error (approximately $\pm14$ Myr). We adopted the quadratic sum of these two errors in evaluating the $\chi^2_\nu$ statistic of each fit. 
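In code form, the fitting procedure amounts to a one-dimensional grid search. The sketch below is a schematic reconstruction only: `crossing_times_for` is a hypothetical placeholder standing in for the orbit integration plus the adopted arm geometry, and the error values are those quoted above.

```python
import numpy as np

SIGMA_ARM = 8.0    # Myr, timing uncertainty from the spiral arm placement
SIGMA_IAE = 14.0   # Myr, scatter among the IAE midpoint estimates

def chi2_nu(model_crossings, iae_midpoints):
    """Reduced chi^2 of the crossing-time vs. ice-age-midpoint differences,
    with the two timing errors added in quadrature; one fitted parameter."""
    resid = np.asarray(model_crossings) - np.asarray(iae_midpoints)
    sigma2 = SIGMA_ARM**2 + SIGMA_IAE**2
    dof = max(len(resid) - 1, 1)
    return np.sum(resid**2 / sigma2) / dof

def best_fit(d_omega_grid, iae_midpoints, crossing_times_for):
    """crossing_times_for(d_omega) -> arm-crossing times (Myr) for that speed."""
    chi2 = np.array([chi2_nu(crossing_times_for(w), iae_midpoints)
                     for w in d_omega_grid])
    i = int(np.argmin(chi2))
    return d_omega_grid[i], chi2
```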
The results of the fitting procedure for various model and sample assumptions are listed in Table 2. The first trial fit was made by finding the $\chi^2_\nu$ minimum that best matched the crossing times with the IAE midpoints from @sha03 (given in column 5 of Table 1 and noted as “Midpoint” in column 2 of Table 2). All four arm crossings were included in the calculation (indicated as 1 – 4 in column 3 of Table 2) that used the adopted model for the Galactic potential with a Galactocentric distance $R_o = 8.0$ kpc and and a stellar disk exponential scale length of $R_{d\star}=2.4$ kpc (model \#2 from @deb98; see columns 4 and 5 of Table 2). The best fit difference (column 6 of Table 2) is obtained with $\Omega_\odot - \Omega_p = 12.3 \pm 0.8$ km s$^{-1}$ kpc$^{-1}$, where the error was estimated by finding the limits for which $\chi^2_\nu$ increased by 1. This fit gave reasonable agreement between the IAE and crossing times for all but the most recent crossing of the Sagittarius – Carina arm. Thus, we made a second fit (\#2 in Table 2) using only the crossings associated with IAE 2 – 4, and this solution (with $\Omega_\odot - \Omega_p = 11.9 \pm 0.7$ km s$^{-1}$ kpc$^{-1}$) is the one illustrated in Figure 3. The crossing times (given in the final column of Table 1) agree well with the adopted IAE midpoints. Our results are similar to the estimate of $\Omega_\odot - \Omega_p = 10.4 \pm 1.5$ km s$^{-1}$ kpc$^{-1}$ from @sha03 who assumed a circular orbit for the Sun in the Galaxy. We also computed orbits using two other models for the Galactic potential from @deb98 and determined the best fit spiral speeds for these as well. Fit \#3 in Table 2 was made assuming a larger Galactocentric distance $R_o = 8.5$ kpc but with the same ratio of $R_{d\star}/R_o$ (model \#2b in @deb98), and the resulting best fit spiral speed is the same within errors as that for our adopted model. We also computed an orbit for a potential with a larger value of disk exponential scale length $R_{d\star}/R_o$ (model \#3 in @deb98), but again the best fit spiral speed (fit \#4 in Table 2) is the same within errors as that for our adopted model. Thus, the details of the adopted Galactic potential model have little influence on the derived spiral pattern speed needed to match the IAE times. We might expect that the IAE midpoint occurs somewhat after the central crossing of the arm. For example, @sha03 suggests that the IAE midpoint may occur some 21 – 35 Myr after the central arm crossing due to the difference in the stellar and pattern speeds (so that the cosmic rays move ahead of arms as the stellar population does) and to the time delay between stellar birth and supernova explosion of the SN II cosmic ray sources. Furthermore, if ice ages are triggered by encounters with dense clouds as suggested by @yeg04, then the ice age may not begin until the Sun reaches the gas density maximum at the center of the arm. Thus, we calculated a second set of best fit spiral speeds to match the mean crossing and icehouse starting times [@fra92], and these are listed as fits \#5 and \#6 in Table 2. This assumption leads to somewhat smaller values of $\Omega_\odot - \Omega_p$, but ones that agree within errors with all the other estimates. We offer a few cautionary notes about possible systematic errors in this analysis. 
First, the fit of the IAE and arm crossing times depends on the difference $\Omega_\odot - \Omega_p$, and if our assumed value of $\Omega_\odot$ eventually needs revision, then so too will the spiral pattern speed $\Omega_p$ need adjustment. For example, @rei04 derive an angular rotation speed of $\Omega_{LSR} = 29.5\pm 1.9$ km s$^{-1}$ kpc$^{-1}$ for the local standard of rest based upon Very Long Baseline Array observations of the proper motion of Sgr A$^\star$ with respect to two extragalactic radio sources. If we suppose the local Galactic rotation curve is flat, then $\Omega_\odot = \Omega_{LSR} ~8.0 / R_g = 28.7\pm 1.8$ km s$^{-1}$ kpc$^{-1}$, where $R_g=8.23$ kpc is the Sun’s mean Galactocentric distance. Adopting this value results in a spiral pattern speed of $\Omega_p = 16.8\pm 2.0$. Second, our calculation ignores any orbital perturbations caused by close encounters with giant molecular clouds that cause an increase in the Sun’s motion with respect to a circularly rotating frame of reference. @nor04 present of a study of the ages and velocities of Galactic disk stars that indicates a net increase in the random component of motion proportional to time raised to the exponent 0.34. Thus, we would expect that the Sun’s random speed has increased through encounters by only $\approx 4\%$ over the last 500 Myr, too small to change the orbit or the arm crossing times estimates significantly. Third, we have ignored the deviations in the gravitational potential caused by the arms themselves. The Sun presumably slows somewhat during the arm crossings so that the duration of the passage is longer than indicated in our model, but since our model of the gravitational potential represents an azimuthal average, the derived orbital period and interarm crossing times should be reliable. @lei98 argue that mass extinctions may also preferentially occur during spiral arm crossings. However, they proposed that a spiral pattern speed of $\Omega_p=19$ km s$^{-1}$ kpc$^{-1}$ is required to find consistency between times of mass extinctions and spiral arm crossings, and if correct, then the relationship between ice ages and arm crossings would apparently be ruled out because $\Omega_p=19$ km s$^{-1}$ kpc$^{-1}$ is too large for the inter-arm crossing time to match the intervals between IAE (see Fig. 2 and Fig. 3). We show the times of the five major mass extinctions as X signs in Figures 1 – 3 [@rau86; @ben95; @mat96]. We see that in fact the lower value of $\Omega_p=14.4$ km s$^{-1}$ kpc$^{-1}$ ($\Omega_\odot - \Omega_p = 11.9$ km s$^{-1}$ kpc$^{-1}$, as shown in Fig. 3) also leads to a distribution of mass extinction times that fall close to or within a spiral arm passage, so the association of mass extinctions with arm crossings may also be viable in models with pattern speeds that are consistent with the ice age predictions. Our calculation of the Sun’s motion in the Galaxy appears to be consistent with the suggestion that ice age epochs occur around the times of spiral arm passages as long as the spiral pattern speed is close to $\Omega_p=14 - 17$ km s$^{-1}$ kpc$^{-1}$. However, this value is somewhat slower than the $20\pm5$ km s$^{-1}$ kpc$^{-1}$ preferred in recent dynamical models of the Galaxy [@ama97; @bis03; @mar04]. 
The resolution of this dilemma may require more advanced dynamical models that can accommodate differences between pattern speeds in the inner and outer parts of the Galaxy (for example, a possible resonance between the four-armed spiral pattern moving with $\Omega_p=15$ km s$^{-1}$ kpc$^{-1}$ and a “two-armed” inner bar moving with $\Omega_p=60$ km s$^{-1}$ kpc$^{-1}$; @bis03). We thank Walter Dehnen for sending us his code describing the Galactic gravitational potential. We also thank the referee and our colleagues Beth Christensen, Crawford Elliott, and Paul Wiita for comments on this work. Financial support was provided by the National Science Foundation through grant AST$-$0205297 (DRG). Institutional support has been provided from the GSU College of Arts and Sciences and from the Research Program Enhancement fund of the Board of Regents of the University System of Georgia, administered through the GSU Office of the Vice President for Research. Amaral, L. H., & Lepine, J. R. D. 1997, , 286, 885 Benton, M. J. 1995, Science, 268, 52 Binney, J., & Tremaine, S. 1987, Galactic Dynamics (Princeton: Princeton Univ. Press) Bissantz, N., Englmaier, P., & Gerhard, O. 2003, , 340, 949 Blitz, L., Fich, M., & Kulkarni, S. 1983, Science, 220, 1233 Cordes, J. M., & Lazio, T. J. W. 2003, preprint (astro-ph/0301598) Crowell, J. C. 1999, Pre-Mesozoic Ice Ages: Their Bearing on Understanding the Climate System, Mem. Geological Soc. Am., 192 Dame, T. M., Hartmann, D., & Thaddeus, P. 2001, , 547, 792 Dehnen, W., & Binney, J. J. 1998a, , 298, 387 Dehnen, W., & Binney, J. 1998b, , 294, 429 Eichkorn, S., Wilhelm, S., Aufmhoff, H., Wohlfrom, K. H., & Arnold, F. 2002, Geophysical Research Lett., 29, 10.1029/2002GL015044 Feast, M., & Whitelock, P. 1997, , 291, 683 Fernández, D., Figueras, F., & Torra, J. 2001, , 372, 833 Frakes, L. A., Francis, J. E., & Syktus, J. I. 1992, Climate modes of the phanerozoic: the history of the earth’s climate over the past 600 million years (Cambridge: Cambridge Univ. Press) Gordon, M. A. 1978, , 222, 100 Harrison, R. G., & Aplin, K. L. 2001, J. Atmospheric Terrestrial Physics, 63, 1811 Holmberg, J., Flynn, C., & Lindegren, L. 1997, in Proceedings of the ESA Symposium Hipparcos Venice ’97 (ESA SP-402), ed. B. Battrick (Noordwijk: ESA/ESTEC), 721 Ibata, R. A., & Gilmore, G. F. 1995, , 275, 605 Imbrie, J., et al. 1992, Paleoceanography, 7 (\#6), 701 Jenkins, A. 1992, , 257, 620 Leitch, E. M., & Vasisht, G. 1998, , 3, 51 Marsh, N. D., & Svensmark, H. 2000, , 85, 5004 Martos, M., Hernandez, X., Yáñez, M., Moreno, E., & Pichardo, B. 2004, , 350, L47 Matsumoto, M., & Kubotani, H. 1996, , 282, 1407 Nordström, B., et al. 2004, , 418, 989 Raup, D. M., & Sepkoski, J. J. 1986, Science, 231, 833 Rautiainen, P., & Salo, H. 1999, , 348, 737 Reid, M. J., & Brunthaler, A. 2004, , 616, 872 Russeil, D. 2003, , 397, 133 Shaviv, N. J. 2002, , 89, 051102 Shaviv, N. J. 2003, , 8, 39 Shaviv, N. J., & Veizer, J. 2003, GSA Today, 13, \#7, 4 Stothers, R. B. 1998, , 300, 1098 Veizer, J., Godderis, Y., & François, L. M. 2000, , 408, 698 Wainscoat, R. J., Cohen, M., Volk, K., Walker, H. J., & Schwartz, D. E. 1992, , 83, 111 Yeghikyan, A., & Fahr, H. 2004, , 425, 1113 [![The Sun’s position in the Galaxy over the last 500 Myr expressed in cylindrical coordinates, $R$ the distance from Galactic center ([*top*]{}), $\phi$ the azimuthal position in the disk relative to $\phi=0^\circ$ at present ([*middle*]{}), and $Z$ the distance from the plane ([*bottom*]{}). 
Thick line portions mark icehouse epochs on Earth [@fra92], and X signs indicate times of large mass extinctions on Earth. The names of the geological eras and periods over this time span are noted at top.[]{data-label="fig1"}](f1.eps "fig:"){height="12cm"}]{}

Table 1. Ice age epochs (all times in Myr before present): columns 2–5 give published midpoint estimates, with column 5 the adopted midpoints of @sha03, and the final column the spiral arm crossing times from our model.

  IAE   Estimate 1   Estimate 2   Estimate 3   Adopted midpoint   Crossing time
  ----- ------------ ------------ ------------ ------------------ ---------------
  1     $<22$        $<28$        30           20                 80
  2     155          144          180          160                156
  3     319          293          310          310                310
  4     437          440          450          446                446

Table 2. Spiral pattern speed fits: the IAE times matched (midpoint or icehouse starting time), the arm crossings included, the adopted $R_o$ and $R_{d\star}$ (both in kpc), and the best fit $\Omega_\odot - \Omega_p$ (km s$^{-1}$ kpc$^{-1}$).

  Fit   IAE times   Crossings   $R_o$   $R_{d\star}$   $\Omega_\odot - \Omega_p$
  ----- ----------- ----------- ------- -------------- ---------------------------
  1     Midpoint    1 – 4       8.0     2.40           $12.3 \pm 0.8$
  2     Midpoint    2 – 4       8.0     2.40           $11.9 \pm 0.7$
  3     Midpoint    2 – 4       8.5     2.55           $11.8 \pm 0.6$
  4     Midpoint    2 – 4       8.0     2.80           $11.8 \pm 0.7$
  5     Starting    1 – 4       8.0     2.40           $11.6 \pm 0.8$
  6     Starting    2 – 4       8.0     2.40           $11.4 \pm 0.6$

[^1]: http://www.chara.gsu.edu/$^\sim$gies/solarmotion.dat

[^2]: http://astrosun2.astro.cornell.edu/$^\sim$cordes/NE2001/
{ "pile_set_name": "ArXiv" }
--- abstract: | Search Based Software Testing (SBST) is a popular automated testing technique which uses a feedback mechanism to search for faults in software. Despite its popularity, it has fundamental challenges related to the design, construction and interpretation of the feedback. Neural Networks (NN) have been hugely popular in recent years for a wide range of tasks. We believe that they can address many of the issues inherent to common SBST approaches. Unfortunately, NNs require large and representative training datasets. In this work we present an SBST framework based on a deconvolutional generative neural network. Not only does it retain the beneficial qualities that make NNs appropriate for SBST tasks, it also produces its own training data which circumvents the problem of acquiring a training dataset that limits the use of NNs. We demonstrate through a series of experiments that this architecture is possible and practical. It generates diverse, sensible program inputs, while exploring the space of program behaviours. It also creates a meaningful ordering over program behaviours and is able to find crashing executions. This is all done without any prior knowledge of the program. We believe this proof of concept opens new directions for future work at the intersection of SBST and neural networks. author: - Leonid Joffe - 'David J. Clark' bibliography: - 'bibliography.bib' title: A Generative Neural Network Framework for Automated Software Testing --- =1 Introduction ============ In this paper we explore an automated testing framework based on Neural Networks (NN). The proposed approach aims to address some of the most pertinent problems of both automated testing and generative NN models. This proof of concept introduces multiple new ideas at the intersection of automated software testing and generative neural networks, and we hope it will stimulate further research in this area. Automated testing techniques have become increasingly popular thanks to the availability of resources and their ever improving effectiveness. We view automated testing from the perspective of Search Based Software Testing (SBST) [@harman2001search; @mcminn2004search]. In SBST, a program is repeatedly executed, its execution monitored and further executions are generated with the aim of more effective fault discovery. Commonly the goal is to optimise coverage. A fundamental feature of SBST is the reliance on a feedback mechanism to evaluate and direct the search. The practical instantiation of SBST that we consider is fuzzing. Fuzzing is a technique where a program is bombarded with random inputs in hopes it will eventually crash [@sutton2007fuzzing]. Modern fuzzers however use feedback mechanisms to improve their effectiveness. Using a feedback loop for improving search brings fuzzing into the realm of SBST. A popular modern fuzzer is the American Fuzzy Lop (AFL) [@zalewski2007american]. Although it does use a feedback mechanism and hence falls into the realm of SBST, its search strategies and notions of similarity are non-principled heuristics – they “just work”. That said, these heuristics all ultimately drive the fuzzer towards exploring a program’s behaviours maximally *diversely*. Indeed, diversification is generally a very common target for testing [@ammann2016introduction; @heimdahl2004specification; @gay2015risks]; after all, if the target of search is unknown, the best you can do is explore. 
Although popular both in academia and industry, SBST is not without its limitations [@mcminn2011search; @aleti2017analysing]. The problems of SBST we aim to address are the following. *First*, fitness landscapes of SBST may have plateaus or be discontinuous. Then either multiple adjacent candidate solutions appear equivalent in terms of fitness and the search mechanism cannot prioritise them or it cannot move to a better solution. *Second*, the fitness landscape may contain local optima which leads the search to a sub-optimal solution. *Third*, choosing the representation requires domain knowledge and expert involvement [@shepperd1995fundamentals]. *Fourth*, it is not apparent how to assign an ordering onto the search landscape, likewise requiring an involvement of an expert [@shepperd1995fundamentals; @harman2004metrics]. Granted, it can be defined in terms of the search operators, i.e. candidate solutions one search step away from each other are adjacent. But is this the best ordering for the search? *Finally*, the generation of new candidate solutions is a big can of worms with various approaches and solutions [@mcminn2004search; @anand2013orchestrated; @ali2010systematic; @alshraideh2006search; @fraser2013whole; @fraser2011evosuite; @fraser2012mutation; @pacheco2005eclat; @korel1992dynamic]. What search operators to use? How much prior knowledge is required and available? How to produce the next candidate solution? We propose that NNs’ properties make them ideal for tackling these issues. *First*, NNs are trained by a process of backpropagation [@rumelhart1986learning] which means they must be differentiable and thus continuous by construction [@glasmachers2017limits]. That is, if a neural network trains, its intermediate states must be differentiable. This continuity and differentiability make NNs a natural candidate to tackle the issue of plateaus in SBST search spaces. *Second*, it has been shown that given sufficient size, NNs avoid local optima [@kawaguchi2016deep; @swirszcz2016local; @nguyen2017loss; @nguyen2018optimization]. *Third*, the above property also implies that if a representation contains a useful signal, an NN will discover it. This means that they can use representations that are difficult to interpret manually. NNs may suffer from noise given redundant data, but these problems can be addressed with feature selection [@verikas2002feature; @leray1999feature; @wang2014attentional] and modern, deep architectures suffer less from this problem [@li2018feature]. This alternative is nonetheless preferable to a major manual effort. *Fourth*, due to their differentiable nature, NNs impose an order relation onto data (e.g. [@kingma2013auto]). They may thus help us reason about similarity and diversity of program behaviour in a principled, continuous way. *Lastly*, NNs can be used for generating new data without analytical human effort [@Goodfellow-et-al-2016; @kawthekarevaluating; @chollet2017git; @graves2013generating; @openai_gnn; @radford2015unsupervised; @van2016wavenet]. NNs, as tools for SBST, come with limitations of their own however. The main problem is that they are data hungry; they need large representative training datasets [@beleites2013sample]. This appears like a disqualifying issue in the context of SBST. If one wishes to train an NN to be used as a fuzzer, and has sufficient data to train the NN, this data could simply be used to test the program itself. This really defeats the purpose of building and training an NN-based fuzzer. 
The architecture proposed in this work can address all these problems – SBST and NN related alike. It is a generative model that produces its own training data. In a way, this architecture is similar to that of reinforcement learning (RL) where an agent explores an environment, discovers rewards and learns to navigate the space more effectively. In our approach, the agent is the generative model while the program under test is the environment. This might seem like a bizarre proposition as there is no apparent way to evaluate the quality of the generated data. You can produce all the random data you like, but how do you know if it is any good? What is your reward signal? We suggest that the principle of diversification can be adapted from SBST to evaluate the generated data; the model is rewarded for *diversity*. This framework we call GNAST (Generative Network for Automated Software Testing) begins by throwing random inputs at a program. Most of these will be rejected by the program. Some, however, will trigger an unusual execution trace. Those are prioritised and kept in the training dataset. As the process continues, GNAST generates program inputs that trigger new behaviours and uses them in its training dataset. The fact that GNAST is NN based allows it to address the issues outlined above, as we will show in the sequel. In this work we implement a prototype of GNAST and present a number of initial findings. First and foremost, we show how such a system can be trained and how it produces diverse program inputs. Furthermore, the inputs are clearly sensible with respect to the syntax of the program under test. In addition, we can control the syntactic similarity of generated strings. Finally, rudimentary as the current state of GNAST is, it does actually discover crashes. Currently it is a prototypical proof of concept accompanied by several outstanding questions. As such, it cannot be readily compared with fully fledged fuzzers like AFL. Even though it is a prototype, GNAST presents a number of novel ideas. It is a generative NN-based automated software testing tool that does not require a training dataset – it produces its own. It is a new example of a deconvolutional generative model for string generation. It uses a novel quantified notion of similarity for program executions. It presents a prioritisation method for program executions based on a greedy algorithm called Farthest-First Traversal (FFT). Finally, it uses “unusualness” and diversity as an explicit training target and although this idea is common in testing, it is novel in this formulation. Overview of Approach {#sec:overview} ==================== We propose a framework for automated software testing. Its purpose is to explore the behaviours of a Software Under Test (SUT) diversely. Our system does this by generating inputs for the SUT, observing the executions under those inputs and adjusting further generation of inputs towards the most unusual behaviours. It is essentially an evolutionary fuzzer, albeit with convoluted[^1] generative and feedback mechanisms. The tool is based on two neural networks (NN), an execution trace profiler and a prioritisation mechanism of executions. The structure is shown in and its algorithm is presented in . A single epoch of the algorithm corresponds to two passes through the framework in : the first is a generative pass where the networks’ weights are not updated, and a training pass – where they are. 
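Before the step-by-step walk-through below, one epoch can be previewed as a compact sketch. This is a hypothetical pseudo-implementation, not the actual code: the objects and helpers named here (`gnn`, `vae`, `sut`, `rank_by_diversity`, and so on) are placeholders for the components described in this section, and the placement of the exploration noise is an assumption.

```python
# Hypothetical sketch of a single GNAST epoch; every helper is a placeholder.
import numpy as np

def gnast_epoch(gnn, vae, sut, dataset, batch=256, keep=1024, noise_dim=32):
    # Generative pass (weights frozen): latent noise -> candidate program inputs.
    z = np.random.normal(size=(batch, noise_dim))
    inputs = gnn.generate(z)

    # Execute the SUT and collect one execution trace per input.
    traces = [sut.run(x) for x in inputs]

    # Encode the traces into the n-dimensional latent behaviour space.
    encodings = vae.encode(traces)

    # Merge with the persistent dataset, rank by farthest-first traversal over
    # the encodings, and keep only the most mutually distant (unusual) entries.
    dataset = rank_by_diversity(dataset + list(zip(inputs, traces, encodings)))[:keep]

    # Training pass: the VAE learns to encode the kept traces; the GNN learns
    # to map (noise-perturbed) encodings back to the program inputs.
    vae.fit([t for _, t, _ in dataset])
    gnn.fit([e + np.random.normal(scale=0.1, size=np.shape(e)) for _, _, e in dataset],
            [x for x, _, _ in dataset])
    return dataset
```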
The algorithm and framework are described next, with the numbers in brackets corresponding to the lines in and . The process is initialised by feeding Gaussian noise to an untrained Generative Neural Network (GNN) (*2-6*). It produces a batch of program inputs $X$. Most of these will be nonsensical: strings of random characters. The inputs are then executed by the SUT and the execution traces $T$ are collected (*12*). As most inputs are just noise, most executions will yield the "invalid input" trace. Some inputs, however, may contain features that are valid, which will be reflected in their execution traces. The execution traces are then encoded by a Variational Autoencoder (VAE) (*14*). It casts the discrete, unorderable execution trace into an n-dimensional "latent" space encoding $E$. The latent space encoding is a quantifiable representation of features of execution traces, and we can reason about their similarity in terms of Euclidean distance. The encoded executions are then ranked by the "unusualness" of their encoded traces using the Farthest-First Traversal (FFT) algorithm (*18*). Redundant datapoints are discarded and the most interesting ones are kept in a persistent training dataset (*20*). The dataset is composed of the execution traces $T$, their encodings $E$ and the program inputs that triggered them $X$. The dataset is then used to train both the VAE and the GNN (*24, 26*). As the VAE learns to encode the execution traces of the training dataset, new, unusual ones stand out from the bunch. The GNN, in turn, learns to produce program inputs that trigger a variety of traces, i.e. program behaviours. We also add noise to perturb the dataset towards exploration so that more novel behaviours are found (*29*). We suggest that the proposed framework represents a fundamentally novel approach to using neural networks for diversity-driven testing of programs. ![The GNAST Framework. A detailed explanation of the training and generative processes is given in . The algorithm corresponding to this image is shown in , with numbers in brackets corresponding to line numbers.[]{data-label="fig:framework"}](framework){width="\linewidth"} Initialisation of the algorithm: $e \leftarrow 0$, $\{Z_0\} \sim \mathcal{N}(\mu,\,\sigma^{2})$, $\{Res_0\} \leftarrow \{\langle X=\emptyset,\, T=\emptyset,\, Z=Z_0\rangle\}$. Research Questions ================== While many of the individual mechanisms of GNAST are inspired by other work, the overall structure is fundamentally novel. It is an RL-inspired loop that generates training data for itself by sampling the output distribution and evaluates samples with an external diversity-driven oracle. The novelty brings about a huge number of design and configuration decisions, all of which have an effect on the research questions outlined below.\ **First**, there was no guarantee that the training of such a system would converge at all. It is also not clear what optimisers, layer sizes, numbers of hidden layers etc. to use. Non-convergence is essentially underfitting – the mechanism does not learn to approximate the data. Whether the GNAST framework's training converges is the focus of the first research question.\ **RQ1**: *"Does GNAST framework's training converge?"*\ **Second**, it is insufficient for GNAST's training to simply converge. It also needs to *not* converge too far, so as to continue generating new datapoints. New datapoints are essential both for exercising varied behaviours of the SUT as well as building up a diverse dataset for training.
Much like non-convergence in RQ1 means underfitting, converging too far corresponds to overfitting – the system learns to produce only a few datapoints and since those are kept in the training dataset, their effect becomes ever stronger. The second research question looks at diversity during training.\ **RQ2**: *"Does GNAST maintain diversity throughout training?"*\ **Third**, if the mechanism does train in an acceptable way, we then need to evaluate whether the produced program inputs are sensible. Granted, a program may crash under a completely unexpected random input, but a fundamental principle of fuzzing (and indeed any other testing) is that inputs ought to be consumable by the SUT, beyond an "invalid input" check. This means they ought to be somewhat well-formed, or at least have some relevant syntactic features. AFL generates strings that can hardly be called well-formed, as the representation yielded by its instrumentation is only a crude representation of a program's behaviour. Since the instrumentation is lifted out of AFL, the strings generated by GNAST were expected to contain some syntactic features similar to those made by AFL, but there was *no* expectation of them being properly well-formed. The aim of the third research question is to see whether the generated strings are completely random or comparable in structure to those made by AFL.\ **RQ3**: *"Do the generated program inputs have syntactic features similar to those generated by an AFL baseline?"*\ **Fourth**, one of the intended features of GNAST is the ability to control the similarity of syntactic features of the produced inputs by adjusting the input. Once GNAST is trained, the latent space that was used as input to the GNN can be replaced with n-dimensional normal noise. The GNN thus becomes a stand-alone generator which takes a vector of reals as input and generates a string on the output. Input values close to each other ought to produce similar strings while distant inputs should produce dissimilar ones. Whether this is the case is investigated in the fourth research question.\ **RQ4**: *"Do strings generated from nearby points in the latent space share syntactic features vs. those far apart?"*\ **Fifth**, by the same mechanism that controls the syntactic similarity of the generated program inputs, GNAST ought to be able to generate strings that trigger a specific desired behaviour. After all, one of the components of the loss function is the reconstruction of input. Our fifth research question looks at this aspect.\ **RQ5**: *"Can GNAST generate input strings that trigger specific behaviours of a SUT?"*\ **Finally**, although GNAST is only a proof of concept with a lot of additional work to be done, we would like to know if the framework "has legs" as a design for a fuzzer. The ultimate aim of a fuzzer is to find crashes, and our last research question is simple but poignant.\ **RQ6**: *"Does GNAST discover crashes in **sparse**[^2]?"* Related Work and Background =========================== The proposed framework applies machine learning techniques, specifically GNNs, to automated software testing. This section links the mechanisms of GNAST to concepts, ideas and inspirations in those fields. Fuzzing and the American Fuzzy Lop ---------------------------------- Fuzzing is a technique where a program is bombarded with random inputs in the hope that it will eventually crash [@sutton2007fuzzing].
Modern fuzzers use feedback to improve the input generation process; they evaluate their progress in order to search for faults more effectively. American Fuzzy Lop (AFL) is a popular fuzzer widely used in academia [@zalewski2007american]. A number of recent papers have taken AFL as a basis, and improved on it. These include work on improving AFL’s instrumentation [@gan2018collafl; @chen2018angora], alternative search strategies [@bohme2017coverage; @bohme2017directed; @petsios2017slowfuzz], producing better initial seeds [@lv2018smartseed], locating interesting input string mutation locations [@rajpal2017not; @she2018neuzz] and introducing program context information [@rawat2017vuzzer]. Although we use some parts of AFL, our work does not attempt to improve AFL. Instead, we are proposing a fundamentally different architecture. Representation and Fitness Function ----------------------------------- Feedback-driven automatic testing tools like modern fuzzers, fall under the field of Search Based Software Testing (SBST) [@harman2012search; @mcminn2004search]. As any other SBST approach, the design of a fuzzer depends on a representation and a fitness function [@harman2001search]. Representation is the choice of what to observe about a candidate solution, in this case an execution. It may be a coverage profile, a sequence of executed basic blocks or something simple like execution time. In this work, the representation is based on the execution trace profiling mechanism from AFL. AFL’s representation of an execution is an approximate count of decision point transitions. It does not capture context nor sequence information about the execution. The fitness function is what transforms a representation into a quantified assessment of quality of a candidate solution. But what constitutes quality in fuzzing and how can it be quantified? Fuzzing, and software testing in general, aims to exercise program behaviours diversely: diversity is quality. The original AFL uses a handful of heuristics to identify executions that are distinct or “interesting”, i.e. of high quality. For instance, if an execution exercises a transition that has not been previously seen, it is considered interesting. Whilst the heuristics of AFL are empirically effective, they are not principled and do not order nor quantify the similarity of executions or behaviours. Quantifying Program Behaviour ----------------------------- GNAST is intended to give an ordering, and quantify the similarity of executions. The approach relies on an autoencoder neural network which is used to process execution traces. An autoencoder is an NN which attempts to reconstruct its inputs at the outputs, while arranging the datapoints into an n-dimensional encoding (“latent”) space by similarity of their features [@Goodfellow-et-al-2016]. Specifically, we use a flavour of an autoencoder called the Variational Autoencoder (VAE) [@kingma2013auto]. The essential detail of a VAE is in the structure of its encoding layer. Rather than encoding datapoints as single real number values, they are instead encoded as tuples of mean and variance $(\mu, \sigma^2)$. The result of this construction is that datapoints tend to be more evenly spread across the latent space and interpolation between them is smoother than in a regular autoencoder. In GNAST, the inputs and outputs to the VAE are the execution traces and the latent space encoding is a representation of their most salient (i.e. most distinguishing) features. 
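As a concrete illustration, a minimal trace VAE might look as follows in TensorFlow/Keras. The layer sizes and the trace dimensionality are illustrative assumptions only; the paper's actual architecture and hyperparameters are not reproduced here.

```python
import tensorflow as tf
from tensorflow.keras import layers

TRACE_DIM, LATENT_DIM = 65536, 8   # illustrative sizes, not the paper's configuration

class TraceVAE(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.enc = tf.keras.Sequential([
            layers.Dense(512, activation="relu"),
            layers.Dense(2 * LATENT_DIM),          # outputs (mu, log_var)
        ])
        self.dec = tf.keras.Sequential([
            layers.Dense(512, activation="relu"),
            layers.Dense(TRACE_DIM),
        ])

    def encode(self, x):
        mu, log_var = tf.split(self.enc(x), 2, axis=-1)
        return mu, log_var

    def reparameterize(self, mu, log_var):
        eps = tf.random.normal(tf.shape(mu))
        return mu + tf.exp(0.5 * log_var) * eps

    def call(self, x):
        mu, log_var = self.encode(x)
        z = self.reparameterize(mu, log_var)
        x_hat = self.dec(z)
        # Reconstruction loss plus the KL term that shapes the latent space.
        rec = tf.reduce_mean(tf.reduce_sum(tf.square(x - x_hat), axis=-1))
        kl = -0.5 * tf.reduce_mean(
            tf.reduce_sum(1 + log_var - tf.square(mu) - tf.exp(log_var), axis=-1))
        self.add_loss(rec + kl)
        return x_hat
```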
The benefit of encoding traces in this way is that it imposes an order relation on discrete, seemingly unorderable datapoints. Furthermore, the space is continuous and smooth. These are characteristics of a good search landscape [@harman2004metrics]. Since execution traces are the representation of program behaviour, the latent space in fact constitutes the space of behaviours. We suggest that since we want to explore diverse behaviours, this latent space is what we ought to be exploring diversely.

Ranking Algorithm
-----------------

Once we have constructed an n-dimensional Euclidean space to represent program behaviours, we need a method for evaluating which datapoint is unusual and which is redundant. We propose doing this using an algorithm known as the Farthest-First Traversal (FFT) or the greedy permutation. FFT arranges a set of points into a sequence such that the minimal distance of each following point is maximised. In other words, the prefix of the sequence is always maximally representative of the whole dataset. We are not aware of work which would use FFT in ranking candidate solutions in SBST. The algorithm is shown in and described below, with line numbers shown in brackets. Prior to the actual algorithm, the pairwise distance of each datapoint is calculated (**2**). Then the result is initialised with two maximally distant elements (**3**). The next element to append is chosen such that it is furthest from the ones already in the result sequence, i.e. the minimal distance is maximised (**5**). Each following element in the sequence is thus maximally different from the ones already in the prefix. Once the dataset is sequenced, its tail (the most redundant portion) is discarded. We propose that pruning the dataset according to this notion of similarity keeps it diverse and maximally representative.

Generative Neural Networks {#sec:gnn}
--------------------------

The heart of GNAST is a Generative Neural Network (GNN) that is trained to produce program inputs using the culled, representative dataset. As the dataset expands, the generator gets new datapoints to drive its training towards previously unseen behaviours and program inputs. Perhaps the best known example of a GNN design for strings is the Recurrent Neural Network (RNN) [@Goodfellow-et-al-2016; @rumelhart1986learning; @hochreiter1997long]. A very early version of GNAST was implemented with an RNN (modelled on work by Bowman *et al.* [@bowman2015generating]), but it was much too slow to train and to generate new inputs, so that design was abandoned. Our generator is therefore based on an alternative design, inspired by the Deconvolutional Generative Adversarial Network (DCGAN) [@radford2015unsupervised]. In DCGAN, two neural networks – a generator and a discriminator – are pitted against each other. The generator takes Gaussian noise as input, passes it through a stack of transposed convolutions [@Goodfellow-et-al-2016] and produces an image on the output. The discriminator takes an image as input and outputs an answer of whether the image was real or fake (generated). While the discriminator learns to be ever more effective in telling fake images from real ones, the generator attempts to produce images that fool the discriminator into believing they are real. This tandem of networks ends up creating realistic images. There are two significant problems with readily applying DCGAN to the generation of discrete data.
The first problem is that unlike pixel values in images, discrete variables such as characters in strings are not readily orderable. For instance, “bat” is *not* semantically almost the same as “cat” although the two words only differ by a single (alphabetically adjacent) character: $"cat" \neq ("bat"+1.0)$ [@goodfellow_reddit_2016]. On the other hand, in continuous variables such as pixel hue values, a slight offset does not dramatically change the meaning: a cat can be still identified even if the image is slightly blurry. The other problem, inherent to neural networks, is the need for a lot of training data [@beleites2013sample]. DCGAN requires a corpus of real datapoints for training the discriminator. This in itself is a disqualifying requirement for a fuzzer, as it would require a large, representative dataset of valid program inputs. This defeats the purpose of a fuzzer: if you have a truly representative dataset of inputs, it could just be used to test the program. Furthermore, using only valid inputs would train the generator to produce only valid inputs. This is not ideal for fuzzing, as the program ought to be tested on both valid and invalid inputs. In GNAST, in place of a discriminator network, we have the combination of a SUT, a VAE and FFT ranking. This mechanism serves the same purpose as the discriminator in DCGAN: to, as it were, quantify the quality of candidate solutions. Furthermore, thanks to the VAE, we have a continuous input space ordered by features of execution traces. This ordering of the input space imposes an order on the generated strings. In addition, we have unlimited training data thanks to GNAST producing its own. These design features thus address major problems of the use of generative models for string generation. In the machine learning arena, the networks employed in this prototype are not novel. Recent advances in GNNs have brought about numerous novel architectures, many of them for image generation, e.g. [@he2016identity; @karras2019style; @zhang2017stackgan], and even for text generation [@bowman2015generating; @wang2018text]. Indeed the concept of training an NN given some response from an external source is reminiscent of RL. From this point of view, the GNN represents the actor, the SUT is the environment, and the VAE with FFT is the critic[@konda2000actor]. As for the direct use of GNNs for test input generation – the closest use case to ours – we are only aware of work by Godefroid et al. where an RNN was used to test a PDF reader [@godefroid2017learn] with mixed results. Experimental Setup {#sec:experiments} ================== This section presents the implementation details of each component of GNAST, as well as the experiments we ran. To the best of our knowledge, the proposed framework is novel in many aspects. A large part of our experiments was therefore exploratory; various designs and configurations were considered and tested, albeit not exhaustively due to resource limitations. The NN mechanisms were implemented with the Tensorflow framework [@tensorflow2015-whitepaper]. Citations to standard deep learning concepts are omitted for clarity, and the reader is referred to the Deep Learning Book [@Goodfellow-et-al-2016] for reference. Trace Profiling --------------- Two mechanisms are lifted out of AFL: the program instrumentation (and hence representation) and the execution harness. To be clear, *none* of AFL’s generative or prioritisation mechanisms were used. 
We did not modify or parametrise these mechanisms, so there are no options to be considered, save for, of course, replacing the instrumentation with an alternative tool altogether. AFL’s execution trace representation is a count of decision point transitions. First, the source code is instrumented with a drop-in replacement for GCC (or G++ or Clang) prior to testing. For efficiency, only a rough number of transitions is kept (8 buckets), and the trace is of a fixed size of 64K. An execution trace is thus a 64K long, typically very sparse vector with 8 distinct values. Although GNAST does not take advantage of the efficiency of this representation, it is nonetheless appropriate for three reasons. First, it is simple to implement by adapting AFL’s code. Second, the empirical evidence of AFL’s performance is in and of itself an indication of the representation’s effectiveness. Finally, there are many new aspects to our framework, and using an alternative, potentially ineffective trace representation would add to the uncertainty of the framework overall. AFL’s execution harness is referred to as a “fork server”. The basic idea of the fork server is to spin up the SUT up to the initial `main()` call and keep a snapshot of the program in that state. Whenever the SUT is executed under a new input, this initial state is copied and execution proceeds from there. Thanks to this simple mechanism, the time required for initialisation is avoided and fuzzing becomes much faster, so the inputs produced by the fuzzer can be executed quickly.

Variational Autoencoder
-----------------------

The VAE is composed of an encoder and a decoder, modelled on the architecture of Kingma’s work [@kingma2013auto]. The encoder is as follows. The first layer is the input of size 64K, matching the size of the trace. This is followed by one or more densely connected hidden layers. Next is the encoding layer, which is composed of densely connected $\mu$ and $\sigma^2$ layers. In addition, the encoding layer includes a source of Gaussian noise. The construction of the encoding layer $z$ is shown in . Once the VAE has been trained and is queried for an encoding of a trace, it is the $\mu$ layer’s output that is fetched. That is, the $\sigma^2$ and $z$ layers are only used during training to give the latent space the desired characteristics, but the actual encoding is the value $\mu$.

$$z = \mu + e^{0.5\sigma^2} * \mathcal{N}(0,I) \label{eq:encoding_layer}$$

The output of the $z$ layer is fed to hidden layer(s) of the decoder. The output of the decoder is a categorical softmax layer of size $(64K, 8)$. This is specific to this trace representation where counts of state transitions are bucketed into 8 values. This allows the VAE to be trained using categorical cross-entropy. The loss function of the VAE is composed of two terms, as shown in . The first term is the reconstruction loss and the second is the latent encoding regularisation loss. The purpose of the former is to get the VAE to actually encode the data. The latter term forces the latent space to approximate a normal distribution; to give it a convenient shape. More technically, the loss components are the Kullback-Leibler divergence between the approximate ($q$) and the true posterior ($p$) distributions, given datapoints ($x$) and model parameters ($\theta$), and the lower bound ($\mathcal{L}$) on the marginal likelihood of datapoint *i*, respectively [@kingma2013auto].
$$loss_{VAE} = D_{KL}(q_{\phi}(z | x^{(i)}) \|p_{\theta}(z | x^{(i)})) + \mathcal{L}(\theta,\phi;x^{(i)}) \label{eq:vae_loss}$$

The configurable parameters of the VAE are the number, size and activation functions of hidden layers, the size and noise rate of the encoding layer as well as the losses and optimisers. These parameters affect the nature of the VAE. We are not aware of work formally defining a “good” VAE but, from first principles of NNs, we can assume that the latent space must be neither underfitted nor overfitted. Avoiding underfitting means that the VAE needs to encode the features of the data. The purpose of this is obvious: if the network does not encode the data, its latent space has no meaning whatsoever. Under visual inspection, an underfitted latent space appears like a spherical cloud of points around zero. Practically, underfitting can be easily identified by whether the model’s loss value descends – i.e. whether the network trains. An overfitted latent space, in turn, looks like a manifold, e.g. a line. In this case the network has learned to simply memorise the datapoints and has mapped them to distinct points, rather than an area. In the case of overfitting, the latent space encoding is likewise meaningless.

Generative Neural Network
-------------------------

The input to the GNN is the encoding of a trace produced by the VAE, with the addition of small Gaussian noise ($\sigma=0.1$). Gaussian noise as input was also considered but discarded as less interpretable; there would be no mapping from behaviours to program inputs. The purpose of this noise is to perturb the data so that the framework explores novel behaviours. Much like in DCGAN, the input is then upscaled with densely connected layer(s), and reshaped to match the size of the output of the GNN (using deconvolutions with stride $>1$). The main portion of the GNN is a stack of transposed convolutions (commonly called “deconvolutions” – hence DCGAN) with strides one or two. This stack of deconvolutions may also include shortcuts, i.e. residual blocks [@he2016deep]. These are intended to strengthen the signal through the network, thereby alleviating the vanishing gradient problem [@hochreiter1998vanishing]. The output layer is of size $(str\_len_{max}, dict\_size)$. In our experiments the maximum string length $str\_len_{max}$ was fixed at $512$ and the number of possible characters $dict\_size$ was 129: 128 ASCII characters and 0 for padding. During training, the output layer is trained using softmax cross-entropy of one-hot encoded strings and the reproduction error of the encoded trace (). The first term $-\sum_{c=1}^{M}y_{o,c} \log(p_{o,c})$ trains the GNN to approximate the mapping of trace encodings to strings. It is a standard loss for training categorical classifiers [@Goodfellow-et-al-2016]. The second term $|\hat{z} - z|$ aims to approximate the SUT’s semantics within the GNN. This term is minimised when the value of the encoded trace on the input $z$ to the GNN matches the *true* trace encoding $\hat{z}$ that the SUT produces when executed under the generated input. The overall loss function maps traces to strings *and* tries to make those strings produce a specific behaviour.

$$loss_{GNN} = -\sum_{c=1}^{M}y_{o,c} \log(p_{o,c}) + |\hat{z} - z| \label{eq:gnn_loss}$$

During generation, rather than simply choosing the most likely character for each index of the output string (“argmax”), the generator samples from the output probability distribution, $x = \langle x_i \sim p_i; i < str\_len_{max} \rangle$.
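As a rough sketch of this sampling step (our illustration, not GNAST’s actual code; the index-to-character mapping is an assumption on our part), drawing one character per output position from the learned distributions could look as follows:

```python
import numpy as np

def sample_string(probs, pad_id=0):
    """probs: array of shape (str_len_max, dict_size) holding the generator's
    per-position softmax output (dict_size = 129: padding plus 128 ASCII codes)."""
    chars = []
    for p in probs:
        idx = np.random.choice(len(p), p=p)   # sample a plausible character; argmax would use p.argmax()
        if idx != pad_id:                     # index 0 is padding and is dropped
            chars.append(chr(idx - 1))        # assumed mapping: indices 1..128 -> ASCII codes 0..127
    return "".join(chars)
```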
This sampling technique selects *plausible* (rather than *most likely*) characters given a learned probability distribution. There are numerous possible parameter configurations for the GNN. Common options include the size and the number of upsampling and deconvolutional layers, activation functions and optimisers. In addition, fundamental architectural options had to be considered. Namely, whether to use string reconstruction cross-entropy, trace reconstruction MSE or their combination as a loss; whether to use trace encodings or Gaussian noise as the input; and whether to sample or use argmax on the output.

Experiments Conducted
---------------------

We conducted experiments on three SUTs that take strings with complicated grammar as input. The first one is an XML linter called *xmllint* from libxml [@libxml]. The second is *sparse* [@sparse], a lightweight parser for C. The last program is *cJSON*, a parser for the JSON format [@cjson]. Generating inputs for these programs out of thin air is a non-trivial task. Also, since these programs validate their inputs by design, the validation process ought to be reflected in execution traces. Execution traces that depend on syntax validation would give GNAST and AFL a good representation to work with. For the comparative analysis against a baseline, we first ran AFL on each program for 48 hours. This produced 3240, 11286 and 2071 queue inputs for *xmllint*, *sparse* and *cJSON* respectively. Of the three SUTs, AFL only found crashes in *sparse*. In GNAST’s implementation, we evaluated its performance under dozens (if not hundreds) of parameter configurations. There was not a fixed time budget, nor a strict performance measure for each configuration. Instead, it was a trial-and-error exploration of the effects of various configurations with respect to the research questions. Once the most promising structure and configuration were found, we trained GNAST on each program for 48 hours. These trained instances of GNAST were then used for RQs three through six.

Results {#sec:results}
=======

The research questions explore the nature of the proposed architecture. The findings of our investigation show that GNAST does indeed train while maintaining diversity, that it produces sensible program inputs, and that proximity in the latent space corresponds to similarity of generated strings. However, the result for RQ5 was negative: we cannot generate program inputs that would trigger specific program behaviours. Finally, GNAST does find crashes in *sparse* – the SUT in our corpus where AFL also found crashes.

RQ1 – Training Convergence
--------------------------

GNAST trained successfully under some configurations; however, it failed under others. Unlike for standard NNs, a successful training is not meant to converge arbitrarily close to zero, because GNAST ought to continually produce new data. Instead, the convergence ought to be simply *noticeable*. Our primary definition of failure is the failure to converge, i.e. underfitting. A secondary notion of failure is instability – a case when the training failed with the loss reaching `inf`. In fact, these two effects correspond to the vanishing and exploding gradient problems, respectively [@hochreiter1998vanishing; @pascanu2012understanding]. The VAE’s loss failed to converge when the noise in the encoding layer was too high and when the hidden layers were very small, e.g. 8 units.
Lowering the amount of noise to $\sigma = 0.1$ and using 512 or more neurons in the hidden layers along with a Leaky ReLU activation allowed the VAE to converge. The GNN’s training did not converge when the structure was too weak: when there were too many layers or too few filters. When it came to depth, more than a dozen layers did not appear to converge in a reasonable time. Residual blocks resolved this issue, however, and structures of up to 64 layers converged. With fewer than 32 filters the network also failed to converge. Initial configurations often resulted in instability, indeed much more often than we had previously encountered in working with NNs. We believe there are three reasons for the instability. The first is aggressive optimisers, particularly Adam [@kingma2014adam]. Adam is susceptible to instability when close to convergence [@wilson2017marginal], which is a potential explanation. The second reason is Gaussian noise in the latent layer of the VAE and on the input to the GNN. Lowering the noise $\sigma$ to $0.1$ resolved this instance of instability. The last cause, we suggest, is the constantly shifting dataset. This is the most unusual reason, specific to GNAST, but not to NNs in general. Modern optimisers like RMSProp [@tieleman2012lecture] and Adam both keep track of past gradients to calculate the next optimisation steps. Adam also uses momentum. Injecting new data throws these optimisers off: taking a step towards what previously seemed an optimal point and finding a completely unexpected datapoint there causes the loss to diverge. We are not aware of work where the training dataset is updated as it is in GNAST, so this suggestion is speculative and warrants further research. The final, most promising set of hyperparameters was the following: an RMSProp optimiser with a learning rate of $0.0001$, $42$ convolutional layers with residual connections, a kernel size of $3$, Leaky ReLU activation, batch normalisation after activation, the VAE’s latent encoding as input, and a categorical cross-entropy loss. The training loss followed an unusual trajectory: first down quickly, then up, then down slowly again. This is not a major finding per se, but an observation of expected behaviour, given a framework that produces its own training data. While there are few datapoints at the start of training, the model quickly learns how to reproduce them and the loss goes down. As new datapoints are found, the loss increases, and then slowly descends again when features of the new datapoints are learnt. The central finding of this RQ is that GNAST trains. The positive outcome to this RQ validates the principle of GNAST and makes subsequent questions worthwhile.

RQ2 – Diversity During Training {#res:rq2}
-------------------------------

The second property of a successful training process is that it maintains diversity. Since GNAST produces its own training data, it is critical for the framework not to collapse into generating the same outputs over and over. Overfitting was seen both in the VAE and the GNN. We consider a model to have failed to maintain diversity if no new traces are produced over 20 epochs of generation and training. When the VAE was too powerful, the loss tended towards zero and, under visual inspection, the latent space looked as if the datapoints were placed on a manifold. This occurred when there was no noise or the hidden layers were too powerful (too many cells or layers). In this scenario, the locality in the latent space is meaningless, which rids the VAE of its purpose.
That is to say, an overfitting VAE is a useless structure with respect to ordering the execution traces – not necessarily an immediate problem for diversity. Making the GNN too strong would in turn result in a reduction in diversity of the generated strings. Too many filters in the deconvolutional layers, too little noise on the input, or too low a dropout rate – all contributed to diminished diversity. We do not have a principled explanation for this, but it may be due to the GNN effectively learning to ignore the input layer and just encoding the dataset into its structure, before it manages to become sufficiently diverse and representative – an artefact of the recursive nature of GNAST. The fundamental architectural choice of sampling on the output, rather than taking the maximum, had a very strong effect on diversity. In an untrained model, changes in the input propagated to the output very weakly. That is, when feeding inputs drawn from a normal distribution to the GNN, and then taking argmax at the output, the generated strings altered very slightly, if at all. Sampling on the output, however, readily produces numerous varied outputs. This provides ample training data for GNAST while also following the learnt output distribution. The best loss function with respect to diversity was the combination of cross-entropy on the outputs and MSE between the input to the GNN and the resulting true encoding (when the generated string was passed through the SUT and the VAE). Using only cross-entropy, the model did produce diverse outputs initially, but then lost diversity and stabilised into preferred outputs. Using only the encoding reproduction, the framework did not appear to converge, and new inputs were found very slowly, perhaps even coincidentally. In combination, however, the model kept producing new outputs seemingly indefinitely[^3]. In addition, this combination of components for the loss function had a clear effect on crash discovery, as described below in . We cannot explain why it is the *combination* of these loss components that has this property, and this is a central direction for future work.

RQ3 – Syntactic Features
------------------------

One of the characteristics we desire of GNAST is that it ends up generating sensible program inputs. We evaluate the “sensibleness” of the generated corpora by inspecting the most common n-grams. This analysis is comparative because the current implementation of GNAST uses AFL’s instrumentation, so it could not produce better-formed strings than those made by AFL. N-grams are sequences of characters of lengths $[3...10]$ that occur most frequently in the corpora. Typical features found in corpora produced by AFL and GNAST are discussed below and sample snippets are presented in Listings \[lst:AFL\_examples\] and \[lst:GNAST\_examples\]. There were similarities in n-grams across corpora generated by both AFL and GNAST. For *xmllint*, the most prevalent features are the angle brackets `<` and `>`. Usually, however, these are not properly matched. That is, neither tool finds that opening and closing brackets ought to come in pairs. The colon `:` also appeared often. In XML, this special character is used to denote a name prefix to resolve name conflicts. Though `:` comes up often, it does not appear in the correct location, i.e. within a tag name. Another control sequence that kept appearing is the processing instruction `<?`, which sends the command within a tag to third-party software [@xmlSyntaxRules].
*Sparse* is a very simple parser for C code and it only identifies basic syntactic errors. It is therefore unsurprising that the generated strings did not appear very much like actual C code. Two features stood out however: braces `(` and `)` and the presence of semicolons `;`. Common character sequences for *cJSON* included curly and square braces `{`, `}` and `[`, `]`, colons `:` and line breaks `\n`. These are all special characters associated with the correct syntax of JSON. Again though, neither AFL nor GNAST generated anything resembling well-formed JSON. These observations were in line with the expectation of some syntactic features, but not well-formedness. There were also differences in the corpora generated by AFL and GNAST. The first difference is the length of generated strings. Whilst AFL’s heuristics keep the maximum length unrestricted, under the current implementation, the maximum string length of GNAST is capped at 512 characters. Furthermore, AFL occasionally prunes the corpus by deleting slices of the strings and observing whether this changes the execution traces. GNAST on the other hand has no preference for shorter strings, so most of the strings in its corpus are ca. 500-510 characters in length[^4]. Second, strings of AFL’s corpus contain very long sequences of repeated characters and character sequences. For instance, there are input strings where the letter `K` is repeated thousands of times. Although GNAST sometimes repeats sequences as well, not nearly to such degree. Third, strings produced by GNAST appear to be more exploratory: where AFL finds an interesting n-gram and keeps reusing it, GNAST does not. As a result, GNAST generated strings appear less structured. Finally, while neither tool is restricted to only printable characters, AFL uses them more frequently than GNAST does. To our knowledge, AFL does not have heuristics that prioritise printable characters, so we do not have an immediate explanation for this effect. It is obvious that the generated strings are a far cry from well-formed, whether generated by AFL or GNAST. Nonetheless, they are not random sequences of characters either; some syntactic features are clearly identifiable. We conclude that GNAST generates strings that appear to be comparable in nature to those made by AFL. This means that the feedback mechanism works and that GNAST leverages it effectively to learn significant features of the SUTs’ input syntax. // xmllint <YY>vYY&&&aZaaaa <b:><b:\\>< <Y:YvYYb></Y:YvY <?xml.ver:V?VFxx <YvHP>><?PM> <IQQQQQQQQQQQQQQQQQQQQY><YvY<YvQQQQQY=pnQ <\xd3\xbe>\xde[]:f\xde ((((((((() // Sparse [KKQKKKKKKKKmKKKKKKKK*KKKKKKKKKKKKKmK K?;K(C){K **/****\x00\x00\x00 ////////// // cJSON "\n{":3 \x1c\x01} \n{":3 \x1c\x01}\x13 +\u9'999\u+\u9A99 // xmllint <:p><n:/>\n HQ%\x1f>\n#jS,u\x03\x1f<\x05UV\x05\n1 +\n&<?\n7>"<? n=#+\n_<G:7> [WH\x13}\x19Bc<] <(eCQ\x05\x1bkL{GX\x01=O%Fck\x05Q) // Sparse i;();;gT;/*[n if()[[*(P**nc*H.F^//) //\x044U\x15COrm /*Q9\x15C1\x01q cU\x01UUv {s\x1c\x15r8\x05)!0C;\r #if({;+;?;*;U;1>Om; // cJSON {*,\x04} {\x80\x1aK:5\x0843"\x08y\x0b^\x04\x19"#\x02} {"\x1d\x7f\x10 <@?\x086\x1f\x1d\x08\x1d\x02\x1b{=\x08=\x08} RQ4 – Similarity of Elements in Latent space {#sec:res_rq4} -------------------------------------------- By design, GNAST is intended to map encodings of program executions to program inputs that triggered those executions. 
The latent space encodings define a notion of similarity and this ought to be reflected in the strings generated by the GNN: strings generated from points close to each other in the latent space should be similar, and strings from distant points dissimilar. This property was evaluated in the following way. Ten thousand vectors are drawn from a normal distribution and ordered by two algorithms: the greedy FFT algorithm, from most to least representative, as well as an opposite “closest first traversal” (CFT). CFT orders the datapoints such that the pairwise distance of each following element is minimised with respect to the existing sequence. We then feed the first hundred elements of the FFT and CFT sequences to a trained GNN and compare the generated strings. The expectation is that the strings from the FFT sequence ought to be less syntactically similar than those generated from CFT. Syntactic similarity is assessed by the Jaccard index of the overlap of n-grams of lengths $[1...10]$ of each string vs. the n-grams of the whole dataset of 100 elements. That is $\frac{|A \cap B|}{|A \cup B|}$, where $A$ is the set of n-grams in each individual string and $B$ is the set of n-grams in all 100 strings. The mean Jaccard indices are shown in . Since the inputs are randomly generated, the process was repeated ten times and the results averaged. The values are not to be compared across programs, but rather between the two sequences. They show that strings generated from adjacent input values are consistently more similar in terms of n-grams than those generated from distant points. This means that GNAST can generate new program inputs with a notion of relative similarity. Such an ability is a step towards ultimately defining search strategies in a continuous Euclidean space – one of the future prospects this work aims to enable.

   **SUT**     **FFT**      **CFT**
  ----------- ------------ ------------
   *xmllint*   0.199767     0.453304
   *sparse*    0.01205251   0.05063868
   *cJSON*     0.090320     0.337037

  : Mean Jaccard similarity of the n-grams of the first 100 elements in FFT and CFT sequences. Strings generated from nearby points in the input space have a higher overlap of n-grams, as shown by the higher values of the CFT column. This means that we can control the syntactic similarity of elements generated by GNAST.[]{data-label="tbl:rq3"}

RQ5 – Targeting Specific Behaviours
-----------------------------------

A further intended property of GNAST is the ability to generate program inputs that trigger a specific behaviour. In other words, given an input $z$ to the GNN, it generates a program input $x$, which when executed by the SUT produces a trace $t$, and when $t$ is encoded, the encoding $z'$ ought to be close to $z$. The evaluation here was straightforward. We sampled 100 GNN inputs $Z$, passed them through the framework and looked at the cosine distances $D_{cos}(Z,Z')$. A cosine distance of $0.0$ indicates perfect correlation, $2.0$ inverse correlation and $1.0$ a lack of correlation. The results for this RQ were negative: the distances were consistently within $0.1$ of $1.0$, i.e. the actual encodings $z'$ of the resulting traces were random with respect to the inputs $z$. So we cannot generate a string which will trigger an exact, specific behaviour. We do not have an immediate explanation for why this is, and we intend to look into this in future work.

RQ6 – Finding Crashes {#res:rq6}
---------------------

This RQ is very important for a fuzzer in the long run.
At this stage, however, GNAST is a proof of concept with some limitations and numerous ideas for future work (discussed below in ). The question of whether GNAST finds crashes is therefore merely an indication of its potential – not an evaluation of its current ability. The only SUT where AFL discovered crashes is *sparse*, and GNAST found crashes in it as well. This is not evidence of GNAST being a better fuzzer than AFL, but that the framework has the potential of being made into one, down the line. Crash discovery was surprisingly strongly affected by the loss function of the GNN. We observed above in that the MSE component of the GNN’s loss is useful for maintaining diversity. Furthermore, without this component, GNAST did not discover crashes in *sparse*. We retrained GNAST against *sparse* ten times to confirm this effect: five times with the MSE loss and five times without. In none of the runs without the MSE loss did it discover crashes. This is a somewhat baffling finding with two consequences. First, the MSE loss is indeed important for exploration as discussed in . Second, exploration is, in this case, important for crash discovery. At this point it is unclear why the MSE component helps exploration and thus crash discovery, and we intend to investigate this effect thoroughly in future work.

Future Work {#sec:future_work}
===========

The current implementation of GNAST is a proof of concept prototype. As such, it has a number of limitations that can be investigated in future work. First, there are very many possible configuration options and, given the novelty of the framework, there are hardly any reference systems on which to model the parameters. A larger, more systematic ablation study should be conducted in the future. Second, we use a generator based on transposed convolutions where the maximum string length is fixed. Methods for allowing variable and indeed unlimited string lengths should be investigated. Third, the alphabet used for generating strings is the whole ASCII character set. Introducing some prior knowledge of the target syntax, e.g. using sequences of characters or a restricted subset, may improve the performance of GNAST. Fourth, we investigated the trade-off between training convergence and diversity using fixed learning rates, learning schedules and noise levels, which might not have been optimal. That is, a better trade-off of learning and exploration could be achieved by dynamically adjusting the learning schedules and noise levels. Fifth, the expressiveness of the trace representation we lifted out of AFL is limited; it is a somewhat crude abstraction of program behaviour. Alternative execution trace profilers ought to be studied. Finally, there is the negative result for RQ5. As said, we cannot explain this outcome at this stage. Despite these limitations, we believe and hope that this work will be of interest to the community and inspire future research and collaboration.

Conclusion
==========

In this paper, we present the Generative Network for Automated Software Testing (GNAST) framework. It is intended to address multiple problems of SBST. These include problems with the selection, design, interpretation and implementation of representations, fitness functions and input generation methods. Furthermore, it bypasses a fundamental issue of scarcity of representative and useful training data for neural networks. GNAST generates program inputs with a generative NN, collects the resulting execution traces, and maps them onto an n-dimensional space with an autoencoder.
The encoding imposes an ordering on executions, so that we may reason about the similarity of executions quantitatively. GNAST uses this encoded representation to prune redundant datapoints in order to keep the dataset maximally representative of the range of observed program behaviours. The process is continued ad infinitum, with the framework continually exploring the SUT’s behaviours while learning to produce ever more useful inputs. We believe GNAST to be the first generative neural network of its kind for automated software testing; one based on the idea of striving towards a novel notion of diversity by leveraging a SUT as an external oracle – much like the environment in an RL scenario. We explored the behaviour of the proposed framework in a series of research questions. We showed that such an architecture can be trained, and that it keeps producing new, diverse and sensible program inputs. We also showed that GNAST’s notion of execution trace similarity translates to syntactic similarity of the generated strings. This means that we can produce new program inputs with a continuous control of their syntactic similarity. Although we intended to, we cannot currently generate program inputs such that they would trigger an exact target behaviour. We believe our work to be conceptually novel for SBST, NNs and their combination. Our analysis was not a complete, systematic exploration and ablation of all the possible configurations and options. Currently GNAST is a proof of concept which requires further work. On the other hand, it also opens numerous directions for future research and will hopefully be met with interest by the community.

[^1]: Pun intended.

[^2]: The program in which AFL found crashes.

[^3]: We ran the model for over 96 hours and it kept generating new inputs for each SUT.

[^4]: Strings shorter than 512 characters are due to removed padding bytes in the generated strings.
--- abstract: 'Quantum batteries are quantum mechanical systems with many degrees of freedom which can be used to store energy and that display fast charging. The physics behind fast charging is still unclear. Is this just due to the collective behavior of the underlying interacting many-body system or does it have its roots in the quantum mechanical nature of the system itself? In this work we address these questions by studying three examples of quantum-mechanical many-body batteries with rigorous classical analogs. We find that the answer is model dependent and, even within the same model, depends on the value of the coupling constant that controls the interaction between the charger and the battery itself.' author: - Gian Marcello Andolina - Maximilian Keck - Andrea Mari - Vittorio Giovannetti - Marco Polini title: 'Quantum versus classical many-body batteries' --- Introduction ============ Recently there has been a great deal of interest in quantum batteries (QBs) [@Alicki13; @Hovhannisyan13; @Binder15; @Campaioli17; @Ferraro17; @Le17; @Andolina18; @Andolina18b; @Campaioli18; @Farina18; @Julia-Farre18; @Zhang18], i.e. quantum mechanical systems that are able to store energy. These works have a key common thread in trying to understand whether quantumness yields a temporal speed-up of the charging process. A first, abstract approach [@Binder15; @Campaioli17] studied the possibility to charge $N$ systems via unitary operations. The authors introduced a parallel charging scheme, in which each of the subsystems is acted upon independently of the others, and a collective charging scheme, where global unitary operations acting on the full Hilbert space of all subsystems are allowed. In these works it was shown that the charging time scales with $N$, decreasing for increasing $N$. In the collective charging case and for large $N$, the power delivered by a QB is much larger than the one delivered by the parallel scheme. This speed-up was dubbed “quantum advantage” [@Binder15; @Campaioli17; @Le17; @Ferraro17]. Furthermore, in Ref.  it was shown that entanglement is not required to speed-up the evolution of a QB, since states which are confined in the sphere of separable states share an identical speed-up. However, the authors of Ref.  pointed out that such highly mixed states host only a vanishing amount of energy, yielding therefore a highly non-optimal result from the point of view of energy storage and delivery. In the same spirit, the authors of Refs.  studied similar issues but in realistic setups which can be implemented in a laboratory, such as arrays of qubits in cavities [@Ferraro17; @Andolina18; @Andolina18b; @Farina18] and spin chains in external magnetic fields [@Le17]. In Refs. , the battery units are not charged via abstract unitaries but, rather, by other quantum mechanical systems dubbed “chargers”. In this framework, the parallel scheme is the one in which each battery is charged by its own charger, independently of the others—see Fig. \[fig:Sketch\]. On the contrary, the collective scheme is the one in which all batteries are charged by the very same charger. Also in this context, the collective scheme outperforms the parallel one in terms of speed of the charging process. Finally, the authors of Ref.  demonstrated that quantum batteries have the potential for faster charging over their classical counterparts. As they noticed, however, the classical counterparts were assumed to be composed of non-interacting units. 
In this Article we compare the performance of QBs with that of their appropriate classical versions. Such a comparison is clearly of great interest for foundational reasons but has no implications for the development of scalable solid-state systems where energy transfer processes and their time scales can be studied experimentally. Indeed, any solid-state QB device is going to operate on the basis of electrons, photons, spins, etc., which are inherently described by quantum mechanics. We focus on three models. In the first, a single bosonic mode (the charger) is coupled to $N$ harmonic oscillators (the proper battery composed of $N$ subunits). In the second one, $N$ qubits playing the role of charging units are coupled to another set of $N$ qubits playing the role of the proper battery. Finally, the third one is the Dicke QB introduced in Ref. . In the first case, the performance of classical and quantum versions of the model is identical. In the second case, the classical version outperforms the quantum one. In the third case, there is a range of values of the charger-matter coupling parameter $g$ for which the quantum (classical) model performs better than the classical (quantum) one.

Our Article is organized as follows. In Sect. \[Comp\] we explain how the classical versus quantum comparison is actually carried out in this Article, briefly reviewing the correspondence between quantum commutators and classical Poisson brackets. In Sect. \[Fom\] we recap the charging protocol first introduced in Refs.  and introduce the figures of merit needed to evaluate the performance of classical and quantum many-body batteries. In Sect. \[sect:HO\] we discuss the first model (single bosonic mode coupled to $N$ harmonic oscillators). We demonstrate analytically that in this case classical and quantum versions of the model display fast charging with the same time scale. In Sect. \[sect:spin\] we introduce the second model ($N$ qubits coupled to $N$ qubits) and demonstrate how the classical version of the model outperforms the quantum one. In Sect. \[sect:Dicke\] we compare the Dicke QB model introduced in Ref.  with the corresponding classical analogue, showing numerically that the relative performance depends on the charger-matter coupling $g$. Finally, in Sect. \[concl\] we report a summary of our main findings and our conclusions.

Comparison between quantum and classical mechanics {#Comp}
==================================================

In quantum mechanics, the evolution of an operator $\hat{O}$ in time $t$ is described by the Heisenberg equation of motion $\hbar~d\hat{O}(t)/dt=i[{\mathcal{H}},\hat{O}(t)]$, where ${\cal H}$ is the Hamiltonian. Moreover, canonically conjugate variables, such as position $\hat{q}_i$ and momentum $\hat{p}_j$, fulfill the commutation relation $[\hat{q}_i,\hat{p}_j]=i\hbar\delta_{i,j}$. In the case of angular momentum $\hat{\bm J}$, a similar relation holds between different components: $[\hat{J}_i,\hat{J}_j]=\sum_k i\hbar\epsilon_{ijk}\hat{J}_k$, where $\epsilon_{ijk}$ is the Levi-Civita tensor. In Hamiltonian mechanics, a classical physical system is uniquely described by a set of canonical coordinates $\boldsymbol{x}^{\rm T}=( {\bm p}, {\bm q})$, where the components $q_i, p_i$ are conjugate variables obeying $\{q_i, p_j\}=\delta_{i,j}$. Here, $\{u,v\}\equiv\sum_i(\partial_{q_i} u~ \partial_{p_i}v- \partial_{p_i}u~ \partial_{q_i}v)$ denotes the Poisson brackets.
The time evolution of the system is uniquely defined by Hamilton’s equations: $$\begin{aligned} \label{eq:HJ} \frac{dq_i}{dt}&=&\partial_{p_i}\mathcal{H}^{\rm cl}({\bm x})~, \nonumber \\ \frac{dp_i}{dt}&=&-\partial_{q_i}\mathcal{H}^{\rm cl}({\bm x})~.\end{aligned}$$ A proper comparison between quantum and classical systems can be made by following the canonical quantization procedure [@DIRAC]. Once the Hamilton’s function $\mathcal{H}^{\rm cl}({\bm x})$ of a classical system is written in terms of conjugate variables with Poisson brackets $\{q_i, p_j\}=\delta_{i,j}$, quantization is carried out by replacing classical coordinates by operators and enforcing canonical commutation relations instead of canonical Poisson brackets. While finding the classical analog of a quantum system with degrees of freedom that are position and momentum is straightforward and consists in making the replacements $\hat{q}_i\to{q}_i$ and $\hat{p}_j\to{p}_j$, the classical version of quantum mechanical angular momentum is more subtle. It turns out [@BraunBook; @Carlos18] that the right choice is to replace the components $\hat{J}_i$ of the angular momentum operator $\hat{\bm J}$, with $\hat{\bm J}^2=\hbar^2J(J+1)$, with the classical canonical coordinates $J_z=J\cos(\theta)$ and $\phi=\arctan(J_y/J_x)$, so that $\{J \cos(\theta), \phi \}=1$, i.e. $\hat{J}_z \to J\cos(\theta)$, $\hat{J}_x \to J\sin(\theta)\cos(\phi)$, and $\hat{J}_y \to J\sin(\theta)\sin(\phi)$. In the remainder of this Article we set $\hbar=1$. Charging protocol and figures of merit {#Fom} ====================================== We start by reviewing a model for the charging process of a QB [@Ferraro17; @Andolina18; @Andolina18b; @Farina18]. As stated above, the classical and quantum cases are both described by an Hamiltonian formalism. We can therefore introduce the charging protocol in terms of a general Hamiltonian, without specifying [*a priori*]{} whether we treat the classical or quantum case. As such, we will describe the protocol in general, commenting explicitly on the classical and quantum cases only when it is needed. In our charging protocol [@Ferraro17; @Andolina18; @Andolina18b; @Farina18], a first system $\rm A$ acts as the energy “charger” for a second system $\rm B$, which instead acts as the proper battery. They are characterized by local Hamiltonians $\mathcal{H}_{\rm A}$ and $\mathcal{H}_{\rm B}$, respectively, which, for the sake of convenience, are both chosen to have zero ground-state energy. We also assume ${\rm B}$ to be composed by $N$ non-mutually interacting elements. (Effective interactions between these elements are induced by the charger. In the Dicke QB case, for example, the cavity mode induces effective interactions between all the qubits.) In the quantum case, the system at time $t=0$ is in a pure factorized state $| \psi\rangle_{\rm A} \otimes |0\rangle_{\rm B}$, $|0\rangle_{\rm B}$ being the ground state of $\mathcal{H}_{\rm B}$ and $| \psi\rangle_{\rm A}$ having mean local energy $E^{(N)}_{\rm A}(0)\equiv {_{\rm A} \langle} \psi|\mathcal{H}_{\rm A}|\psi\rangle_{\rm A} >0$, where $N$ is the number of elements which compose the battery. Analogously, in the classical case we impose that the system B at time $t=0$ is in the configuration with the lowest energy and we fix the energy in the charger A to be $E^{(N)}_{\rm A}(0)>0$. 
By switching on a coupling Hamiltonian $\mathcal{H}_1$ between A and B, our aim is to provide as much energy as possible to $\rm B$, in some finite amount of time $\tau$, the charging time of the protocol. For this purpose, we write the global Hamiltonian of the AB system as $$\label{eq:protocol} \mathcal{H}(t) \equiv \mathcal{H}_{\rm A}+\mathcal{H}_{\rm B}+\lambda(t)\mathcal{H}_1~,$$ where $\lambda(t)$ is a time-dependent parameter that represents the external control we exert on the system, and which we assume to be given by a step function equal to $1$ for $t\in[0,\tau]$ and zero elsewhere. Accordingly, in the quantum case, we denote by $|\psi(t) \rangle_{\rm AB}$ the evolved state of the AB system at time $t$, its total energy $E(t) \equiv {_{\rm AB}\langle} \psi(t) |\mathcal{H}(t)| \psi(t) \rangle_{\rm AB}$ being constant at all times with the exception of the switching points, $t=0$ and $t=\tau$, where some non-zero energy can be transferred to ${\rm AB}$ by the external control. (See Ref.  for a detailed analysis of the energy cost of modulating the interaction.) The same conditions hold in the classical case where we denote by $\boldsymbol{x}^{\rm T}(t)=( \boldsymbol{p}(t), \boldsymbol{q}(t))$ and $E(t)=\mathcal{H}^{\rm cl}\big(\boldsymbol{x}(t)\big)$ the solution of Hamilton’s equations of motion and the total energy at time $t$, respectively. Here, $ \boldsymbol{p}$ and $ \boldsymbol{q}$ are classical conjugate variables. It is also useful to define the vector $\boldsymbol{x}_{\rm B}^{\rm T}(t)=( \boldsymbol{p}_{\rm B}(t), \boldsymbol{q}_{\rm B}(t))$, denoting the position in phase space of B at time $t$. In the quantum case, we are mainly interested in the mean local energy of the battery at the end of the protocol, i.e. $$\label{stored energy} E^{(N)}_{\rm B}(\tau)\equiv {\rm tr}[\mathcal{H}_{\rm B} \rho_{\rm B}(\tau)]~,$$ $\rho_{\rm B}(\tau)$ being the reduced density matrix of the battery at time $\tau$. It is worth noticing that while $E^{(N)}_{\rm B}(\tau)$ does not necessarily represent the amount of energy that one can recover from the battery after charging, it has been shown that for large enough $N$ this is not a relevant issue [@Andolina18b]. In the classical case, the corresponding quantity is the energy in B, $E^{(N)}_{\rm B}(\tau)=\mathcal{H}^{\rm cl}_{\rm B}(\boldsymbol{x}_{\rm B}(\tau))$. The performance of the charger-battery set-up can be studied by analyzing the average storing power $P^{(N)}_{\rm B}(\tau)\equiv E^{(N)}_{\rm B}(\tau)/\tau$. Specifically, we define the maximum average power as $\bar P^{(N)}_{\rm B}\equiv \max_\tau [P^{(N)}_{\rm B}(\tau)]$. Finally, we introduce the optimal charging time $\bar{\tau}$, $\bar P^{(N)}_{\rm B}=P^{(N)}_{\rm B}(\bar{\tau})$, and the energy at the maximum power, $\bar{E}^{(N)}_{\rm B}\equiv{E}^{(N)}_{\rm B}(\bar{\tau})$. Our aim is to compare the parallel charging scenario against the collective one [@Binder15; @Campaioli17; @Ferraro17]. As mentioned above, we define as a parallel charging, the protocol in which $N$ batteries are independently charged by $N$ chargers. Each charger has an energy $E_{\rm A}^{(1)}(0)$. Conversely, the collective charging case is the one in which all $N$ batteries are charged by the same charger. In order to do a clear comparison, in the collective charging case we impose that the charger has total energy equal to the sum of the energies of all the chargers of the parallel charging scheme, i.e. $E_{\rm A}^{(N)}(0)=NE_{\rm A}^{(1)}(0)$. 
Since we are interested in comparing the power of the protocols, we denote by the symbol ${\bar{P}_{\sharp}}$ (${\bar{P}_{\parallel}}$) the maximum power in the collective (parallel) protocol. Following Ref. , we introduce the so-called collective advantage: $$\label{Gamma} \Gamma \equiv \frac{\bar{P}_{\sharp}}{\bar{P}_{\parallel}}~.$$ We have $\bar{P}_{\sharp}=\bar{P}^{(N)}_{\rm B}$ and $\bar{P}_{\parallel}=N\bar{P}^{(1)}_{\rm B}$. The latter property follows from the fact that the power in the parallel charging scheme is trivially extensive. The figure of merit in Eq. (\[Gamma\]) quantifies how convenient is to charge a battery in a collective fashion rather than in a parallel way. While in Refs.  this quantity is named “quantum advantage”, it is possible to define $\Gamma$ also in the classical case. Since our main goal is to compare quantum and classical batteries, we will denote by $\Gamma_{\rm qu}$ the collective advantage produced by a quantum Hamiltonian and by $\Gamma_{\rm cl}$ the collective advantage produced by the analog classical Hamiltonian. What matters is therefore the ratio $$\label{R} R\equiv \frac{\Gamma_{\rm qu}}{\Gamma_{\rm cl}}~.$$ If $R=1$, the QB and its classical analog share the same collective boost in the charging process. Conversely, having $R>1$ means that there is a genuine quantum advantage. Finally, $R<1$ means that the collective dynamics in the classical model is more beneficial. The quantity $R$ will be crucial below in determining if fast charging is due to exquisitely quantum resources or, rather, if it has a collective (i.e. many-body) origin due to effective interactions between the battery subunits, which is present also in the classical case. Harmonic oscillator batteries {#sect:HO} ============================= In this Section we study a system composed by $N+1$ harmonic oscillators, one acting as a charger while the remaining $N$ playing the role of the proper battery. This system is described by the following Hamiltonian, $$\begin{aligned} \label{eq:Hhoho} \mathcal{H}_{\rm A}&=&\omega_0 a^\dagger a~,\nonumber\\ \mathcal{H}_{\rm B}&=&\omega_0\sum_i b^\dagger_ib_i~,\nonumber\\ \mathcal{H}_{1}&=&g\sum_i \big(a b^\dagger_i+a^\dagger b_i\big)~,\end{aligned}$$ where $a$ ($b_i$) is the destruction bosonic operator acting on A (on the $i$-th unit of the battery B), and ${\omega}_0$ and $g$ are the characteristic frequency of both systems and the charger-battery coupling parameter, respectively. For simplicity, we choose $E_{\rm A}^{(1)}(0)=\omega_0$. It is useful to introduce the bright mode [@Ciuti05] $B=\sum_i b_i/\sqrt{N}$, which is a bosonic mode fulfilling $[B,B^\dagger]=1$. Expressing the Hamiltonian in terms of the bright mode, we obtain: $$\begin{aligned} \label{eq:HhohoB} \mathcal{H}_{\rm B}&=&\omega_0 B^\dagger B~,\nonumber\\ \mathcal{H}_{1}&=&g_N\left( a B^\dagger+a^\dagger B\right)~,\end{aligned}$$ where $$\label{eq:g_N} g_N \equiv \sqrt{N}g~.$$ Hence, the AB system is equivalent to two harmonic oscillators with a renormalized coupling $g_N$. It is straightforward to obtain the dynamics of the energy of B, which is independent of the initial state $\ket{\psi}_{\rm A}$ in A. In order to calculate the stored energy  we find then useful to adopt the Heisenberg representation, writing $E_{\rm B}(\tau) ={\rm tr}[\rho_{\rm AB}(0)\mathcal{H}_{\rm B}(\tau)]$, where $\rho_{\rm AB}(0)$ is the density matrix of the full system at the initial time, with $\mathcal{H}_{\rm B}(\tau)\equiv e^{i \mathcal{H} \tau} \mathcal{H}_{\rm B}e^{-i \mathcal{H} \tau}$. 
Expressing $a$ and $B$ as functions of the normal operators $\gamma_{\pm}=(a\pm B)/\sqrt{2}$ and using that the latter evolve simply as $\gamma_{\pm}(t) = e^{-i\omega_{\pm}t}\gamma_{\pm}$ with $\omega_{\pm}=\omega_0\pm g_N$, we obtain $$\begin{aligned}
\label{evolvedH}
\mathcal{H}_{\rm B}(\tau)&=&\frac{\omega_0}{2}\Bigg\{a^\dagger a+B^\dagger B \\&-&\left[\frac{e^{-i2g_{N}\tau}}{2} (a^\dagger a-B^\dagger B+B^\dagger a-a^\dagger B) +{\rm H.c.}\right] \Bigg\}~, \nonumber\end{aligned}$$ and, finally: $$\begin{aligned}
\label{eq:Ebho}
E^{(N)}_{\rm B}(\tau)=N\omega_0\sin^2(g\sqrt{N}\tau)~.\end{aligned}$$ Defining $Y={\max_x}[\sin^2(x)/x]$, the maximum power reads $\bar{P}^{(N)}_{\rm B}=N\sqrt{N}g\omega_0Y$. Accordingly, we have: $$\begin{aligned}
\label{eq:Gammaho}
\Gamma_{\rm qu}=\sqrt{N}~.\end{aligned}$$ We note that if $\ket{\psi}_{\rm A}$ is a coherent state, the evolved state $|\psi(t) \rangle_{\rm AB}$ remains factorized at all times [@Andolina18; @WallsMilburn2007]. This is an example where the advantage is present, despite the total absence of correlations. Now we study the classical analog of the quantum model in Eq. (\[eq:Hhoho\]), which can be simply obtained by reversing the quantization procedure and substituting quantum commutators with classical Poisson brackets. The corresponding classical Hamiltonian describes a set of coupled springs: $$\begin{aligned}
\label{eq:HhohoCl}
\mathcal{H}^{\rm cl}_{\rm A}&=&\frac{\omega_0}{2}\left(p_a^2+q_a^2\right),\nonumber~\\
\mathcal{H}^{\rm cl}_{\rm B}&=&\frac{\omega_0}{2}\sum_i\left(p_{b_i}^2+q_{b_i}^2\right)\nonumber~,\\
\mathcal{H}^{\rm cl}_{1}&=&g \sum_i \left( q_a q_{b_i}+p_a p_{b_i}\right)~,\end{aligned}$$ where $(p_a,q_a)$ are conjugate variables of the charger and $(\boldsymbol{p}_{b_{i}},\boldsymbol{q}_{b_{i}})$ are conjugate variables of the $i$-th battery. As earlier, we choose $E_{\rm A}^{(1)}(0)=\omega_0$. We now introduce $Q_b=\sum_i q_{b_i}/\sqrt{N}$ and $P_b=\sum_i p_{b_i}/\sqrt{N}$. The classical Hamiltonian becomes $$\begin{aligned}
\label{eq:HhohoClR}
\mathcal{H}^{\rm cl}_{\rm B}&=&\frac{\omega_0}{2}\left(P_{b}^2+Q_{b}^2\right)\nonumber~,\\
\mathcal{H}^{\rm cl}_{1}&=&g_N\left( q_a Q_{b}+p_a P_{b}\right)~.\end{aligned}$$ We conclude that also in the classical case the model maps into that of two coupled oscillators with a renormalized coupling $g_N$. Hamilton’s equations of motion follow from Eqs. , , and : $$\begin{aligned}
\label{eq:hohoHJ}
\frac{d{p}_a}{dt}&=&-\omega_0q_a-g_N Q_b~,\nonumber \\
\frac{d{q}_a}{dt}&=&\omega_0p_a+g_N P_b~,\nonumber \\
\frac{d{P}_b}{dt}&=&-\omega_0Q_b-g_Nq_a~,\nonumber \\
\frac{d{Q}_b}{dt}&=&\omega_0P_b+g_Np_a~.\end{aligned}$$ Solving these equations we find that, irrespective of the particular initial condition, the stored energy reads $E^{(N)}_{\rm B}(\tau)=N\omega_0\sin^2(g\sqrt{N}\tau)$. This implies $$\begin{aligned}
\label{eq:GammahoCl}
\Gamma_{\rm cl}=\sqrt{N}~,\end{aligned}$$ and $R=1$. This is the main result of this Section. For the case of harmonic oscillator batteries defined in (\[eq:Hhoho\]), fast charging, i.e. $\Gamma \propto \sqrt{N}$, is solely due to the collective behavior of the underlying many-particle system, and does not have its roots in the quantumness of its Hamiltonian.

Spin batteries {#sect:spin}
==============

In this Section we study a system composed of $N$ qubits, acting as charger, coupled to another set of $N$ qubits, which play the role of the battery.
The quantum Hamiltonian is $$\begin{aligned} \label{eq:HSpins} \mathcal{H}_{\rm A}&=&\omega_0\left(J^{(a)}_z+\frac{N}{2}\right)~,\nonumber\\ \mathcal{H}_{\rm B}&=&\omega_0\left(J^{(b)}_{z}+\frac{N}{2}\right)~,\nonumber\\ \mathcal{H}_{1}&=&4g \left(J^{(a)}_xJ^{(b)}_x+J^{(a)}_yJ^{(b)}_y\right)~,\end{aligned}$$ where $J^{(a)}_\alpha$ ($J^{(b)}_\alpha$) with $\alpha=x,y,z$ are the components of a collective spin operator of length $J=N/2$ acting on the Hilbert space of the charger A (battery B), while all the other parameters have the same meaning as in Eq. . [Fig. \[fig:3\]: (a) $\Gamma_{\rm qu}$ (Figs/GammaQuantumSpinLL.pdf), (b) $\Gamma_{\rm cl}$ (Figs/GammaClassicalSPINS.pdf), (c) $R$ (Figs/RSpins.pdf), for the spin batteries of this Section.] Defining $\mathcal{H}_{0}=\mathcal{H}_{\rm A}+\mathcal{H}_{\rm B}$, the propagator in the interaction picture simply reads $\tilde{U}_t=e^{i\mathcal{H}_{0}t}e^{-i\mathcal{H}t}=e^{-i\mathcal{H}_1t}$. Hence, in this model there is no dependence of the dynamics on the energy scale $\omega_0$, and $\tilde{U}_t$ depends only on the product $gt$. As in the case of Eq. (\[eq:Gammaho\]), this scaling implies that the collective advantage $\Gamma_{\rm qu}$ for this model does not depend on the value of $g$ but only on $N$. In Fig. \[fig:3\](a) we report the log-log plot of the collective advantage $\Gamma_{\rm qu}$ as a function of $N$. Fits to the numerical data (not shown) indicate a quasi-linear dependence on $N$ for large $N$ of the form $$\label{eq:GammaSpins} \Gamma_{\rm qu}\propto {N^{\alpha}}~,$$ with $\alpha \sim 1$ and a proportionality constant $\sim0.25$. We now move on to analyze the classical case. Following the discussion of Sect. \[Comp\], we model the analog classical Hamiltonian as $$\begin{aligned} \label{eq:HSpinsCL} \mathcal{H}^{\rm cl}_{\rm A}&=&N\omega_0\frac{\big[\cos(\theta_a)+1\big]}{2},\nonumber~\\ \mathcal{H}^{\rm cl}_{\rm B}&=&N\omega_0\frac{\big[\cos(\theta_b)+1\big]}{2}\nonumber~,\\ \mathcal{H}^{\rm cl}_{1}&=&g N^2\sin(\theta_a)\sin(\theta_b)\cos(\phi_a-\phi_b)~,\end{aligned}$$ where $(N\cos(\theta_a)/2,\phi_a)$ and $(N\cos(\theta_b)/2,\phi_b)$ are conjugate variables [@BraunBook; @Carlos18]. Hamilton’s equations of motion follow from Eqs.  and . We find $$\begin{aligned} \label{eq:HSpinsH} &&\frac{d\cos(\theta_a)}{dt}=2gN \sin(\theta_a)\sin(\theta_b)\sin(\phi_a-\phi_b),\nonumber \\ &&\frac{d\phi_{a}}{dt}=\omega_0-2 g N\cot(\theta_a) \sin(\theta_b)\cos(\phi_a-\phi_b)~.\end{aligned}$$ Since the Hamiltonian is invariant under the exchange of variables $a\leftrightarrow b$, the equations of motion for $\cos(\theta_{b}) $ and $\phi_{b}$ can be simply obtained by exchanging $a\leftrightarrow b$. It is now useful to define $\varphi_a=\phi_a-\omega_0 t$ and $\varphi_b=\phi_b-\omega_0 t$, which allows us to write Eq. (\[eq:HSpinsH\]) as follows: $$\begin{aligned} \label{eq:HSpinsH2} &&\frac{d\cos(\theta_a)}{dt} =2gN \sin(\theta_a)\sin(\theta_b)\sin(\varphi_a-\varphi_b),\nonumber \\ && \frac{d\varphi_{a}}{dt}=-2 g N\cot(\theta_a) \sin(\theta_b)\cos(\varphi_a-\varphi_b)~.\end{aligned}$$ These equations show that the only energy scale in the problem is $gN$. On the basis of simple dimensional analysis we therefore expect $\bar{\tau}\propto 1/(gN)$. Accordingly, since the energy of the system is extensive, this will yield $\bar{P}_{\parallel} \propto N$ while $\bar{P}_{\sharp}\propto N^2$ leading to $\Gamma_{\rm cl} \propto N$. This argument is not asymptotic, i.e. it does not only apply for $N\gg 1$. In Fig.
\[fig:3\](b) we plot the classical collective advantage obtained by numerically solving Hamilton’s equations of motion. Indeed, we clearly see a linear growth in $N$, also for small values of $N$, perfectly consistent with the dimensional argument. Finally, in Fig. \[fig:3\](c) we show the ratio $R$ defined as in Eq. (\[R\]), for the case of our spin batteries. We conclude that, for this model, quantum mechanical dynamics yields a [*disadvantage*]{} rather than an advantage, as $R<1$ for all $N$. This is the second main result of this Article. Dicke batteries {#sect:Dicke} =============== [Fig. \[fig:2\]: (a) $\Gamma_{\rm qu}$ (Figs/GammaQuantumLL.pdf), (b) $\Gamma_{\rm cl}$ (Figs/GammaClassicalLL.pdf), (c) $R$ as a function of $N$ (Figs/RDicke.pdf), (d) $R$ as a function of $g$ (Figs/RDickeG1.pdf), for the Dicke batteries of this Section.] In this Section we study the case of Dicke batteries [@Ferraro17; @Andolina18b]. In a Dicke QB, one cavity mode, acting as the charger, is coupled to $N$ qubits, which play the role of the battery. The quantum Hamiltonian is [@Ferraro17] (see also Refs. ) $$\begin{aligned} \label{eq:Dicke} \mathcal{H}_{\rm A}&=&\omega_0 ~a^\dagger a~,\nonumber\\ \mathcal{H}_{\rm B}&=&\omega_0\left(J_z+\frac{N}{2}\right)~,\nonumber\\ \mathcal{H}_{1}&=&2g \left(a^\dagger+a\right)J_x~,\end{aligned}$$ where $J_\alpha$ with $\alpha=x,y,z$ are the components of a collective spin operator of length $J=N/2$, while all the other parameters have the same meaning as in Eq. . As in the other models introduced in previous Sections, we choose $E_{\rm A}^{(1)}(0)=\omega_0$. Moreover, for the sake of simplicity, we fix $\ket{\psi}_{\rm A}$ to be a Fock state. In Ref.  it was shown that the particular choice of the initial state does not qualitatively change the collective advantage. While a detailed analysis of Dicke QBs is reported in Ref. , here we summarize the main findings—Fig. \[fig:2\](a)—and compare them with those obtained for the classical analog of a Dicke QB. In Fig. \[fig:2\](a) we plot the collective advantage $\Gamma_{\rm qu}$ of a Dicke QB for different choices of the coupling parameter $g$. In agreement with Ref. , fits to the numerical data (not shown) suggest the following power-law scaling in the limit of large $N$ $$\begin{aligned} \label{eq:GammaDicke} \Gamma_{\rm qu}\propto \sqrt{N}~.\end{aligned}$$ We now analyze the classical case. In the literature there is a well-established classical analog of the Dicke model [@deAguiar92; @Rodriguez18; @Carlos18], which reads as follows $$\begin{aligned} \label{eq:DickeCl} \mathcal{H}^{\rm cl}_{\rm A}&=&\frac{\omega_0}{2}\left(p_a^2+q_a^2\right),\nonumber~\\ \mathcal{H}^{\rm cl}_{\rm B}&=&N\omega_0\frac{\big[\cos(\theta)+1\big]}{2}\nonumber~,\\ \mathcal{H}^{\rm cl}_{1}&=&g\sqrt{2}Nq_a \sin(\theta)\cos(\phi)~,\end{aligned}$$ where $(p_a,q_a)$ and $(N\cos(\theta)/2,\phi)$ are classical conjugate variables [@BraunBook; @Carlos18]. This Hamiltonian describes a spring coupled to a nonlinear pendulum of length $N$. We would like to stress that the model defined by Eq. (\[eq:DickeCl\]) is not a semi-classical approximation of the quantum Hamiltonian in Eq. (\[eq:Dicke\]), but represents instead an intrinsically classical description of a classical spin coupled to a cavity, directly obtainable from classical Hamiltonian mechanics. Our aim is indeed not to approximate the quantum model, but to understand the differences between the quantum and the classical batteries. As in all previous cases, we choose $E_{\rm A}^{(1)}(0)=\omega_0$.
We still have the freedom to choose initial conditions, since the previous condition imposes only the constraint $p_a^2(0)+q_a^2(0)=2N\omega_0$. For the sake of simplicity, we choose $p_a(0)=q_a(0)$. We have checked that other initial conditions do not alter our main conclusions. From Eqs.  and  we find Hamilton’s equations of motion for the classical Dicke battery: $$\begin{aligned} \label{eq:DickeHJ} &&\frac{d{p}_a}{dt}=-\omega_0q_a-\sqrt{2}Ng\sin(\theta)\cos(\phi)~,\nonumber \\ &&\frac{d{q}_a}{dt}=\omega_0p_a~,\nonumber \\ &&\frac{d\cos(\theta)}{dt}=2\sqrt{2}gq_a\sin(\theta)\sin(\phi),\nonumber \\ &&\frac{d\phi}{dt}=\omega_0-2\sqrt{2}gq_a\cos(\phi)\cot(\theta)~.\end{aligned}$$ We can rescale these equations in such a way as to have $P^{2}_{a}(0)+Q^{2}_a(0)=2$, i.e. $P_a=p_a/\sqrt{N}$ and $Q_a=q_a/\sqrt{N}$. We obtain: $$\begin{aligned} \label{eq:DickeHJT} &&\frac{d{P}_a}{dt}=-\omega_0Q_a-\sqrt{2}g_N\sin(\theta)\cos(\phi)~,\nonumber \\ &&\frac{d{Q}_a}{dt}=\omega_0P_a~,\nonumber \\ &&\frac{d\cos(\theta)}{dt}=2\sqrt{2}g_NQ_a\sin(\theta)\sin(\phi),\nonumber \\ &&\frac{d\phi}{dt}=\omega_0-2\sqrt{2}g_NQ_a\cos(\phi)\cot(\theta)~,\end{aligned}$$ where $g_{N}$ has been defined in Eq. (\[eq:g\_N\]). We note that, in these equations, the only parameters with physical dimensions (of energy) are $\omega_0$ and $g_N$. Since $\bar{\tau}$ has physical dimensions of inverse energy (in our units), the optimal charging time must have the following form: $$\label{DEFEFFE} \bar{\tau}=\frac{1}{g_{N}}F(\omega_0/g_N)~,$$ where $F(x)$ is an unknown dimensionless function. From this expression we can conclude that, as long as $F(x)$ does not vanish for $x=0$, also in the classical scenario the collective advantage parameter will exhibit a $\sqrt{N}$ scaling similar to the one in Eq. (\[eq:GammaDicke\]) observed for the quantum counterpart, i.e. $\Gamma_{\rm cl} \propto \sqrt{N}$. Indeed, assuming $F(0)\neq 0$, from (\[DEFEFFE\]) it follows that for large enough $N$ the charging time can be approximated as $\bar{\tau}\simeq F(0)/g_N$ with a $1/\sqrt{N}$ scaling. Accordingly, since the energy is an extensive quantity, we will have, asymptotically, $\bar{P}^{(N)}_{\rm B}\propto N\sqrt{N}$, which implies $\Gamma_{\rm cl} \propto \sqrt{N}$ as anticipated. To put this observation on firmer ground, we resort to numerical integration of Eqs. . In Fig. \[fig:2\](b) we plot the collective advantage $\Gamma_{\rm cl}$ as a function of $N$, for different values of $g$. A comparison with the expected $\sqrt{N}$ scaling of $\Gamma_{\rm cl}$ in the large-$N$ limit is also shown. (The expected saturation to the $\sqrt{N}$ scaling law requires $g_{N}/\omega_{0}\gg1$ and is therefore difficult to reach numerically for small values of $g/\omega_{0}$.) We now proceed with a more quantitative comparison between $\Gamma_{\rm qu}$ and $\Gamma_{\rm cl}$. In Fig. \[fig:2\](c) we report the plot of the quantity $R$ of Eq. (\[R\]) as a function of $N$, for different values of $g$. We clearly see that the ratio $R$ can be smaller or larger than unity depending on the value of $g$. This is emphasized in Fig. \[fig:2\](d), where we show $R$ as a function of $g$ for $N=50$. This is the third main result of this Article. The quantum advantage shown by a Dicke QB in a window of values of $g$ is on the order of $10\%$ and therefore not spectacular, but it clearly indicates the possibility of engineering more complex quantum Hamiltonians to achieve much better quantum performance. These will be the subject of future work.
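As an aside, the kind of classical integration used above (Fig. \[fig:2\](b)) can be sketched in a few lines. The snippet below is only an illustration with placeholder parameters, not the production code. To avoid the $\cot\theta$ coordinate singularity of Eqs. (\[eq:DickeHJ\]) at the initial (south-pole) configuration, the spin is propagated as the Cartesian vector $(S_x,S_y,S_z)=(N/2)(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)$, which by direct substitution obeys $\dot{S}_x=-\omega_0 S_y$, $\dot{S}_y=\omega_0 S_x-2\sqrt{2}g q_a S_z$ and $\dot{S}_z=2\sqrt{2}g q_a S_y$, an equivalent rewriting of the same equations of motion.

```python
# Sketch of the classical Dicke-battery integration (placeholder parameters;
# not the code used for the figures).  The spin is evolved in Cartesian form,
# which is regular at the battery ground state (south pole of the sphere).
import numpy as np
from scipy.integrate import solve_ivp

w0, g = 1.0, 0.5                                  # illustrative values

def stored_energy(N, t_max, n_pts=6000):
    c = 2.0 * np.sqrt(2.0) * g
    def rhs(t, y):
        qa, pa, Sx, Sy, Sz = y
        return [w0 * pa,                          # dq_a/dt
                -w0 * qa - c * Sx,                # dp_a/dt  (H_1 is linear in q_a)
                -w0 * Sy,                         # dS_x/dt
                w0 * Sx - c * qa * Sz,            # dS_y/dt
                c * qa * Sy]                      # dS_z/dt
    # charger with E_A(0) = N*w0 and p_a(0) = q_a(0); battery in its ground state
    y0 = [np.sqrt(N), np.sqrt(N), 0.0, 0.0, -0.5 * N]
    t = np.linspace(1e-6, t_max, n_pts)
    sol = solve_ivp(rhs, (0.0, t_max), y0, t_eval=t, rtol=1e-10, atol=1e-10)
    return t, w0 * (sol.y[4] + 0.5 * N)           # E_B = N*w0*(cos(theta)+1)/2

def max_power(N):
    t, E = stored_energy(N, 30.0 / (g * np.sqrt(N)))
    return np.max(E / t)

P1 = max_power(1)
for N in (4, 16, 64):
    G_cl = max_power(N) / (N * P1)
    print(N, G_cl, G_cl / np.sqrt(N))             # second ratio flattens once g_N >> w0
```

Consistently with the dimensional argument, the ratio $\Gamma_{\rm cl}/\sqrt{N}$ printed in the last column approaches a constant once $g_N/\omega_0$ becomes large.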
Summary and conclusions {#concl} ======================= In this Article we have compared three quantum battery models against their rigorous classical versions in order to better understand the origin of the fast charging phenomenon discussed in previous literature. In particular, we have defined a genuine [*quantum advantage*]{} (i.e. $R>1$) via the ratio $R$ in Eq. (\[R\]) between the collective advantages in the quantum and classical cases, $\Gamma_{\rm qu}$ and $\Gamma_{\rm cl}$, respectively. In the case of harmonic oscillator batteries—see Sect. \[sect:HO\]—$R=1$ for all values of $N$ and $g$. Quantum harmonic oscillator batteries defined as in Eq. (\[eq:Hhoho\]) do not therefore display any quantum advantage. The case of spin batteries, discussed in Sect. \[sect:spin\], is even worse. In this model, indeed, $R<1$ for all values of $N$ and $g$. We can safely conclude that, in these two cases, fast charging in the quantum case (i.e. the fact that $\Gamma_{\rm qu}$ increases for increasing $N$) is solely due to the collective behavior of the many-body systems described by the quantum Hamiltonians in Eqs. (\[eq:Hhoho\]) and (\[eq:HSpins\]), which is also present in the corresponding classical Hamiltonians. The case of Dicke batteries, discussed in Sect. \[sect:Dicke\], is far richer. In this case, the ratio $R$ depends on the charger-battery coupling parameter $g$ and, for each fixed $N$, can be larger than unity in a range of values of $g$. As evident from Figs. \[fig:2\](c) and (d), the quantum advantage displayed by a Dicke quantum battery at optimal coupling is on the order of $10\%$. More work is needed to discover quantum models of batteries with larger values of $R$. For the sake of completeness, we note that the authors of Ref.  have very recently proposed to study the evolution of the battery state in the energy eigenspace of the battery Hamiltonian. Combining this geometric approach with bounds on the power, they are able to distinguish whether the quantum advantage in a charging process stems from either the speed of evolution or the non-local character of the battery state. Acknowledgments =============== Part of the numerical work has been performed using the Python toolbox QuTiP2 [@QuTip]. We wish to thank D. Farina, D. Ferraro, P. Erdman, M. Esposito, and, especially, M. Bera, V. Cavina, S. Juliá-Farrè, and M. Lewenstein for many useful discussions. [77]{} R. Alicki and M. Fannes, [Phys. Rev. E [**87**]{}, 042123 (2013)](http://dx.doi.org/10.1103/PhysRevE.87.042123). K.V. Hovhannisyan, M. Perarnau-Llobet, M. Huber, and A. Acín, [Phys. Rev. Lett. [**111**]{}, 240401 (2013)](https://doi.org/10.1103/PhysRevLett.111.240401). F.C. Binder, S. Vinjanampathy, K. Modi, and J. Goold, [New J. Phys. [**17**]{}, 075015 (2015)](http://dx.doi.org/10.1088/1367-2630/17/7/075015). F. Campaioli, F.A. Pollock, F.C. Binder, L. Céleri, J. Goold, S. Vinjanampathy, and K. Modi, [Phys. Rev. Lett. [**118**]{}, 150601 (2017)](http://dx.doi.org/10.1103/PhysRevLett.118.150601). T.P. Le, J. Levinsen, K. Modi, M. Parish, and F.A. Pollock, [Phys. Rev. A [**97**]{}, 022106 (2018)](https://dx.doi.org/10.1103/PhysRevA.97.022106). D. Ferraro, M. Campisi, G.M. Andolina, V. Pellegrini, and M. Polini, [Phys. Rev. Lett. [**120**]{}, 117702 (2018)](https://dx.doi.org/10.1103/PhysRevLett.120.117702). G.M. Andolina, D. Farina, A. Mari, V. Pellegrini, V. Giovannetti, and M. Polini, [Phys. Rev. B [**98**]{}, 205423 (2018)](https://doi.org/10.1103/PhysRevB.98.205423). G.M. Andolina, M. Keck, A.
Mari, M. Campisi, V. Giovannetti, and M. Polini, [arXiv:1807.08656](https://arxiv.org/abs/1807.08656). D. Farina, G.M. Andolina, A. Mari, M. Polini, and V. Giovannetti, [arXiv:1810.10890](https://arxiv.org/abs/1810.10890). S. Juliá-Farrè, T. Salamon, A. Riera, M.N. Bera, and M. Lewenstein, [arXiv:1811.04005](https://arxiv.org/abs/1811.04005). Y.-Y. Zhang, T.-R. Yang, L. Fu, and X. Wang, [arXiv:1811.04395](https://arxiv.org/abs/1811.04395). For a recent review see e.g. F. Campaioli, F.A. Pollock, and S. Vinjanampathy, [arXiv:1805.05507](https://arxiv.org/abs/1805.05507). P.A.M. Dirac, [*Principles of Quantum Mechanics*]{} (Oxford University Press, 1982). J. Ch' avez-Carlos, B. L' opez-del-Carpio, M.A. Bastarrachea-Magnani, P. Str' ansky, S. Lerma-Hern' andez, L.F. Santos, and J.G. Hirsch, [arXiv:1807.10292](https://arxiv.org/abs/1807.10292). D. Braun, [*Dissipative Quantum Chaos and Decoherence*](https://doi.org/10.1007/3-540-40916-5) (Springer Tracts in Modern Physics, 2001). C. Ciuti, G. Bastard, and I. Carusotto, [Phys. Rev. B  [**72**]{} 115303 (2005)](https://doi.org/10.1103/PhysRevB.72.115303). D.F. Walls and G.J. Milburn, [*Quantum Optics*](https://doi.org/10.1007/978-3-540-28574-8) (Springer Science & Business Media, 2007). R.H. Dicke, [Phys. Rev. [**93**]{}, 99 (1954)](https://doi.org/10.1103/PhysRev.93.99). The Hamiltonian in Eq.  has been widely used in the literature [@Carlos18; @Julia-Farre18] with a different normalization of the coupling constant, namely with the replacement $g\to g/\sqrt{N}$. This different choice guarantees well-defined results [@Julia-Farre18] if one works in the thermodynamic limit defined by $N\to \infty$, $L\to \infty$ with $n \equiv N/L = {\rm const}$, where $L$ is the length of the cavity. In this limit, the length of the cavity scales with the number $N$ of qubits in order to keep the density $n$ of qubits constant. Whether one uses Eq.  or Eq.  with the replacement $g\to g/\sqrt{N}$ ultimately depends on the experimental setup. For example, in circuit-QED setups like the one realized in Ref. , the length of the photonic cavity (i.e. the length of the transmission line resonator) does not scale with the number of qubits (i.e. the number of transmons). Indeed, in Ref.  the resonator is $\sim 20~{\rm mm}$ long, while a transmon has a linear size which is on the order of $300~{\rm \mu m}$. This implies that the resonator used in the setup of Ref.  can host something like $N = 40$-$50$ qubits, without any need to scale its length with $N$ for their accommodation. The authors of Ref.  used the same Hamiltonian as in Eq.  and Ref.  to explain their data. J.M. Fink, R. Bianchetti, M. Baur, M. Göppl, L. Steffen, S. Filipp, P.J. Leek, A. Blais, and A. Wallraff, [Phys. Rev. Lett. [**103**]{}, 083601 (2009)](http://dx.doi.org/10.1103/PhysRevLett.103.083601). M.A.M. de Aguiar, K. Furuya, C.H. Lewenkopf, and M.C. Nemes, [Ann. Phys. [**216**]{}, 291 (1992)](https://doi.org/10.1016/0003-4916(92)90178-O). J.P.J. Rodriguez, S.A. Chilingaryan, and B.M. Rodr' iguez-Lara, [arXiv:1808.03193](https://arxiv.org/pdf/1808.03193.pdf). J.R. Johansson, P.D. Nation, and F. Nori, [Comp. Phys. Comm. [**184**]{}, 1234 (2013)](https://doi.org/10.1016/j.cpc.2012.11.019).
{ "pile_set_name": "ArXiv" }
--- abstract: 'Ablation of Cu and Al targets has been performed with laser pulses in the intensity range of . We compare the measured removal depth with 1D hydrodynamic simulations. The electron-ion temperature decoupling is taken into account using the standard “two-temperature model”. The influence of the early heat transfer by electronic thermal conduction on hydrodynamic material expansion and mechanical behavior is investigated. A good agreement between experimental and numerical matter ablation rates shows the importance of including solid-to-vapor evolution of the metal in the current modeling of the laser matter interaction.' author: - 'J.P. Colombier$^{1,2}$' - 'P. Combis$^{1}$' - 'F. Bonneau$^{1}$' - 'R. Le Harzic$^{2}$' - 'E. Audouard$^{2}$' title: Hydrodynamic simulations of metal ablation by femtosecond laser irradiation --- Introduction ============ Ultrafast phenomena driven by subpicosecond laser pulses have been the subject of thorough investigation for many years in order to explain the ablation of solid materials [@Komashko; @Laville; @Schafer; @Zhigilei]. As opposed to the laser-dielectric interaction where thermal and athermal ablation regimes probably takes place [@Stoian; @Bulgakova; @Feit], the laser-metal interaction is mainly governed by the thermal ablation one [@Gamaly; @Bulgakova]. The laser energy is absorbed by the free electrons first. The pulse duration being shorter than the electron-phonon relaxation time , electrons and ions subsystems must be considered separately. The “two-temperature model” (TTM) describes the thermal transport of energy by the electrons and the energy transfer from the electrons to the lattice [@Anisimov]. Numerous theoretical and experimental previous works have been devoted to the study of the matter ablation with a single laser pulse. Experimental irradiation conditions in current applications are largely investigated to optimize the ablation rate: pulse duration [@Sallé], fluence [@Nolte] and background gas [@Wynne; @Preuss]. However, a complete view including all the relevant physical mechanisms is still lacking. To get a better understanding of the ablation process and to extend the results into situations not covered by the experiments, two kinds of investigations are put at work : (i) a complete identification of the various physical mechanisms responsible for the material removal from the surface, (ii) an evaluation of the impact of these various processes on the amount of ablated matter. In the works previously addressed, few calculations are able to provide a direct comparison with experiments. Most of them are focused on thermal transport and a more detailed description of the physical processes occuring in the material has to be incorporated to really describe the whole ablation process. Among these, works based on hydrodynamic modeling [@Eidmann; @Laville; @Afanasiev; @Komashko] have been recently associated to the TTM to describe ablation process. To overcome the drawbacks of a material fluid treatment, a mechanical extension of the TTM has been proposed to model the ultrafast thermomelasticity behavior of a metal film [@Chen]. Works based on molecular dynamics allow to access to the influence of the ultrafast energy deposition on the thermal and mechanical evolution properties of the material [@Perez; @Meunier]. 
With a different approach, other authors have performed a microscopic analysis of the mechanisms of ultrashort pulse laser melting by means of a hybrid molecular-dynamic and fluid modeling [@Schafer; @Zhigilei]. From all these investigations, it appears that none of these effects may be neglected to reproduce the features of the damage resulting from ultrashort laser irradiation. Moreover, there is a lack of investigations which combine experimental and theoretical results, so that current models are still questionable. In the present simulations, the TTM provides energy deposition in an expanding material intimately correlated with the processes governing the ablation in the ultrashort pulse case, which is a specificity of our hydrodynamic approach. Simulation results give useful insight into the presented experimental data. Transport properties of electrons are not very well understood in nonequilibrium electron-ion systems. However, the comprehension of these phenomena in the context of ultrafast interaction is essential not only for fundamental purposes but also for micromachining applications. A precise description of the effect of the electronic temperature on the absorption seems to be still unsettled [@Milchberg; @Fisher; @Rethfeld], and it has not been taken into account in the presented calculations. Nevertheless, the model employed in this work uses a large set of currently available data. Obviously, numerical calculations always require additional information such as electron-phonon or electron-electron relaxation times, which may be extracted from experimental data [@Corkum; @Gusev]. Reciprocally, comparison between simulations and experiments allows one to validate the physical data introduced into the theoretical models. For instance, we shall see in the following that the measurement of the pressure variation with time inside the material would be very informative. These data, however, are difficult to measure with a high accuracy. By contrast, the experimental measurement of the laser ablation rate is easier to obtain and compare with numerical simulations. In this article, numerical and experimental results on ablated matter are reported. For this purpose, the TTM is inserted into a hydrodynamic code in order to describe the material removal. First we detail the physical processes which are taken into account in our computations within the framework of hydrodynamical modeling. Then, the effects caused by relaxation processes on the evolution of shock waves are examined. We next present the experiments which have been performed to obtain ablation depth measurements as a function of the laser fluence. Finally, our discussions are based on results of numerical simulations on Cu and Al samples compared with specific experimental measurements. Theoretical model ================= To represent numerically the interaction between the laser and the metallic target, we used a 1D Lagrangian hydrodynamic code [@Delpor]. Solving the Helmholtz wave equation permitted us to determine the electromagnetic field through the region illuminated by the laser. The deposited energy is then deduced using the Joule-Lenz law. A precise result for the absorption from an arbitrary medium can be obtained from the direct solution of the equation for the electromagnetic field. Let us consider a planar wave propagating along the z axis.
We write the following Helmholtz equation for the complex amplitude of the electric field $E_{z}$ with frequency $\omega$ : $$\label{three} \Delta E_{z}-\nabla\nabla E_{z}+\left(1+i\frac{4\pi}{\omega}\sigma\left(T,\rho\right)\right)\frac{\omega^{2}}{c^{2}}E_{z}=0$$ where $\sigma(T,\rho)$ is the complex conductivity and $c$ is the light speed. The function $\sigma$ is calculated as a function of the density $\rho(z)$ and the temperature $T(z)$. The relative permittivity of the medium is supposed to be equal to the unity in the case of a metal target. These simulations need accurate data such as transport coefficients in solids, liquids, vapors and dense or diluted plasmas, specially refractive indices [@Palik], electric and thermal conductivities [@Ebeling], and mechanical properties such as material strength. The evolution of the irradiated target is governed by the fluid hydrodynamic and heat diffusion equations connected with a multi-phase equation of state (EOS). Thermodynamic functions that realistically describe characteristics of metal properties in various parts of phase diagram are needed. A such set of different metal EOS is generated by means of a numerical tool developped by the authors of the reference [@EOS]. As an illustration of the EOS used, displays isobars in the phase diagram temperature-specific energy of the copper for a wide range of pressure. Such data reveal the dependence on the thermodynamic properties of the melting and vaporisation processes. The mechanical propagation of shock waves and fracture are also simulated. Elastoplasticity is described by a strain rate independent model (relevant to high strain rate conditions at high pressure, typically beyond ) which accounts for pressure, temperature and strain dependent yield strength and shear modulus [@SCG]. Laser induced stresses are the combination of the hydrostatic pressure and the response to the shearing deformation. In the temperature range of interest here, the effect of radiative energy transport on the hydrodynamic motion is negligible. Hence we ignore energy transport by radiation when solving the hydrodynamic equations. ![Representation of isobars in a phase diagram of the copper EOS in the region of phase transitions.\[EOS\_cu\]](EOS_copper.EPS){width="8.5cm"} Ultrashort laser irradiation and the associated ionic and electronic temperature decoupling require to introduce specific electronic parameters. We assume that free electrons form a thermal distribution during the interaction and use the Fermi-Dirac distribution to determine the electron properties (energy, pressure and heat capacity) as a function of the density and the temperature [@Ashcroft]. The evanescent electromagnetic wave is absorbed by the electrons. In our range of intensity, they are quickly heated at a temperature of few eV. During and after the pulse, the energy diffuses among the electrons and is then transferred to the lattice. Classical heat diffusion plays a significant role as soon as a thermal gradient occurs in the material. 
Diffusion processes are described by the following equations : $$\begin{aligned} \label{one} \rho \ C_{e} \displaystyle\frac{\partial T_{e}}{\partial t}&=& \displaystyle\nabla \displaystyle\left(K_{e}\displaystyle\nabla T_{e}\displaystyle\right)- \gamma (T_{e}-T_{i}) + S \hspace{1cm} \; \\ \label{two} \rho \ C_{i} \displaystyle\frac{\partial T_{i}}{\partial t}&=& \displaystyle\nabla \displaystyle\left( K_{i} \displaystyle\nabla T_{i}\displaystyle\right)+ \gamma (T_{e}-T_{i})\end{aligned}$$ where $T$, $C$ and $K$ are the temperature, the specific heat and the thermal conductivity respectively. Indices “e” and “i” stand for electron and ion species. $\rho$ is the mass density. $\gamma$ characterizes the rate of energy exchange between the two subsystems and $S$ is the space and time dependent laser source term determined by the Joule-Lenz law. Introduction of the TTM in a hydro-code allows us to take into account the density dependence of both specific heats and conductivities. $C_{e}(T_{e},\rho)$ is calculated with the Fermi model using $\rho$ dependent chemical potentials [@Ashcroft]. The electron thermal conductivity $K_{e}$ is commonly expressed in the form [@Anis-Reth]: $$\label{three}K_{e} = \alpha \frac{(\theta_{e}^{2}+0.16)^{5/4}(\theta_{e}^{2}+0.44)} {(\theta_{e}^{2}+0.092)^{1/2}(\theta_{e}^{2}+\beta \theta_{i})} \theta_{e}$$where $\theta_{e}$ and $\theta_{i}$ are electron and ion temperature normalized to the Fermi temperature ($\theta_{e}=T_{e}/T_{F}$, $\theta_{i}=T_{i}/T_{F}$) and $\alpha$, $\beta$ are material dependent parameters [@Schafer; @Wang]. The linear variation of coupling term with ($T_{e}-T_{i}$) is classic in TTM : we have taken $\gamma=\gamma_{0}$ as for copper [@Corkum], and for aluminum [@Fisher]. It must be noticed that the values of $K_{e}$ and $\gamma$ are subject to considerable uncertainty in literature [@Schafer]. To accurately describe the ultrafast response, we incorporate electronic pressure into the set of hydrodynamic equations. The system of equations for electron and ion subsystems can be written in the Lagrangian form : $$\label{four} \begin{cases} \vspace{0.2cm} \displaystyle\frac{\partial\varepsilon_{e}}{\partial t}= -p_{e}\displaystyle\frac{\partial V}{\partial t}, \hspace{1.9cm} \displaystyle\frac{\partial\varepsilon_{i}}{\partial t}= -p_{i}\displaystyle\frac{\partial V}{\partial t}\\ \displaystyle\frac{\partial u}{\partial t}= -\frac{\partial}{\partial m}(p_{e}+p_{i}),\hspace{1cm} \displaystyle\frac{\partial V}{\partial t}=\frac{\partial u}{\partial m} \end{cases}$$ where $u$ is the fluid velocity, $m$ the mass and $V$ the specific volume. $p_{e}$, $\varepsilon_{e}$, $p_{i}$, $\varepsilon_{i}$ are the pressure and the specific energy of electrons and ions. ![Contour plots of the stress resulting from the irradiation of a copper target by a , laser pulse.\[sptime\]](sptime.eps){width="8.5cm"} Equations (\[one\]) to (\[four\]) connected with a multiphase equation of state (EOS) [@EOS] constitute a self-consistent description of the matter evolution. However, the pressure provided by this EOS is the sum of the electronic and ionic pressure at the equilibrium and has to be replaced by the sum of these two contributions out of equilibrium. The electronic pressure is independently determined by means of the standard fermion gas model. As a consequence, the total pressure used in the above calculations is determined as . 
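As an aside for readers who want to experiment with Eqs. (\[one\]) and (\[two\]), a stripped-down, rigid-lattice version of the two-temperature step can be written in a few lines. The sketch below uses constant transport coefficients, a free-electron-like $C_e\propto T_e$, an exponentially attenuated Gaussian source instead of the Helmholtz solution, and it neglects ionic heat conduction as well as all hydrodynamics and EOS input; every number is an illustrative placeholder (roughly copper-like orders of magnitude), not a value used in the present work.

```python
# Minimal rigid-lattice integration of the two-temperature equations (1)-(2)
# with constant K_e, gamma and a prescribed source term.  The full simulations
# described in the text additionally solve the Helmholtz equation for the field
# and couple the TTM to Lagrangian hydrodynamics and tabulated EOS data.
import numpy as np

nz, dz = 250, 4e-9                  # 1 um of material, 4 nm cells
dt, t_end = 5e-16, 20e-12           # explicit scheme: Ke*dt/(rho*Ce*dz^2) < 1/2

rho   = 8.9e3                       # mass density [kg/m^3]
Ce0   = 100.0                       # C_e = Ce0*T_e [J m^-3 K^-2], low-T free-electron form
Ci    = 3.5e6 / rho                 # lattice specific heat per unit mass [J/kg/K]
Ke    = 400.0                       # electron thermal conductivity [W/m/K]
gamma = 1.0e17                      # electron-phonon coupling [W m^-3 K^-1]
skin  = 15e-9                       # optical penetration depth [m]
F_abs = 5.0e3                       # absorbed fluence, 0.5 J/cm^2 [J/m^2]
tauL  = 170e-15                     # pulse duration [s]
t0    = 3.0 * tauL

z  = np.arange(nz) * dz
Te = np.full(nz, 300.0)
Ti = np.full(nz, 300.0)

def source(t):
    """Exponentially attenuated, Gaussian-in-time deposition [W/m^3]."""
    return (F_abs / (skin * tauL * np.sqrt(np.pi))
            * np.exp(-z / skin) * np.exp(-((t - t0) / tauL) ** 2))

t = 0.0
while t < t_end:
    lap = np.empty(nz)
    lap[1:-1] = (Te[2:] - 2.0 * Te[1:-1] + Te[:-2]) / dz**2
    lap[0]  = 2.0 * (Te[1]  - Te[0])  / dz**2      # mirror (insulated) boundaries
    lap[-1] = 2.0 * (Te[-2] - Te[-1]) / dz**2
    Ce = Ce0 * Te / rho                            # per-unit-mass electron heat capacity
    Te = Te + dt * (Ke * lap + gamma * (Ti - Te) + source(t)) / (rho * Ce)
    Ti = Ti + dt * gamma * (Te - Ti) / (rho * Ci)
    t += dt

print("after %.0f ps: surface T_e = %.0f K, T_i = %.0f K" % (t_end * 1e12, Te[0], Ti[0]))
```

Even this crude version reproduces the qualitative picture discussed below: the electrons reach temperatures of a few eV within the pulse duration, while the lattice heats up on the much longer electron-phonon relaxation time scale.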
Simulations and analysis ======================== To start with, the interaction of a , FWHM gaussian pulse at wavelength with a copper target is investigated. shows the space-time evolution of the induced stress. The metal surface is heated to a maximum of , after the irradiation. At this time, the free surface expands in a liquid state with a velocity of . Due to the electron heating, an electronic compression wave appears at the end of the laser pulse. The electron-ion energy exchange results in a significant increase in the ionic pressure, which propagates inside the metal. Behind the shock, tensile stress occurs associated with very high strain rate around . In order to study the sensitivity of the above results with respect to the physical parameters, we compare in the time variation of the computed stress at depth under standard conditions ($\gamma = \gamma_{0}$ and $K_{e} = K_{e_{o}}$) with those obtained with an increased coupling factor or electronic conductivity. ![Time evolution of the stress in copper at depth resulting from a , laser pulse. Standard conditions :  and (solid line), (dashed line), (dotted line).\[stress\]](stress.eps){width="8.5cm"} In the first hundred picoseconds, the stress growth is directly related to the heating depth. The increased coupling factor leads to a steeper thermal gradient and a lower temperature at compared to the other situations. The resulting stress is therefore lower in this case. The peak of the shock wave, propagating from the surface, reaches the depth at . Increasing the coupling factor accelerates the energy transfer from the electrons to the lattice and results in higher compression. Inversely, an enhanced electronic conductivity spreads the energy spatial profile and yields reduced stresses. It must be noticed that in the three cases, the compression is followed by a high tensile stress greater than the characteristic tensile strength of the material. Nevertheless, the loading in tension is not long enough () to induce a fracture in the medium [@Tuler-Butcher; @strength]. The shock duration and intensity provide a good signature of the balance between the electronic diffusion and the electron-ion coupling. Further improvements will be discussed in the following. To obtain local information on the energy transfer induced by a femtosecond laser irradiation, a high-resolution time measurement of the stress reaching the rear side of a thin sample could be achieved [@Tollier; @Romain]. Such experimental records could be directly compared with our simulations which would led us to refine the physical values accordingly. To validate the computation, we performed ablation measurements and compared them to the current simulations using standard values of for aluminum and copper samples. Experiment details ================== Ultrashort laser pulses are generated by an amplified all solid-state Ti:Saphire laser chain. Low energy pulses are extracted from a mode-locked oscillator (, , , ). The pulses are then injected into an amplifying chain including : an optical pulse stretcher, a regenerative amplifier associated with a two-pass amplifier using a as pumping source, and a pulse compressor. Linearly polarized pulses with wavelength centered around 800 nm, an energy of at repetition rate and typical duration of were delivered. The samples are mounted on a three-motorized-axes system with 0.5 $\mu$m accuracy. Experiments are performed in the image plane of an aperture placed before the objective. 
The resulting spatial laser profile on sample is “top hat” so that borders and spurious conical drilling effects are reduced. Focusing objectives of or focal lengths to obtain fluences in a range of 0.5 to 35 J/cm$^{2}$ with the same beam size, 16 $\mu$m in diameter, in the image plane. For ablation depth measurements, grooves are machined by moving the sample [@these]. The sample speed is adapted such that each point on the groove axis undergoes 8 consecutive irradiations at each target pass. The number of times the sample passes in front of the fixed beam can be adjusted. shows a scanning electron microscopy (SEM) picture of the machined grooves on copper for 2, 4, 6, 8 and 10 passes. ![SEM picture of machined-groove profiles, from 2 to 10 passes, on a copper sample.\[profilo\]](profilo.eps){width="8.5cm"} The groove depth is measured using an optical profilometer with a depth resolution. The ablation rates for each groove are deduced taking into account the sample speed, the repetition rate of the laser, the beam size and the number of passes. For each energy density, an averaged ablation rate is determined and the number of passes has been chosen to obtain reproducible results. From these experiments on copper and aluminum targets, we evaluate, for different fluences, an ablation depth averaged over a few tens of laser shots. The theoretical ablation depth is deduced from 1D numerical simulations using a criterion discussed hereafter. Results and discussion ====================== At moderate fluence, the propagation of the shock wave induced by the energy deposition on the lattice causes the surface expansion at very high speed and significant non-uniform strain rates. Simultaneously, the surface of the target is melted or vaporized as soon as the conditions of temperature and density required are fulfilled. High strain rates can turn the liquid region into an ensemble of droplets and ablation follows. This process is called the homogeneous nucleation [@nucleation]. Unfortunately, quantitative values on the formation and ejection of liquid droplets are difficult to access because the interfacial solid-liquid microscopic properties of the nucleation centers are not accurately known. Nevertheless, in our simulation we can consider that the liquid layer accelerated outside the target corresponds to the ablated matter. Large values of strain rates () indeed signal that droplet formation may occur and are sufficient to produce ablation. At higher fluence, the surface is strongly vaporized. The gas expands near the free surface and compresses the internal matter. The vaporized part of the target added to the fraction of the above-defined liquid layer constitute the calculated ablated matter. Experimental results on ablation of copper and aluminum are compared with the numerical estimates in . ![Experimental and numerical (solid line) ablation depth as a function of the laser fluence on a copper target obtained with a 170 fs laser pulse. The dashed line shows evolution of the specific removal rate.\[cu\]](copper.eps){width="8.5cm"} ![Experimental and numerical (solid line) ablation depth as a function of the laser fluence on an aluminum target obtained with a 170 fs laser pulse. The dashed line shows evolution of the specific removal rate.\[alu\]](alu.eps){width="8.5cm"} Sharp numerical ablation thresholds are visible at 3 and for Cu and Al targets respectively. At high fluence, the ablation saturates for both materials. This saturation occurs mainly for two reasons. 
Vaporization and subsequent gas expansion consume energy that does not contribute to the ablation process. Moreover, the liquid layer confinement increases as the gas expands and limits the liquid removal. As defined by Feit *et al* [@Feit], a specific removal depth, i.e., the depth removed per unit incident fluence, could be a relevant parameter to estimate the ablation efficiency at a fixed pulse duration. Calculations of this quantity are carried out as a function of the laser fluence. The dashed curve presented in indicates a maximum value of in the aluminum case. The curve is smoother for copper in and the maximum specific removal depth is reached at a fluence around . This point corresponds to the occurrence of a critical behavior which confirms a change in the hydrodynamic behavior. While the thickness of the liquid layer grows with the incident energy, the specific removal depth rises. Afterwards, when the gas expansion starts, a growing part of the laser energy is transformed into kinetic energy and the specific removal depth drops. This suggests that an optimum material removal exists and corresponds to the situation where the surface vaporisation is limited. It appears that this quantity is relevant for material processing, which can be optimized by operating at this optimal fluence. At low fluence, a noticeable discrepancy arises between the experimental and numerical results. The calculated ablated matter for a copper target is below the experimental results, while for an aluminum target, numerical results are above. We suspect that electron transport properties should be further improved. It has been shown that a significant decrease in the electrical conductivity may take place as a result of the electronic temperature increase [@Milchberg], which our model discards. However, the experimental measurements and the theoretical calculations come to a reasonable agreement at higher fluence. As it has been shown by Fisher *et al* [@Fisher], in the vicinity of an 800 nm wavelength, the interband absorption occurring in an aluminum target decreases with increasing temperature. The authors show that, with 50 fs laser pulses, the absorption coefficient presents a minimum near the ablation threshold, at laser intensity. The evolution of interband absorption with the temperature is not taken into account in our calculations. Consequently, we may overestimate the energy absorbed in this intensity region. For copper, the simulation overestimates the ablation threshold. This can be due to an incomplete understanding of the physical processes or to the inaccuracy of the parameters introduced in the model. No single value of $\gamma$ or $K_{e}$ can perfectly fit the sets of data shown in Fig. \[cu\] and \[alu\]. As for the discussion of pressure presented above, one can think that a change in $\gamma$ or $K_{e}$ has a similar effect on the threshold fluence $F_{Th}$. Therefore, it is interesting to investigate the fluence threshold dependence on $\gamma$ and $K_{e}$. A parametric analysis has been performed for a copper target. displays the threshold fluence which has been obtained for different parameter couples and . The deposited energy with lower thermal conductivity or higher coupling factor can penetrate deeper into the material. As a consequence, $F_{Th}$ is lowered with respect to that of the reference case and would be comparable with experimental data in the vicinity of the threshold.
However, simulations performed in these conditions have shown a larger disagreement with experimental data at higher fluence due to an earlier vapor expansion. On the contrary, for a reduced $\gamma$ or an enhanced $K_{e}$ value, $F_{Th}$ increases and the ablation depth at high fluence is overestimated. Results obtained with one-temperature simulations demonstrate the importance of the TTM for reproducing the experimental results. The good agreement obtained between experimental data and simulations underlines the need to couple the TTM with a hydrodynamic code for ablation simulations in metals. Numerical results presented in this paper give an overall description of the processes occurring during ultrashort laser ablation experiments. ![Calculation of the threshold fluence dependence on the coupling parameter $\gamma$ and the electronic conductivity $K_{e}$ for copper. Simulations are performed for different ratios of the standard value of one parameter while keeping the second one constant. Consequently, the intersection of the curves coincides with the values used in the calculation of the ablation rate presented in .\[seuil\]](seuil.eps){width="8.5cm"} Conclusion ========== In this paper, we have reported new results on the interaction of femtosecond laser pulses with metal targets at intensities up to $10^{14}$ W/cm$^{2}$. Numerical computations were carried out by means of a 1D hydrodynamic code describing the laser field absorption and the subsequent phase transitions of matter. Simulations were compared to original ablation experiments performed on aluminum and copper samples. The behavior of the ablation depth as a function of laser fluence confirms the importance of the use of a specific two-temperature equation of state and hydrodynamics. An optimum condition for laser fluence has been identified and could provide precious information for efficient material processing. We have highlighted that the ablation process is governed not only by electronic diffusion but also by strong shock formation and propagation. The differences between experimental and numerical results remain, however, more pronounced for low laser fluences. We took great care to extend the metal properties to the nonequilibrium case. Nevertheless, inclusion of realistic material parameters, such as the sophisticated band structure of metals or various scattering mechanisms, would allow more accurate calculations. Also, a full nonequilibrium treatment should take into account the dependence of the conductivity on both electron-electron and electron-phonon collisions. This work is in progress and implies an elaborate optical absorption model more suitable for ultrashort laser pulses. Simulations suggest that the in-depth stresses induced by an ultrashort laser pulse provide information on the matter dynamics in time, with which experimental pressure measurements could be compared. In particular, because it develops over temporal scales larger than the energy deposition one, the characteristic shape of the delayed shock conveys information about the interaction physics and it should thus supply a promising way for exploring matter distortions on picosecond time scales. A.M. Komashko, M.D. Feit, A.M. Rubenchik, M.D. Perry, and P.S. Banks, Appl. Phys. A: Mater. Sci. Process. A69, Suppl. S95 (1999). S. Laville, F. Vidal, T.W. Johnston, O. Barthélemy, M. Chaker, B. Le Drogoff, J. Margot, and M. Sabsabi, Phys. Rev. E 66, 066415 (2002). C. Schäfer, H.M. Urbassek, and L.V. Zhigilei, Phys. Rev. B 66, 115404 (2002). D.S. Ivanov and L.V.
Zhigilei, Phys. Rev. Lett. 91, 105701, (2003). R. Stoian, A. Rosenfeld, D. Ashkenasi, I.V. Hertel, N.M. Bulgakova, and E.E.B. Campbell, Phys. Rev. Lett. 88, 097603 (2002). N.M. Bulgakova, R. Stoian, A. Rosenfeld, I.V. Hertel, and E.E.B. Campbell, Phys. Rev. B 69, 054102 (2004). M.D. Feit, A.M. Komashko, A.M. Rubenchik, Appl. Phys. A: Mater. Sci. Process. 79, 1657 (2004). E.G. Gamaly, A.V. Rode, B. Luther-Davies, V.T. Tikhonchuk, Phys. Plasmas 9, 949 (2002). S.I. Anisimov, B.L. Kapeliovich, and T.L. Perel’man, Zh. Eksp. Teor. Fiz. 66, 776 (1974) \[Sov. Phys. JETP 39, 375 (1974)\]. B. Sallé, O. Gobert, P. Meynadier, M. Perdrix, G. Petite, and A. Semerok, Appl. Phys. A: Mater. Sci. Process. A69, Suppl. S381 (1999). S. Nolte, C. Momma, H. Jacobs, A. Tünnermann, B.N. Chichkov, B. Wellegehausen, and H. Welling, J. Opt. Soc. Am. B 14, 2716 (1997). A.E. Wynne, B.C. Stuart, Appl. Phys. A 76, 373 (2003). Preuss, A. Demchuk, and M. Stuke, Appl. Phys. A: Mater. Sci. Process. A61, 33 (1995). K. Eidmann, J. Meyer-ter-Vehn, T. Schlegel, and S. Hüller, Phys. Rev. E 62, 1202 (2000). Y.V. Afanasiev, B.N. Chichkov, N.N. Demchenko,, V.A. Isakov, and I.N. Zavestovskaya, 28th Conference on Contr. Fusion and Plasma Phys. Funchal, ECA 25A, 2021 (2001). J.K. Chen, J.E. Beraun, L.E. Grimes, and D.Y. Tzou, Int. J. Solids Struct. 39, 3199 (2002). D. Perez and L. J. Lewis, Phys. Rev. Lett. 89, 255504 (2002). P. Lorazo, L.J. Lewis, and M. Meunier, Phys. Rev. Lett. 91, 225502 (2003). H.M. Milchberg, R.R. Freeman, S.C. Davey, and R.M. More, Phys. Rev. Lett. 61, 2364 (1988). D. Fisher, M. Fraenkel, Z. Henis, E. Moshe, and S. Eliezer, Phys. Rev. E 65, 016409 (2001). B. Rethfeld, A. Kaiser, M. Vicanek and G. Simon, Phys. Rev. B, 65, 214303 (2002). P.B. Corkum, F. Brunel, N.K. Sherman, and T. Srinivasan-Rao, Phys. Rev. Lett. 61, 2886 (1988). V. E. Gusev and O. B. Wright, Phys. Rev. B 57, 2878 (1998). F. Bonneau, P. Combis, J.L. Rullier, J. Vierne, B. Bertussi, M. Commandré, L. Gallais, J.Y. Natoli, I. Bertron, F. Malaise, and J.T. Donohue, Appl. Phys. B, 78, 447-452, (2004). E. Palik: Handbook of Optical Constants of Solids (Academic Press, London 1985). W. Ebeling, A. Förster, V. Fortov, V. Griaznov, A. Polishchuk: *Thermophysical Properties of Hot Dense Plasmas* (Teubner, Stuttgart 1991). A.V. Bushman, I.V. Lomonosov, and V.E. Fortov, Sov. Tech. Rev. B 5, 1 (1993). D.J. Steinberg, S.G. Cochran, and M.W. Guinan, J. Appl. Phys. 51, 1498 (1980). N.W. Ashcroft and N.D. Mermin, Solid State Physics (Saunders College Publishing, 1976). S.I. Anisimov and B. Rethfeld, Proc. SPIE 3093, 192 (1997). X.Y. Wang, D.M. Riffe, Y.-S. Lee, and M.C. Downer, Phys. Rev. B 50, 8016 (1994). F.R. Tuler, B.M. Butcher, Int. J. Fract. Mech. 4, 431 (1968). V.I. Romanchenko and G.V. Stepanov, J. Appl. Mech. Tech. Phys. 21, 555 (1981). L. Tollier and R. Fabbro, J. Appl. Phys. 83, 1224 (1997). J.P. Romain, F. Bonneau, G. Dayma, M. Boustie, T. de Rességuier and P. Combis, J. Phys. : Condens. Matter 14, 10793 (2002). R. Le Harzic, Ph.D. thesis, Université de Saint-Etienne, (2002). B. Rethfeld, K. Sokolowski-Tinten, D. Von der Linde, and S.I. Anisimov, Phys. Rev. B 65, 092103 (2002).
{ "pile_set_name": "ArXiv" }
--- abstract: 'A new cavity-chain layout has been proposed for the main linac of the TESLA linear collider [@SUSU]. This superstructure-layout is based upon four 7-cell superconducting standing-wave cavities, coupled by short beam pipes. The main advantages of the superstructure are an increase in the active accelerating length in TESLA and a saving in rf components, especially power couplers, as compared to the present 9-cell cavities. The proposed scheme allows to handle the field-flatness tuning and the HOM damping at sub-unit level, in contrast to standard multi-cell cavities. The superstructure-layout is extensively studied at DESY since 1999. Computations have been performed for the rf properties of the cavity-chain, the bunch-to-bunch energy spread and multibunch dynamics. A copper model of the superstructure has been built in order to compare with the simulations and for testing the field-profile tuning and the HOM damping scheme. A “proof of principle” niobium prototype of the superstructure is now under construction and will be tested with beam at the TESLA Test Facility in 2001. In this paper we present latest results of these investigations.' author: - | N. Baboi, M. Liepe, J. Sekutowicz\ Deutsches Elektronen-Synchrotron DESY, D-22603 Hamburg, Germany,\ M. Ferrario, INFN, Frascati, Italy title: 'Superconducting Superstructure for the TESLA Collider: New Results' --- INTRODUCTION ============ The cost for a superconducting linear collider can be significantly reduced by minimizing the number of microwave components, and increasing the fill factor in a machine. Here the fill factor is meant as a ratio of the active cavity length to the total cavity length (active length plus interconnection). These two conditions become partially fulfilled when the number of cells ($N$) in a structure -fed by one fundamental mode (FM) coupler- increases. Unfortunately there are two limitations on the cell’s number in one accelerating structure: firstly the field flatness -the sensitivity of the field pattern increases proportional to $N^2$- and secondly trapped higher order modes (HOM). In order to overcome these limitations on $N$, the concept of the superstructure has been proposed for the TESLA main linac [@SUSU]. In this concept four 7-cell cavities (sub-units) are coupled by short beam tubes. The whole chain can be fed by one FM coupler attached at one end beam tube. The length of the interconnections between the cavities is chosen to be half of the wave length. Therefore the $\pi$-0 mode ($\pi$ cell-to-cell phase advance and 0 cavity-to-cavity phase advance) can be used for acceleration. In the proposed scheme HOM couplers can be attached to interconnections and to end beam tubes. All sub-units are equipped with a tuner. Accordingly the field flatness and the HOM damping can be still handled at the 7-cell sub-unit level. REFILLING OF CELLS AND BUNCH-TO-BUNCH ENERGY SPREAD =================================================== The energy flow through cell-interconnections and the resulting bunch-to-bunch energy spread has been extensively studied for the superstructure with two independent codes: HOMDYN [@Ferrario] and MAFIA [@MAFIA][@Dohlus]. Negligible spread in the energy gain, smaller than $6 \cdot 10^{-5} $ for the whole train of 2820 bunches, proofs that energy flow is big enough to re-fill cells in the time between two sequential bunches; see Fig. \[spread\]. The energy spread results from the interference of the accelerating mode with other modes from the FM passband. 
The difference in energy becomes smaller at the end of the pulse due to the decay of the interfering modes. ![Calculated energy gain for 2820 bunches accelerated by the proposed superstructure.[]{data-label="spread"}](TUA15pic1.eps){width="65mm"} FIELD FLATNESS TUNING ===================== The $\pi$-0 mode will be used for the acceleration of beam in the superstructure. Before assembly, each of the four 7-cell cavities will be pre-tuned for flat field profile and the chosen frequency of the $\pi$-0 mode. The pre-tuning procedure is based on measurements of all modes of the fundamental mode passband. It allows to adjust the profile with accuracy of better than 2-3 $\%$ for a 9-cell TESLA cavity. This error corresponds to a frequency accuracy of the individual cells of $\pm$ 30 kHz. After the cavity chain of a superstructure has been assembled and is operated in the linac at 2K, the frequency of each sub-unit can be corrected in order to equalize the mean value of the field amplitude in all sub-units (not between cells within one sub-unit). This field profile correction is possible during the linac operation, since each 7-cell structure is equipped with its own frequency tuner. The method proposed to equalize the average accelerating field of sub-units during operation is based on perturbation theory, similar to the standard bead-pull method of L. Maier and J. Slater [@pert]. At first, present fields of all sub-units are measured. For that, successively, the volume of each sub-unit is changed by the same amount (stepping motor of each tuner will be moved by the same number of steps) to measure the frequency change of the $\pi$-0 mode. The change is proportional to the stored energy in the sub-unit of the superstructure. For each sub-unit relative values can be defined and used to calculate frequency corrections needed to equalize the field. This method has been tested on a room temperature Cu model of the superstructure -see Fig. \[vergleich\]- and by computer simulations, see Fig. \[tuning\]. One should note that the method requires only one pickup probe for all 28 cells, and therefore effectively reduces the numbers of cables, feedthroughs and electronics needed for the control. ![Field profile before field flatness tuning. Shown is a comparison between the measured field profile (bead pulling on a Cu model of the superstructure) and the field profile calculated from the measured frequency perturbations of the individual cavities.[]{data-label="vergleich"}](TUA15pic2.eps){width="65mm"} ![Example of field flatness tuning by tuning the individual cavities (computer simulation). For the frequency of the individual cells a variation of $\pm$ 30 kHz is assumed.[]{data-label="tuning"}](TUA15pic3.eps){width="65mm"} STATISTICS OF FIELD FLATNESS ============================ As discussed above the field flatness in a cold superstructure can be handled at the 7-cell sub-unit level by adjusting the frequency of each sub-unit. In order to verify this, the field flatness in a superstructure has been calculated before and after tuning of the individual cavities. The frequencies of the cavities have been corrected accordingly to the proposed tuning method. For the frequency of the individual cells a variation of $\pm$ 30 kHz is assumed, based on the experience with the TESLA 9-cell cavities. The statistics of 10000 calculated field profiles is shown in Fig. \[statistic\]. By adjusting the frequencies of the individual cavities the field unflatness is significantly reduced. 
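To make the bookkeeping of the measurement step described above concrete, the toy sketch below turns a set of hypothetical frequency shifts (one per sub-unit, all tuners moved by the same number of stepping-motor steps) into relative field amplitudes via Slater's perturbation theorem. The numbers are invented for illustration only; the final step of converting these amplitudes into the actual frequency corrections requires the coupled-cavity model of the superstructure and is not reproduced here.

```python
# Toy illustration (hypothetical numbers) of the measurement step of the
# proposed tuning method: the frequency shift of the pi-0 mode produced by
# perturbing sub-unit k is proportional to the energy U_k stored in that
# sub-unit, so the relative field amplitudes follow as A_k ~ sqrt(U_k).
import numpy as np

f_shift_kHz = np.array([11.8, 12.9, 13.4, 12.1])   # hypothetical measured shifts

U = f_shift_kHz / f_shift_kHz.sum()                # relative stored energies
A = np.sqrt(U)
A /= A.mean()                                      # sub-unit field amplitudes,
                                                   # normalised to the chain average
spread = (A.max() - A.min()) / A.mean() * 100.0

for k, a in enumerate(A, start=1):
    print("sub-unit %d: relative amplitude %.3f" % (k, a))
print("peak-to-peak field spread: %.1f %%" % spread)
```

A single set of such shifts is obtained with one pickup probe, which is what keeps the number of cables and feedthroughs small in this scheme.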
![Calculated field flatness statistics of 10000 superstructures before (a) and after field flatness tuning by adjusting the frequencies of the individual cavities (b). The frequencies of the individual cells varies by $\pm$ 30 kHz.[]{data-label="statistic"}](TUA15pic4.eps){width="65mm"} HOM DAMPING AND MULTIBUNCH EMITTANCE ==================================== ![image](TUA15pic5.eps){width="165mm"} The vertical normalized multibunch emittance at the interaction point of the TESLA collider is desired to be $3 \cdot 10^{-8}$ m$\cdot$rad. Simulations of the emittance growth along the TESLA linac showed, that the dipole modes with dominating impedance (R/Q) should for that be damped to the level of $Q_{ext}<2\cdot 10^5$ [@EPAC2k]. The interconnecting tubes of the superstructure allow to put HOM couplers between the 7-cell cavities. Measurements on a Cu model of the superstructure at room temperature have demonstrated, that the required damping can be achieved with five HOM couplers: three attached at the interconnections and one at both ends [@CUSUSU]; see Fig. \[hom\]. Note that the sum of all listed dipole modes impedances is almost ten times smaller than the BBU limit. NB PROTOTYPE ============ A first “proof of principle” niobium prototype of the superstructure is under construction [@NBSUSU]. The sub-units are under fabrication and will be vertical tested similar to TESLA 9-cell cavities. The beam test for the prototype is scheduled for Spring 2001. It will allow to verify energy spread computations and RF measurements on the room temperature models. This will include the test of the HOM damping, the performance of the HOM couplers at higher magnetic field and the tuning method during operation at 2K. CONCLUSIONS =========== The presented measurements and calculations demonstrate, that in the proposed superstructure the refilling of cells, the HOM damping, the field flatness and the field flatness tuning can be handled. For the final prove, that the superstructure layout can be used for acceleration, a niobium prototype will be tested with beam at the TESLA Test Facility linac. ACKNOWLEDGEMENTS ================ This work has benefited greatly from discussions with the members of the TESLA collaboration. [9]{} J. Sekutowicz, M. Ferrario and Ch. Tang, Phys. Rev. ST Accel. Beams, vol. 2, No. 6 (1999) M. Ferrario,. A. Mosnier, L. Serafini, F. Tazzioli, J. M. Tessier, Particle Accelerators, Vol. 52 (1996) R. Klatt et al., Proc. of Linear Accelerator Conference, Stanford, June 1986 M. Dohlus, private communication L. Maier and J. Slater, Journal of Applied Physics, Vol. 23, No. 1, January 1952, page 68-77 N. Baboi, R. Brinkmann, M. Liepe, J. Sekutowicz, EPAC2000, Viena, to be published H. Chen, G. Kreps, M. Liepe, V. Puntus and J. Sekutowicz, Proc. of the 9th Workshop on rf Superconductivity, Santa Fe, 1999 R. Bandelmann et al., Proc. of the 9th Workshop on rf Superconductivity, Santa Fe, 1999
{ "pile_set_name": "ArXiv" }
--- abstract: | The Neumann initial-boundary problem for the chemotaxis system $$\begin{aligned} \label{prob:abstract} \tag{$\star$} \begin{cases} u_t = \Delta u - \nabla \cdot (u \nabla v) + \kappa(|x|) u - \mu(|x|) u^p, \\ 0 = \Delta v - \frac{m(t)}{|\Omega|} + u, \quad m(t) {\coloneqq}{\int_\Omega}u(\cdot, t) \end{cases} \end{aligned}$$ is studied in a ball $\Omega = B_R(0) \subset {\mathbb{R}}^2$, $R {>}0$ for $p \ge 1$ and sufficiently smooth functions $\kappa, \mu: [0, R] {\rightarrow}[0, \infty)$.\ We prove that whenever $\mu', -\kappa' \ge 0$ as well as $\mu(s) \le \mu_1 s^{2p-2}$ for all $s \in [0, R]$ and some $\mu_1 {>}0$ then for all $m_0 {>}8 \pi$ there exists $u_0 \in {{C^{0}({{\overline}\Omega})}}$ with ${\int_\Omega}u_0 = m_0$ and a solution $(u, v)$ to with initial datum $u_0$ blowing up in finite time. If in addition $\kappa \equiv 0$ then all solutions with initial mass smaller than $8 \pi$ are global in time, displaying a certain critical mass phenomenon.\ On the other hand, if $p {>}2$, we show that for all $\mu$ satisfying $\mu(s) \ge \mu_1 s^{p-2-{\varepsilon}}$ for all $s \in [0, R]$ and some $\mu_1, {\varepsilon}{>}0$ the system admits a global classical solution for each initial datum $0 \le u_0 \in {{C^{0}({{\overline}\Omega})}}$.\ **Key words:** [chemotaxis, critical mass, finite-time blow-up, logistic source]{}\ **AMS Classification (2010):** [35B44 (primary); 35B33, 35K65, 92C17 (secondary)]{} author: - | Mario Fuest[^1]\ [Institut für Mathematik, Universität Paderborn,]{}\ [33098 Paderborn, Germany]{} title: 'Finite-time blow-up in a two-dimensional Keller–Segel system with an environmental dependent logistic source' --- Introduction ============ We live in a heterogeneous environment and the fact that for instance growth or death rates may depend on spatial features has been incorporated into several models describing population dynamics. Among the more famous examples is the system $$\begin{aligned} \label{prob:competition} \begin{cases} u_t = d_1 \Delta u + u [{\kappa}(x) - u - v], \\ v_t = d_2 \Delta v + v [{\kappa}(x) - u - v] \end{cases}\end{aligned}$$ with $d_1, d_2 {>}0$ and ${\kappa}: \Omega {\rightarrow}[0, \infty)$, $\Omega \subset {\mathbb{R}}$, $n \in {\mathbb{N}}$, being a smooth, bounded domain, modelling two species $u$ and $v$ competing for a common resource, where ${\kappa}$ represents a reproduction rate influenced by the environment. It has the remarkable property that whenever $d_1 {<}d_2$, then there exists $u_\infty({\kappa}) {>}0$ such that for any initial data $u_0, v_0 \in {{C^{0}({{\overline}\Omega})}}$ with $u_0, v_0 \ge 0$ and $v_0 \not\equiv 0$ the corresponding solution $(u, v)$ converges to $(u_\infty({\kappa}), 0)$ – provided ${\kappa}$ is not constant, which reflects spatial heterogeneity ([@DockeryEtAlEvolutionSlowDispersal1998]). If, however, ${\kappa}$ is constant then $(\lambda {\kappa}, (1-\lambda) {\kappa}))$ is a steady state of for all $\lambda \in [0, 1]$ implying that species with different diffusion rates may coexist in homogeneous environments. Furthermore, there is considerable activtiy in the analysis of systems similar to ; for instance, convections terms have been added to these equations ([@LouEtAlGlobalDynamicsLotka2018]) and the case of weak competition ([@HeNiGlobalDynamicsLotka2016; @LouEffectsMigrationSpatial2006]) has been studied in great detail as well. 
These results (among others) may arouse interest to consider environmental depending functions in other models as well: The system $$\begin{aligned} \label{prob:log_pp} \begin{cases} u_t = \Delta u - \nabla \cdot (u \nabla v) + \kappa u - \mu u^p, \\ v_t = \Delta v - v + u, \end{cases}\end{aligned}$$ in $\Omega \times (0, T)$, where $\Omega \subset {\mathbb{R}}^n$, $n \in {\mathbb{N}}$, is a smooth, bounded domain, $T \in (0, \infty]$ and $\kappa, \mu {>}0$ and $p \ge 1$ are given parameters, is relevant in the modeling of, for instance, micro- and macroscopic population dynamics ([@HillenPainterUserGuidePDE2009], [@ShigesadaEtAlSpatialSegregationInteracting1979]) or tumor invasion processes ([@ChaplainLolasMathematicalModellingCancer2005]). For these so-called chemotaxis systems, at first introduced by Keller and Segel ([@KellerSegelTravelingBandsChemotactic1971]) even questions of global existence and boundedness are of great interest. After all, if one chooses $\kappa = \mu \equiv 0$ in in space-dimensions two ([@HorstmannWangBlowupChemotaxisModel2001; @SenbaSuzukiParabolicSystemChemotaxis2001]) and higher ([@WinklerFinitetimeBlowupHigherdimensional2013]) there are initial data leading to blow-up. For a more broad introduction to Keller–Segel models, which have been intensively studied in the past decades, we refer to the survey [@BellomoEtAlMathematicalTheoryKeller2015]. Intuitively, the superlinear degrading term $\mu u^p$ (with $\mu {>}0$ and $p {>}1$) in ) should somewhat decrease the possibility of (finite-time) blow-up. However, exactly how large $\mu$ and $p$ need to be in order to guarantee global existence seems to be an open question, even for constant $\kappa, \mu \ge 0$. If $n = 2$ and $\mu {>}0$ all classical solutions to exist globally in time ([@OsakiEtAlExponentialAttractorChemotaxisgrowth2002]). One may even replace $u^2$ by a function growing slightly slower than $s \mapsto s^2$ ([@XiangSublogisticSourceCan2018]). The same holds true in higher dimensions, provided $p {>}2$ or $p = 2$ and $\mu {>}\frac{n}{4}$ ([@WinklerBoundednessHigherDimensionalParabolicParabolic2010]), while for $p = 2$ and any $\mu {>}0$ at least global weak solutions have been constructed, which become smooth after finite time provided $\kappa$ is small enough ([@LankeitEventualSmoothnessAsymptotics2015]). As chemicals can be assumed to diffuse much faster than cells a typical simplification of is the parabolic-elliptic system $$\begin{aligned} \label{prob:log_pe} \begin{cases} u_t = \Delta u - \nabla \cdot (u \nabla v) + \kappa u - \mu u^p, \\ 0 = \Delta v - v + u. \end{cases}\end{aligned}$$ For $n = 2$ the conditions $p \ge 2$ and $\mu {>}0$ suffice to ensure global existence while for $n \ge 3$, $p = 2$ and $\mu \ge \frac{n-2}{n}$ or $n \ge 3$, $p {>}2$ and arbitrary $\mu {>}0$ the same can be achieved ([@KangStevensBlowupGlobalSolutions2016; @TelloWinklerChemotaxisSystemLogistic2007]). On the other hand, any thresholds may be surpassed, if $p = 2$, $\mu \in (0, 1)$ and the diffusion is sufficiently weak, that is, $\Delta u$ in the first equation in is replaced by ${\varepsilon}\Delta u$ for suitable ${\varepsilon}{>}0$ ([@WinklerHowFarCan2014; @LankeitChemotaxisCanPrevent2015]). This stays in contrast to the case without cross-diffusion as then ${\overline}u {\coloneqq}\max\{\|u_0\|_{{{L^{\infty}(\Omega)}}}, \frac{\kappa}{\mu}\}$ always forms a supersolution and thus indicates that in chemotaxis systems with logistic source nontrivial structures may emerge at least on intermediate time scales. 
Even more drastic structures are known to form if $p$ is chosen close to (but still larger than) $1$. After initial data causing finite-time blow-up have been constructed in dimensions five and higher for certain $p {>}\frac32$ in a system closely related to in [@WinklerBlowupHigherdimensionalChemotaxis2011], in [@WinklerFinitetimeBlowupLowdimensional2018] finite-time blow-up has also been shown to occur in for any $n \ge 3$ and $$\begin{aligned} \begin{cases} p {<}\frac76, & n \in \{3, 4\}, \\ p {<}1 + \frac{1}{2(n-1)}, & n \ge 5. \end{cases}\end{aligned}$$ Hence, at least in space-dimensions three and higher, even superlinear degradation terms do not always ensure global existence. The case of $\mu$ and $\kappa$ depending on space (and time) has also been studied. In their three-paper series [@SalakoShenParabolicellipticChemotaxisModel2018; @SalakoShenParabolicellipticChemotaxisModel2018a; @SalakoShenParabolicellipticChemotaxisModel2018b] Salako and Shen showed inter alia global existence of solutions to with $\Omega = {\mathbb{R}}$ provided $\inf_{x \in \Omega} \mu(x) {>}1$. #### Main results Apparently, rigorously proving blow-up in Keller–Segel systems is a difficult problem. Known proofs for parabolic-parabolic chemotaxis systems strongly rely on certain energy structures ([@CieslakLaurencotFiniteTimeBlowup2010; @HorstmannWangBlowupChemotaxisModel2001; @WinklerAggregationVsGlobal2010]), while in the parabolic-elliptic setting additional approaches are moment-type arguments ([@BilerLocalGlobalSolvability1998; @NagaiBlowupNonradialSolutions2001]). However, all these methods appear inadequate for chemotaxis systems with logistic source. In this paper we further simplify and consider $$\begin{aligned} \label{prob:p} \tag{P} \begin{cases} u_t = \Delta u - \nabla \cdot (u \nabla v) + \kappa(|x|) u - \mu(|x|) u^p, & \text{in $\Omega \times (0, T)$}, \\ 0 = \Delta v - \frac{m(t)}{|\Omega|} + u, \quad m(t) {\coloneqq}{\int_\Omega}u(\cdot, t), & \text{in $\Omega \times (0, T)$}, \\ \partial_\nu u = \partial_\nu v = 0, & \text{on $\partial \Omega \times (0, T)$}, \\ u(\cdot, 0) = u_0, & \text{in $\Omega$} \end{cases}\end{aligned}$$ for given functions $\kappa, \mu, u_0: \Omega {\rightarrow}{\mathbb{R}}$ and $T \in (0, \infty]$ where we henceforth fix $R {>}0$ and $\Omega {\coloneqq}B_R(0) \subset {\mathbb{R}}^2$. Our main results are the following. \[th:blow\_up\] Let $p \ge 1$, $\alpha \ge 2(p - 1)$, $\mu_1 {>}0$ and suppose that $\kappa, \mu \in C^0([0, R]) \cap C^1((0, R))$ satisfy $$\begin{aligned} \label{eq:blow_up:cond_kappa_mu} \kappa, -\kappa', \mu, \mu' \ge 0 \quad \text{in $(0, R)$} \end{aligned}$$ as well as $$\begin{aligned} \label{eq:blow_up:cond_mu} \mu(s) \le \mu_1 s^\alpha \quad \text{for all $s \in [0, R]$}. \end{aligned}$$ For any $m_0 {>}8\pi$ there exist $r_1 \in (0, R)$ and $\tilde m \in (0, m_0)$ such that if $$\begin{aligned} \label{eq:blow_up:cond_u0} 0 \le u_0 \in C^0({{\overline}\Omega}) \quad \text{is radially symmetric and radially decreasing} \end{aligned}$$ with $$\begin{aligned} \label{eq:blow_up:mass_concentration} {\int_\Omega}u_0 = m_0 \quad \text{and} \quad \int_{B_{r_1}(0)} u_0 \ge \tilde m, \end{aligned}$$ then there exists a classical solution $(u, v)$ to with initial datum $u_0$ blowing up in finite time; that is, there exists ${T_{\max}}\in (0, \infty)$ such that $$\begin{aligned} \label{eq:blow_up:limsup_u} \limsup_{t {\nearrow}{T_{\max}}} \|u(\cdot, t)\|_{L^\infty(\Omega)} = \infty. 
\end{aligned}$$ To give a more concrete example, the conditions and are for instance fulfilled if $p = 2$, $\kappa \ge 0$ is a constant and $\mu(r) = r^2, r \in [0, R]$. This result will be complemented by two statements on global solvability. Firstly, we show at least in the case $\kappa \equiv 0$ the value $8\pi$ – which does not, as one could have expected, depend on $\alpha$ or $p$ – is essentially optimal. \[prop:critical\_mass\] Let $\kappa \equiv 0$, $0 \le \mu \in C^0([0, R]) \cap C^1((0, R))$ and $p \ge 1$. For any nonnegative radially symmetric $u_0 \in C^0({{\overline}\Omega})$ with ${\int_\Omega}u_0 {<}8\pi$ there exists a global classical solution $(u, v)$ to with initial datum $u_0$. Secondly, if $p {>}2$, we prove that for arbitrary initial data global classical solutions exist provided $\mu$ does not grow too fast. \[prop:global\_ex\] Let $p {>}2$, $\alpha {<}p - 2$, $\mu_1 {>}0$ and $\kappa, \mu \in C^0([0, R]) \cap C^1((0, R))$. If $$\begin{aligned} \label{eq:global_ex:cond_mu} \mu(s) \ge \mu_1 s^\alpha \quad \text{for all $s \in [0, R]$} \end{aligned}$$ then admits a global classical solution for any nonnegative initial datum $u_0 \in {{C^{0}({{\overline}\Omega})}}$. #### Plan of the paper For the proof of Theorem \[th:blow\_up\] we will rely on a transformation introduced by Jäger and Luckhaus in [@JagerLuckhausExplosionsSolutionsSystem1992]. As will be seen in Lemma \[lm:pde\_w\] below the function $w: [0, R]^2 \times [0, {T_{\max}}) {\rightarrow}{\mathbb{R}}$ defined by $$\begin{aligned} w(s, t) {\coloneqq}\int_0^{\sqrt s} \rho u(\rho, t) {\,\mathrm{d}}\rho, \quad s \in [0, R^2], t \in [0, {T_{\max}}),\end{aligned}$$ solves the *scalar* PDI $$\begin{aligned} \label{eq:intro:p_pdi} w_t &\ge 4s w_{ss} + 2 w w_s - \frac{m(t)}{|\Omega|} s w_s - 2^{p-1} \int_0^s \mu(\sqrt \sigma) w_s^p(\sigma, \cdot) {\,\mathrm{d}\sigma}\quad \text{in $(0, R^2) \times (0, {T_{\max}})$}.\end{aligned}$$ In similar – but higher dimensional – settings for certain $s_0, \gamma {>}0$ the function $$\begin{aligned} \phi: [0, {T_{\max}}) {\rightarrow}{\mathbb{R}}; \quad t \mapsto \int_0^{s_0} s^{-\gamma} (s-s_0) w(s, t) {\,\mathrm{d}s},\end{aligned}$$ where $w$ denotes a similar transformed quantity, has been shown to solve a certain ODI implying finite-time blow-up ([@WinklerCriticalBlowupExponent2018], [@WinklerFinitetimeBlowupLowdimensional2018]). However, these techniques seem to be insufficient to provide any insights in the two dimensional setting, as the term stemming from the diffusion can apparently not be dealt with anymore. Therefore, we follow a different approach. In order to show finite-time blow-up for with $\kappa = \mu \equiv 0$ in the planar setting Winkler ([@WinklerHowUnstableSpatial2018]) has recently utilized the function $$\begin{aligned} \phi: [0, {T_{\max}}) {\rightarrow}{\mathbb{R}}; \quad t \mapsto \int_0^{s_0} (s-s_0)^\beta w(s, t) {\,\mathrm{d}s}\end{aligned}$$ for certain $s_0, \beta {>}0$ instead. Most terms in can be dealt similarly as in [@WinklerHowUnstableSpatial2018] – except for the nonlocal term $\int_0^s \mu(\sqrt \sigma) w_s^p(\sigma, \cdot) {\,\mathrm{d}\sigma}$ which is, of course, not present if $\mu \equiv 0$. The main idea for dealing with this integral is to derive a pointwise bound for $w_s$ (Lemma \[lm:ws\_bdd\]) and then integrate by parts, where the condition $\alpha \ge 2(p-1)$ is apparently needed in order to able to handle the remaining terms (Lemma \[lm:i4\]). 
Finally, we will then see by an ODI comparison argument that for suitably chosen initial data $\phi$ (and hence $u$) cannot exist globally in time. Preliminaries ============= The following statement on local existence, in its essence based on a fixed point argument, is standard. Hence we may omit a proof here and just refer to, for instance, [@CieslakWinklerFinitetimeBlowupQuasilinear2008] or [@TelloWinklerChemotaxisSystemLogistic2007] for more detailed arguments in similar frameworks. \[lm:local\_ex\] Let $0 \le u_0 \in C^0({{\overline}\Omega})$ and $\kappa, \mu \in C^0([0, R]) \cap C^1((0, R))$. Then there exist ${T_{\max}}\in (0, \infty]$ and a classical solution $(u, v)$ to uniquely determined by $$\begin{aligned} u &\in C^0({{\overline}\Omega}\times [0, {T_{\max}})) \cap C^{2, 1}({{\overline}\Omega}\times (0, {T_{\max}})), \\ v &\in \bigcap_{q {>}2} C^0([0, {T_{\max}}); W^{1, q}(\Omega)) \cap C^{2, 0}({{\overline}\Omega}\times (0, {T_{\max}})) \end{aligned}$$ and $$\begin{aligned} {\int_\Omega}v(\cdot, t) = 0 \quad \text{for all $t \in (0, {T_{\max}})$}. \end{aligned}$$ Moreover, this solution is nonnegative in the first component, radially symmetric if $u_0$ is radially symmetric and such that if ${T_{\max}}{<}\infty$ then $$\begin{aligned} \limsup_{t {\nearrow}{T_{\max}}} \|u(\cdot, t)\|_{L^\infty(\Omega)} = \infty. \end{aligned}$$ Unless otherwise stated we henceforth fix $u_0 \in {{C^{0}({{\overline}\Omega})}}$ satisfying as well as $\kappa, \mu \in C^0([0, R]) \cap C^1((0, R))$ fulfilling and denote the corresponding solution provided by Lemma \[lm:local\_ex\] by $(u, v)$ as well as the maximal existence time by ${T_{\max}}$. Finally, we set $m_0 {\coloneqq}m(0)$ and $\kappa_1 {\coloneqq}\|\kappa\|_{L^\infty((0, R))}$. \[lm:mass\] For all $t \in (0, {T_{\max}})$ the inequalities $$\begin{aligned} 0 \le m(t) \le m_0 {{\mathrm{e}}}^{\kappa_1 t} \end{aligned}$$ hold. Nonnegativity of $u$ implies $m \ge 0$ while an ODI comparison argument yields $m(t) \le m_0 {{\mathrm{e}}}^{\kappa_1 t}$ for $t {>}0$ due to $m' \le \kappa_1 m$ in $(0, {T_{\max}})$. As mentioned in the introduction the proof of Theorem \[th:blow\_up\] will rely on transforming into a scalar equation. \[lm:pde\_w\] Define $$\begin{aligned} w(s, t) {\coloneqq}\int_0^{\sqrt{s}} \rho u(\rho, t) {\,\mathrm{d}\rho}, \quad s \in [0, R^2], t \in [0, {T_{\max}}). \end{aligned}$$ Then $$\begin{aligned} \label{eq:pde_w:w_s_eq_u} w_s(s, t) = \tfrac12 u(\sqrt s, t) \end{aligned}$$ and $$\begin{aligned} \label{eq:pde_w:pde} w_t(s, t) &= 4s w_{ss}(s, t) + 2 w(s, t) w_s(s, t) - \frac{m(t)}{|\Omega|} s w_s(s, t) \notag \\ &{\mathrel{{\hphantom}{=}}}+ \int_0^s \left(\kappa(\sqrt \sigma) w_s(\sigma, t) - 2^{p-1} \mu(\sqrt \sigma) w_s^p(\sigma, t) \right) {\,\mathrm{d}\sigma}\end{aligned}$$ for $s \in (0, R^2)$ and $t \in (0, {T_{\max}})$. The first two equations in read in radial form $$\begin{aligned} u_t &= \frac1r (r u_r - r u v_r)_r + \kappa(r) u - \mu(r) u^p \quad \text{and} \\ 0 &= \frac1r (r v_r)_r -\frac{m(t)}{|\Omega|} + u, \end{aligned}$$ that is $$\begin{aligned} r v_r(r, \cdot) = \int_0^r \left( \frac{m(t)}{|\Omega|} \rho - \rho u(\rho, \cdot) \right) {\,\mathrm{d}\rho}= \frac{m(t)}{2|\Omega|} r^2 - w(r^2, \cdot). 
\end{aligned}$$ Thus, a direct calculation yields $$\begin{aligned} w_s(s, t) &= \frac{1}{2\sqrt s} \cdot \sqrt s u(\sqrt s, t) = \frac12 u(\sqrt s, t), \\ w_{ss}(s, t) &= \frac12 u_r(\sqrt s, t) \cdot \frac1{2 \sqrt s} = \frac1{4 \sqrt s} u_r(\sqrt s, t) \quad \text{and} \\ w_t(s, t) &= \int_0^{\sqrt s} \frac{\rho}{\rho} [\rho u_r(\rho, t) - \rho u(\rho, t) v_r(\rho, t)]_r {\,\mathrm{d}\rho}+ \int_0^{\sqrt s} \rho [\kappa(\rho) u(\rho, t) - \mu(\rho) u^p(\rho, t) ] {\,\mathrm{d}\rho}\\ &= \sqrt s u_r(\sqrt s, t) - u(\sqrt s, t) \left[ \frac{m(t)}{2|\Omega|} s - w(s, t) \right] - \frac12 \int_0^s \left( \kappa(\sqrt \sigma) u + \mu(\sqrt \sigma) u^p(\sqrt \sigma, t) \right) {\,\mathrm{d}\sigma}\\ &= 4 s w_{ss}(s, t) + 2 w(s, t) w_s(s, t) - \frac{m(t)}{|\Omega|} s w_s(s, t) - \int_0^s \left(\kappa(\sqrt \sigma) w_s + 2^{p-1} \mu(\sqrt \sigma) w_s^p(\sigma, t) \right) {\,\mathrm{d}\sigma}\end{aligned}$$ for $s \in (0, R^2)$ and $t \in (0, {T_{\max}})$. Supercritical mass allows for blow-up ===================================== Crucially relying on transforming into the scalar equation we will prove Theorem \[th:blow\_up\] at the end of this section. The function $\phi$ ------------------- \[lm:phi\_first\_ode\] Let $\beta {>}-1$ and $s_0 \in (0, R^2)$. The function $$\begin{aligned} \phi: [0, {T_{\max}}) {\rightarrow}{\mathbb{R}}, \quad t \mapsto {\int_0^{s_0}}(s_0-s)^\beta w(s, t) {\,\mathrm{d}s}\end{aligned}$$ belongs to $C^0([0, {T_{\max}})) \cap C^1((0, {T_{\max}}))$ and satisfies $$\begin{aligned} \label{eq:phi_first_ode:ode} \phi'(t) &\ge 4 {\int_0^{s_0}}(s_0-s)^\beta s w_{ss}(s, t) {\,\mathrm{d}s}\notag \\ &{\mathrel{{\hphantom}{=}}}+ 2 {\int_0^{s_0}}(s_0-s)^\beta s w(s, t) w_s(s, t) {\,\mathrm{d}s}\notag \\ &{\mathrel{{\hphantom}{=}}}- \frac{m(t)}{|\Omega|} {\int_0^{s_0}}(s_0-s)^\beta s w_s(s, t) {\,\mathrm{d}s}\notag \\ &{\mathrel{{\hphantom}{=}}}- 2^{p-1} {\int_0^{s_0}}\int_0^s (s_0-s)^\beta \mu(\sqrt{\sigma}) w_s^p(\sigma, t) {\,\mathrm{d}\sigma}{\,\mathrm{d}s}\notag \\ &{\eqqcolon}I_1(t) + I_2(t) + I_3(t) + I_4(t) \end{aligned}$$ for all $t \in (0, {T_{\max}})$. As $w \in C^0({{\overline}\Omega}\times [0, {T_{\max}})) \cap C^1({{\overline}\Omega}\times (0, {T_{\max}}))$ by Lemma \[lm:pde\_w\], the asserted regularity of $\phi$ follows from standard Lebesgue integration theory, while is then a direct consequence of Lemma \[lm:pde\_w\] and nonnegativity of $u$ and $\kappa$. Our goal is to show that after an appropriate choice of parameters $\phi$ satisfies a certain ODI, which then implies finiteness of ${T_{\max}}$. \[lm:ode\_blow\_up\] Let $T, \tilde T, c_1, c_2, c_3 {>}0$. If $y \in C^0([0, T)) \cap C^1((0, T))$ satisfies $$\begin{aligned} \begin{cases} y' \ge c_1 y^2 - c_2 y - c_3, \\ y(0) \ge y_0 \end{cases} \end{aligned}$$ in $(0, T)$ with $$\begin{aligned} y_0 \ge \frac{c_2 + \sqrt{c_1 c_3}}{c_1} + \frac1{c_1 \tilde T}, \end{aligned}$$ then necessarily $T \le \tilde T$. As $$\begin{aligned} c_1 s^2 - c_2 s - c_3 = 0 \quad \text{if and only if} \quad s = \frac{c_2 \pm \sqrt{c_2^2 + 4 c_1 c_3}}{2 c_1} {\eqqcolon}\lambda_\pm \end{aligned}$$ the ODI implies that $y$ is increasing if and only if $y \le \lambda_-$ or $y \ge \lambda_+$. 
Since $$\begin{aligned} \lambda_+ \le \frac{c_2 + \sqrt{c_1 c_3}}{c_1} {<}y_0 \end{aligned}$$ and $$\begin{aligned} (s-\lambda_-)(s-\lambda_+) \ge (s-\lambda_+)^2 \quad \text{for all $s \ge \lambda_+$} \end{aligned}$$ we conclude that $y$ is indeed increasing in $(0, T)$ and satisfies $$\begin{aligned} y' \ge c_1 (y - \lambda_+)^2 \end{aligned}$$ in $(0, T)$. Hence by integrating we obtain $$\begin{aligned} t = \int_0^t 1 {\,\mathrm{d}s}\le \int_{y(0)}^{y(t)} \frac1{c_1 (y - \lambda_+)^2} \le \frac1{c_1 (y_0 - \lambda_+)} - \frac1{c_1 (y(t) - \lambda_+)} {<}\tilde T - 0 = \tilde T \quad \text{for all $t \in (0, T)$}, \end{aligned}$$ which is absurd for $T {>}\tilde T$. Apart from the nonlocal term in all integrals therein as well as $\phi(0)$ can be estimated as in [@WinklerHowUnstableSpatial2018 Lemma 3.2]. For sake of completeness we nonetheless give short proofs for the following lemmata. \[lm:phi\_0\] Let $\beta {>}-1$ and $s_0 \in (0, R^2)$ as well as $\tilde m \in (0, m)$ and $\lambda \in (0, 1)$. If $$\begin{aligned} \int_{B_{r_1}(0)} u_0 \ge \tilde m \end{aligned}$$ with $r_1 {\coloneqq}(\lambda s_0)^2$, then $$\begin{aligned} \phi(0) \ge \frac{\tilde m}{2 \pi (\beta+1)} ((1-\lambda) s_0)^{\beta+1}. \end{aligned}$$ Set $s_1 {\coloneqq}\lambda s_0$. As $w_0$ is increasing (due to $u_0 \ge 0$) we have $$\begin{aligned} \phi(0) &= {\int_0^{s_0}}(s_0-s)^\beta w_0(s) {\,\mathrm{d}s}\\ &\ge \int_{s_1}^{s_0} (s_0-s)^\beta w_0(s_1) {\,\mathrm{d}s}\\ &= \int_0^{\sqrt{s_1}} \rho u_0(\rho) {\,\mathrm{d}\rho}\int_{s_1}^{s_0} (s_0-s)^\beta {\,\mathrm{d}s}\\ &\ge \frac{\tilde m}{2 \pi} \cdot \frac{((1-\lambda) s_0)^{\beta+1}}{\beta+1}. \qedhere \end{aligned}$$ \[lm:i1\] Let $\beta {>}1$ and $s_0 \in (0, R^2)$. Then for all $t \in (0, {T_{\max}})$ $$\begin{aligned} \label{eq:i1:statement} I_1(t) \ge -\frac{2}{\pi} s_0^\beta m_0 {{\mathrm{e}}}^{\kappa_1 t} \end{aligned}$$ holds, where $I_1$ is defined in . By integrating by parts twice we obtain for $t \in (0, {T_{\max}})$ $$\begin{aligned} I_1(t) &= 4 {\int_0^{s_0}}(s_0-s)^\beta s w_{ss}(s, t) {\,\mathrm{d}s}\\ &= 4 {\int_0^{s_0}}\left( \beta (s_0-s)^{\beta-1} s - (s_0-s)^\beta \right) w_s(s, t) {\,\mathrm{d}s}+ 0 \\ &= 4 {\int_0^{s_0}}(s_0-s)^{\beta-1} \left( (\beta+1) s - s_0 \right) w_s(s, t) {\,\mathrm{d}s}\\ &= 4 {\int_0^{s_0}}(s_0-s)^{\beta-2} \left[ (\beta-1) \left( (\beta+1) s - s_0 \right) - (\beta+1) (s_0-s) \right] w(s, t) {\,\mathrm{d}s}\\ &= - 8 \beta {\int_0^{s_0}}(s_0-s)^{\beta-2} \left(s_0 - \frac{\beta+1}{2} s\right) w(s, t) {\,\mathrm{d}s}. \intertext{As $w(\cdot, t)$ is nonnegative and increasing by \eqref{eq:pde_w:w_s_eq_u} and Lemma~\ref{lm:mass}, noting that $s_0 - \frac{\beta+1}{2} s \le 0$ if and only if $s \ge s_1 {\coloneqq}\frac{2s_0}{\beta+1} \in (0, s_0)$, we conclude } I_1(t) &\ge - 8 \beta {\int_0^{s_0}}(s_0-s)^{\beta-2} \left(s_0 - \frac{\beta+1}{2} s\right) w(s_1, t) {\,\mathrm{d}s}\\ &= - 4 s_0^\beta w(s_1, t). \end{aligned}$$ Because the definition of $w$ and Lemma  warrant that $$\begin{aligned} w(s_1, t) \le w(R^2, t) = \frac{m(t)}{2 \pi} \le \frac{m_0 {{\mathrm{e}}}^{\kappa_1 t}}{2 \pi} \quad \text{for $t \in (0, {T_{\max}})$} \end{aligned}$$ a consequence thereof is . \[lm:i2\_i3\] Let $\beta {>}0$, $s_0 \in (0, R^2)$ and $\eta \in (0, 1)$. 
With $I_2$ and $I_3$ as in $$\begin{aligned} \label{eq:i2_i3:statement} I_2(t) + I_3(t) \ge (1-\eta) \frac{\beta(\beta+2)}{s_0^{\beta+2}} \phi^2(t) - \frac{m_0^2 {{\mathrm{e}}}^{2\kappa_1 t}}{2 \eta (\beta+1)(\beta+2) |\Omega|^2 } s_0^{\beta+2} \end{aligned}$$ holds then for all $t \in (0, {T_{\max}})$. Let $t \in (0, {T_{\max}})$. An integration by parts yields $$\begin{aligned} I_2(t) &= 2{\int_0^{s_0}}(s_0-s)^\beta w(s, t) w_s(s, t) {\,\mathrm{d}s}\\ &= {\int_0^{s_0}}(s_0-s)^\beta (w^2)_s(s, t) {\,\mathrm{d}s}\\ &= \beta {\int_0^{s_0}}(s_0-s)^{\beta-1} w^2(s, t) {\,\mathrm{d}s}+ \left[ (s_0-s)^\beta w^2(s, t) \right]_0^{s_0} \\ &= \beta {\int_0^{s_0}}(s_0-s)^{\beta-1} w^2(s, t) {\,\mathrm{d}s}\end{aligned}$$ while by another integration by parts and Young’s inequality we have $$\begin{aligned} I_3(t) &= - \frac{m(t)}{|\Omega|} {\int_0^{s_0}}(s_0-s)^\beta s w_s(s, t) {\,\mathrm{d}s}\\ &= \frac{m(t)}{|\Omega|} {\int_0^{s_0}}(s_0-s)^\beta w(s, t) {\,\mathrm{d}s}- \frac{\beta m(t)}{|\Omega|} {\int_0^{s_0}}(s_0-s)^{\beta-1} s w(s, t) {\,\mathrm{d}s}+ 0 \\ &\ge 0 - \eta \beta {\int_0^{s_0}}(s_0-s)^{\beta-1} w^2(s, t) {\,\mathrm{d}s}- \frac{\beta m^2(t)}{4\eta |\Omega|^2} {\int_0^{s_0}}(s_0-s)^{\beta-1} s^2 {\,\mathrm{d}s}\\ &\ge - \eta \beta {\int_0^{s_0}}(s_0-s)^{\beta-1} w^2(s, t) {\,\mathrm{d}s}- \frac{m_0^2 {{\mathrm{e}}}^{2\kappa_1 t}}{2 \eta (\beta+1)(\beta+2) |\Omega|^2 } s_0^{\beta+2}. \end{aligned}$$ As also by Hölder’s inequality $$\begin{aligned} \phi(t) &= {\int_0^{s_0}}(s_0-s)^\beta w(s, t) {\,\mathrm{d}s}\le \left({\int_0^{s_0}}(s_0-s)^{\beta+1} {\,\mathrm{d}s}\right)^\frac12 \left({\int_0^{s_0}}(s_0-s)^{\beta-1} w^2(s, t) {\,\mathrm{d}s}\right)^\frac12, \end{aligned}$$ that is, $$\begin{aligned} \phi^2(t) &\le \frac{s_0^{\beta+2}}{\beta+2} {\int_0^{s_0}}(s_0-s)^\beta w^2(s, t) {\,\mathrm{d}s}, \end{aligned}$$ we conclude . The fourth integral ------------------- In order to be able to advantageously integrate by parts in the nonlocal term in we first derive a pointwise bound for $w_s$, which in turn is prepared by the following two lemmata. \[lm:v\_rr\_le\_u\] In $(0, R) \times (0, {T_{\max}})$ the inequality $-v_{rr} \le u$ holds. As $u \ge 0$ we have by the second equation in $$\begin{aligned} (r v_r(r, t))_r \le r \frac{m(t)}{|\Omega|} \quad \text{for $(r, t) \in (0, R) \times (0, {T_{\max}})$}, \end{aligned}$$ hence upon integrating $$v_r(r, t) \le \frac{r}{2} \frac{m(t)}{|\Omega|} \quad \text{for $(r, t) \in (0, R) \times (0, {T_{\max}})$}.$$ Again by the second equation in we have $v_{rr} = \frac{m(t)}{|\Omega|} - u - \frac1r v_r$ such that a direct consequence thereof is $v_{rr} \ge -u$. \[lm:ur\_le\_0\] Throughout $(0, R) \times (0, {T_{\max}})$ we have $u_r \le 0$. Without loss of generality we may assume that $u_0 \in C^2({{\overline}\Omega})$ with $\partial_\nu u_0 = 0$ on $\partial \Omega$, as for less regular initial data the statement follows by an approximation procedure as in [@WinklerCriticalBlowupExponent2018 Lemma 2.2]. Since additionally $\sup_{(x, t) \in [0, R] \times [0, T]} |\nabla v(x, t)| {<}\infty$ by elliptic regularity theory (cf. [@FriedmanPartialDifferentialEquations1976 Theorem 19.1]) for all $T \in (0, {T_{\max}})$ we may invoke [@LiebermanHolderContinuityGradient1987 Theorem 1.1] to obtain $$\begin{aligned} u \in C^{1, 0}({\overline}\Omega \times [0, {T_{\max}})) \cap C^{3, 1}({\overline}\Omega \times (0, {T_{\max}})). 
\end{aligned}$$ Hence, fixing $T \in (0, {T_{\max}})$ and letting $Q_T {\coloneqq}[0, R] \times [0, T]$, the function $z {\coloneqq}u_r|_{Q_T}$ belongs to $C^{0}(Q_T)$ as well as to $C^{2, 1}([0, R] \times (0, T))$ and satisfies, due to $u_t = u_{rr} + \frac1r u_r - u_r v_r - u \left(\frac{m(t)}{|\Omega|} - u\right) + \kappa(r) u - \mu(r) u^p$ in $Q_T$, $$\begin{aligned} z_t = z_{rr} + a(r, t) z_r + b(r, t) z + c(r, t) \quad \text{in $Q_T$}, \end{aligned}$$ wherein $$\begin{aligned} a(r, t) &{\coloneqq}\frac1r - v_r(r, t), \\ b(r, t) &{\coloneqq}-\frac1{r^2} - v_{rr}(r, t) - \frac{m(t)}{|\Omega|} + 2u(r, t) + \kappa(r) - p\mu(r) u^{p-1} \quad \text{and} \\ c(r, t) &{\coloneqq}\kappa'(r) u - \mu'(r) u^p(r, t) \end{aligned}$$ for $(r, t) \in Q_T$. As $\kappa' \le 0$ and $\mu' \ge 0$ by , $u_r(0, \cdot) = 0$ due to radial symmetry, $u_r(R, \cdot) \le 0$ since $u {>}0$ in $(0, R)$ and $u_{0r} \le 0$ because of we have $$\begin{aligned} \begin{cases} z_t \le z_{rr} + a(r, t) z_r + b(r, t) z & \text{in $(0, R) \times (0, T)$}, \\ z \le 0, & \text{on $\{0, R\} \times (0, T)$}, \\ z(\cdot, 0) \le 0, & \text{in $(0, R)$}. \end{cases} \end{aligned}$$ Lemma \[lm:v\_rr\_le\_u\] warrants that $-v_{rr} \le u$ in $Q_T$, hence $\sup_{(r, t) \in Q_T} b(x, t) \le 3u(r, t) + \kappa(r) {<}\infty$, such that the comparison principle [@QuittnerSoupletSuperlinearParabolicProblems2007 Proposition 52.4] becomes applicable and yields $z \le 0$. The statement follows then upon taking $T {\nearrow}{T_{\max}}$. \[lm:ws\_bdd\] We have $$\begin{aligned} w_s(s, t) \le \frac{m_0 {{\mathrm{e}}}^{\kappa_1 t}}{2 \pi s} \end{aligned}$$ for all $s \in (0, R^2), t \in (0, {T_{\max}})$. Let $r \in (0, R)$ and $t \in (0, {T_{\max}})$. On the one hand we have by Lemma \[lm:ur\_le\_0\] $$\begin{aligned} {\int_\Omega}u(\cdot, t) = 2\pi \int_0^R \rho u(\rho, t) {\,\mathrm{d}\rho}\ge 2\pi \int_0^r \rho u(r, t) {\,\mathrm{d}\rho}= \pi r^2 u(r, t) \end{aligned}$$ and one the other hand by Lemma \[lm:mass\] $$\begin{aligned} {\int_\Omega}u(\cdot, t) \le m_0 {{\mathrm{e}}}^{\kappa_1 t} \end{aligned}$$ such that $$\begin{aligned} u(r, t) \le \frac{m_0 {{\mathrm{e}}}^{\kappa_1 t}}{\pi r^2}. \end{aligned}$$ The statement follows due to $w_s(s, t) = \frac12 u(s^\frac12, t)$ for $s \in (0, R^2)$ and $t \in (0, {T_{\max}})$. The exponent $-1$ in Lemma \[lm:ws\_bdd\] is essentially optimal. Indeed, if we were able to show $w_s(s, t) \le f(t) s^{-q}$ for some $f \in C^0([0, \infty))$ and $q {<}1$ and all $(s, t) \in (0, R^2) \times (0, {T_{\max}})$, then also $u(r, t) \le 2 f(t) r^{-2q}$ for $r \in (0, R)$ and $t \in (0, {T_{\max}})$. However, this would yield $\sup_{t \in (0, T)} \|u(\cdot, t)\|_{{{L^{\lambda}(\Omega)}}} {<}\infty$ for some $\lambda {>}1$ and all finite $T \in (0, {T_{\max}}]$, which in turn would rapidly imply ${T_{\max}}= \infty$, confer the proof of Proposition \[prop:global\_ex\] below. With these preparations at hand we are finally able to deal with the fourth integral on the right-hand side of . \[lm:i4\] Let $\beta {>}-1$, $s_0 \in (0, \min\{1, R^2\})$ and suppose that $\mu$ satisfies for some $\mu_1 {>}0$ and $\alpha \ge 2(p-1)$. Then $$\begin{aligned} 2^{p-1} {\int_0^{s_0}}\int_0^s (s_0-s)^\beta \mu(\sqrt{\sigma}) w_s^p(\sigma, t) {\,\mathrm{d}\sigma}{\,\mathrm{d}s}\le C \phi(t) \end{aligned}$$ for all $t \in (0, {\hat T_{\max}})$, where $C {\coloneqq}\left(\frac{m_0 {{\mathrm{e}}}^\kappa}{\pi}\right)^{p-1} \mu_1$ and ${\hat T_{\max}}{\coloneqq}\min\{1, {T_{\max}}\}$. Let $\alpha ' := \frac{\alpha}{2} - (p-1)$. 
Due to we see that $\alpha' \ge 0$, such that an application of Lemma \[lm:ws\_bdd\] and an integration by parts yield $$\begin{aligned} &{\mathrel{{\hphantom}{=}}}2^{p-1} {\int_0^{s_0}}(s_0-s)^\beta \int_0^s \mu(\sqrt{\sigma}) w_s^p(\sigma, t) {\,\mathrm{d}\sigma}{\,\mathrm{d}s}\\ &\le \left(\frac{2m_0 {{\mathrm{e}}}^\kappa}{2\pi}\right)^{p-1} \mu_1 {\int_0^{s_0}}(s_0-s)^\beta \int_0^s \sigma^{\alpha'} w_s(\sigma, t) {\,\mathrm{d}\sigma}{\,\mathrm{d}s}\\ &= C {\int_0^{s_0}}(s_0-s)^\beta \left[ - \alpha' \int_0^s \sigma^{\alpha'-1} w(\sigma, t) {\,\mathrm{d}\sigma}+ s^{\alpha'} w(s, t) \right] {\,\mathrm{d}s}\\ &\le C {\int_0^{s_0}}(s_0-s)^\beta w(s, t) = C \phi(t) \end{aligned}$$ for $t \in (0, {\hat T_{\max}})$. Conclusion. Proof of Theorem \[th:blow\_up\] -------------------------------------------- As it turns out, for any initial mass $m_0 {>}8\pi$ we are able to find a suitable initial datum $u_0$ with ${\int_\Omega}u_0 = m_0$ as well as sufficiently small $s_0$ and sufficiently large $\beta$ such that a combination of the estimates above makes Lemma \[lm:ode\_blow\_up\] applicable – implying that $\phi$ and hence $u$ must blow up in finite time. Let $m_0 {>}8\pi$ and $\mu_1 {>}0$. The function $$f: (0, m_0] \times [0, {T_{\max}}) \times (1, \infty) \times [0, 1) \times [0, 1) {\rightarrow}{\mathbb{R}}$$ defined by $$(\tilde m, \tilde T, \beta, \lambda, \eta) \mapsto (1-\eta) \beta (\beta+2) \cdot \frac{\tilde m^2}{4\pi^2(\beta+1)^2} (1-\lambda)^{2\beta + 2} \cdot \frac{\pi}{2 m_0 {{\mathrm{e}}}^{\kappa \tilde T}}$$ is continuous and satisfies $$\begin{aligned} \lim_{\beta {\nearrow}\infty} f(m_0, 0, \beta, 0, 0) = \frac{m_0}{8\pi}. \end{aligned}$$ Thus, due to our assumption that $m_0 {>}8\pi$ we may first choose $\beta \in (1, \infty)$ and then $\tilde m \in (0, m)$, $\tilde T \in (0, \min\{1, {T_{\max}}\})$, $\lambda \in (0, 1)$ and $\eta \in (0, 1)$ as well as ${\varepsilon}\in (0, 1)$ such that $$\label{eq:blow_up:f_ge_1} (1-{\varepsilon})^2 f(\tilde m, \tilde T, \beta, \lambda, \eta) \ge 1.$$ For $s_0 {>}0$ let $$\begin{aligned} c_1(s_0) &{\coloneqq}(1-\eta) \frac{\beta(\beta+2)}{s_0^{\beta+2}}, \\ c_2(s_0) &{\coloneqq}\left(m_0 {{\mathrm{e}}}^\kappa \pi^{-1}\right)^{p-1} \mu_1, \\ c_{3,1}(s_0) &{\coloneqq}\frac{m_0^2 {{\mathrm{e}}}^{2\kappa_1 t}}{2 \eta (\beta+1)(\beta+2) |\Omega|^2 } s_0^{\beta+2} \quad \text{and} \\ \phi_0(s_0) &{\coloneqq}\frac{\tilde m}{2 \pi (\beta+1)} ((1-\lambda) s_0)^{\beta+1}, \end{aligned}$$ then there exist $d_1, d_2, d_3 {>}0$ such that $$\begin{aligned} \frac{c_2(s_0)}{c_1(s_0) \phi_0(s_0)} = d_1 s_0, \quad \frac{c_{3,1}(s_0)}{c_1(s_0) \phi_0^2(s_0)} = d_2 s_0^2 \quad \text{and} \quad \frac{1}{c_1(s_0) \phi_0(s_0)} = d_3 s_0 \end{aligned}$$ for all $s_0 {>}0$. Hence we may choose $s_0 \in (0, \min\{1, R^2\})$ small enough such that $$\begin{aligned} \frac{{\varepsilon}}{3} \phi_0(s_0) \ge \frac{c_2(s_0)}{c_1(s_0)}, \quad \left( \frac{{\varepsilon}}{3} \phi_0(s_0) \right)^2 \ge \frac{c_{3, 2}(s_0)}{c_1(s_0)} \quad \text{and} \quad \frac{{\varepsilon}}{3} \phi_0(s_0) \ge \frac{2}{c_1(s_0) \tilde T}. \end{aligned}$$ Set also $$\begin{aligned} c_{3,2}(s_0) &{\coloneqq}\frac{2 s_0^\beta m_0 {{\mathrm{e}}}^{\kappa \tilde T}}{\pi}, \end{aligned}$$ then $$\begin{aligned} \left( (1-{\varepsilon}) \phi_0(s_0) \right)^2 \ge \frac{c_{3, 2}(s_0)}{c_1(s_0)} \end{aligned}$$ by . 
Suppose now that $\kappa, \mu \in C^1([0, R])$ comply with and and that $u_0$ satisfies and with $r_1 {\coloneqq}(\lambda s_0)^2$, but that the corresponding solution $(u, v)$ given by Lemma \[lm:local\_ex\] is global in time. Due to the lemmata above the function $\phi$ defined in Lemma \[lm:phi\_first\_ode\] would then fulfill $$\begin{aligned} \begin{cases} \phi'(t) \ge c_1 \phi^2(t) - c_2 \phi(t) - c_{3,1} - c_{3, 2}, \quad t \in (0, \tilde T), \\ \phi(0) \ge \phi_0, \end{cases} \end{aligned}$$ where we abbreviated $c_i {\coloneqq}c_i(s_0)$ and $\phi_0 {\coloneqq}\phi_0(s_0)$. However, as $$\begin{aligned} \phi_0 &= \frac{{\varepsilon}}{3} \phi_0 + \frac{{\varepsilon}}{3} \phi_0 + (1-{\varepsilon}) \phi_0 + \frac{{\varepsilon}}{3} \phi_0 \\ &\ge \frac{c_2}{c_1} + \sqrt{\frac{c_{3,1}}{c_1}} + \sqrt{\frac{c_{3,2}}{c_1}} + \frac{2}{c_1 \tilde T} \\ &\ge \frac{c_2 + \sqrt{c_1 (c_{3,1} + c_{3,2})}}{c_1} + \frac{2}{c_1 \tilde T}, \end{aligned}$$ where we have again set $\phi_0 {\coloneqq}\phi_0(s_0)$, Lemma \[lm:ode\_blow\_up\] would imply $\tilde T \le \frac12 \tilde T$, hence our assumption that ${T_{\max}}= \infty$ must be false. Finally, is a direct consequence of Lemma \[lm:local\_ex\]. Notes on global solvability =========================== Finally, we include short proofs for Proposition \[prop:critical\_mass\] and Proposition \[prop:global\_ex\]. This proof is based on a comparison principle for the scalar equation . A similar idea (with a similar supersolution) has been employed in [@TaoWinklerCriticalMassInfinitetime2017 Lemma 5.2]. Let $u_0 \in {{C^{0}({{\overline}\Omega})}}$ be nonnegative and radially symmetric with $m_0 {\coloneqq}{\int_\Omega}u_0 {<}8 \pi$ as well as $(u, v)$ and $w$ be as constructed in Lemma \[lm:local\_ex\] and defined in Lemma \[lm:pde\_w\], respectively. Then we may choose $a \in (\frac{m_0}{2\pi}, 4)$ and as $w(\cdot, 0) \le \frac{m_0}{2\pi}$ in $(0, R^2)$ and $$\begin{aligned} \lim_{b {\searrow}0} \sup_{s \in (0, R^2)} \left| \frac{a s}{b + s} - a \right| = 0 \end{aligned}$$ we may also choose $b {>}0$ such that $$\begin{aligned} {\overline}w: [0, R^2] \times [0, \infty) {\rightarrow}{\mathbb{R}}, \quad (s, t) \mapsto \frac{a s}{b + s} \end{aligned}$$ fulfills ${\overline}w(\cdot, 0) \ge w(\cdot, 0)$ in $(0, R^2)$. Furthermore, by a direct computation $$\begin{aligned} {\overline}w_s(s, t) = \frac{ab}{(b + s)^2} \quad \text{and} \quad {\overline}w_{ss}(s, t) = -\frac{2ab}{(b + s)^3} \end{aligned}$$ for all $(s, t) \in (0, R^2) \times (0, \infty)$, hence $$\begin{aligned} &{\mathrel{{\hphantom}{=}}}{\overline}w_t(s, t) - 4s {\overline}w_{ss}(s, t) - 2 {\overline}w(s, t) {\overline}w_s(s, t) + \frac{m(t)}{|\Omega|} s {\overline}w_s(s, t) + 2^{p-1} \int_0^s 2^{p-1} \mu(\sqrt \sigma) {\overline}w_s^p(\sigma, t) {\,\mathrm{d}\sigma}\\ &\ge \frac{8 a b s}{(b + s)^3} - \frac{2 a^2 b s}{(b + s)^3} \ge 0 \end{aligned}$$ for all $(s, t) \in (0, R^2) \times (0, \infty)$ because of $a \le 4$. Therefore, ${\overline}w$ is a supersolution of , fulfills ${\overline}w(0, \cdot) \ge 0$ and ${\overline}w(R^2, \cdot) \ge \frac{m_0}{2\pi}$ as well as ${\overline}w(\cdot, 0) \ge w(\cdot, 0)$ such that the comparison principle warrants ${\overline}w \ge w$ in $(0, R^2) \times (0, {T_{\max}})$. 
As $w(0, t) = {\overline}w(0, t) = 0$ for all $t \in (0, {T_{\max}})$ this implies $$\begin{aligned} \limsup_{t {\nearrow}{T_{\max}}} w_s(0, t) = \limsup_{t {\nearrow}{T_{\max}}} \lim_{h {\searrow}0} \frac{w(h, t) - w(0, t)}{h} \le \limsup_{t {\nearrow}{T_{\max}}} \lim_{h {\searrow}0} \frac{{\overline}w(h, t) - {\overline}w(0, t)}{h} {<}\infty. \end{aligned}$$ Due to non-degeneracy of outside of the origin and boundedness of $w$ parabolic regularity ensures $\limsup_{t {\nearrow}{T_{\max}}} \|w_s(\cdot, t)\|_{L^\infty((0, R^2))} {<}\infty$ implying ${T_{\max}}= \infty$ by Lemma \[lm:local\_ex\]. Let $p {>}2$, $\alpha {<}p - 2$, $\mu_1 {>}0$, $\kappa, \mu \in C^1([0, R])$ be such that holds, $0 \le u_0 \in {{C^{0}({{\overline}\Omega})}}$ and denote the corresponding solution given by Lemma \[lm:local\_ex\] by $(u, v)$. By our assumption on $\alpha$ there exists $q \in (1, \min\{\frac{2p - 4 - \alpha}{\alpha}, 2\})$. Testing the first equation with $u^{q-1}$ gives $$\begin{aligned} \frac1q {\frac{\mathrm{d}}{\mathrm{d}t}}{\int_\Omega}u^q = - \frac{4(q-1)}{q^2} {\int_\Omega}|\nabla u^\frac{q}{2}|^2 + \frac{q-1}{q} {\int_\Omega}\nabla u^q \cdot \nabla v + {\int_\Omega}\kappa u^q - {\int_\Omega}\mu u^{p+q-1} \end{aligned}$$ in $(0, {T_{\max}})$, wherein $$\begin{aligned} {\int_\Omega}\nabla u^q \cdot \nabla v &= {\int_\Omega}u^{q+1} - {\int_\Omega}u^q \frac{m(t)}{|\Omega|} \le \frac{q}{\mu_1 (q-1)} {\int_\Omega}|x|^\alpha u^{p+q-1} + c_1 \int_0^R r^{1 - \alpha \frac{q+1}{p-2}} {\,\mathrm{d}r}\end{aligned}$$ in $(0, {T_{\max}})$ for some $c_1 {>}0$ by Young’s inequality (with exponents $\frac{p+q-1}{q+1}$ and $\frac{p+q-1}{p-2}$). By the definition of $q$ we have $1 - \alpha \frac{q+1}{p-2} {>}-1$, hence the function $y: [0, {T_{\max}}) {\rightarrow}{\mathbb{R}}, t \mapsto \frac1q {\int_\Omega}u^q$ satisfies $y' \le c_2$ in $(0, {T_{\max}})$ for some $c_2 {>}0$. Assuming for the sake of contradiction ${T_{\max}}{<}\infty$, this implies $\sup_{t \in (0, {T_{\max}})} {\int_\Omega}u^q(\cdot, t) {<}\infty$, hence by elliptic regularity theory (cf. [@FriedmanPartialDifferentialEquations1976 Theorem 19.1]) $\sup_{t \in (0, {T_{\max}})} \|v(\cdot, t)\|_{{{W^{2, q}(\Omega)}}}$ is finite as well. Therefore, the Sobolev embedding theorem warrants finiteness of $\sup_{t \in (0, {T_{\max}})} {\int_\Omega}|\nabla v(\cdot, t)|^\frac{2q}{2-q}$. Finally, as $\frac{2q}{2-q} {>}2$, a semi-group argument as in [@FuestBoundednessEnforcedMildly2019 Lemma 4.1] shows boundedness of $\{u(\cdot, t): t \in (0, {T_{\max}})\}$ in ${{L^{\infty}(\Omega)}}$ – contradicting Lemma \[lm:local\_ex\]. Acknowledgments {#acknowledgments .unnumbered} =============== The author is partially supported by the German Academic Scholarship Foundation and by the Deutsche Forschungsgemeinschaft within the project *Emergence of structures and advantages in cross-diffusion systems*, project number 411007140. [10]{} N. Bellomo, A. Bellouquid, Y. Tao, and M. Winkler. Toward a mathematical theory of KellerSegel models of pattern formation in biological tissues. , 25(09):1663–1763, 2015. [](http://dx.doi.org/10.1142/S021820251550044X). P. Biler. Local and global solvability of some systems modelling chemotaxis. , 8, 1998. M. A. J. Chaplain and G. Lolas. Mathematical modelling of cancer cell invasion of tissue: the role of the urokinase plasminogen activation system. , 15(11):1685–1734, 2005. [](http://dx.doi.org/10.1142/S0218202505000947). T. Cieślak and P. Lauren[ç]{}ot. 
Finite time blow-up for a one-dimensional quasilinear parabolicparabolic chemotaxis system. , 27(1):437–446, 2010. [](http://dx.doi.org/10.1016/j.anihpc.2009.11.016). T. Cieślak and M. Winkler. Finite-time blow-up in a quasilinear system of chemotaxis. , 21(5):1057–1076, 2008. [](http://dx.doi.org/10.1088/0951-7715/21/5/009). J. Dockery, V. Hutson, K. Mischaikow, and M. Pernarowski. The evolution of slow dispersal rates: a reaction diffusion model. , 37(1):61–83, 1998. [](http://dx.doi.org/10.1007/s002850050120). A. Friedman. . , Huntington, N.Y, 1976. M. Fuest. Boundedness enforced by mildly saturated conversion in a chemotaxis-MayNowak model for virus infection. , 472(2):1729–1740, 2019. [](http://dx.doi.org/10.1016/j.jmaa.2018.12.020). X. He and W.-M. Ni. Global dynamics of the LotkaVolterra competition-diffusion system: diffusion and spatial heterogeneity I. , 69(5):981–1014, 2016. [](http://dx.doi.org/10.1002/cpa.21596). T. Hillen and K. J. Painter. A user’s guide to PDE models for chemotaxis. , 58(1-2):183–217, 2009. [](http://dx.doi.org/10.1007/s00285-008-0201-3). D. Horstmann and G. Wang. Blow-up in a chemotaxis model without symmetry assumptions. , 12(02), 2001. [](http://dx.doi.org/10.1017/S0956792501004363). W. Jäger and S. Luckhaus. On explosions of solutions to a system of partial differential equations modelling chemotaxis. , 329(2):819 – 824, 1992. [](http://dx.doi.org/10.2307/2153966). K. Kang and A. Stevens. Blowup and global solutions in a chemotaxisgrowth system. , 135:57–72, 2016. [](http://dx.doi.org/10.1016/j.na.2016.01.017). E. F. Keller and L. A. Segel. Traveling bands of chemotactic bacteria: A theoretical analysis. , 30(2):235–248, 1971. [](http://dx.doi.org/10.1016/0022-5193(71)90051-8). J. Lankeit. Chemotaxis can prevent thresholds on population density. , 20(5):1499–1527, 2015. [](http://dx.doi.org/10.3934/dcdsb.2015.20.1499). J. Lankeit. Eventual smoothness and asymptotics in a three-dimensional chemotaxis system with logistic source. , 258(4):1158–1191, 2015. [](http://dx.doi.org/10.1016/j.jde.2014.10.016). G. M. Lieberman. Hölder continuity of the gradient of solutions of uniformly parabolic equations with conormal boundary conditions. , 148(1):77–99, 1987. [](http://dx.doi.org/10.1007/BF01774284). Y. Lou. On the effects of migration and spatial heterogeneity on single and multiple species. , 223(2):400–426, 2006. [](http://dx.doi.org/10.1016/j.jde.2005.05.010). Y. Lou, X.-Q. Zhao, and P. Zhou. Global dynamics of a LotkaVolterra competitiondiffusionadvection system in heterogeneous environments. , 2018. [](http://dx.doi.org/10.1016/j.matpur.2018.06.010). T. Nagai. Blowup of nonradial solutions to parabolic-elliptic systems modeling chemotaxis in two-dimensional domains. , 6(1):37–55, 2001. [](http://dx.doi.org/10.1155/S1025583401000042). K. Osaki, T. Tsujikawa, A. Yagi, and M. Mimura. Exponential attractor for a chemotaxis-growth system of equations. , 51(1):119–144, 2002. [](http://dx.doi.org/10.1016/S0362-546X(01)00815-X). P. Quittner and P. Souplet. . Birkhäuser Advanced Texts / Basler Lehrbücher. [Birkhäuser Basel]{}, Basel, 2007. [](http://dx.doi.org/10.1007/978-3-7643-8442-5). R. B. Salako and W. Shen. Parabolic-elliptic chemotaxis model with space-time dependent logistic sources on $\mathbb{R}^N$. I. Persistence and asymptotic spreading. , 28(11):2237–2273, 2018. [](http://dx.doi.org/10.1142/S0218202518400146). R. B. Salako and W. Shen. Parabolic-elliptic chemotaxis model with space-time dependent logistic sources on $\mathbb{R}^N$. II. 
Existence, uniqueness, and stability of strictly positive entire solutions. , 464(1):883–910, 2018. [](http://dx.doi.org/10.1016/j.jmaa.2018.04.034). R. B. Salako and W. Shen. Parabolic-elliptic chemotaxis model with space-time dependent logistic sources on $\mathbb{R}^N$. III. Transition fronts. preprint, https://arxiv.org/abs/1811.01525, 2018. T. Senba and T. Suzuki. Parabolic system of chemotaxis: Blowup in a finite and the infinite time. , 8(2):349–368, 2001. [](http://dx.doi.org/10.4310/MAA.2001.v8.n2.a9). N. Shigesada, K. Kawasaki, and E. Teramoto. Spatial segregation of interacting species. , 79(1):83–99, 1979. [](http://dx.doi.org/10.1016/0022-5193(79)90258-3). Y. Tao and M. Winkler. Critical mass for infinite-time aggregation in a chemotaxis model with indirect signal production. , 19(12):3641–3678, 2017. [](http://dx.doi.org/10.4171/JEMS/749). J. I. Tello and M. Winkler. A chemotaxis system with logistic source. , 32(6):849–877, 2007. [](http://dx.doi.org/10.1080/03605300701319003). M. Winkler. Aggregation vs. global diffusive behavior in the higher-dimensional KellerSegel model. , 248(12):2889–2905, 2010. [](http://dx.doi.org/10.1016/j.jde.2010.02.008). M. Winkler. Boundedness in the Higher-Dimensional Parabolic-Parabolic Chemotaxis System with Logistic Source. , 35(8):1516–1537, 2010. [](http://dx.doi.org/10.1080/03605300903473426). M. Winkler. Blow-up in a higher-dimensional chemotaxis system despite logistic growth restriction. , 384(2):261–272, 2011. [](http://dx.doi.org/10.1016/j.jmaa.2011.05.057). M. Winkler. Finite-time blow-up in the higher-dimensional parabolicparabolic KellerSegel system. , 100(5):748–767, 2013. [](http://dx.doi.org/10.1016/j.matpur.2013.01.020). M. Winkler. How far can chemotactic cross-diffusion enforce exceeding carrying capacities? , 24(5):809–855, 2014. [](http://dx.doi.org/10.1007/s00332-014-9205-x). M. Winkler. A critical blow-up exponent in a chemotaxis system with nonlinear signal production. , 31(5):2031–2056, 2018. [](http://dx.doi.org/10.1088/1361-6544/aaaa0e). M. Winkler. Finite-time blow-up in low-dimensional KellerSegel systems with logistic-type superlinear degradation. , 69(2), 2018. [](http://dx.doi.org/10.1007/s00033-018-0935-8). M. Winkler. How unstable is spatial homogeneity in Keller-Segel systems? A new critical mass phenomenon in two- and higher-dimensional parabolic-elliptic cases. , 2018. [](http://dx.doi.org/10.1007/s00208-018-1722-8). T. Xiang. Sub-logistic source can prevent blow-up in the 2D minimal Keller-Segel chemotaxis system. , 59(8):081502, 2018. [](http://dx.doi.org/10.1063/1.5018861). [^1]: fuestm@math.uni-paderborn.de
{ "pile_set_name": "ArXiv" }
--- abstract: 'Expressing a matrix as the sum of a low-rank matrix plus a sparse matrix is a flexible model capturing global and local features in data. This model is the foundation of robust principal component analysis [@Candes2011robust; @Chandrasekaran2009ranksparsity], and is popularized by dynamic-foreground/static-background separation [@Bouwmans2016decomposition] amongst other applications. Compressed sensing, matrix completion, and their variants [@Eldar2012compressed; @Foucart2013a] have established that data satisfying low complexity models can be efficiently measured and recovered from a number of measurements proportional to the model complexity rather than the ambient dimension. This manuscript develops similar guarantees showing that $m\times n$ matrices that can be expressed as the sum of a rank-$r$ matrix and an $s$-sparse matrix can be recovered by computationally tractable methods from $\mathcal{O}\left((r(m+n-r)+s)\log(mn/s)\right)$ linear measurements. More specifically, we establish that the restricted isometry constants for the aforementioned matrices remain bounded independent of problem size provided $p/mn$, $s/p$, and $r(m+n-r)/p$ remain fixed. Additionally, we show that semidefinite programming and two hard threshold gradient descent algorithms, NIHT and NAHT, converge to the measured matrix provided the measurement operator’s RICs are sufficiently small. Numerical experiments illustrating these results are shown for synthetic problems, dynamic-foreground/static-background separation, and multispectral imaging.' address: - 'Mathematical Institute, University of Oxford, Oxford OX2 6GG, UK' - 'The Alan Turing Institute, The British Library, London NW1 2DB, UK' author: - Jared Tanner - Simon Vary bibliography: - 'library.bib' title: 'Compressed sensing of low-rank plus sparse matrices' --- matrix sensing, low-rank plus sparse matrix, restricted isometry property, non-convex methods, robust PCA 15A29, 41A29, 62H25, 65F10, 65J20, 68Q25, 90C22, 90C26 Acknowledgement {#acknowledgement .unnumbered} =============== We would like to thank Robert A. Lamb and David Humphreys for useful discussions around the applications of the low-rank plus sparse model to multispectral imaging.
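The hard threshold gradient descent methods referred to in the abstract combine a gradient step on the measurement residual with a projection onto the set of rank-$r$ plus $s$-sparse matrices. A minimal sketch of such an iteration is given below; the dense Gaussian measurement matrix, the greedy SVD-then-threshold projection, and the fixed step size are assumptions made for illustration, and this is not the NIHT or NAHT algorithm analysed in the manuscript.

```python
import numpy as np

def project_low_rank_plus_sparse(Z, r, s):
    """Greedy two-step projection: best rank-r approximation by truncated SVD,
    then keep the s largest-magnitude entries of the residual as the sparse part."""
    U, sig, Vt = np.linalg.svd(Z, full_matrices=False)
    L = (U[:, :r] * sig[:r]) @ Vt[:r, :]
    R = Z - L
    S = np.zeros_like(R)
    idx = np.unravel_index(np.argsort(np.abs(R), axis=None)[-s:], R.shape)
    S[idx] = R[idx]
    return L, S

def projected_gradient_recover(A, y, shape, r, s, iters=300):
    """Plain projected-gradient sketch for y = A @ vec(X); only the generic
    iteration that thresholding schemes of this type refine."""
    m, n = shape
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # conservative fixed step size
    X = np.zeros((m, n))
    for _ in range(iters):
        grad = A.T @ (y - A @ X.ravel())          # gradient of 0.5*||y - A vec(X)||^2
        L, S = project_low_rank_plus_sparse(X + step * grad.reshape(m, n), r, s)
        X = L + S
    return X

# Synthetic test: a random rank-r plus s-sparse matrix measured by a Gaussian map.
rng = np.random.default_rng(0)
m, n, r, s, p = 20, 20, 2, 10, 300
X_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
X_true.flat[rng.choice(m * n, s, replace=False)] += 5.0
A = rng.standard_normal((p, m * n)) / np.sqrt(p)
y = A @ X_true.ravel()
X_hat = projected_gradient_recover(A, y, (m, n), r, s)
# relative reconstruction error (diagnostic only; convergence is not guaranteed
# by this simplified sketch)
print(np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))
```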
{ "pile_set_name": "ArXiv" }
--- abstract: 'We calculate the spin-polarized electronic transport through a molecular bilayer spin valve from first principles, and establish the link between the magnetoresistance and the spin-dependent interactions at the metal-molecule interfaces. The magnetoresistance of a Fe$|$bilayer-C$_{70}|$Fe spin valve attains a high value of 70% in the linear response regime, but it drops sharply as a function of the applied bias. The current polarization has a value of 80% in linear response, and also decreases as a function of bias. Both these trends can be modelled in terms of prominent spin-dependent Fe$|$C$_{70}$ interface states close to the Fermi level, unfolding the potential of spinterface science to control and optimize spin currents.' author: - 'Deniz Çak[i]{}r' - 'Diana M. Otálvaro' - Geert Brocks title: 'From spin-polarized interfaces to giant magnetoresistance in organic spin valves' --- Introduction ============ Carbon-based materials have moved into the focus of spintronics research, because the weak spin-orbit coupling and hyperfine interactions in carbon-based semiconductors generate the prospect of stable spin currents and robust spin operations [@Tombros07; @Dediu09]. Giant magnetoresistance (MR) effects have been reported in vertical spin valves with layers of organic molecules such as tris(8-hydroxy-quinolinato)-aluminium (Alq$_3$) or fullerenes such as C$_{60}$ [@Xiong04; @Sun10; @Barraud10; @Schulz11; @Gobbi11; @Tran12; @Zhang13]. Barraud *et al.* [@Barraud10] have argued that spin-dependent interactions at the interfaces between molecular materials and ferromagnetic electrodes play a pivotal role in the magneto-transport properties of these molecular semiconductor devices. This has prompted the suggestion that highly spin-polarized currents in spintronic devices may be obtained by exploiting such interface interactions, which has been dubbed “spinterface science” [@Sanvito10]. Establishing a direct link between interface properties and spin-dependent transport would be a significant step forward in understanding organic spin valves. Photoemission spectroscopy, scanning tunnelling microscopy (STM), and first-principles calculations enable a detailed analysis of the spin-dependent electronic properties of metal-organic interfaces [@Atodiresei10; @Brede10; @Javaid10; @Tran11; @Tran13; @Djeghloul13], but a direct connection between these properties and the magneto-transport in organic spin valves is lacking so far. In the field of single molecule electronics, where MR effects have been demonstrated with STM [@Brede10; @Schmaus11; @Miyamachi12], first-principles transport calculations have provided detailed descriptions [@Rocha05; @Ning08; @Koleini12]. Two metal electrodes interacting through a single molecule are however generally not a good model for organic devices comprising molecular multilayers. In this paper we calculate the spin-dependent current through a prototype spin valve, which consists of a $\sim 2$ nm thick molecular bilayer sandwiched between two ferromagnetic metal electrodes, using a first-principles non-equilibrium Green’s function technique. We devise a model where the transmission through a molecular multilayer is factorized, based upon partitioning the system into right and left interface parts, each consisting of a molecular monolayer adsorbed on a metal surface. This allows for an analysis of the MR and the current polarization in terms of the spin-polarized interface states, both in linear response and at finite bias. 
![(a) Side view of the Fe(001)$|$bilayer-C$_{70}|$Fe(001) structure. (b) Top view with the Fe electrodes removed \[.\][]{data-label="fig:structures"}](fig1){width="8.5cm"} We study Fe$|$bilayer-C$_{70}|$Fe spin valves, cf. Fig. \[fig:structures\]. The bcc-Fe(001) surface is a well-established substrate for organic spintronics that allows for a controlled growth of fullerene layers [@Tran11; @Tran13]. Fullerene molecules are particularly interesting candidates for applications in spintronics due to the absence of hydrogen atoms that lead to spin de-phasing via hyperfine interactions. In particular, we find that adsorption of C$_{70}$ on Fe(001) results in a favourable interface electronic structure, which gives a large current polarization of 78% and a large MR of 67% [@footnote:MR]. Theory {#sec:theory} ====== We start from the Landauer expression for the current at finite bias $V$ and zero temperature [@Landauer57] $$I^\sigma=\frac{e}{h}\int_{E_{F}-\frac{1}{2}eV}^{E_{F}+\frac{1}{2}eV}T^{\sigma}(E,V) dE,\label{eq:1}$$ with $\sigma=\uparrow$$(\downarrow)$ labelling the majority (minority) spin states, and $T^{\sigma}=\mathrm{Tr}\left[\mathbf{\boldsymbol{\Gamma}}_\mathrm{R}^{\sigma}\mathbf{G}_\mathrm{RL}^{\sigma,\mathrm{r}}\boldsymbol{\Gamma}_\mathrm{L}^{\sigma}\boldsymbol{G}_\mathrm{LR}^{\sigma,\mathrm{a}}\right]$ the transmission probability expressed in non-equilibrium Green’s functions (NEGF) [@Transiesta02]. $\mathbf{G}_\mathrm{RL}^{\sigma,\mathrm{r(a)}}$ is the retarded (advanced) Green’s function matrix block connecting the right and left leads via the scattering region, and $\mathbf{\boldsymbol{\Gamma}}^\sigma_\mathrm{R(L)}=-2\mathrm{Im}\boldsymbol{\Sigma}^\sigma_\mathrm{R(L)}$, with $\boldsymbol{\Sigma}^\sigma_\mathrm{R(L)}$ the self-energy matrix connecting the scattering region to the right (left) lead [@Transiesta02; @Khomyakov05]. Partitioning the Hamiltonian of the scattering region into a left and a right part, one can write[@Mingo96; @Mathon97] $$\mathbf{G}^\sigma_\mathrm{RL} = \mathbf{g}^\sigma_\mathrm{R}\mathbf{H}^\sigma_\mathrm{RL}\left(\mathbf{I}_\mathrm{L}-\mathbf{g}^\sigma_\mathrm{L}\mathbf{H}^\sigma_\mathrm{LR}\mathbf{g}^\sigma_\mathrm{R}\mathbf{H}^\sigma_\mathrm{RL}\right)^{-1}\mathbf{g}^\sigma_\mathrm{L},\label{eq:4}$$ where $\mathbf{g}^\sigma_\mathrm{R(L)}$ is the Green’s function matrix of the right (left) part uncoupled from the left (right) part, and $\mathbf{H}^\sigma_\mathrm{RL}=\left(\mathbf{H}^\sigma_\mathrm{LR}\right)^{\dagger}$ is the Hamilton matrix block that couples the two parts. Neglecting multiple internal reflections, one can approximate $\mathbf{G}^\sigma_\mathrm{RL}\approx\mathbf{g}^\sigma_\mathrm{R}\mathbf{H}^\sigma_\mathrm{RL}\mathbf{g}^\sigma_\mathrm{L}$. 
From this approximation and the relation $\mathbf{g}_\mathrm{R(L)}^{\sigma,a}\mathbf{\boldsymbol{\Gamma}}^\sigma_\mathrm{R(L)}\mathbf{g}_\mathrm{R(L)}^{\sigma,r}=2\pi\mathbf{n}^\sigma_\mathrm{R(L)}$, with $\mathbf{\boldsymbol{n}}^\sigma_\mathrm{R(L)}=-\pi^{-1}\mathrm{Im}\mathbf{g}_\mathrm{R(L)}^{\sigma,\mathrm{r}}$ the spectral density matrix of the right (left) part, it then follows[@Mingo96; @Meunier98] $$T^{\sigma}\approx\mathrm{4\pi^{2}Tr}\left[\mathbf{\boldsymbol{n}}_\mathrm{R}^{\sigma}\mathbf{H}_\mathrm{RL}^{\sigma}\boldsymbol{n}_\mathrm{L}^{\sigma}\mathbf{H}_\mathrm{LR}^{\sigma}\right].$$ In a representation where the spectral density matrix is diagonal, one of the matrix elements is much larger than the other ones, if a single molecular state is dominant (depending on the energy, the HOMO or LUMO, for instance). Setting all but one matrix element to zero in the density matrices of the left and the right parts, the transmission can be approximated by $T^{\sigma}\approx4\pi^{2}|H^\sigma|^{2}n_\mathrm{R}^{\sigma}n_\mathrm{L}^{\sigma}$, with $n^\sigma_\mathrm{R(L)}$ the projected density of states (PDOS), i.e., projected on the molecules at the right (left) electrode. Using this expression in Eq. (\[eq:1\]) in linear response ($V\rightarrow0$) leads to the Jullière expression for the MR [@Julliere75]. In the original Jullière model, bulk DOSs of the ferromagnetic electrodes are used to calculate the MR. It is more appropriate to use interface DOSs, but the local DOS in a metal$|$insulator$|$metal junction gradually changes from the metal into the insulator, making it difficult to pinpoint an exact interface DOS [@Moodera99]. For a metal-molecule interface the PDOS $n^\sigma_\mathrm{R(L)}$ provides a unique interface DOS. Expressing the transmission $T^{\sigma}$ in terms of a product of PDOSs of the right and left interface, means that the transmission through an asymmetric system, where right and left interfaces are different from one another, can be approximated by a geometrical average $T^{\sigma}=\sqrt{T_\mathrm{R}^{\sigma}}\sqrt{T_\mathrm{L}^{\sigma}}$ [@Belashchenko04; @Xu06]. Here $T_\mathrm{R(L)}^{\sigma}$ is the transmission through a symmetric system with identical right and left interfaces, i.e., characterized by the same PDOS, so $\sqrt{T_\mathrm{R(L)}^{\sigma}} \propto n^\sigma_\mathrm{R(L)}$. If in addition we assume that the PDOSs are not affected by the bias $V$ except for a rigid shift, then similar factorization arguments lead to the expressions $$T_\mathrm{P}^{\sigma}\left(E,V\right)=\sqrt{T_\mathrm{P}^{\sigma}\left(E-\frac{eV}{2},0\right)}\sqrt{T_\mathrm{P}^{\sigma}\left(E+\frac{eV}{2},0\right)},\label{eq:11}$$ $$T_\mathrm{AP}^{\sigma}\left(E,V\right)=\sqrt{T_\mathrm{P}^{\sigma}\left(E-\frac{eV}{2},0\right)}\sqrt{T_\mathrm{P}^{-\sigma}\left(E+\frac{eV}{2},0\right)},\label{eq:10}$$ where P (AP) describes the situation with the magnetizations of the two ferromagnetic electrodes parallel (anti-parallel). With these approximations one can construct the P transmission spectrum at finite bias, or the AP transmission at any bias, starting from the P spectrum at zero bias, which greatly facilitates the interpretation of the MR effect and of the $I$-$V$ curves. Computational details. ====================== We optimize the structure of the Fe(001)$|$C$_{70}$ interface, using density functional theory (DFT) at the spin-polarized generalized gradient approximation (GGA/PBE) level, as implemented in VASP[@vasp-1; @vasp-2]. The same computational parameters are used as in Ref. . 
The interface is modeled using a 4$\times$4 Fe(001) surface unit cell (cell parameter 11.32 Å), containing one C$_{70}$ molecule. The distance between nearest neighbor molecules is then slightly larger than the 10-11 Å observed in the C$_{70}$ molecular crystal.[@Verheijen:cp92] A slab of seven atomic layers represents the Fe(001) substrate. The molecules and the uppermost three Fe atomic layers are relaxed. A structure for the bilayer junction, Fe$|$C$_{70}$-C$_{70}|$Fe, is generated by mirroring the optimized Fe(001)$|$C$_{70}$ structure, rotating the mirror image by 90$^\mathrm{o}$, and translating it in plane by half a cell, see Fig. \[fig:structures\]. The spacing between the C$_{70}$ layers is such that the shortest intermolecular C–C distance is 3.2 Å, which is a typical value for close-packed fullerenes or carbon nanotubes. Electronic transport in the bilayer junction is calculated using the self-consistent NEGF technique as implemented in TranSIESTA [@siesta; @Transiesta02]. Single-$\zeta$ and double-$\zeta$ (plus polarization) numerical orbital basis sets are used for Fe and C, respectively. We employ the GGA/PBE functional and norm-conserving pseudo-potentials [@tm; @footnote:SIESTA]. A 6$\times$6 in-plane $k$-point mesh is adequate to obtain sufficiently accurate transport results. For instance, the total conductance at small bias is then converged on a scale of 2%. Results ======= Fe$|$C$_{70}$ interface ----------------------- From a number of possible adsorption geometries, we have identified a structure as most stable where the long axis of the C$_{70}$ molecules is parallel to the surface. Two neighbouring C$_{70}$ hexagons are nearly parallel to the surface and the edge shared by these two hexagons is on top of a surface Fe atom. The shortest Fe–C bonds are in the range 2.0-2.3 Å, which is indicative of a strong (chemisorption) interaction between C$_{70}$ and the Fe substrate, as confirmed by the calculated binding energy of 3.0 eV. Nevertheless, the geometry of the C$_{70}$ molecule is only mildly affected by the adsorption. ![(a) P(rojected)DOS $n^\uparrow$ of majority (blue) and $n^\downarrow$ of minority (red) spin states of the Fe(001)$|$C$_{70}$ interface, summed over the C$_{70}$ atoms, as calculated with VASP. Gaussian smearing with a smearing parameter of 0.05 eV is applied. The zero of energy is at the Fermi level $E_\mathrm{F}$. The black lines indicate the DFT energy levels of the isolated C$_{70}$ molecule. (b) M(agnetization)DOS $\Delta n=n^\uparrow-n^\downarrow$, in states/eV. The inset to (a) shows the local MDOS at the Fermi level, illustrating its minority spin character. []{data-label="fig:DOS"}](fig2){width="8.5cm"} Figure \[fig:DOS\] shows the PDOS summed over all atoms of the molecule. The DFT levels of an isolated C$_{70}$ molecule are given for comparison, aligned with the PDOS through the lowest $\sigma$-levels, which are unperturbed by adsorption. In contrast, adsorption significantly broadens and shifts the molecular $\pi$-states, due to hybridization with the substrate. Despite the large perturbation, it is still possible to assign molecular labels to the peaks in the PDOS. The peaks at $-1.2$ eV and $+0.6$ eV with respect to $E_\mathrm{F}$ have molecular HOMO and LUMO character, respectively, and the peak at $E_\mathrm{F}$ in the minority spin states also has LUMO character. The spin-polarized states of the substrate interact differently with the molecule, resulting in a markedly different PDOS for the two spin states. 
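For readers who want to reproduce this kind of plot, a minimal sketch of the Gaussian-smearing step is given below. The level positions and spin-resolved weights are invented placeholders rather than the calculated Fe(001)$|$C$_{70}$ projections; only the procedure (a sum of normalised Gaussians of width 0.05 eV) follows the description above.

```python
import numpy as np

# Sketch of the PDOS construction used for the figure: a sum of Gaussians of width
# 0.05 eV centred at the projected levels. Level energies and weights below are
# invented for illustration only.
def smeared_dos(grid, levels, weights, sigma=0.05):
    gauss = np.exp(-0.5 * ((grid[:, None] - levels[None, :]) / sigma) ** 2)
    gauss /= sigma * np.sqrt(2.0 * np.pi)
    return gauss @ weights

grid = np.linspace(-2.0, 2.0, 801)        # energy relative to E_F, in eV
levels = np.array([-1.2, 0.0, 0.6])       # HOMO- and LUMO-like features (illustrative)
w_up = np.array([0.6, 0.1, 0.5])          # majority-spin projections (placeholders)
w_dn = np.array([0.5, 0.9, 0.6])          # minority-spin projections (placeholders)

n_up = smeared_dos(grid, levels, w_up)
n_dn = smeared_dos(grid, levels, w_dn)
mdos = n_up - n_dn                        # magnetization DOS, as in panel (b)
i_f = np.argmin(np.abs(grid))             # grid point closest to E_F
print("spin polarization at E_F:",
      round((n_up[i_f] - n_dn[i_f]) / (n_up[i_f] + n_dn[i_f]), 2))
```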
Around the Fermi level the interaction with the minority spin states is particularly strong, consistent with the fact that the Fe(001) surface has prominent minority spin surface resonances in this energy range [@Stroscio95]. The interaction between molecule and surface induces a magnetic moment of 0.26 $\mu_B$ on the C$_{70}$ molecule in the minority spin direction, which is similar to the induced moment of C$_{60}$ on Fe(001) [@Tran13]. The difference between the PDOSs of the two spin states gives a magnetization density of states (MDOS) $\Delta n(E)=n^\uparrow(E)-n^\downarrow(E)$, shown in Fig. \[fig:DOS\](b). An MDOS that oscillates similarly as a function of the energy has been observed at the C$_{60}|$Fe(001) interface [@Tran11]. For transport the energy region around the Fermi level is most relevant, where the MDOS has a (negative) peak. This peak gives a spin polarization $\Delta n/(n^\uparrow+n^\downarrow) \approx -40$% at $E = E_\mathrm{F}$, which according to the Julli[è]{}re model then gives a MR $\approx 40$%. ![(a) Transmissions $T_\mathrm{P}^\uparrow$ of majority (blue) and $T_\mathrm{P}^\downarrow$ of minority (red) spin channels of Fe$|$C$_{70}$-C$_{70}|$Fe at zero bias, as calculated with TranSIESTA. The zero of energy is at the Fermi level $E_\mathrm{F}$. (b) Transmissions $T_\mathrm{AP}^\uparrow = T_\mathrm{AP}^\downarrow$ (blue). The yellow dashed line represents the factorization approximation of Eq. (\[eq:10\]). (c) The MR spectrum as a function of energy.[]{data-label="fig:transmission"}](fig3){width="8.5cm"} Fe$|$C$_{70}$-C$_{70}|$Fe, linear response ------------------------------------------ Figure \[fig:transmission\] (a) shows the transmission spectra $T_\mathrm{P}^\sigma(E,V=0)$ at zero bias, calculated for the bilayer junction Fe$|$C$_{70}$-C$_{70}|$Fe with the magnetizations of both Fe electrodes parallel (P). The peaks in the transmission correspond to those observed in the PDOS, see Fig. \[fig:DOS\], which suggests that the factorization approximation discussed in Sec. \[sec:theory\] may be applied. Of particular interest is the peak around the Fermi energy in the minority spin channel, as at low bias this peak dominates the conductance. The corresponding state has substantial LUMO character, and is delocalized over the whole molecule, so that the bilayer C$_{70}$ junction presents a relatively thin barrier. The conductance polarization, defined as $(T^\uparrow - T^\downarrow)/(T^\uparrow + T^\downarrow)$, is $-78$% at $E=E_\mathrm{F}$ and $V=0$, which is also the value of the current polarization $\mathrm{CP}=(I^\uparrow - I^\downarrow)/(I^\uparrow + I^\downarrow)$ in the linear response regime. The current has a remarkably large spin-polarization, and it is negative because the minority spin dominates. Figure \[fig:transmission\] (b) shows the transmission spectra at zero bias, calculated for the bilayer junction with the magnetizations of both Fe electrodes anti-parallel (AP). The factorization approximation of Eq. (\[eq:10\]) implies that the transmission in the AP case can be constructed as a geometric average of the transmission of the two spin channels in the P case. Figure \[fig:transmission\] (b) demonstrates that this approximation works very well. The MR in the linear response regime can be calculated by replacing the currents $I$ by the corresponding transmissions $T$ at $E=E_\mathrm{F}$ and $V=0$. From the calculated $T_\mathrm{P(AP)}$ the MR is 67%, and from the factorization approximation, the MR is 70%.
From the PDOSs and the Julli[è]{}re model we obtained a smaller MR of 40%. One should note, however, that the MR is very sensitive to the shapes and positions of the peaks in the transmission spectra. Figure \[fig:transmission\] (c) shows the MR as a function of the energy, calculated from the transmission spectra. The position of the Fermi level is in a narrow peak of the MR spectrum. The maximum of this peak is $\sim110$% at $E_\mathrm{F}-0.04$ eV. ![(a) Transmissions $T_\mathrm{P}^\uparrow$ of majority (blue) and $T_\mathrm{P}^\downarrow$ of minority (red) spins of Fe$|$C$_{70}$-C$_{70}|$Fe at bias $V=0.4$ V. (b) $T_\mathrm{AP}^\uparrow$ of majority (blue) and $T_\mathrm{AP}^\downarrow$ of minority (red) spins. The dashed lines indicate the factorization approximations of Eqs. (\[eq:11\]) and (\[eq:10\]).[]{data-label="fig:finitebias"}](fig4){width="8.5cm"} Fe$|$C$_{70}$-C$_{70}|$Fe, finite bias -------------------------------------- Figure \[fig:finitebias\] shows transmission spectra $T_\mathrm{P}^\sigma(E,V)$ at a bias $V=0.4$ V, calculated self-consistently. To obtain the current, Eq. (\[eq:1\]), one has to integrate the transmission from $E=-0.2$ to $E=0.2$ eV, see the insets of Fig. \[fig:finitebias\]. The currents for the P and AP cases become very similar, resulting in a small MR. The transmission can be interpreted starting from the zero bias transmission using Eqs. (\[eq:11\]) and (\[eq:10\]). $T_\mathrm{P}^\sigma(E,V=0)$ has a prominent peak in the minority spin channel close to $E_\mathrm{F}$ corresponding to the LUMO-derived state at $E_\mathrm{F}$, cf. Figs. \[fig:DOS\](a) and \[fig:transmission\](a). Factorization according to Eqs. (\[eq:11\]) and (\[eq:10\]) splits this peak and shifts the factors by $\pm eV/2$, such that two peaks appear at $E_\mathrm{F} \pm eV/2$, respectively. This construction is shown as the dashed lines in Fig. \[fig:finitebias\]. For the P case, Eq. (\[eq:11\]), both these peaks appear in the minority spin channel $T_\mathrm{P}^\downarrow$. The CP should therefore still be significant at finite bias (albeit smaller than at zero bias). For the AP case at finite bias, Eq. (\[eq:10\]), one peak appears in $T_\mathrm{AP}^\downarrow$ and the other in $T_\mathrm{AP}^\uparrow$. As we integrate over these peaks, the MR at finite bias should therefore be much smaller than at zero bias. One expects that the MR drops sharply with increasing bias, as the peak in the minority spin channel moves away from the Fermi level. ![(a) Magnetoresistance (MR) as a function of the bias $V$. The inset shows the total currents $I_\mathrm{P}$ and $I_\mathrm{AP}$ as a function of $V$. (b) Current polarization (CP) of $I_\mathrm{P}$ as a function of $V$. []{data-label="fig:MR"}](fig5){width="8.5cm"} This is confirmed by the self-consistent calculations shown in Fig. \[fig:MR\](a). At a bias $V=0.1$ V the MR is roughly halved, and it reaches small (negative) values, $-10$% $<$ MR $< 0$%, for biases $V\geq0.25$ V. A similar drop of the MR as a function of bias has been observed in Alq$_3$ tunnel barriers [@Barraud10]. Because of the delocalized nature of the hybridized Fe(001)$|$C$_{70}$ interface states, a bilayer of C$_{70}$ molecules is still quite transparent, however, which means that the currents do not show the exponential dependence on bias that is characteristic of tunnel barriers. The absolute value of the CP decreases monotonically with increasing applied bias, see Fig. \[fig:MR\](b), which agrees with the analysis given in the previous paragraph.
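To make the preceding argument concrete, the following sketch applies the factorization expressions of Eqs. (\[eq:11\]) and (\[eq:10\]) to an invented zero-bias spectrum (a narrow minority-spin resonance at $E_\mathrm{F}$ on top of small backgrounds) and integrates the resulting transmissions over the bias window as in Eq. (\[eq:1\]). The numbers are not meant to reproduce the calculated values or the curves of Fig. \[fig:MR\]; they only illustrate the qualitative trend of a rapidly collapsing MR and a slowly decaying $|$CP$|$, under the "optimistic" MR convention $(I_\mathrm{P}-I_\mathrm{AP})/I_\mathrm{AP}$, which is an assumption here.

```python
import numpy as np

# Toy illustration of the bias dependence (not the calculated spectra): the zero-bias
# parallel transmissions are a narrow minority-spin Lorentzian at E_F plus small
# backgrounds, the finite-bias spectra follow the factorization expressions, and the
# currents are bias-window integrals as in Eq. (1). Constant prefactors such as e^2/h
# cancel in MR and CP, so they are omitted.
t0 = {"up": lambda E: 0.02 + 0.05 / (1.0 + ((E - 0.6) / 0.3) ** 2),
      "dn": lambda E: 0.02 + 1.00 / (1.0 + (E / 0.05) ** 2)}

def t_parallel(E, V, s):          # geometric mean of the +/- eV/2 shifted P spectrum
    return np.sqrt(t0[s](E - V / 2) * t0[s](E + V / 2))

def t_antiparallel(E, V, s):      # mixes the two spin channels of the P spectrum
    o = "dn" if s == "up" else "up"
    return np.sqrt(t0[s](E - V / 2) * t0[o](E + V / 2))

def current(t_func, V, s, n=401):
    E = np.linspace(-V / 2, V / 2, n)                    # energies relative to E_F, in eV
    y = t_func(E, V, s)
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(E))   # trapezoidal rule

for V in (0.05, 0.1, 0.2, 0.3, 0.4):
    i_p_up, i_p_dn = current(t_parallel, V, "up"), current(t_parallel, V, "dn")
    i_p = i_p_up + i_p_dn
    i_ap = current(t_antiparallel, V, "up") + current(t_antiparallel, V, "dn")
    mr = (i_p - i_ap) / i_ap                             # assumed MR convention
    cp = (i_p_up - i_p_dn) / i_p                         # current polarization of I_P
    print(f"V = {V:.2f} V   MR = {mr:7.1%}   CP = {cp:7.1%}")
```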
Summary ======= We calculate from first principles the spin-polarized transport in Fe$|$bilayer-C$_{70}|$Fe devices as a function of applied bias. We show that transport in such organic spin valves can be analyzed with a factorization model, which enables us to interpret the transmission in terms of the Fe$|$C$_{70}$ interface states. This opens a route toward exploiting spin-dependent metal-molecule interactions to optimize spin currents. In particular, we show that adsorption of C$_{70}$ on Fe(001) results in a sizeable spin-polarization at the Fermi level. The current spin-polarization has a maximum value of 78% in the linear response regime, and it decreases monotonically as a function of the applied bias. The magnetoresistance has a value of $\sim 67$% at linear response, and it decreases rapidly with the applied bias. We thank Michel de Jong and Zhe Yuan for useful discussions. Computational resources were provided through the Physical Sciences division of the Netherlands Organization for Scientific Research (NWO-EW) and by TUBITAK ULAKBIM, High Performance and Grid Computing Center (TR-Grid e-Infrastructure).
{ "pile_set_name": "ArXiv" }
--- abstract: 'A common feature in Answer Set Programming is the use of a second negation, stronger than default negation and sometimes called explicit, strong or classical negation. This explicit negation is normally used in front of atoms, rather than allowing its use as a regular operator. In this paper we consider the arbitrary combination of explicit negation with nested expressions, as those defined by Lifschitz, Tang and Turner. We extend the concept of reduct for this new syntax and then prove that it can be captured by an extension of Equilibrium Logic with this second negation. We study some properties of this variant and compare it to the already known combination of Equilibrium Logic with Nelson’s strong negation. *Under consideration for acceptance in TPLP.*' author: - 'FELICIDAD AGUADO$^1$, PEDRO CABALAR$^1$, JORGE FANDINNO$^2$' - | DAVID PEARCE$^3$, GILBERTO P[É]{}REZ$^1$, CONCEPCI[Ó]{}N VIDAL$^1$\ \ $^1$ Information Retrieval Lab, Centro de Investigación en Tecnoloxías da Información e as Comunicacións (CITIC),\ Universidade da Coruña, Spain\ \ \ $^2$ IRIT, University of Toulouse, CNRS, France\ \ Universität Potsdam, Germany\ \ \ $^3$ Universidad Polit[é]{}cnica de Madrid, Spain\ bibliography: - 'refs.bib' title: 'Revisiting Explicit Negation in Answer Set Programming[^1]' --- \[firstpage\] Answer set programming; Non-monotonic reasoning; Equilibrium logic; Explicit negation. Introduction ============ Although the introduction of *stable models* [@GL88] in logic programming was motivated by the search for a suitable semantics for default negation, their early application to knowledge representation revealed the need for a second negation to represent explicit falsity. This second negation was already proposed in [@GelfondL91] under the name of *classical negation*, an operator only applicable on atoms that, when present in the syntax, led to a change in the name of stable models to become *answer sets*. Classical negation soon became common in applications for commonsense reasoning and action theories [@GL93] and was also extrapolated to the Well-Founded Semantics [@Per92] under the name of *explicit negation*. Later on, it was incorporated into the paradigm of *Answer Set Programming* [@Nie99; @MT99] (ASP), being nowadays present in the input language of most ASP solvers. [ To understand the difference for knowledge representation between default negation (in this paper, written as $\neg$) and explicit negation (represented as $\sneg$ ), a typical example is to distinguish the rule $\neg {\mathit{train}} \to {\mathit{cross}}$, which captures the criterion “you can cross if you have no information on a train coming,” from the (safer) encoding $\sneg {\mathit{train}} \to {\mathit{cross}}$ that means “you can cross if you have evidence that no train is coming.” In ASP, this explicit negation can only be used in front of atoms[^2], so it is not seen as a real connective. In an attempt to provide more flexibility to logic program connectives, Lifschitz, Tang and Turner [@LTT99] introduced programs with *nested expressions* where conjunction, disjunction and default negation could be arbitrarily nested both in the heads and bodies of rules, but classical negation was still restricted to an application on atoms. To see an example, suppose that, at a given moment, three trains should be crossing, and we have an alarm that fires if one of them is known to be missing. 
Using nested expressions, we can rewrite the program: $$\begin{aligned} \sneg {\mathit{train}}_1 & \to & {\mathit{alarm}} \\ \sneg {\mathit{train}}_2 & \to & {\mathit{alarm}} \\ \sneg {\mathit{train}}_3 & \to & {\mathit{alarm}}\end{aligned}$$ as a single rule with a disjunction in the body: $$\begin{aligned} \sneg {\mathit{train}}_1 \vee \sneg {\mathit{train}}_2 \vee \sneg {\mathit{train}}_3 & \to & {\mathit{alarm}}\end{aligned}$$ but we cannot further apply De Morgan to rewrite the rule above as: $$\begin{aligned} \sneg ({\mathit{train}}_1 \wedge {\mathit{train}}_2 \wedge {\mathit{train}}_3) & \to & {\mathit{alarm}}\end{aligned}$$ It is easy to imagine that providing a semantics for this kind of expressions would be interesting if we plan to jump from the propositional case to programs with variables and aggregates (where, for instance, the number of trains is some arbitrary value $n \geq 0$). ]{} An important breakthrough that meant a purely logical treatment, was the characterisation of stable models in terms of *Equilibrium Logic* proposed by . This non-monotonic formalism is defined in terms of a models selection criterion on top of the (monotonic) intermediate logic of *Here-and-There* (HT) [@Hey30] and captures default negation $\neg \varphi$ as a derived operator in terms of implication $\varphi \to \bot$, as usual in intuitionistic logic. The definition of Equilibrium Logic also included a second, constructive negation ‘$\sneg$’ corresponding to Nelson’s *strong negation* [@Nel49] for intermediate logics. In the case of HT, this extension yields a five-valued logic called $\N5$ where, although ‘$\sneg$’ can now be nested as the rest of connectives, there exists a reduction for shifting it in front of atoms, obtaining a *negative normal form* (NNF). Once in NNF, the obtained equilibrium models actually coincide with answer sets for the syntactic fragments of nested expressions [@LTT99] or for regular programs [@GL93]. For this reason, most papers on Equilibrium Logic for ASP assumed a reduction to NNF from the very beginning, and little attention was paid to the behaviour of formulas in the scope of strong negation under a logic programming perspective. There are, however, cases in which this behaviour is not aligned with the reduct-based understanding of nested expressions in ASP. Take, for instance, the formula: $$\begin{aligned} \sneg\neg p \to p \label{f:snegneg}\end{aligned}$$ Its NNF reduction removes the combination of negations $\sneg \neg$ and produces the tautological rule whose unique equilibrium model is $\emptyset$, i.e., neither $p$ nor $\sneg p$ hold. However, if we start instead from the formula $\sneg\neg \neg \neg p \to p$, the NNF reduction removes again the first pair of negations producing the rule $\neg \neg p \to p$ with a second answer set $\{p\}$. This illustrates that we cannot replace $\neg p$ by $\neg \neg \neg p$ in the scope of strong negation, even though they would produce the same effect in any reduct of the style of [@LTT99] for nested expressions. In this paper, we consider a different characterisation of ‘$\sneg$’  in HT and Equilibrium Logic. We call this variant *explicit negation* to differentiate it from Nelson’s strong negation. To test its adequacy, we start generalising the definition of nested expression by introducing an arbitrary nesting of ‘$\sneg$’, adapting the definitions of reduct and answer set from [@LTT99] to that context. 
After that, we prove that equilibrium models (with explicit negation) capture the answer sets for these extended nested expressions and, in fact, preserve the strong equivalences from [@LTT99] even for arbitrary formulas (including implication). We also prove several properties of HT with explicit negation and provide a reduction to NNF that produces a different effect from $\N5$ when applied on implications or default negation. The rest of the paper is organised as follows. In the next section, we introduce the extended definition of answer sets for programs with nested expressions, where explicit negation can be arbitrarily combined both in the rule bodies and the rule heads. In Section \[sec:eqx\], we present Equilibrium Logic with explicit negation and in particular, its new monotonic basis, $\X5$, since the selection of equilibrium models is the same one as in [@Pearce96]. Section \[sec:fiveval\] provides a five-valued characterisation of $\X5$ and studies different types of equivalence relations, including variants of strong equivalence. In Section \[sec:related\], we briefly explain the main differences between explicit ($\X5$) and strong ($\N5$) negations. Finally, Section \[sec:conc\] concludes the paper. Nested expressions with explicit negation {#sec:nested} ========================================= We begin describing the syntax of nested expressions, starting from a set of atoms $\At$. A *nested expression* $F$ is defined with the following grammar: $$F ::= \top \mid \bot \mid p \mid F \vee F \mid F \wedge F \mid \neg F \mid \ \sneg F$$ where $p$ is any atom $p\in \At$. The two negations $\neg$ and $\sneg$  are respectively called *default* and *explicit* negation (the latter is also called *classical* in the ASP literature). An *explicit literal* is either an atom $p$ or its explicit negation $\sneg p$. A *default literal* is either an explicit literal $A$ or its default negation $\neg A$. Thus, given atom $p$, we can form the default literals $p, \sneg p, \neg p$ and $\neg\!\!\sneg p$. As we can see, the main difference with respect to [@LTT99] is that, in that case, the explicit negation[^3] operator $\sneg$  was only used for explicit literals, whereas in this definition, it can be arbitrarily nested. For instance, $\sneg(p \vee \neg q)$ is a nested expression under this new definition, but it is not under [@LTT99]. A *rule* is an implication of the form $F \to G$ where $F$ and $G$ are nested expressions respectively called the *body*  and the *head* of the rule. A rule of the form $\top \to G$ is sometimes abbreviated as $G$ and is further called a *fact* if $G$ is an explicit literal. A *logic program* is a set of rules. We say that a nested expression, a rule or a program is *explicit* if it does not contain default negation. A program rule $F \to G$ is said to be *regular* if the body $F=B_1 \wedge \dots \wedge B_n$ is a conjunction of default literals and the head $G=H_1 \vee \dots \vee H_m$ is a disjunction of default literals. In a regular rule, we allow an empty body $n=0$ and write $F=\top$ or an empty head $m=0$ and $G=\bot$ but not both. A program is *regular* if all its rules are regular. An *interpretation* is a set of explicit literals that is consistent, that is, it does not contain both $p$ and $\sneg p$ for any atom $p$. We define when an interpretation $T$ *satisfies* (resp. *falsifies*) a nested expression $F$, written $T \models F$ (resp. 
$T \falsif F$) providing the following recursive conditions: $$\begin{array}{r@{\,}c@{\,}ll@{\hspace{40pt}}r@{\,}c@{\,}ll} T & \models & \top & & T & \not\falsif & \top \\ T & \not\models & \bot & & T & \falsif & \bot \\ T & \models & p & \mbox{if } p \in T & T & \falsif & p & \mbox{if } \sneg p \in T \\ T & \models & \varphi \wedge \psi & \mbox{if } T\models \varphi \mbox{ and } T \models \psi& T & \falsif & \varphi \wedge \psi & \mbox{if } T\falsif \varphi \mbox{ or } T \falsif \psi \\ T & \models & \varphi \vee \psi & \mbox{if } T\models \varphi \mbox{ or } T \models \psi& T & \falsif & \varphi \vee \psi & \mbox{if } T\falsif \varphi \mbox{ and } T \falsif \psi \\ T & \models & \sneg \varphi & \mbox{if } T \falsif \varphi & T & \falsif & \sneg \varphi & \mbox{if } T \models \varphi\\ T & \models & \neg \varphi & \mbox{if } T \not\models \varphi & T & \falsif & \neg \varphi & \mbox{if } T \models \varphi \end{array}$$ As an example, given $\At=\{p,q\}$ and $T=\{\sneg p\}$ we have $T \models \sneg p \vee q$ because $T \models\, \sneg p$ (i.e. $T \falsif p$) although neither $T \models q$ nor $T \falsif q$, that is, $q$ is undefined. The latter can be expressed as $T \models \neg q \wedge \neg\!\sneg q$ (i.e., $q$ is neither true nor false). As another example, $T \falsif p \wedge q$ because $T \falsif p$ even though, as we said, $q$ is undefined. We say that $\varphi$ is *valid* if we have $T \models \varphi$ for every interpretation $T$. The logic induced by these valid expressions precisely corresponds to *classical logic with strong negation* as studied by . Note that, as usual in classical logic, $\varphi \to \psi$ is definable as $\neg\varphi \vee \psi$ in this context. Let $\Pi$ be an explicit program. A consistent set of literals $T$ is a *model* of $\Pi$ if, for every rule $F \to G$ in $\Pi$, $T \models G$ whenever $T \models F$. The reduct of a nested expression $F$ with respect to an interpretation $T$ is denoted as $F^T$ and defined recursively as follows: $$\begin{array}{rcll} p^T & {\mathbin{\stackrel{\mathrm{def}}{=}}}& p & \mbox{for any atom } p \in \At \\ (F \wedge G)^T & {\mathbin{\stackrel{\mathrm{def}}{=}}}& F^T \wedge G^T \\ (F \vee G)^T & {\mathbin{\stackrel{\mathrm{def}}{=}}}& F^T \vee G^T \\ (\sneg F)^T & {\mathbin{\stackrel{\mathrm{def}}{=}}}& \sneg (F^T)\\ (\neg F)^T & {\mathbin{\stackrel{\mathrm{def}}{=}}}& \left\{ \begin{array}{rl} \bot & \mbox{if } T \models F \\ \top & \mbox{otherwise} \end{array} \right.\\ \end{array}$$ The *reduct* of a program $\Pi$ with respect to $T$ corresponds to the explicit program:\ $\Pi^T {\mathbin{\stackrel{\mathrm{def}}{=}}}\{ \ (F^T \to G^T) \mid (F \to G) \in \Pi\ \}$. \[prop:total\_model\_reduct\] For any consistent set of literals $T$ and any nested formula $F$: - $T \models F$ iff $T \models F^T$; - $T \falsif F$ iff $T \falsif F^T$. A consistent set of literals $T$ is an *answer set* of a program $\Pi$ if it is a $\subseteq$-minimal model of the reduct $\Pi^T$. Notice that the definitions of reduct and answer set for the case of regular programs directly coincide with the standard definitions in ASP without nested expressions [@GelfondL91]. They also coincide with [@LTT99], defined on the case of programs with nested expressions where ‘$\sneg$’  is only in front of atoms. \[ex:nonot\] Take the program consisting of the single rule . For $\At=\{p\}$, we have three possible interpretations $T_1=\{p\}$, $T_2=\{\sneg p\}$ and $T_3=\emptyset$. 
This yields two possible reducts $\Pi^{T_1}=\{\sneg \bot \to p\}$ and $\Pi^{T_2}=\Pi^{T_3}=\{\sneg \top \to p\}$. It is easy to see that their corresponding minimal models are $T_1$ and $T_3$ which constitute the two answer sets of $\Pi$. \[ex:bird\] Take the program consisting of the single rule: $$\begin{aligned} \neg ({\mathit{bird}} \wedge \sneg {\mathit{flies}}) \to \ \sneg ({\mathit{bird}} \wedge \sneg {\mathit{flies}}) \label{f:bird}\end{aligned}$$ capturing the idea that “being a bird that does not fly” should be false by default. If we choose any interpretation $T$ such that $T \models {\mathit{bird}} \wedge \sneg {\mathit{flies}}$ then the reduct will have a single rule with $\bot$ in the body and the minimal model will be $\emptyset$ which does not satisfy ${\mathit{bird}} \wedge \sneg {\mathit{flies}}$. If $T \not\models {\mathit{bird}} \wedge \sneg {\mathit{flies}}$ instead, the reduct becomes $\top \to \ \sneg ({\mathit{bird}} \wedge \sneg {\mathit{flies}})$ and the minimal models of this program are $\{\sneg {\mathit{bird}}\}$ and $\{{\mathit{flies}}\}$ that, as they are both compatible with the assumption for $T$, they become the two answer sets of . Suppose we extend now with the fact $bird$. Doing so, it is easy to see that the only answer set becomes $\{{\mathit{flies}}\}$. Analogously, if we take plus the fact $\sneg {\mathit{flies}}$ the only answer set becomes $\{\sneg {\mathit{bird}}\}$. Finally, if we add the facts ${\mathit{bird}}$ and $\sneg {\mathit{flies}}$ to , the default is deactivated and we get the unique answer set $\{{\mathit{bird}},\sneg {\mathit{flies}}\}$. Equilibrium logic with explicit negation {#sec:eqx} ======================================== We start defining the monotonic logic of *Here-and-There with explicit negation*, $\X5$. Let $\At$ be a set of atoms. A *formula* $\varphi$ is an expression built with the grammar: $$\varphi ::= p \mid \bot \mid \varphi \wedge \varphi \mid \varphi \vee \varphi \mid \varphi \to \varphi \mid \ \sneg \varphi$$ for any atom $p\in \At$. We also use the abbreviations: [2]{} $$\begin{aligned} \neg \varphi & {\mathbin{\stackrel{\mathrm{def}}{=}}}& (\varphi \to \bot)\\ \top & {\mathbin{\stackrel{\mathrm{def}}{=}}}& \neg \bot\\\end{aligned}$$ $$\begin{aligned} \\ \varphi \leftrightarrow \psi & {\mathbin{\stackrel{\mathrm{def}}{=}}}& (\varphi \to \psi) \wedge (\psi \to \varphi)\\ \varphi \Leftrightarrow \psi & {\mathbin{\stackrel{\mathrm{def}}{=}}}& (\varphi \leftrightarrow \psi) \wedge (\sneg \varphi \leftrightarrow \sneg \psi)\end{aligned}$$ Given a pair of formulas $\varphi$ and $\alpha$, we write $\varphi[\alpha/p]$ to denote the uniform substitution of all occurrences of atom $p$ in $\varphi$ by $\alpha$. As usual, a *theory* is a set of formulas. We sometimes understand finite theories (or subtheories) as the conjunction of their formulas. Notice that programs with nested expressions are also theories under this definition. An $\X5$-*interpretation* is a pair ${\langle H,T \rangle}$ of consistent sets of explicit literals (respectively standing for “here” and “there”) satisfying $H \subseteq T$. We say that the interpretation is *total* when $H=T$. \[def:satfals\] We say that ${\langle H,T \rangle}$ *satisfies* (resp. *falsifies*) a formula $\varphi$, written ${\langle H,T \rangle} \models \varphi$ (resp. 
${\langle H,T \rangle} \falsif \varphi$), when the following recursive conditions hold: $$\begin{array}{r@{\,}c@{\,}l@{\;}l@{\hspace{10pt}}r@{\,}c@{\,}l@{\;}l} {\langle H,T \rangle} & \models & \top & & {\langle H,T \rangle} & \not\falsif & \top \\ {\langle H,T \rangle} & \not\models & \bot & & {\langle H,T \rangle} & \falsif & \bot \\ {\langle H,T \rangle} & \models & p & \mbox{if } p \in H & {\langle H,T \rangle} & \falsif & p & \mbox{if } \sneg p \in H \\ {\langle H,T \rangle} & \models & \varphi \wedge \psi & \mbox{if } {\langle H,T \rangle} \models \varphi \mbox{ and } {\langle H,T \rangle} \models \psi& {\langle H,T \rangle} & \falsif & \varphi \wedge \psi & \mbox{if } {\langle H,T \rangle} \falsif \varphi \mbox{ or } {\langle H,T \rangle} \falsif \psi \\ {\langle H,T \rangle} & \models & \varphi \vee \psi & \mbox{if } {\langle H,T \rangle} \models \varphi \mbox{ or } {\langle H,T \rangle} \models \psi& {\langle H,T \rangle} & \falsif & \varphi \vee \psi & \mbox{if } {\langle H,T \rangle} \falsif \varphi \mbox{ and } {\langle H,T \rangle} \falsif \psi \\ {\langle H,T \rangle} & \models & \sneg \varphi & \mbox{if } {\langle H,T \rangle} \falsif \varphi & {\langle H,T \rangle} & \falsif & \sneg \varphi & \mbox{if } {\langle H,T \rangle} \models \varphi\\ {\langle H,T \rangle} & \models & \varphi\! \to \! \psi & \mbox{if both} & {\langle H,T \rangle} & \falsif & \varphi \! \to \! \psi & \mbox{if } {\langle T,T \rangle} \models \varphi \mbox{ and } {\langle H,T \rangle} \falsif \psi\\ & & & (i) {\langle H,T \rangle} \not\models \varphi \mbox{ or } {\langle H,T \rangle} \models \psi \\ & & & (ii) {\langle T,T \rangle} \not\models \varphi \mbox{ or } {\langle T,T \rangle} \models \psi & & & & \hfill\Box \end{array}$$ A formula $\varphi$ is a *tautology* (or is *valid*), written $\models \varphi$, if it is satisfied by every possible interpretation. We say that an $\X5$-interpretation ${\langle H,T \rangle}$ is a *model* of a theory $\Gamma$, written ${\langle H,T \rangle} \models \Gamma$, if ${\langle H,T \rangle}\models \varphi$ for all $\varphi \in \Gamma$. The next observation about Definition \[def:satfals\] connects satisfaction ‘$\models$’ with standard HT. \[obs:ht\] The satisfaction relation ‘$\models$’ (left column in Def. \[def:satfals\]) of any formula corresponds to regular HT satisfaction up to the first occurrence of ‘$\sim$’, where the falsification ‘$\falsif$’ comes into play. As a result, any tautology from HT can be shifted to $\X5$, even if its atoms are uniformly replaced by subformulas containing explicit negation. \[th:httaut\] If formula $\varphi$ is HT valid (and so, it does not contain $\sneg$ ) then $\varphi[\alpha/p]$ is also $\X5$ valid, for any formula $\alpha$ and any atom $p$. If we choose any $p$ not occurring in $\varphi$, then $\varphi[\alpha/p]=\varphi$ and the theorem above is just saying that $\X5$ is a conservative extension of HT. But it can also be exploited further by replacing, in the HT tautology, any atom by an arbitrary formula containing negation. For instance, if explicit negation only occurs in front of atoms, we essentially get HT with explicit literals playing the role of atoms (disregarding inconsistent models). However, when we combine explicit negation in an arbitrary way, some usual properties of HT need to be checked in the new context. \[lem:satisfaction\_total\_models\] Let $T$ be a consistent set of literals and $F$ a nested expression. 
Then: - ${\langle T,T \rangle} \models F$ iff $T \models F$; - ${\langle T,T \rangle} \falsif F$ iff $T \falsif F$. \[th:persistence\] For any $\X5$-interpretation ${\langle H,T \rangle}$ and any formula $\varphi$, both: - ${\langle H,T \rangle} \models \varphi$ implies ${\langle T,T \rangle} \models \varphi$; - ${\langle H,T \rangle} \falsif \varphi$ implies ${\langle T,T \rangle} \falsif \varphi$. \[prop:default\_negation\] For any $\X5$-interpretation ${\langle H,T \rangle}$, any formula $\varphi$: - ${\langle H,T \rangle} \models \neg \varphi$ iff ${\langle T,T \rangle} \not\models \varphi$; - ${\langle H,T \rangle} \falsif \neg \varphi$ iff ${\langle T,T \rangle} \models \varphi$. The following results establish a connection between $\X5$ and the reduct of a nested expression or a program. \[lem:aux\_reduct\] Let ${\langle H,T \rangle}$ be an $\X5$-interpretation and $F$ a nested expression. Then: - ${\langle H,T \rangle} \models F$ iff $H \models F^T$; - ${\langle H,T \rangle} \falsif F$ iff $H \falsif F^T$. \[cor:equivalence\_for\_total\_model\] For any consistent set of literals $T$ and any program $\Pi$: ${\langle T,T \rangle} \models \Pi$ iff $T \models \Pi$. \[prop:htreduct\] For any $\X5$-interpretation ${\langle H,T \rangle}$ and any program $\Pi$: ${\langle H,T \rangle} \models \Pi$ iff $H$ is a model of $\Pi^T$ and $T$ is a model of $\Pi$. \[def:eqmodel\] A total $\X5$-interpretation ${\langle T,T \rangle}$ is an *equilibrium model* of a theory $\Gamma$ if ${\langle T,T \rangle}$ is a model of $\Gamma$ and there is no other model ${\langle H,T \rangle}$ of $\Gamma$ with $H \subset T$. *Equilibrium logic (with explicit negation)* is the non-monotonic logic induced by equilibrium models. The following theorem guarantees that equilibrium models and answer sets coincide for the syntax of programs with nested expressions. \[th:answersets\] An interpretation $T$ is an answer set of a program $\Pi$ iff ${\langle T,T \rangle}$ is an equilibrium model of $\Pi$. To conclude this section, we provide an alternative reduct definition for arbitrary formulas (and not just nested expressions) obtained as a generalisation of Ferraris’ reduct [@Fer05]. This generalisation introduces a main feature[^4] with respect to [@Fer05]: it actually uses two dual transformations, $\varphi^T_+$ and $\varphi^T_-$, to obtain a symmetric behaviour depending on the number of explicit negations in the scope. \[def:Ferraris\_reduct\] Given a formula $\varphi$ and an interpretation $T$ (a consistent set of explicit literals) we define the following pair of mutually recursive transformations: $$\begin{array}{cc} \varphi^T_+ {\mathbin{\stackrel{\mathrm{def}}{=}}}\left\{ \begin{array}{cl} \bot & \text{if } T \not\models \varphi \\ p & \text{if } \varphi=p \in \At, p \in T \\ \alpha^T_+ \otimes \beta^T_+ & \text{if } T \models \varphi, \varphi=\alpha \otimes \beta, \\ & \text{ for } \otimes \in\{\vee,\wedge \}\\ \neg (\alpha^T_+) \vee \beta^T_+ & \text{if } T \models \varphi, \varphi=\alpha \to \beta \\ \neg (\alpha^T_+) & \text{if } T \models \varphi, \varphi=\neg \alpha, \\ \sneg (\alpha^T_-) & \text{if } T \models \varphi, \varphi=\sneg \alpha \end{array} \right. 
& \varphi^T_- {\mathbin{\stackrel{\mathrm{def}}{=}}}\left\{ \begin{array}{cl} \top & \text{if } T \not\falsif \varphi\\ p & \text{if } \varphi=p \in \At, \sneg p \in T \\ \alpha^T_- \otimes \beta^T_- & \text{if } T \falsif \varphi, \varphi=\alpha \otimes \beta, \\ & \text{ for } \otimes \in\{\vee,\wedge \}\\ \beta^T_- & \text{if } T \falsif \varphi, \varphi=\alpha \to \beta\\ \bot & \text{if } T \falsif \varphi, \varphi=\neg \alpha\\ \sneg (\alpha^T_+) & \text{if } T \falsif \varphi, \varphi=\sneg \alpha \end{array} \right. \end{array}$$ The reduct $\Gamma^T_+$ of a theory $\Gamma$ is just defined as the set $\{\varphi^T_+ \mid \varphi \in \Gamma\}$. For instance, given $\varphi=\eqref{f:bird}$ and $T=\{\sneg {\mathit{bird}}\}$, the reader can check that the application of the definition above eventually produces the formula $\varphi^T_+ = \neg \neg \bot \vee \sneg ({\mathit{bird}} \wedge \top)$ which is equivalent to $\sneg {\mathit{bird}}$. If we take $T=\{{\mathit{flies}}\}$ instead, the result is $\varphi^T_+=\neg \neg \bot \vee \sneg (\top \wedge \sneg {\mathit{flies}})$ that is equivalent to ${\mathit{flies}}$. As a third example, if we take $T=\{{\mathit{bird}}\}$ then we directly get $\varphi^T_+=\bot$. \[th:Ferraris\_reduct\] For any formula $\varphi$ and any pair of interpretations $H \subseteq T$: 1. \(i)  $H \models \varphi^T_+$ iff ${\langle H,T \rangle} \models \varphi$; 2. \(ii) $H \falsif \varphi^T_-$ iff ${\langle H,T \rangle} \falsif \varphi$. From Lemma \[lem:aux\_reduct\] and Theorem \[th:Ferraris\_reduct\] we immediately conclude: \[cor:reducts\] For any nested expression $F$ and any pair of interpretations $H \subseteq T$: 1. $H \models F^T$ iff $T\models F$ and $H \models F^T_+$; 2. $H \falsif F^T$ iff $T\falsif F$ and $H \falsif F^T_-$. \[cor:reducteq\] ${\langle T,T \rangle}$ is an equilibrium model of $\Gamma$ iff $T$ is a minimal model of $\Gamma^T_+$. Back to the example formula $\varphi=$, taking $T=\{\sneg {\mathit{bird}}\}$ we saw that $\varphi^T_+$ is equivalent to $\sneg {\mathit{bird}}$ whose minimal model is obviously $T$. Therefore, ${\langle T,T \rangle}$ is an equilibrium model. [\[rev1.1\]]{} Multivalued characterisation and equivalence relations {#sec:fiveval} ====================================================== An alternative way of characterising $\X5$ is as a five-valued logic defined as follows. Given any $\X5$-interpretation $M={\langle H,T \rangle}$ we define its corresponding 5-valued mapping $M: \At \to \{-2,-1,0,1,2\}$ so that, for any atom $p\in \At$: $$M(p) {\mathbin{\stackrel{\mathrm{def}}{=}}}\left\{ \begin{array}{rl} 2 & \mbox{if } p \in H\\ -2 & \mbox{if } \sneg p \in H\\ 1 & \mbox{if } p \in T\setminus H\\ -1 & \mbox{if } \sneg p \in T\setminus H\\ 0 & \mbox{otherwise, i.e., } p \not\in T, \sneg p \not\in T \end{array} \right.$$ We can read these five values as follows: $2$ = *proved to be true*; $-2$ = *proved to be false*; $1$ = *true by default*; $-1$ = *false by default*; and $0$ = *undefined*. Notice that values $1$ and $-1$ are used for explicit literals in $T \setminus H$. As a consequence: An $\X5$-interpretation $M={\langle H,T \rangle}$ is total (i.e. $H=T$) iff $M(p) \in \{-2,0,2\}$ for all $p \in \At$. 
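As a small aside, the mapping just defined is easy to implement. The sketch below (with explicit literals encoded as signed pairs, a convention of ours) turns an $\X5$-interpretation ${\langle H,T \rangle}$ into its 5-valued assignment and checks, on an example, that a total interpretation only produces values in $\{-2,0,2\}$.

```python
# Explicit literals are encoded here as signed pairs ("+", atom) / ("-", atom); the
# encoding is ours. Atoms not occurring in T implicitly receive the value 0.
def five_valued(H, T):
    """Map an X5-interpretation <H,T> (consistent sets of explicit literals with
    H a subset of T) to its 5-valued assignment on the atoms occurring in T."""
    def value(p):
        if ("+", p) in H: return 2
        if ("-", p) in H: return -2
        if ("+", p) in T: return 1
        if ("-", p) in T: return -1
        return 0
    return {p: value(p) for _, p in T}

H = {("-", "bird")}
T = {("-", "bird"), ("+", "flies")}
print(five_valued(H, T))   # {'bird': -2, 'flies': 1}, i.e. proved false / true by default

# For a total interpretation <T,T> only the values -2, 0 and 2 can appear.
assert set(five_valued(T, T).values()) <= {-2, 0, 2}
```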
\[def:valuation\] This 5-valuation can be extended to arbitrary formulas in the following way: $$\begin{array}{lcl} M(\bot) & {\mathbin{\stackrel{\mathrm{def}}{=}}}& -2\\ M(\top) & {\mathbin{\stackrel{\mathrm{def}}{=}}}& 2\\ M(\varphi \wedge \psi) & {\mathbin{\stackrel{\mathrm{def}}{=}}}& \min(M(\varphi),M(\psi))\\ M(\varphi \vee \psi) & {\mathbin{\stackrel{\mathrm{def}}{=}}}& \max(M(\varphi),M(\psi))\\ M(\varphi \to \psi) & {\mathbin{\stackrel{\mathrm{def}}{=}}}& \left\{ \begin{array}{cl} 2 & \mbox{if } M(\varphi) \leq \max(M(\psi),0)\\ M(\psi) & \mbox{otherwise} \end{array} \right.\\ M(\sneg \varphi) & {\mathbin{\stackrel{\mathrm{def}}{=}}}& -M(\varphi) \end{array}$$ The designated value is $2$, that is, we will understand that a formula is satisfied when $M(\varphi)=2$. Moreover, a complete correspondence with the satisfaction/falsification of formulas given in the previous section is fixed by the following theorem: \[th:corresp\] For any $\X5$-interpretation $M={\langle H,T \rangle}$ and any formula $\varphi$: [2]{} - ${\langle H,T \rangle} \models \varphi$ iff $M(\varphi)=2$; - ${\langle T,T \rangle} \models \varphi$ iff $M(\varphi)>0$; - ${\langle H,T \rangle} \falsif \varphi$ iff $M(\varphi)=-2$; - ${\langle T,T \rangle} \falsif \varphi$ iff $M(\varphi)<0$. The equilibrium condition given in Definition \[def:eqmodel\] can be rephrased in 5-valued terms as follows. Given two $\X5$-interpretations $M={\langle H,T \rangle}$ and $M'={\langle H',T' \rangle}$ we say that $M$ is *smaller* than $M'$, written $M \leq M'$, when $T=T'$ and $H \subseteq H'$. \[prop:leq\] Let $M$ and $M'$ be a pair of $\X5$-interpretations. Then $M \leq M'$ iff, for any atom $p \in \At$, the following three conditions hold: 1. $M(p)=0$ iff $M'(p)=0$; 2. If $M(p) >0$, then $M(p) \leq M'(p)$; 3. If $M(p) <0$, then $M'(p) \leq M(p)$. A total interpretation $M={\langle T,T \rangle}$ is an equilibrium model of a theory $\Gamma$ iff $M(\varphi)=2$ for all $\varphi \in \Gamma$ and there is no $M' < M$ such that $M'(\varphi)=2$ for all $\varphi \in \Gamma$. It follows from Theorem \[th:corresp\] and the definition of the $\leq$ relation. The truth tables derived from Definition \[def:valuation\] are depicted in Figure \[fig:tables\], including the tables for derived operators ‘$\neg$’, ‘$\leftrightarrow$’ and ‘$\Leftrightarrow$’. Note that the table for $\neg \varphi=(\varphi \to \bot)$ is just the first column of the table for ‘$\to$’ since the evaluation of ‘$\bot$’ is fixed to $-2$. 
$$\begin{array}{c@{\hspace{5pt}}c} \begin{array}{r|rrrrr} \wedge & -2 & -1 & 0 & 1 & 2\\ \hline -2 & -2 & -2 & -2 & -2 & -2 \\ -1 & -2 & -1 & -1 & -1 & -1 \\ 0 & -2 & -1 & 0 & 0 & 0 \\ 1 & -2 & -1 & 0 & 1 & 1 \\ 2 & -2 & -1 & 0 & 1 & 2 \end{array} & \begin{array}{r|rrrrr} \vee & -2 & -1 & 0 & 1 & 2\\ \hline -2 & -2 & -1 & 0 & 1 & 2 \\ -1 & -1 & -1 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 & 1 & 2 \\ 1 & 1 & 1 & 1 & 1 & 2 \\ 2 & 2 & 2 & 2 & 2 & 2 \end{array} \\ \\ \begin{array}{r|rrrrr} \to & -2 & -1 & 0 & 1 & 2\\ \hline -2 & 2 & 2 & 2 & 2 & 2 \\ -1 & 2 & 2 & 2 & 2 & 2 \\ 0 & 2 & 2 & 2 & 2 & 2 \\ 1 & -2 & -1 & 0 & 2 & 2 \\ 2 & -2 & -1 & 0 & 1 & 2 \end{array} & \begin{array}{c@{\hspace{20pt}}c} \begin{array}{r|r} \varphi & \sneg \varphi\\ \hline -2 & 2 \\ -1 & 1 \\ 0 & 0 \\ 1 & -1 \\ 2 & -2 \end{array} & \begin{array}{r|r} \varphi & \neg \varphi\\ \hline -2 & 2 \\ -1 & 2 \\ 0 & 2 \\ 1 & -2 \\ 2 & -2 \end{array} \end{array} \\ \\ \begin{array}{r|rrrrr} \leftrightarrow & -2 & -1 & 0 & 1 & 2\\ \hline -2 & 2 & 2 & 2 & -2 & -2 \\ -1 & 2 & 2 & 2 & -1 & -1 \\ 0 & 2 & 2 & 2 & 0 & 0 \\ 1 & -2 & -1 & 0 & 2 & 1 \\ 2 & -2 & -1 & 0 & 1 & 2 \end{array} & \begin{array}{r|rrrrr} \Leftrightarrow & -2 & -1 & 0 & 1 & 2\\ \hline -2 & 2 & 1 & 0 & -2 & -2 \\ -1 & 1 & 2 & 0 & -1 & -2 \\ 0 & 0 & 0 & 2 & 0 & 0 \\ 1 & -2 & -1 & 0 & 2 & 1 \\ 2 & -2 & -2 & 0 & 1 & 2 \end{array} \end{array}$$ It is easy to check, for instance, that the following implication is valid: $$\begin{aligned} \sneg \varphi \to \neg \varphi \label{f:coher}\end{aligned}$$ expressing that explicit negation is stronger than default negation[^5]. Moreover, default negation is definable in terms of implication and explicit negation (without resorting to $\bot$) since, with some effort, it can be checked that the table for $\neg \varphi$ can be equally obtained through the expression: $$\begin{aligned} \sneg ((\varphi \to \ \sneg \varphi) \to \ \sneg (\varphi \to \ \sneg \varphi))\end{aligned}$$ An important remark regarding equivalence is that to express that this (or any) pair of formulas are equivalent, double implication does not suffice. This is because, as we can see in the tables, $M(\varphi \leftrightarrow \psi)=2$ does not imply that $M(\varphi)=M(\psi)$. To get such a correspondence, we must resort instead to the stronger ‘$\Leftrightarrow$’ for which $M(\varphi \Leftrightarrow \psi)=2$ holds if and only if $M(\varphi)=M(\psi)$. This lack of the ‘$\leftrightarrow$’ equivalence (we call it *weak* equivalence) has an important consequence: it does not define a congruence relation since $\models \alpha \leftrightarrow \beta$ no longer implies that we can freely replace subformula $\alpha$ by $\beta$ in any arbitrary context: it may be the case that $\not\models \sneg \alpha \leftrightarrow \sneg \beta$. For instance, we can easily check that $\models p \wedge \neg p \leftrightarrow \bot$ because $\min(M(p),M(\neg p)) \leq 0$ and $M(\bot)=-2$, so $M(p \wedge \neg p \leftrightarrow \bot)=2$ for any $M$. However, we cannot replace $p \wedge \neg p$ by $\bot$ in any context. Take the program $\Pi$ consisting of the unique rule $$\begin{aligned} \sneg (p \wedge \neg p) \label{f:pnotp}\end{aligned}$$ with empty body. Interpretation $T=\{\sneg p\}$ is an answer set because $\Pi^T=\{\sneg (p \wedge \top)\}$ has $\{\sneg p\}$ as minimal model (in fact, it is the unique answer set) but if we replace $p \wedge \neg p$ by $\bot$ in $\Pi$ we get the trivial program $\{\sneg \bot\}$ whose unique answer set is $\emptyset$. 
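All of these observations can be verified mechanically. The following sketch transcribes the truth tables of Figure \[fig:tables\] into Python (the encoding and function names are ours) and checks the validity of $\sneg \varphi \to \neg \varphi$, the definability of default negation shown above, and the fact that $p \wedge \neg p \leftrightarrow \bot$ is valid while prefixing both sides with ‘$\sneg$’ breaks the equivalence.

```python
VALUES = (-2, -1, 0, 1, 2)

# 5-valued X5 connectives, transcribed from the truth tables / valuation above.
def imp(a, b): return 2 if a <= max(b, 0) else b
def neg(a):    return 2 if a <= 0 else -2          # derived: a -> bot
def sneg(a):   return -a
def conj(a, b): return min(a, b)
def disj(a, b): return max(a, b)
def iff(a, b):  return conj(imp(a, b), imp(b, a))

# ~phi -> not phi is valid (designated value 2 under every assignment).
assert all(imp(sneg(a), neg(a)) == 2 for a in VALUES)

# Default negation is definable from -> and ~ alone:
#   not phi  ==  ~((phi -> ~phi) -> ~(phi -> ~phi))
def neg_defined(a):
    b = imp(a, sneg(a))
    return sneg(imp(b, sneg(b)))
assert all(neg_defined(a) == neg(a) for a in VALUES)

# p /\ not p <-> bot is valid, but prefixing both sides with ~ breaks it,
# so <-> is not a congruence (the substitution counterexample in the text).
assert all(iff(conj(p, neg(p)), -2) == 2 for p in VALUES)
assert any(iff(sneg(conj(p, neg(p))), sneg(-2)) != 2 for p in VALUES)  # value 0 at p = 0
print("all checks passed")
```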
Although weak equivalence does not guarantee arbitrary replacements, it can be used to replace formulas in a theory, as stated below: \[prop:replace\] Let $\alpha$, $\beta$ be a pair of formulas such that $\models \alpha \leftrightarrow \beta$. Then, $M \models \Gamma \cup \{\alpha\}$ iff $M \models \Gamma \cup \{\beta\}$ for any theory $\Gamma$ and $\X5$-interpretation $M$. As we mentioned before, for obtaining a congruence relation we can use validity of ‘$\Leftrightarrow$’ instead, which guarantees the following substitution theorem. \[th:subst\] Let $\alpha$, $\beta$ be a pair of formulas satisfying $\models \alpha \Leftrightarrow \beta$. Then, for any formula $\varphi$, we also obtain $\models \varphi[\alpha/p] \Leftrightarrow \varphi[\beta/p]$. Still, there are some cases in which $\leftrightarrow$ can be used for substitution, provided that the replaced formulas are not in the scope of explicit negation. \[th:replace\] Let $\varphi$ be a formula where atom $p$ only occurs outside the scope of explicit negation, and let $\alpha, \beta$ be two formulas satisfying $\models \alpha \leftrightarrow \beta$. Then, $\models \varphi[\alpha/p] \leftrightarrow \varphi[\beta/p]$. An important property of ASP related to HT equivalence is *strong equivalence*. We say that two programs (resp. theories) $\Gamma$ and $\Gamma'$ are *strongly equivalent* iff $\Gamma \cup \Delta$ and $\Gamma' \cup \Delta$ have the same answer sets (resp. equilibrium models), for any additional program (resp. theory) $\Delta$. When we talk about strong equivalence of formulas $\alpha$ and $\beta$ we assume they correspond to the singleton theories $\{\alpha\}$ and $\{\beta\}$. As shown in [@LPV01] (for the case without explicit negation), two programs or theories are strongly equivalent if and only if they are HT equivalent. Since the ‘$\leftrightarrow$’ relation in HT is congruent, there is no difference between strong equivalence (replacing formulas in a theory) and substitution (replacing subformulas in a formula). However, as explained in [@Ortiz07], once congruence is lost, we can further refine strong equivalence in the following way. We say that two formulas $\alpha$ and $\beta$ are *strongly equivalent on substitutions* if $\Delta \cup \{ \varphi[\alpha/p] \} $ and $\Delta \cup \{ \varphi[\beta/p] \}$ have the same equilibrium models, for any formula $\varphi$ and theory $\Delta$. The proof of the next lemma can be obtained following similar steps to the proof of the main theorem in [@LPV01], replacing atoms in that case by explicit literals in ours. \[lem:strong.equivalence.aux\] Let $\alpha$ and $\beta$ be two formulas and ${\langle H,T \rangle}$ be an interpretation such that ${\langle H,T \rangle} \models\alpha$ but ${\langle H,T \rangle} \not\models\beta$. Then, there is a finite theory $\Delta$ such that ${\langle T,T \rangle} $ is an equilibrium model of one of $\Delta \cup {\ensuremath{\{\beta\}}}$, $\Delta \cup {\ensuremath{\{ \alpha \}}}$ but not of both. \[thm:strong.equivalence\] Formulas $\alpha$ and $\beta$ are strongly equivalent iff $\models \alpha \leftrightarrow \beta$. \[th:substeq\] Formulas $\alpha$ and $\beta$ are strongly equivalent on substitutions iff $\models \alpha \Leftrightarrow \beta$. The following set of valid equivalences allows us to reduce any nested expression with explicit negation to an *explicit negation normal form* (NNF) where $\sneg$  is only applied on atoms. 
$$\begin{aligned} \sneg \top & \Leftrightarrow & \bot \label{f:nnf1}\\ \sneg \bot & \Leftrightarrow & \top \label{f:nnf2}\\ \sneg (\varphi \wedge \psi) & \Leftrightarrow & \sneg \varphi \,\,\vee \sneg \psi \label{f:nnf3}\\ \sneg (\varphi \vee \psi) & \Leftrightarrow & \sneg \varphi \,\,\wedge \sneg \psi \label{f:nnf4}\\ \sneg \ \sneg \varphi & \Leftrightarrow & \varphi \label{f:nnf5}\\ \sneg \neg \varphi & \Leftrightarrow & \neg \neg \varphi \label{f:nnf6}\end{aligned}$$ For instance, we can reduce the nested expression  to NNF as follows: $$\begin{array}{rcll} \sneg (p \wedge \neg p) & \Leftrightarrow & \sneg p \vee \sneg \neg p & \mbox{ by } \eqref{f:nnf3}\\ & \Leftrightarrow & \sneg p \vee \neg \neg p & \mbox{ by } \eqref{f:nnf6} \end{array}$$ Programs in NNF correspond to the original syntax in [@LTT99]. That paper provided several transformations that allowed reducing any program in NNF to a regular program. These transformations included commutativity and associativity of conjunction and disjunction (which are obviously satisfied in $\X5$) plus the equivalences in the following proposition. The following formulas are $\X5$ tautologies: $$\begin{aligned} \varphi \wedge (\psi \vee \gamma) \Leftrightarrow (\varphi \wedge \psi) \vee (\varphi \wedge \gamma) & & \varphi \vee (\psi \wedge \gamma) \Leftrightarrow (\varphi \vee \psi) \wedge (\varphi \vee \gamma) \label{f:distrib} \\ \varphi \wedge \bot \Leftrightarrow \bot & & \varphi \vee \top \Leftrightarrow \top \label{f:anhil} \\ \varphi \wedge \top \Leftrightarrow \varphi & & \varphi \vee \bot \Leftrightarrow \varphi \label{f:neut} \\ \neg (\varphi \wedge \psi) \Leftrightarrow \neg \varphi \vee \neg \psi & & \neg (\varphi \vee \psi) \Leftrightarrow \neg \varphi \wedge \neg \psi \label{f:demorgan} \\ \neg \top \Leftrightarrow \bot & & \neg \bot \Leftrightarrow \top \label{f:notconst}\end{aligned}$$ $$\begin{aligned} \neg \neg \neg \varphi & \Leftrightarrow & \neg \varphi \label{f:triplenot}\\ \varphi \to \psi \wedge \gamma & \Leftrightarrow & (\varphi \to \psi) \wedge (\varphi \to \gamma) \label{f:andhead}\\ \varphi \vee \psi \to \gamma & \Leftrightarrow & (\varphi \to \gamma) \wedge (\psi \to \gamma) \label{f:orbody}\\ \varphi \wedge \neg \neg \psi \to \gamma & \Leftrightarrow & \varphi \to \gamma \vee \neg \psi \label{f:notbody}\\ \varphi \to \gamma \vee \neg \neg \psi & \Leftrightarrow & \varphi \wedge \neg \psi \to \gamma \label{f:nothead}\end{aligned}$$ and correspond to the transformations in [@LTT99]. For instance, as we saw, was equivalent to $\sneg p \vee \neg \neg p$ but this can be further transformed into the regular rule $\neg p \to \sneg p$ commonly used to assign falsity of $p$ by default. 
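A direct implementation of this NNF rewriting is straightforward. The sketch below (formulas are encoded as nested tuples, a convention of ours) pushes ‘$\sneg$’ inwards with the six equivalences above and reproduces the reduction of $\sneg(p \wedge \neg p)$ to $\sneg p \vee \neg\neg p$.

```python
# Push explicit negation to the atoms using the strong equivalences above:
# ~top => bot, ~bot => top, ~(a & b) => ~a | ~b, ~(a | b) => ~a & ~b,
# ~~a => a, ~not a => not not a. Formulas are nested tuples (our encoding):
# ("atom", p), ("top",), ("bot",), ("and", a, b), ("or", a, b), ("not", a), ("neg", a).

DUAL = {"and": "or", "or": "and", "top": "bot", "bot": "top"}

def nnf(f):
    tag = f[0]
    if tag in ("atom", "top", "bot"):
        return f
    if tag in ("and", "or"):
        return (tag, nnf(f[1]), nnf(f[2]))
    if tag == "not":
        return ("not", nnf(f[1]))
    # tag == "neg": distribute over the outermost connective of the argument
    g = f[1]
    if g[0] in ("top", "bot"):
        return (DUAL[g[0]],)
    if g[0] in ("and", "or"):
        return (DUAL[g[0]], nnf(("neg", g[1])), nnf(("neg", g[2])))
    if g[0] == "neg":                      # ~~a => a
        return nnf(g[1])
    if g[0] == "not":                      # ~not a => not not a
        return ("not", ("not", nnf(g[1])))
    return ("neg", nnf(g))                 # g is an atom: already in NNF

example = ("neg", ("and", ("atom", "p"), ("not", ("atom", "p"))))
print(nnf(example))
# ('or', ('neg', ('atom', 'p')), ('not', ('not', ('atom', 'p'))))
```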
Rule can be transformed as follows: $$\begin{array}{rcl@{\ \ \ }l} \eqref{f:bird} & \Leftrightarrow & \neg {\mathit{bird}} \vee \neg\!\!\sneg {\mathit{flies}} \to \ \sneg ({\mathit{bird}} \wedge \sneg {\mathit{flies}}) & \mbox{by } \eqref{f:demorgan}\\ & \Leftrightarrow & \neg {\mathit{bird}} \vee \neg\!\!\sneg {\mathit{flies}} \to \ \sneg {\mathit{bird}} \vee \sneg \ \sneg {\mathit{flies}} & \mbox{by } \eqref{f:nnf3} \\ & \Leftrightarrow & \neg {\mathit{bird}} \vee \neg\!\!\sneg {\mathit{flies}} \to \ \sneg {\mathit{bird}} \vee {\mathit{flies}} & \mbox{by } \eqref{f:nnf5} \\ & \Leftrightarrow & (\neg {\mathit{bird}} \to \ \sneg {\mathit{bird}} \vee {\mathit{flies}})\\ & & \wedge (\neg\!\!\sneg {\mathit{flies}} \to \ \sneg {\mathit{bird}} \vee {\mathit{flies}}) & \mbox{by } \eqref{f:orbody} \\ \end{array}$$ and the last step is a conjunction of two regular rules as in standard ASP solvers. Reduction to NNF is also possible on arbitrary formulas. For that purpose, we can combine - with the following valid (weak) equivalence: $$\begin{aligned} \sneg (\varphi \to \psi) & \leftrightarrow & \neg \neg \varphi \wedge \sneg \psi \label{f:nnf7}\end{aligned}$$ However, the reduction must be done with some care, because this last equivalence cannot be shifted to $\Leftrightarrow$. Indeed, the left and right expressions have different valuations when $M(\varphi)=M(\psi)=1$, obtaining $M(\sneg (\varphi \to \psi))=-2 \neq -1 = M(\neg \neg \varphi \wedge \sneg \psi)$. Fortunately, Theorem \[th:replace\] allows us to apply this last equivalence from the outermost occurrence of $\sim$ and then recursively combine it with - until $\sim$ is only applied to atoms. For any formula $\varphi$ there exists a formula $\psi$ in NNF such that $\models \varphi \leftrightarrow \psi$. For instance, we can reduce the following formula into NNF as follows: $$\begin{aligned} \sim (a \to \ \sneg b \wedge (c \to d)) & \leftrightarrow & \neg \neg a \wedge \sim( \sneg b \wedge (c \to d)) \\ & \leftrightarrow & \neg \neg a \wedge (\sneg \ \sneg b \vee \sneg (c \to d)) \\ & \leftrightarrow & \neg \neg a \wedge (b \vee \neg \neg c \wedge \sneg d)\end{aligned}$$ However, we cannot apply it by making a replacement in the scope of explicit negation. A clear counterexample is the formula $\sneg \ \sneg (p \to q)$ which, due to , is strongly equivalent to $p \to q$, but applying it inside would incorrectly lead to the nested expression $\sneg (\neg \neg p \wedge \sneg q)$ that can be transformed into the strongly equivalent expression $\neg p \vee q$, different from $p \to q$ in ASP. Related work {#sec:related} ============ As explained in the introduction, this work is obviously related to the characterisation of ‘$\sim$’ as Nelson’s *strong negation* [@Nel49] for intermediate logics. In particular, the addition of strong negation to HT produces the five-valued logic $\N5$ already present in the original definition of Equilibrium Logic [@Pearce96]. In fact, the interpretations and the truth values we have chosen for $\X5$ coincide with those for $\N5$, and their evaluation of (non-derived) connectives $\top, \wedge, \vee$ and $\to$ from Figure \[fig:tables\] also coincides in both logics, except for one difference in the table of implication: the value for $M(\varphi)=1$ and $M(\psi)=-2$ changes from $-2$ to $-1$ in $\N5$. This change and its effect on the derived operators are shown in Figure \[fig:tablesn5\], where the differing values are framed in rectangles. 
$$\begin{array}{c@{\hspace{20pt}}c} \begin{array}{r|rrrrr} \to & -2 & -1 & 0 & 1 & 2\\ \hline -2 & 2 & 2 & 2 & 2 & 2 \\ -1 & 2 & 2 & 2 & 2 & 2 \\ 0 & 2 & 2 & 2 & 2 & 2 \\ 1 & \minusone & -1 & 0 & 2 & 2 \\ 2 & -2 & -1 & 0 & 1 & 2 \end{array} & \begin{array}{c@{\hspace{20pt}}c} \begin{array}{r|r} \varphi & \neg \varphi\\ \hline -2 & 2 \\ -1 & 2 \\ 0 & 2 \\ 1 & \minusone \\ 2 & -2 \end{array} \end{array} \\ \\ \begin{array}{r|rrrrr} \leftrightarrow & -2 & -1 & 0 & 1 & 2\\ \hline -2 & 2 & 2 & 2 & \minusone & -2 \\ -1 & 2 & 2 & 2 & -1 & -1 \\ 0 & 2 & 2 & 2 & 0 & 0 \\ 1 & \minusone & -1 & 0 & 2 & 1 \\ 2 & -2 & -1 & 0 & 1 & 2 \end{array} & \begin{array}{r|rrrrr} \Leftrightarrow & -2 & -1 & 0 & 1 & 2\\ \hline -2 & 2 & 1 & 0 & \minusone & -2 \\ -1 & 1 & 2 & 0 & -1 & -2 \\ 0 & 0 & 0 & 2 & 0 & 0 \\ 1 & \minusone & -1 & 0 & 2 & 1 \\ 2 & -2 & -2 & 0 & 1 & 2 \end{array} \end{array}$$ As a result, $\N5$ ceases to satisfy and whose role in the reduction to NNF is respectively replaced by the $\N5$-valid weak equivalences: $$\begin{aligned} \sneg \neg \varphi & \leftrightarrow & \varphi \label{f:N1}\\ \sneg (\varphi \to \psi) & \leftrightarrow & \varphi \wedge \sneg \psi \label{f:N2}\end{aligned}$$ The difference between and also reveals the effect on falsification of implication in both logics. While ${\langle H,T \rangle} \falsif \varphi \to \psi$ requires ${\langle T,T \rangle} \models \varphi$ in $\X5$, this is replaced by condition ${\langle H,T \rangle}\models \varphi$ in $\N5$. Curiously, although these two logics provide a different behaviour for $\sneg$  as strong versus explicit negation, they actually have the same evaluation for that connective, while their real technical difference lies on falsity of implication. The reason why $\N5$ does not capture the extended reduct for nested expressions proposed in this paper is that is not valid in that logic. This is because, when $M(\varphi)=1$, we get $M(\neg \varphi)=-1 \neq -2 = M(\neg \neg \neg \varphi)$. It is still possible to define $\N5$ operators in $\X5$ as follows: $$\begin{aligned} \varphi \stackrel{\N5}{\to} \psi & {\mathbin{\stackrel{\mathrm{def}}{=}}}& \varphi \to \ \sneg \varphi \vee \psi \\ \stackrel{\N5}{\neg} \varphi & {\mathbin{\stackrel{\mathrm{def}}{=}}}& \varphi \to \ \sneg \varphi\end{aligned}$$ using here the $\X5$ interpretation for implication. Analogously, we can also define the $\X5$ operators in $\N5$ in the following way: $$\begin{aligned} \varphi \stackrel{\X5}{\to} \psi & {\mathbin{\stackrel{\mathrm{def}}{=}}}& (\varphi \to \psi) \wedge (\sneg \psi \to \ \neg \neg \neg \varphi) \\ \stackrel{\X5}{\neg} \varphi & {\mathbin{\stackrel{\mathrm{def}}{=}}}& \neg \neg \neg \varphi\end{aligned}$$ assuming that we interpret implication and $\neg$ under $\N5$ instead. An interesting connection between both variants is that the addition of the excluded middle axiom schemata $\varphi \vee \neg \varphi$ imposes the restriction of total models ${\langle T,T \rangle}$ both in $\X5$ and in $\N5$. This means that all atoms and formulas are evaluated in the set $\{-2,0,2\}$, for which the truth tables coincide in these two logics and actually collapse to classical logic with strong negation [@vakarelov1977notes] introduced in Section \[sec:nested\]. This coincidence is important since equilibrium models (and so, answer sets) are total models. 
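These mutual definitions can again be checked by brute force over the five truth values. The sketch below encodes the two implication tables, which differ only at $M(\varphi)=1$ and $M(\psi)=-2$, and verifies the four identities above; the function names are ours.

```python
from itertools import product

VALUES = (-2, -1, 0, 1, 2)

# X5 and N5 implications (they differ only in the entry for a = 1, b = -2),
# with both negations derived from them as a -> bot and a -> -a respectively.
def x5_imp(a, b): return 2 if a <= max(b, 0) else b
def n5_imp(a, b): return 2 if a <= max(b, 0) else (-1 if (a, b) == (1, -2) else b)
def x5_neg(a):    return x5_imp(a, -2)
def n5_neg(a):    return n5_imp(a, -2)
def sneg(a):      return -a

for a, b in product(VALUES, repeat=2):
    # N5 operators expressed inside X5:
    assert n5_imp(a, b) == x5_imp(a, max(sneg(a), b))     # a ->_N5 b := a -> (~a v b)
    assert n5_neg(a) == x5_imp(a, sneg(a))                # not_N5 a := a -> ~a
    # X5 operators expressed inside N5:
    n3 = n5_neg(n5_neg(n5_neg(a)))                        # not not not a, in N5
    assert x5_imp(a, b) == min(n5_imp(a, b), n5_imp(sneg(b), n3))
    assert x5_neg(a) == n3
print("interdefinability checks passed")
```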
To conclude the section on related work, another possibility for interpreting a second negation ‘$\sneg$’ inside intuitionistic logic was provided by [@FH96] using a *classical* negation interpretation. Although the idea seems closer to Gelfond and Lifschitz’ original terminology for a second negation, it actually produces undesired effects from an ASP point of view. Classical negation in HT means keeping only the satisfaction relation ‘$\models$’ in Definition \[def:satfals\] (falsification ‘$\falsif$’ is not needed) but replacing the condition for ‘$\sneg$’ so that ${\langle H,T \rangle} \models \sneg \varphi$ if ${\langle H,T \rangle} \not\models \varphi$. One important effect of this change is that HT with classical negation ceases to satisfy the persistence property (Theorem \[th:persistence\]). But perhaps a more important problem from the ASP perspective is that $\neg p$ implies $\sneg p$ for any atom $p$. Thus, the rule $\neg p \to \sneg p$ becomes a tautology in this context, whereas it is normally used in ASP to conclude that $p$ is explicitly false by default. Conclusions {#sec:conc} =========== We have introduced a variant of constructive negation in Equilibrium Logic (and its monotonic basis, HT) that we called *explicit negation*. This variant shares some similarities with the previous formalisation based on Nelson’s strong negation, but changes the interpretation of the falsity of implication. We have also introduced a reduct-based definition of answer sets for programs with nested expressions extended with explicit negation, proving the correspondence with equilibrium models. For future work, we will study a possible axiomatisation. To this aim, it is interesting to observe that the formulas - (in their weak equivalence versions) plus  and  actually correspond to the Vorob’ev axiomatisation [@Vo52a; @Vo52b] of strong negation in intuitionistic logic. As we saw, the role of and in $\N5$ is replaced in $\X5$ by and , so an interesting question is whether this replacement may become a complete axiomatisation for explicit negation in $\X5$ or intuitionistic logic in the general case. We also plan to explore the effect of explicit negation on extensions of equilibrium logic, revisiting the use of strong negation in paraconsistent [@OdintsovP05] and partial [@COP06] equilibrium logic, or considering its combination with partial functions [@Cab11; @CabalarCPV14], and temporal [@ACD+13] or epistemic [@CerroHS15; @CFF19] reasoning. \[lastpage\] [^1]: This work was partially supported by MINECO, Spain, grant , Xunta de Galicia, Spain (GPC ED431B 2019/03 and 2016-2019 ED431G/01, CITIC). The third author is funded by the Centre International de Mathématiques et d’Informatique de Toulouse (CIMI) through contract ANR-11-LABEX-0040-CIMI within the programme ANR-11-IDEX-0002-02 and the Alexander von Humboldt Foundation. [^2]: In fact, the construct “$\sneg {\mathit{train}}$” is normally treated in ASP as a new atom ${\mathit{train}}'$ and an implicit constraint ${\mathit{train}} \wedge {\mathit{train}}' \to \bot$ is used to guarantee that both atoms cannot be true simultaneously. [^3]: To be precise, [@LTT99] used a different notation and names for operators: $\wedge$, $\vee$ and $\neg$ were respectively denoted as comma, semicolon and ‘not’ in [@LTT99], whereas explicit negation $\sneg$ was denoted as $\neg$ and called *classical negation*.
[^4]: We also provide a translation for implications $\alpha \to \beta$ but this is not strictly necessary: for computing the reduct, they can be previously replaced by $\neg \alpha \vee \beta$. [^5]: This property is called the *coherence* principle in [@Per92].
{ "pile_set_name": "ArXiv" }
--- abstract: 'The X-ray binary population of the SMC is very different from that of the Milky Way, consisting, with one exception, entirely of transient pulsating Be/neutron star binaries. We have now been monitoring these SMC X-ray pulsars for over 10 years using the Rossi X-ray Timing Explorer with observations typically every week. The RXTE observations have been complemented with surveys made using the Chandra observatory. The RXTE observations are non-imaging but enable detailed studies of pulsing sources. In contrast, Chandra observations can provide precise source locations and detections of sources at lower flux levels, but do not provide the same timing information or the extended duration light curves that RXTE observations do. We summarize the results of these monitoring programs which provide insights into both the differences between the SMC and the Milky Way, and the details of the accretion processes in X-ray pulsars.' --- Introduction ============ Mass transfer in high-mass X-ray binaries (HMXBs) may occur in 3 different ways from the OB star component. (i) The mass-donor primary star may fill its Roche lobe. These systems are very luminous ($\sim$10$^{38}$ ) but are very rare. (ii) If the system contains a supergiant primary with an extensive stellar wind then accretion from the wind may take place. These systems have modest luminosity ($\sim$10$^{36}$ - 10$^{37}$ ) but are rather more common. (iii) For systems containing a Be star accretion takes place from the circumstellar envelope. These have a wide range of luminosities (10$^{34}$ - 10$^{39}$ ) and are very common, but are transient. In most HMXBs the accreting object is a highly magnetized neutron star. Accretion is funneled onto the magnetic poles of the neutron star and we see pulsations at the neutron star spin period. If the pulse periods of HMXBs are plotted against their corresponding orbital periods then it is seen that sources divide into three groups in this diagram which correspond to the three modes of mass transfer (Corbet 1986). In particular there is a strong correlation between pulse period and orbital period for the Be star systems. The positions of sources in this diagram are thought to depend on the accretion torques experienced by the neutron stars and hence on the circumstellar environments around the primary stars. These classes of HMXB are well-studied in the Galaxy and we wish to know how the HMXB populations compare in other galaxies. Because of their proximity, the SMC and LMC are the easiest external galaxies to investigate. Initial estimates of the HMXB population of the SMC were based on the mass of the SMC. The SMC is a few percent of the mass of the Galaxy and about 65 Galactic X-ray pulsars are known. Therefore, 1 or 2 X-ray pulsars would be expected in the SMC. The larger fraction of Be stars in the SMC increased the estimate to 3. The first X-ray pulsar discovered in the SMC was SMC X-1 in the 1970s. Its luminosity can reach 10$^{39}$ and it has a 0.71s pulse period and a 3.89 day orbital period. The mass-donating companion is a Roche-lobe filling B0I star. In 1978 two transients, SMC X-2 and SMC X-3, were found (Clark 1978). The three pulsars then known agreed with the simple prediction, although all three were surprisingly bright. RXTE Observations of the SMC ============================ RXTE was launched in 1995 and its primary instrument is the Proportional Counter Array (PCA). The RXTE PCA has a 2 FWZI, 1 FWHM field of view.
The PCA is non-imaging, but it has a large collecting area of up to 7,000 cm$^2$. The RXTE observing program is extremely flexible and almost all observations are time constrained. These include monitoring, phase constrained, and target of opportunity observations as well as observations coordinated with other observatories both in space and ground-based. Serendipitous RXTE PCA slew observations in 1997 showed a possible outburst from SMC X-3 (Marshall  1997). A follow-up pointed RXTE observation showed a complicated power spectrum with several harmonic, almost-harmonic, and non-harmonic peaks. Imaging ASCA observations were then made of this region and they showed the presence of two separate pulsars. However, neither of these pulsars coincided with the position of SMC X-3. A revised look at the RXTE power spectrum revealed three pulsars simultaneously active with periods of 46.6, 91.1, and 74.8 s (Corbet  1998). Since 1997 we have monitored one or more positions weekly using the RXTE PCA. The flexible observing program of RXTE has enabled us to carry out a regular monitoring program that would not have been possible with other satellites. The typical observation duration has been about 10,000 seconds. We use power spectra of the light curves to extract pulsed flux from any X-ray pulsars in the FOV. The sensitivity to pulsed flux is $\sim$10$^{36}$  at the distance of the SMC. From this program we have detected many transient sources and all identified optical counterparts have been found to be Be stars. The SMC HMXB pulsar population has now been found by ourselves and other investigators to be much larger than originally thought. Our naming convention for SMC pulsars is SXPx, where “x” is the pulse period, for [*SMC X-ray Pulsar*]{}. This convention is particularly useful for X-ray pulsars discovered with RXTE for which a precise position is not yet available. For detailed light curves and analyses see [@Laycock05] and [@Galache08]. In addition, we have recently been able to measure orbital parameters from Doppler modulation of the pulse period of SXP18.3 (Schurch  2008). ![HI Image of the SMC. Large circles = PCA FOV (FWHM and FWZI) at different monitoring positions. Small circles show locations of X-ray pulsars.[]{data-label="fig2"}](corbet_fig2.ps){width="3.0in"} ![The extended outburst from SXP 18.3. The top panel shows the amplitude of the pulsed flux. The two lower panels show two possible timing solutions. The middle panel shows the preferred solution with the orbital period fixed at the photometric period. (Schurch  2008).[]{data-label="fig3"}](corbet_fig3.ps){width="3.4in"} The Be pulsar spin period/orbital period correlation is believed to be related to the structure of the extended envelopes of Be stars. SMC and Milky Way Be stars have differences, for example, the SMC metallicity is far lower and the Be phenomenon is more common in the SMC. Is this reflected in the P$_s$/P$_{orb}$ relation? That is, are there significant differences between Be star envelopes in the SMC and the Galaxy? For a linear fit (to the log-log diagram) the intercept is related to Be star mass loss rates and the gradient is related to the radial structure of Be star envelopes. Currently 23 SMC Be X-ray pulsars now have measured orbital periods. The periods have been measured by several techniques. These include: X-ray flux monitoring with RXTE, pulse timing with RXTE (one system) and optical observations from MACHO and OGLE. In comparison, 24 Galactic Be X-ray pulsars now have measured orbital periods. 
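The comparison between the two galaxies amounts to a straight-line fit in the (log P$_{orb}$, log P$_s$) plane for each sample, followed by a check that the gradients and intercepts agree within their uncertainties. A minimal sketch of such a fit is shown below; the period arrays are illustrative placeholders only, not the measured SMC and Galactic catalogues.

```python
# Sketch: fit log10(Ps) = gradient * log10(Porb) + intercept for two samples
# and compare the fitted parameters. Period arrays are placeholders only.
import numpy as np

samples = {
    "SMC":    (np.array([18., 30., 60., 120., 200., 300.]),   # Porb placeholders
               np.array([3., 9., 40., 150., 400., 700.])),    # Ps placeholders
    "Galaxy": (np.array([12., 25., 50., 110., 180., 250.]),
               np.array([1.5, 6., 25., 100., 280., 500.])),
}

for label, (porb, ps) in samples.items():
    coeff, cov = np.polyfit(np.log10(porb), np.log10(ps), 1, cov=True)
    grad, intercept = coeff
    grad_err, int_err = np.sqrt(np.diag(cov))
    print(f"{label}: gradient = {grad:.2f} +/- {grad_err:.2f}, "
          f"intercept = {intercept:.2f} +/- {int_err:.2f}")
```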
We find that for the SMC and Galactic systems the intercepts are the same, the gradients are the same, and the scatter about the fits is the same. Thus, the metallicity difference between the two galaxies has no measurable effect on the spin period/orbital period relationship and the Be star envelopes in the SMC and the Galaxy are apparently similar. ![[*Left:*]{} A comparison of the P$_s$/P$_{orb}$ relationships for the SMC and the Galaxy. [*Right:*]{} The relationship between outburst density and orbital period proposed by [@Galache06].[]{data-label="fig4"}](corbet_fig4a.ps "fig:"){width="2.5in"} ![[*Left:*]{} A comparison of the P$_s$/P$_{orb}$ relationships for the SMC and the Galaxy. [*Right:*]{} The relationship between outburst density and orbital period proposed by [@Galache06].[]{data-label="fig4"}](corbet_fig4b.ps "fig:"){width="2.5in"} [@Galache06] proposes that the frequency of outbursts per orbit (X-ray “outburst density” or X$_{od}$) depends on the orbital period. Long-period systems are more likely to show an outburst at periastron. The reason for this correlation is not yet clear. Chandra SMC Wing Survey ======================= A possible connection between hydrogen column density (N$_H$) and HMXB location was proposed by [@Coe05]. To investigate this we undertook a survey of the SMC wing using Chandra. We observed 20 fields with $\sim$10ks observation time per field. 523 sources were detected ([@McGowan08]), but only $\sim$5 of these were HMXBs ([@McGowan07]), and the majority of sources are probably background AGNs. There thus appear to be fewer X-ray pulsars in the wing than in the bar. This is despite the fact that the most luminous SMC HMXB, SMC X-1, is located in the wing. ![[*Left:*]{} The location of SMC pulsars superimposed on an H I contour map. [*Right:*]{} Histogram of SMC H I distributions and corresponding histogram of H I columns at the location of the X-ray pulsars (Coe  2005).[]{data-label="fig5"}](corbet_fig5a.ps "fig:"){width="2.5in"} ![[*Left:*]{} The location of SMC pulsars superimposed on an H I contour map. [*Right:*]{} Histogram of SMC H I distributions and corresponding histogram of H I columns at the location of the X-ray pulsars (Coe  2005).[]{data-label="fig5"}](corbet_fig5b.ps "fig:"){width="2.5in"} RXTE Monitoring of the LMC ========================== The SMC appears to be very abundant in Be X-ray pulsars. This was only known after regular observations of the SMC started. The known LMC X-ray pulsar population is more modest. There is one Roche lobe overflow source, and a few Be systems. To investigate the LMC population in more detail we undertook an RXTE monitoring program similar to the one used for the SMC. However, the angular size of the LMC is larger so we restricted the program to monitoring one position that was already known to contain several X-ray sources. We analyzed data from our one-year monitoring program, together with archival data from other programs (Townsend et al., in preparation). In the monitoring region 4 of the 5 known X-ray pulsars were detected. However, no new X-ray pulsars were discovered. This implies that the X-ray pulsar content of the LMC is more like that of the Galaxy than the SMC. ![The PCA monitoring position for the LMC.
Known pulsar positions are marked.[]{data-label="fig6"}](corbet_fig6.ps){width="2.5in"} Conclusion ========== The current census of SMC X-ray pulsars is: 1 supergiant Roche lobe filler (SMC X-1); $\sim$50 transients (likely all Be star systems); 1 possible Crab-like pulsar (P = 0.087s) from ASCA (Yokogawa & Koyama 2000); 1 Anomalous X-ray Pulsar (AXP) candidate (P = 8.02s) from Chandra and XMM; no supergiant wind accretion systems and no low-mass X-ray binaries. Supergiant wind systems should easily be detectable at our $\sim$10$^{36}$ pulsed flux sensitivity. An obvious question is: why are there so many SMC X-ray pulsars? The current star formation rate in the SMC is reported not to be extremely high. The lifetime of HMXBs is short, which implies an enhanced star formation rate in the recent past. However, supergiant wind systems, which have even shorter lifetimes than Be star systems, have not been found. Models of historic star formation rates in the SMC and LMC must be compatible with the observed X-ray binary populations, and they must also account for the differences between the SMC and LMC. There are also similarities between the SMC and Galactic pulsar populations. The SMC and Galactic Be star systems have identical (within errors) P$_s$/P$_{orb}$ relationships. The LMC X-ray pulsar population also appears to be more similar to that of the Galaxy. The large and growing SMC X-ray pulsar database has considerable potential for understanding the astrophysics of accretion processes. It facilitates comparative studies, such as pulse profile morphology, as a function of luminosity. Alternatively, luminosity effects can be removed and we can examine the effects of other parameters such as magnetic field strength. The SMC is nearby and optical counterparts can be observed with modest-sized telescopes. In particular, MACHO and OGLE lightcurves exist for many counterparts (e.g. Coe  2008, McGowan  2008b). The overall X-ray pulsar properties can tell us about the evolutionary similarities and differences of a very nearby galaxy compared to our own.
1978, *ApJ*, 221, L37
2005, *MNRAS*, 356, 502
2008, *IAU Symposium*, 256
1986, *MNRAS*, 220, 1047
1998, *IAUC*, 6803
2006, PhD thesis, University of Southampton
2008, *ApJS*, 177, 189
2005, *ApJS*, 161, 96
1997, *IAUC*, 6777
2007, *MNRAS*, 376, 759
2008a, *MNRAS*, 383, 330
2008b, *MNRAS*, 384, 821
2008, *MNRAS*, in press
2000, *IAUC*, 7361
{ "pile_set_name": "ArXiv" }
--- abstract: 'Recent results for the coexistence of ferromagnetism and unconventional superconductivity with spin-triplet Cooper pairing are reviewed on the basis of the quasi-phenomenological Ginzburg-Landau theory. New results are presented for the properties of phases and phase transitions in such ferromagnetic superconductors. The superconductivity, in particular the mixed phase of coexistence of ferromagnetism and unconventional superconductivity, is triggered by the spontaneous magnetization. The mixed phase is stable whereas the other superconducting phases that usually exist in unconventional superconductors are either unstable, or, for particular values of the parameters of the theory, some of these phases are metastable at relatively low temperatures in a quite narrow domain of the phase diagram. The phase transition from the normal phase to the phase of coexistence is of first order while the phase transition from the ferromagnetic phase to the coexistence phase can be either of first or second order depending on the concrete substance. The Cooper pair and crystal anisotropies are relevant to a more precise outline of the phase diagram shape and reduce the degeneration of the ground states of the system but they do not drastically influence the phase stability domains and the thermodynamic properties of the respective phases. The results are discussed in view of application to metallic ferromagnets such as UGe$_2$, ZrZn$_2$, URhGe.' --- [**Phases and phase transitions in spin-triplet ferromagnetic superconductors**]{} [**D. V. Shopova and D. I. Uzunov$^{\ast, \dag}$**]{} [*CPCM Laboratory, G. Nadjakov Institute of Solid State Physics,\ Bulgarian Academy of Sciences, BG-1784 Sofia, Bulgaria.*]{}\ $^{\ast}$ Also, Max-Planck-Institut für Physik komplexer Systeme, Nöthnitzer Str. 38, 01187 Dresden, Germany. $^{\dag}$ Corresponding author: uzun@issp.bas.bg [**Key words**]{}: superconductivity, ferromagnetism, phase diagram,\ order parameter profile. [**PACS**]{}: 74.20.De, 74.20.Rp [**1. Introduction**]{} [**1.1. Notes about unconventional superconductivity**]{} The phenomenon of unconventional Cooper pairing of fermions, i.e. the formation of Cooper pairs with nonzero angular momentum, was theoretically predicted [@Pitaevskii:1959] in 1959 as a mechanism of superfluidity of Fermi liquids. In 1972 the same phenomenon, unconventional superfluidity due to a $p$-wave (spin-triplet) Cooper pairing of $^3$He atoms, was experimentally discovered in the mK range of temperatures; for details and theoretical description, see Refs. [@Leggett:1975; @Vollhardt:1990; @Volovik:2003]. Note that, in contrast to the standard $s$-wave pairing in usual (conventional) superconductors, where the electron pairs are formed by an attractive electron-electron interaction due to a virtual phonon exchange, the widely accepted mechanism of the Cooper pairing in superfluid $^3$He is based on an attractive interaction between the fermions ($^3$He atoms) due to a virtual exchange of spin fluctuations. Certain spin fluctuation mechanisms of unconventional Cooper pairing of electrons have also been assumed for the heavy fermion superconductors discovered in 1979 (see, e.g., Refs. [@Stewart:1984; @Sigrist:1991; @Mineev:1999]) as well as for some classes of high-temperature superconductors (see, e.g., Refs. [@Sigrist:1987; @Annett:1988; @Volovik:1988; @Blagoeva:1990; @Uzunov:1990; @Uzunov:1993; @Annett:1995; @Harlingen:1995; @Tsuei:2000]).
The possible superconducting phases in unconventional superconductors are described in the framework of the general Ginzburg-Landau (GL) effective free energy functional [@Uzunov:1993] with the help of symmetry group theory. Thus a variety of possible superconducting orderings were predicted for different crystal structures [@Volovik0:1985; @Volovik:1985; @Ueda:1985; @Blount:1985; @Ozaki:1985; @Ozaki:1986]. A detailed thermodynamic analysis [@Blagoeva:1990; @Volovik:1985] of the homogeneous (Meissner) phases and a renormalization group investigation [@Blagoeva:1990] of the superconducting phase transition up to the two-loop approximation have also been performed (for a three-loop renormalization group analysis, see Ref. [@Antonenko:1994]; for effects of magnetic fluctuations and disorder, see [@Busiello:1991; @Busiello:1990]). We shall essentially use these results in our present consideration. In 2000, experiments [@Saxena:2000] at low temperatures ($T \sim 1$ K) and high pressure ($P \sim 1$ GPa) demonstrated the existence of spin triplet superconducting states in the metallic compound UGe$_2$. This superconductivity is triggered by the spontaneous magnetization of the ferromagnetic phase which exists at much higher temperatures and coexists with the superconducting phase in the whole domain of existence of the latter below $T \sim 1$ K; see also experiments published in Refs. [@Huxley:2001; @Tateiwa:2001], and the discussion in Ref. [@Coleman:2000]. Moreover, the same phenomenon of existence of superconductivity at low temperatures and high pressure in the domain of the $(T,P)$ phase diagram where the ferromagnetic order is present has been observed in other ferromagnetic metallic compounds (ZrZn$_2$ [@Pfleiderer:2001] and URhGe [@Aoki:2001]) soon after the discovery [@Saxena:2000] of superconductivity in UGe$_2$. In contrast to other superconducting materials, for example, ternary and Chevrel phase compounds, where the effects of magnetic order on superconductivity are also substantial (see, e.g., [@Vonsovsky:1982; @Maple:1982; @Sinha:1984; @Kotani:1984]), in these ferromagnetic compounds the phase transition temperature ($T_f$) to the ferromagnetic state is much higher than the phase transition temperature ($T_{FS}$) from ferromagnetic to a (mixed) state of coexistence of ferromagnetism and superconductivity. For example, in UGe$_2$ we have $T_{FS} = 0.8$ K whereas the critical temperature of the phase transition from paramagnetic to ferromagnetic state in the same material is $T_f = 35$ K [@Saxena:2000; @Huxley:2001]. One may reliably assume that in such kinds of materials the material parameter $T_s$, defined as the (usual) critical temperature of the second order phase transition from the normal to the uniform (Meissner) superconducting state in zero external magnetic field, is considerably lower than the phase transition temperature $T_{FS}$. Note that the mentioned experiments on the compounds UGe$_{2}$, URhGe, and ZrZn$_2$ do not give any evidence for the existence of a standard normal-to-superconducting phase transition in zero external magnetic field. Moreover, it seems that the superconductivity in the metallic compounds mentioned above always coexists with the ferromagnetic order and is enhanced by the latter. As claimed in Ref. [@Saxena:2000], in these systems the superconductivity seems to arise from the same electrons that create the band magnetism, and is most naturally understood as a triplet rather than spin-singlet pairing phenomenon.
Note that all three metallic compounds, mentioned so far, are itinerant ferromagnets. Besides, the unconventional superconductivity has been suggested [@Saxena:2001] as a possible outcome of recent experiments in Fe [@Shimizu:2001], in which a superconducting phase was discovered at temperatures below $2$ K at pressures between 15 and 30 GPa. Note that both vortex and Meissner superconducting phases [@Shimizu:2001] are found in the high-pressure crystal modification of Fe, which has a hexagonal close-packed lattice. In this hexagonal lattice the strong ferromagnetism of the usual bcc iron crystal probably disappears [@Saxena:2001]. Thus one can hardly claim that there is a coexistence of ferromagnetism and superconductivity in Fe, but the clear evidence for superconductivity is in itself a remarkable achievement. [**1.2. Ferromagnetism versus superconductivity**]{} The important point in all discussions of the interplay of superconductivity and ferromagnetism is that a small amount of magnetic impurities can destroy superconductivity in conventional ($s$-wave) superconductors by breaking up the ($s$-wave) electron pairs with opposite spins (paramagnetic impurity effect [@Abrikosov:1960]). In this aspect the phenomenological arguments [@Ginzburg:1956] and the conclusions on the basis of the microscopic theory of magnetic impurities in $s$-wave superconductors [@Abrikosov:1960] are in complete agreement with each other; see, e.g., Refs. [@Vonsovsky:1982; @Maple:1982; @Sinha:1984; @Kotani:1984]. In fact, a total suppression of conventional ($s$-wave) superconductivity should occur in the presence of a uniform spontaneous magnetization $\mbox{\boldmath$M$}$, i.e. in a standard ferromagnetic phase [@Ginzburg:1956]. The physical reason for this suppression is the same as in the case of magnetic impurities, namely, the opposite electron spins in the $s$-wave Cooper pair turn over along the vector $\mbox{\boldmath$M$}$ in order to lower their Zeeman energy and, hence, the pairs break down. Therefore, the ferromagnetic order can hardly coexist with conventional superconducting states. In particular, this is the case of coexistence of uniform superconducting and ferromagnetic states when the superconducting order parameter $\psi(\mbox{\boldmath$x$})$ and the magnetization $\mbox{\boldmath$M$}$ do not depend on the spatial vector $\mbox{\boldmath$x$}$. But yet a coexistence of $s$-wave superconductivity and ferromagnetism may appear in uncommon materials and under quite special circumstances. Furthermore, let us emphasize that the conditions for the coexistence of nonuniform (“vortex”, “spiral”, “spin-sinusoidal” or “helical”) superconducting and ferromagnetic states are less restrictive than those for the coexistence of uniform superconducting and ferromagnetic orders. Coexistence of nonuniform phases has been discussed in detail, in particular in the experiment and theory of ternary and Chevrel-phase compounds, where such a coexistence seems quite likely; for a comprehensive review, see, for example, Refs. [@Vonsovsky:1982; @Maple:1982; @Sinha:1984; @Kotani:1984; @Buzdin:1983]. In fact, the only two superconducting systems for which the experimental data allow assumptions in favor of a coexistence of superconductivity and ferromagnetism are the rare earth ternary boride compound ErRh$_4$B$_4$ and the Chevrel-phase compound HoMo$_6$S$_8$; for a more extended review, see Refs. [@Maple:1982; @Machida:1984].
In these compounds the phase of coexistence most likely appears in a very narrow temperature region just below the Curie temperature $T_f$ of the ferromagnetic phase transition. At lower temperatures the magnetic moments of the rare earth 4$f$ electrons become better aligned, the magnetization increases and the $s$-wave superconducting pairs formed by the conduction electrons disintegrate. [**1.3. Unconventional superconductivity triggered by ferromagnetic order**]{} We shall not extend our consideration over all important aspects of the long-standing problem of coexistence of superconductivity and ferromagnetism; rather, we shall concentrate our attention on the description of the newly discovered coexistence of ferromagnetism and unconventional (spin-triplet) superconductivity in the itinerant ferromagnets UGe$_2$, ZrZn$_2$, and URhGe. Here we wish to emphasize that the main object of our discussion is the superconductivity of these compounds and, only in second place of importance, the problem of coexistence. The reason is that the existence of superconductivity in such itinerant ferromagnets is a highly nontrivial phenomenon. As noted in Ref. [@Machida:2001], the superconductivity in these materials seems difficult to explain in terms of previous theories [@Vonsovsky:1982; @Maple:1982; @Kotani:1984] and seems to require new concepts to interpret the experimental data. We have already mentioned that in ternary compounds the ferromagnetism comes from the localized 4$f$ electrons whereas the s-wave Cooper pairs are formed by conduction electrons. In UGe$_2$ and URhGe the 5$f$ electrons of U atoms form both superconductivity and ferromagnetic order [@Saxena:2000; @Aoki:2001]. In ZrZn$_2$ the same double role is played by the 4$d$ electrons of Zr. Therefore the task is to describe this behavior of the band electrons at a microscopic level. One may speculate about a spin-fluctuation mediated unconventional Cooper pairing as is the case for $^3$He and the heavy fermion superconductors. These important issues do not yet have a reliable answer and for this reason we shall confine our consideration to a phenomenological level. In fact, a number of reliable experimental data, for example the data about the coherence length and the superconducting gap [@Saxena:2000; @Huxley:2001; @Aoki:2001; @Pfleiderer:2001], are in favor of the conclusion about a spin-triplet Cooper pairing in these metallic compounds, although the mechanism of this pairing remains unclear. We shall essentially use this reliable conclusion. Besides, this point of view is consistent with the experimental observation of coexistence of superconductivity only in a low temperature part of the ferromagnetic domain of the phase diagram ($T,P$), which means that a pure (non-ferromagnetic) superconducting phase has not been observed. This circumstance is also in favor of the assumption of spin-triplet superconductivity. Our investigation leads to results which confirm this general picture. Besides, on the basis of the experimental data and conclusions presented for the first time in Refs. [@Saxena:2000; @Coleman:2000] and shortly afterwards confirmed in Refs.
[@Huxley:2001; @Tateiwa:2001; @Pfleiderer:2001; @Aoki:2001] one may reliably accept the point of view that the superconductivity in these magnetic compounds is considerably enhanced by the ferromagnetic order parameter $\mbox{\boldmath$M$}$ and, perhaps, it could not exist without this “mechanism of ferromagnetic trigger,” or, in short, “$\mbox{\boldmath$M$}$-trigger.” Such a phenomenon is possible for spin-triplet Cooper pairs, where the electron spins point parallel to each other and their turning along the vector of the spontaneous magnetization $\mbox{\boldmath$M$}$ does not produce a breakdown of the spin-triplet Cooper pairs but rather stabilizes them and, perhaps, stimulates their creation. We shall describe this phenomenon at a phenomenological level. [**1.4. Phenomenological studies**]{} Recently, a phenomenological theory of Landau-Ginzburg type which explains the coexistence of ferromagnetism and unconventional spin-triplet superconductivity was developed [@Machida:2001; @Walker:2002]. The possible low-order couplings between the superconducting and ferromagnetic order parameters were derived with the help of general symmetry group arguments and several important features of the superconducting vortex state in the ferromagnetic phase of unconventional ferromagnetic superconductors were established [@Machida:2001; @Walker:2002]. In this article we shall use the approach presented in Refs. [@Machida:2001; @Walker:2002] to investigate the conditions for the occurrence of the Meissner phase and to demonstrate that the presence of ferromagnetic order enhances the $p$-wave superconductivity. Besides, we shall establish the phase diagram corresponding to model ferromagnetic superconductors in a zero external magnetic field. We shall show that the phase transition to the superconducting state in ferromagnetic superconductors can be either of first or second order depending on the particular substance. We confirm the predictions made in Refs. [@Machida:2001; @Walker:2002] about the symmetry of the ordered phases. Our investigation is based on the mean-field approximation [@Uzunov:1993] as well as on familiar results about the possible phases in nonmagnetic superconductors with triplet ($p$-wave) Cooper pairs [@Volovik:1985; @Blagoeva:1990; @Uzunov:1990]. Results from Refs. [@Shopova1:2003; @Shopova2:2003; @Shopova3:2003] will be reviewed and extended. In our preceding investigation [@Shopova1:2003; @Shopova2:2003; @Shopova3:2003] both Cooper pair anisotropy and crystal anisotropy have been neglected in order to clarify the main effect of the coupling between the ferromagnetic and superconducting order parameters. The phenomenological GL free energy is quite complex and the inclusion of these anisotropies involves lengthy formulae and a multivariant analysis which obscures the final results. Here we shall take into account essential anisotropy effects, in particular, the effect of the Cooper pair anisotropy on the existence and stability of the mixed phase, namely the phase of coexistence of superconductivity and ferromagnetic order. We demonstrate that the anisotropy of the spin-triplet Cooper pairs modifies but does not drastically change the thermodynamic properties of this coexistence phase, in particular, in the most relevant temperature domain above the superconducting critical temperature $T_s$. The same is valid for the crystal anisotropy, but we shall not present a thorough thermodynamic analysis of this problem.
The crystal anisotropy effect can be considered for concrete systems with various crystal structures [@Sigrist:1991; @Volovik:1985]. Here we find it enough to demonstrate that the anisotropy is not crucial for the description of the coexistence phase. Of course, our investigation confirms the general concept [@Volovik:1985] that the anisotropy reduces the degree of degeneration of the ground state and, hence, stabilizes the ordering along the main crystal directions. There exists a formal similarity between the phase diagram obtained in our investigation and the phase diagram of certain improper ferroelectrics [@Gufan:1980; @Gufan:1981; @Latush:1985; @Toledano:1987; @Gufan:1987; @Cowley:1980]. The variants of the theory of improper ferroelectrics, known before 1980, were criticized in Ref. [@Cowley:1980] for their oversimplification and inconsistency with the experimental results. But the further development of the theory has no such disadvantage (see, e.g., Ref. [@Toledano:1987; @Gufan:1987]). We take advantage of the theory of improper ferroelectrics, where the concept of a “primary” order parameter triggered by a secondary order parameter (the electric polarization $\mbox{\boldmath$P$}_e$) was initially introduced and exploited (see Ref. [@Toledano:1987; @Gufan:1987; @Cowley:1980]). The mechanism of the M-triggered superconductivity in itinerant ferromagnets is formally identical to the mechanism of appearance of structural order triggered by the electric polarization $\mbox{\boldmath$P$}_e$ in improper ferroelectrics ($P$-trigger). Recently, the $M$-trigger effect has been used in a theoretical treatment of ferromagnetic Bose condensates [@Gu:2003]. [**1.5. Aims of the paper**]{} In the remainder of this paper we shall consider the GL free energy functional of unconventional ferromagnetic superconductors. Our aim is to establish the uniform phases which are described by the GL free energy presented in Section 2.1. More information about the justification of this investigation is presented in Section 2.2. Note, as also mentioned in Section 2.2, that we investigate a quite general GL model in a situation where there is a lack of concrete information about the values of the parameters of this model for the concrete compounds (UGe$_2$, URhGe, ZrZn$_2$) where ferromagnetic superconductivity has been discovered. On the one hand, this lack of information makes a detailed comparison of the theory with the available experimental data impossible; on the other hand, our results are not bound to one or more concrete substances but can be applied to any unconventional ferromagnetic superconductor. In Section 3 we discuss the phases in nonmagnetic unconventional superconductors. In Section 4 the M-trigger effect will be described in the simple case of a single coupling (interaction) between the magnetization $\mbox{\boldmath$M$}$ and the superconducting order parameter $\psi$ in an isotropic model of ferromagnetic superconductors, where the anisotropy effects are ignored. In Section 5 the effect of another important coupling between the magnetization and the superconducting order parameter on the thermodynamics of the ferromagnetic superconductors is taken into account. In Section 6 the anisotropy effects are considered. In Section 7 we summarize and discuss our findings. [**2. Ginzburg-Landau free energy**]{} Following Refs.
[@Volovik:1985; @Machida:2001; @Walker:2002], in this Chapter we discuss the phenomenological theory of spin-triplet ferromagnetic superconductors and justify our consideration in Sections 3–6. [**2.1. Model**]{} Consider the GL free energy functional $$\label{eq1} F[\psi,\mbox{\boldmath$M$}]=\int d^3 x f(\psi, \mbox{\boldmath$M$})\:,$$ where the free energy density $f(\psi,\mbox{\boldmath$M$})$ (for short hereafter called “free energy”) of a spin-triplet ferromagnetic superconductor is a sum of five terms: $$\label{eq2} f(\psi, \mbox{\boldmath$M$}) = f_{\mbox{\scriptsize S}}(\psi) + f^{\prime}_{\mbox{\scriptsize F}}(\mbox{\boldmath$M$}) + f_{\mbox{\scriptsize I}}(\psi,\mbox{\boldmath$M$}) + \frac{\mbox{\boldmath$B$}^2}{8\pi} - \mbox{\boldmath$B.M$}\:.$$ In Eq. (2) $\psi = \left\{\psi_j;j=1,2,3\right\}$ is the three-dimensional complex vector describing the superconducting order and $\mbox{\boldmath$B$} = (\mbox{\boldmath$H$} + 4\pi\mbox{\boldmath$M$}) = \nabla \times \mbox{\boldmath$A$}$ is the magnetic induction; $\mbox{\boldmath$H$}$ is the external magnetic field, $\mbox{\boldmath$A$} = \left\{A_j; j=1,2,3\right\}$ is the magnetic vector potential. The last two terms on the r.h.s. of Eq. (2) are related to the magnetic energy, which includes both diamagnetic and paramagnetic effects in the superconductor (see, e.g., [@Vonsovsky:1982; @Ginzburg:1956; @Blount:1979]). In Eq. (2), the term $f_{\mbox{\scriptsize S}}(\psi)$ describes the superconductivity for $\mbox{\boldmath$H$} = \mbox{\boldmath$M$} \equiv 0$. This free energy part can be written in the form [@Volovik:1985] $$\label{eq3} f_{\mbox{\scriptsize S}}(\psi)= f_{grad}(\psi) + a_s|\psi|^2 +\frac{b_s}{2}|\psi|^4 + \frac{u_s}{2}|\psi^2|^2 + \frac{v_s}{2}\sum_{j=1}^{3}|\psi_j|^4 \;,$$ with $$\begin{aligned} \label{eq4} f_{grad}(\psi)& = & K_1(D_i\psi_j)^{\ast}(D_i\psi_j) +K_2\left[ (D_i\psi_i)^{\ast}(D_j\psi_j) + (D_i\psi_j)^{\ast}(D_j\psi_i)\right] \\ \nonumber && + K_3(D_i\psi_i)^{\ast}(D_i\psi_i),\end{aligned}$$ where a summation over the indices $i,j$ $(=1,2,3)$ is assumed and the symbol $$\label{eq5} D_j = - i\hbar\frac{\partial}{\partial x_j} + \frac{2|e|}{c}A_j$$ of covariant differentiation is introduced. In Eq. (3), $b_s > 0$ and $a_s = \alpha_s(T-T_s)$, where $\alpha_s$ is a positive material parameter and $T_s$ is the critical temperature of a standard second order phase transition which may take place at $H = {\cal{M}} = 0$; $H =|\mbox{\boldmath$H$}|$, and ${\cal{M}} = |\mbox{\boldmath$M$}|$. The parameter $u_s$ describes the anisotropy of the spin-triplet Cooper pair whereas the crystal anisotropy is described by the parameter $v_s$ [@Blagoeva:1990; @Volovik:1985]. In Eq. (3) the parameters $K_j$, $(j = 1,2,3)$ are related to the effective mass tensor of anisotropic Cooper pairs [@Volovik:1985]. The term $f^{\prime}_{\mbox{\scriptsize F}}(\mbox{\boldmath$M$})$ in Eq. (2) is the following part of the free energy of a standard isotropic ferromagnet: $$\label{eq6} f^{\prime}_{\mbox{\scriptsize F}}(\mbox{\boldmath$M$}) = c_f\sum_{j=1}^{3}|\nabla_j\mbox{\boldmath$M$}_j|^2 + a_f(T^{\prime}_f)\mbox{\boldmath$M$}^2 + \frac{b_f}{2}\mbox{\boldmath$M$}^4$$ where $\nabla_j = \partial/\partial x_j$, $b_f > 0$, and $a_f(T^{\prime}_f) = \alpha_f(T-T^{\prime}_f)$ is represented by the material parameter $\alpha_f > 0$ and the temperature $T^{\prime}_f$; the latter differs from the critical temperature $T_f$ of the ferromagnet and this point will be discussed below. In fact, through Eq.
(2) we have already added a negative term ($-2\pi {\cal{M}}^2$) to the total free energy $f(\psi,\mbox{\boldmath$M$})$. This is obvious when we set $H = 0$ in Eq. (2). Then we obtain the negative energy ($-2\pi{\cal{M}}^2$) which should be added to $f^{\prime}_{\mbox{\scriptsize F}}(\mbox{\boldmath$M$})$. In this way one obtains the total free energy $f_{\mbox{\scriptsize F}} (\mbox{\boldmath$M$})$ of the ferromagnet in a zero external magnetic field, which is given by a modification of Eq. (6) according to the rule $$\label{eq7} f_{\mbox{\scriptsize F}} (a_f) = f^{\prime}_{\mbox{\scriptsize F}} \left[a_f(T^{\prime}_f) \rightarrow a_f(T_f) \right],$$ where $a_f = \alpha_f (T - T_f)$ and $$\label{eq8} T_f = T^{\prime}_f + \frac{2\pi}{\alpha_f}$$ is the critical temperature of a standard ferromagnetic phase transition of second order. This scheme was used in studies of rare earth ternary compounds [@Vonsovsky:1982; @Blount:1979; @Greenside:1981; @Ng:1997]. Alternatively [@Kuper:1980], one may work from the beginning with the total ferromagnetic free energy $f_{\mbox{\scriptsize F}}(a_f,\mbox{\boldmath$M$})$ as given by Eqs. (6) - (8) but in this case the magnetic energy included in the last two terms on the r.h.s. of Eq. (2) should be replaced with $H^2/8\pi$. Both ways of work are equivalent. Finally, the term $$\label{eq9} f_{\mbox{\scriptsize I}}(\psi, \mbox{\boldmath$M$}) = i\gamma_0 \mbox{\boldmath$M$}.(\psi\times \psi^*) + \delta \mbox{\boldmath$M$}^2 |\psi|^2\;.$$ in Eq. (2) describes the interaction between the ferromagnetic order parameter $M$ and the superconducting order parameter $\psi$ [@Machida:2001; @Walker:2002]. The $\gamma_0$-term is the most substantial for the description of experimentally found ferromagnetic superconductors [@Walker:2002] while the $\delta \mbox{\boldmath$M$}^2 |\psi|^2$–term makes the model more realistic in the strong coupling limit because it gives the opportunity to enlarge the phase diagram including both positive and negative values of the parameter $a_s$. This allows for an extension of the domain of the stable ferromagnetic order up to zero temperatures for a wide range of values of the material parameters and the pressure $P$. Such a picture corresponds to the real situation in ferromagnetic compounds. In Eq. (9) the coupling constant $\gamma_0 >0$ can be represented in the form $\gamma_0 = 4\pi J$, where $J > 0$ is the ferromagnetic exchange parameter [@Walker:2002]. In general, the parameter $\delta$ for ferromagnetic superconductors may take both positive and negative values. The values of the material parameters ($T_s$, $T_f$, $\alpha_s$, $\alpha_f$, $b_s$, $u_s$, $v_s$, $b_f$, $K_j$, $\gamma_0$ and $\delta$) depend on the choice of the concrete substance and on intensive thermodynamic parameters, such as the temperature $T$ and the pressure $P$. [**2.2. Way of treatment**]{} The total free energy (2) is a quite complex object of theoretical investigation. The various vortex and uniform phases described by this complex model cannot be investigated within a single calculation but rather one should focus on concrete problems. In Ref. [@Walker:2002] the vortex phase was discussed with the help of the criterion [@Abrikosov:1957] for a stability of this state near the phase transition line $T_{c2}(H)$; see also, Ref. [@Lifshitz:1980]. In case of $H = 0$ one should apply the same criterion with respect to the magnetization ${\cal{M}}$ for small values of $|\psi|$ near the phase transition line $T_{c2}({\cal{M}})$ as performed in Ref. 
[@Walker:2002]. Here we shall be interested in the uniform phases, namely, when the order parameters $\psi$ and $\mbox{\boldmath$M$}$ do not depend on the spatial vector $\mbox{\boldmath$x$}\in V$ ($V$ is the volume of the superconductor). Thus our analysis will be restricted to the consideration of the coexistence of uniform (Meissner) phases and ferromagnetic order. We shall perform this investigation in detail and, in particular, we shall show that the main properties of the uniform phases can be given within an approximation in which the crystal anisotropy is neglected. Moreover, some of the main features of the uniform phases in unconventional ferromagnetic superconductors can be reliably outlined when the Cooper pair anisotropy is neglected, too. The assumption of a uniform magnetization $\mbox{\boldmath$M$}$ is always reliable outside a quite close vicinity of the magnetic phase transition and under the condition that the superconducting order parameter $\psi$ is also uniform, i.e. that vortex phases are not present in the respective temperature domain. These conditions are directly satisfied in type I superconductors but in type II superconductors the temperature should be sufficiently low and the external magnetic field should be zero. Moreover, the mentioned conditions for type II superconductors may turn out to be insufficient for the appearance of uniform superconducting states in materials with quite high values of the spontaneous magnetization. In such cases the uniform (Meissner) superconductivity and, hence, the coexistence of this superconductivity with uniform ferromagnetic order may not appear even at zero temperature. Up to now type I unconventional ferromagnetic superconductors have not yet been found, whereas the experimental data for the recently discovered compounds UGe$_2$, URhGe, and ZrZn$_2$ are not sufficient to conclude definitely about either the lack or the existence of uniform superconducting states at low and ultra-low temperatures. In all cases, if real materials can be described by the general GL free energy (1) - (9), the ground state properties will be described by uniform states, which we shall investigate. The problem about the availability of such states in real materials at finite temperatures is quite subtle at the present stage of research, when the experimental data are insufficient. We shall assume that uniform phases may exist in some unconventional ferromagnetic superconductors. Moreover, we find it convenient to emphasize that these phases appear as solutions of the GL equations corresponding to the free energy (1) - (9). These arguments completely justify our study. For a strong easy-axis type of magnetic anisotropy, as in UGe$_2$ [@Saxena:2000], the overall complexity of the mean-field analysis of the free energy $f(\psi, \mbox{\boldmath$M$})$ can be avoided by performing an “Ising-like” description: $\mbox{\boldmath$M$} = (0,0,{\cal{M}})$, where ${\cal{M}} = \pm |\mbox{\boldmath$M$}|$ is the magnetization along the “$z$-axis”. Further, because of the equivalence of the “up” and “down” physical states $(\pm \mbox{\boldmath$M$})$ the thermodynamic analysis can be performed within the “gauge” ${\cal{M}} \geq 0$. But this stage of consideration can also be achieved without the help of crystal anisotropy arguments.
When the magnetic order has a continuous symmetry one may take advantage of the symmetry of the total free energy $f(\psi, \mbox{\boldmath$M$})$ and avoid the consideration of equivalent thermodynamic states that occur as a result of the respective symmetry breaking at the phase transition point but have no effect on the thermodynamics of the system. In the isotropic system one may again choose a gauge, in which the magnetization vector has the same direction as the $z$-axis ($|\mbox{\boldmath$M$}| = M_z = {\cal{M}}$) and this will not influence the generality of the thermodynamic analysis. Here we shall prefer the alternative description within which the ferromagnetic state may appear through two equivalent “up” and “down” domains with magnetizations $ {\cal{M}}$ and ($ -{\cal{M}}$), respectively. We shall perform the mean-field analysis of the uniform phases and the possible phase transitions between such phases in a zero external magnetic field ($\mbox{\boldmath$H$}=0$), when the crystal anisotropy is neglected ($v_s \equiv 0$). The only exception will be the consideration in Sec. 3, where we briefly discuss the nonmagnetic superconductors (${\cal{M}} \equiv 0$). For our aims we use notations in which the number of parameters is reduced. Introducing the parameter $$\label{eq10} b = (b_s + u_s + v_s)$$ we redefine the order parameters and the other parameters in the following way: $$\begin{aligned} \label{eq11} &&\varphi_j =b^{1/4}\psi_j = \phi_je^{i\theta_j}\:,\;\;\; M = b_f^{1/4}{\cal{M}}\:,\\ \nonumber && r = \frac{a_s}{\sqrt{b}}\:,\;\;\; t =\frac{a_f}{\sqrt{b_f}}\:, \;\;\; w = \frac{u_s}{b}\:, \;\;\; v =\frac{v_s}{b}\:, \\ \nonumber &&\gamma= \frac{\gamma_0}{b^{1/2}b_f^{1/4}}\:,\;\;\; \gamma_1= \frac{\delta}{(bb_f)^{1/2}}\:.\end{aligned}$$ Having in mind our approximation of uniform $\psi$ and $\mbox{\boldmath$M$}$ and the notations (10) - (11), the free energy density $f(\psi,M) = F(\psi,M)/V$ can be written in the form $$\begin{aligned} \label{eq12} f(\psi,M)& = & r\phi^2 + \frac{1}{2}\phi^4 + 2\gamma\phi_1\phi_2 M \mbox{sin}(\theta_2-\theta_1) + \gamma_1 \phi^2 M^2 + tM^2 + \frac{1}{2}M^4\\ \nonumber && -2w \left[\phi_1^2\phi_2^2\mbox{sin}^2(\theta_2-\theta_1) +\phi_1^2\phi_3^2\mbox{sin}^2(\theta_1-\theta_3) + \phi_2^2\phi_3^2\mbox{sin}^2(\theta_2-\theta_3)\right] \\ \nonumber && -v[\phi_1^2\phi_2^2 + \phi_1^2\phi_3^2 + \phi_2^2\phi_3^2].\end{aligned}$$ Note that in this free energy the order parameters $\psi$ and $\mbox{\boldmath$M$}$ are defined per unit volume. The equilibrium phases are obtained from the equations of state $$\label{eq13} \frac{\partial f(\mu_0)}{\partial \mu_{\alpha}} = 0\:,$$ where the set of symbols $\mu$ can be defined as, for example, $\mu = \left\{\mu_\alpha\right\}= (M, \phi_1,..., \phi_3,$ $ \theta_1,..., \theta_3)$; $\mu_0$ denotes an equilibrium phase. The stability matrix $\hat{F}$ of the phases $\mu_0$ is defined by $$\label{eq14} \hat{F}(\mu_0)= \left\{F_{\alpha\beta}(\mu_0)\right\} = \frac{\partial^2f(\mu_0)} {\partial\mu_{\alpha}\partial\mu_{\beta}}\;.$$ An alternative treatment can be done in terms of the real ($\psi^{\prime}_j$) and imaginary ($\psi^{\prime\prime}_j$) parts of the complex numbers $\psi_j = \psi_j^{\prime} + i\psi_j^{\prime\prime}$. The calculation with the moduli $\phi_j$ and phase angles $\theta_j$ of $\psi_j$ has a minor disadvantage in cases of strongly degenerate phases when some of the angles $\theta_j$ remain unspecified. Then one should consistently use the properties of the respective broken continuous symmetry.
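In the $(\phi_j, \theta_j, M)$ parametrization the free energy (12) and the stability matrix (14) are easy to evaluate numerically. The following sketch (our own illustration, assuming numpy is available; the function names are ours) uses a finite-difference Hessian as a stand-in for Eq. (14); it is only a numerical cross-check and not part of the analytic treatment.

```python
# Sketch: dimensionless free energy of Eq. (12) and a finite-difference
# stand-in for the stability matrix (14); mu = (M, phi1, phi2, phi3, th1, th2, th3).
import numpy as np

def free_energy(mu, r, t, w, v, gamma, gamma1):
    M, p1, p2, p3, th1, th2, th3 = mu
    phi_sq = p1**2 + p2**2 + p3**2
    return (r * phi_sq + 0.5 * phi_sq**2
            + 2.0 * gamma * p1 * p2 * M * np.sin(th2 - th1)
            + gamma1 * phi_sq * M**2 + t * M**2 + 0.5 * M**4
            - 2.0 * w * (p1**2 * p2**2 * np.sin(th2 - th1)**2
                         + p1**2 * p3**2 * np.sin(th1 - th3)**2
                         + p2**2 * p3**2 * np.sin(th2 - th3)**2)
            - v * (p1**2 * p2**2 + p1**2 * p3**2 + p2**2 * p3**2))

def stability_matrix(mu, params, h=1e-4):
    """Numerical Hessian of the free energy at mu (positive definite => minimum)."""
    mu = np.asarray(mu, dtype=float)
    n = mu.size
    F = np.empty((n, n))
    for a in range(n):
        for b in range(n):
            ea, eb = np.eye(n)[a] * h, np.eye(n)[b] * h
            F[a, b] = (free_energy(mu + ea + eb, *params)
                       - free_energy(mu + ea - eb, *params)
                       - free_energy(mu - ea + eb, *params)
                       + free_energy(mu - ea - eb, *params)) / (4.0 * h * h)
    return F

# Example: the ferromagnetic configuration phi_j = 0, M^2 = -t at r > 0, t < 0.
params = (0.5, -0.3, 0.0, 0.0, 0.4, 0.0)          # (r, t, w, v, gamma, gamma1)
mu_fm = (np.sqrt(0.3), 0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
print(np.linalg.eigvalsh(stability_matrix(mu_fm, params)))
# The (near-)zero eigenvalues come from the phase angles, which are undefined
# when phi_j = 0, so the matrix is only positive semi-definite there.
```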
Alternatively, one may perform the analysis with the help of the components $\psi_j^{\prime}$ and $\psi_j^{\prime\prime}$. In order to avoid any ambiguity in our discussion let us note that we often use the term “existence” of a phase in order to indicate that it appears in experiments. This means that the phase we consider is either stable or metastable; the latter occurs in quite rare cases, when certain special experimental conditions allow the observation of metastable states in equilibrium. When a solution (phase) of Eq. (13) is obtained it is said that the respective phase “exists”, of course, under some “existence conditions” that are imposed on the parameters $\left\{ \mu_{\alpha} \right\}$ of the theory. But this is just a registration of the fact that a concrete phase satisfies Eq. (13). The problem about the thermodynamic stability of the phases that are solutions of Eq. (13) is solved with the help of the matrix (14) and, if necessary, with an additional analysis including the comparison of the free energies of phases which correspond to minima of the free energy in one and the same domain of parameters $\{\mu_{\alpha}\}$. Then the stable phase will be the phase that corresponds to a global minimum of the free energy. Therefore, when we discuss an experimental situation in which some phase exists according to the experimental data, this means that it is a global minimum of the free energy, a fact determined by a comparison of free energies of the phases. If other minima of the free energy exist in a certain domain of parameters $\{\mu_{\alpha}\}$ then these minima are metastable equilibria, i.e. metastable phases. If a solution of Eq. (13) is not a minimum, it corresponds to an (absolutely) unstable equilibrium and the matrix (14) corresponding to this unstable phase is negatively definite. When we determine the minima of the free energy by the requirement for a positive definiteness of the stability matrix (14), we are often faced with the problem of a “marginal” stability, i.e. the matrix is neither positively nor negatively definite. This is often a result of the degeneration of the states (phases) with broken continuous symmetry, and one should distinguish these cases. If the reason for the lack of a clear positive definiteness of the stability matrix is precisely the mentioned degeneration of the ground state, one may reliably conclude that the respective phase is stable. If there is another reason, the analysis of the matrix (14) turns out to be insufficient for our aim of determining the respective stability property. These cases are quite rare and happen for very particular values of the parameters $\{\mu_{\alpha}\}$. [**3. Pure superconductivity**]{} Let us set $M\equiv 0$ in Eq. (12) and briefly summarize the known results [@Volovik:1985; @Blagoeva:1990] for the “pure superconducting case” when the magnetic order cannot appear and magnetic effects do not affect the stability of the normal and uniform (Meissner) superconducting phases. The possible phases can be classified by the structure of the complex vector order parameter $\psi = (\psi_1,\psi_2,\psi_3)$. We shall often use the moduli vector $(\phi_1, \phi_2,\phi_3)$ with magnitude $\phi = (\phi_1^2+\phi_2^2+\phi_3^2)^{1/2}$ but we must not forget the values of the phase angles $\theta_j$. The normal phase (0,0,0) is always a solution of Eqs. (13). It is stable for $r\geq 0$, and corresponds to a free energy $f=0$. Under certain conditions, six ordered phases [@Volovik:1985; @Blagoeva:1990] occur for $r<0$.
Here we shall not repeat the detailed description of these phases [@Volovik:1985; @Blagoeva:1990] but we shall briefly mention their structure. The simplest ordered phase is of type $(\psi_1,0,0)$ with equivalent domains: $(0,\psi_2,0)$ and $(0,0,\psi_3)$. Multi-domain phases of more complex structure also occur, but we shall not always enumerate the possible domains. For example, the “two-dimensional” phases can be fully represented by domains of type $(\psi_1,\psi_2,0)$ but there are also two other types of domains: $(\psi_1,0,\psi_3)$ and $(0,\psi_2,\psi_3)$. As we consider the general case when the crystal anisotropy is present $(v \neq 0)$, this type of phases possesses the property $|\psi_i| = |\psi_j|$. There are two two-dimensional phases and they have different free energies. To clarify this point let us consider, for example, the phase $(\psi_1,\psi_2,0)$. The two complex numbers $\psi_1$ and $\psi_2$ can be represented either as two-component real vectors, or, equivalently, as rotating vectors in the complex plane. One can easily show that Eq. (12) yields two phases: a collinear phase, when $(\theta_2-\theta_1) = \pi k$ $(k = 0,\pm1,...)$, i.e. when the vectors $\psi_1$ and $\psi_2$ are collinear, and another (noncollinear) phase when the same vectors are perpendicular to each other: $(\theta_2-\theta_1) = \pi(k + 1/2)$. Having in mind that $|\phi_1| = |\phi_2| = \phi/\sqrt{2}$, the domain $(\psi_1,\psi_2,0)$ of the collinear phase is given by $(\pm1,1,0)\phi/\sqrt{2}$, and the same domain for the noncollinear phase is given by $(\pm i,1,0)\phi/\sqrt{2}$. Similar representations can be given for the other two domains of these phases. In addition to the mentioned three ordered phases, three other ordered phases exist. For these phases all three components $\psi_j$ have nonzero equilibrium values. Two of them have moduli $\phi_j$ that are equal to one another, i.e., $\phi_1=\phi_2=\phi_3$. The third phase is of the type $\phi_1=\phi_2 \neq \phi_3$ and is unstable so it cannot occur in real systems. The two three-dimensional phases with equal moduli of the order parameter components have different phase angles and, hence, different structure. The difference between any couple of angles $\theta_j$ is given by $\pm \pi/3$ or $\pm 2\pi/3$. The characteristic vectors of this phase can be of the form $(e^{i\pi/3}, e^{-i\pi/3},1)\phi/\sqrt{3}$ and $(e^{2i\pi/3}, e^{-i2\pi/3},1)\phi/\sqrt{3}$. The second stable three-dimensional phase is “real”, i.e. the components $\psi_j$ lie on the real axis; $(\theta_i-\theta_j) = \pi k$ for any couple of angles, and the characteristic vectors are $(\pm 1, \pm 1, 1)\phi/\sqrt{3}$. The stability properties of these five stable ordered phases were presented in detail in Refs. [@Volovik:1985; @Blagoeva:1990]. When the crystal anisotropy is not present ($v =0$) the picture changes. The increase of the level of degeneracy of the ordered states leads to an instability of some phases and to a lack of some noncollinear phases. Both two- and three-dimensional real phases, where $(\theta_i -\theta_j) = \pi k$, are no longer constrained by the condition $\phi_i=\phi_j$ but rather have the freedom of a variation of the moduli $\phi_j$ under the condition $\phi^2 = -r >0$. The two-dimensional noncollinear phase exists but has marginal stability [@Blagoeva:1990]. All other noncollinear phases, even in the presence of crystal anisotropy $(v\neq 0)$, either vanish or are unstable; for details, see Ref. [@Blagoeva:1990].
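These statements about the role of the anisotropies can also be probed numerically. The sketch below (an illustrative check of ours, with arbitrary parameter values and assuming scipy is available) minimizes the $M = 0$ free energy of Eq. (12) from many random starts and prints the moduli pattern of the minimizer as the crystal anisotropy $v$ is varied; it makes no claim beyond the specific values tried.

```python
# Sketch: which pure superconducting structure (M = 0) minimizes Eq. (12)
# for given (r, w, v)?  Parameter values below are arbitrary illustrations.
import numpy as np
from scipy.optimize import minimize

def f_sc(x, r, w, v):
    p1, p2, p3, t1, t2, t3 = x                 # moduli and phase angles
    phi_sq = p1**2 + p2**2 + p3**2
    return (r * phi_sq + 0.5 * phi_sq**2
            - 2.0 * w * (p1**2 * p2**2 * np.sin(t2 - t1)**2
                         + p1**2 * p3**2 * np.sin(t1 - t3)**2
                         + p2**2 * p3**2 * np.sin(t2 - t3)**2)
            - v * (p1**2 * p2**2 + p1**2 * p3**2 + p2**2 * p3**2))

def ground_state(r, w, v, n_starts=200, seed=1):
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        x0 = np.concatenate([rng.uniform(0.0, 1.0, 3), rng.uniform(0.0, 2*np.pi, 3)])
        res = minimize(f_sc, x0, args=(r, w, v), method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    return best

for v in (-0.2, 0.0, 0.2):                     # vary the crystal anisotropy
    b = ground_state(r=-1.0, w=-0.1, v=v)
    print(v, np.round(np.abs(b.x[:3]), 3), round(b.fun, 4))
```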
This discussion demonstrates that the crystal anisotropy stabilizes the ordering along the main crystallographic directions, lowers the level of degeneracy of the ordered state related with the spontaneous breaking of the continuous symmetry and favors the appearance of noncollinear phases. The crystal field effects related to the unconventional superconducting order were established for the first time in Ref. [@Volovik:1985]. In our consideration of unconventional ferromagnetic superconductors in Sec. 4–7 we shall take advantage of these effects of the crystal anisotropy. In both cases $v=0$ and $v \neq 0$ the matrix (14) indicates an instability of three-dimensional phases (all $\phi_j \neq 0)$ with an arbitrary ratios $\phi_i/\phi_j$. As already mentioned, for $v \neq 0$ the phases of type $\phi_1= \phi_2\neq \phi_3$ are also unstable whereas for $v=0$, even the phase $\phi_1=\phi_2=\phi_3 > 0$ is unstable. [**4. Simple case of M-triggered superconductivity**]{} Here we consider the Walker-Samokhin model [@Walker:2002] when only the $M\phi_1\phi_2-$coupling between the order parameters $\psi$ and $M$ is taken into account ($\gamma > 0$, $\gamma_1 = 0$). Besides, we shall neglect the anisotropies $(w=v=0)$. The uniform phases and the phase diagram in this case were investigated in Refs. [@Shopova1:2003; @Shopova2:2003; @Shopova3:2003]. Here we summarize the main results in order to make a clear comparison with the new results presented in Sections 5 and 6. In this Section we set $\theta_3 \equiv 0$ and use the notation $\theta \equiv \Delta\theta = (\theta_2 - \theta_1)$. The symmetry of the system allows to introduce the notations without a loss of generality of the consideration. [**4.1. Phases**]{} The possible (stable, metastable and unstable) phases are given in Table 1 together with the respective existence and stability conditions. The stability conditions define the domain of the phase diagram where the respective phase is either stable or metastable [@Uzunov:1993]. The normal (disordered) phase, denoted in Table 1 by $N$ always exists (for all temperatures $T \geq 0)$ but is stable for $t >0$, $r > 0$. The superconductivity phase denoted in Table 1 by SC1 is unstable. The same is valid for the phase of coexistence of ferromagnetism and superconductivity denoted in Table 1 by CO2. The N–phase, the ferromagnetic phase (FM), the superconducting phases (SC1–3) and two of the phases of coexistence (CO1–3) are generic phases because they appear also in the decoupled case $(\gamma\equiv 0)$. When the $M\phi_1\phi_2$–coupling is not present, the phases SC1–3 are identical and represented by the order parameter $\phi$ where the components $\phi_j$ participate on equal footing. The asterisk attached to the stability condition of “the second superconductivity phase"(SC2), indicates that our analysis is insufficient to determine whether this phase corresponds to a minimum of the free energy. As we shall see later the phase SC2, as well as the other two purely superconducting phases and the coexistence phase CO1, have no chance to become stable for $\gamma \neq 0$. This is so, because the non-generic phase of coexistence of superconductivity and ferromagnetism (FS in Table 1), which does not exist for $\gamma = 0$ is stable and has a lower free energy in their domain of stability. Note, that a second domain $(M < 0)$ of the FS phase exists and is denoted in Table 1 by FS$^*$. Here we shall describe only the first domain (FS). The domain FS$^{\ast}$ is considered in the same way. 
The cubic equation for $M$ corresponding to FS (see Table 1) is shown in Fig. 1 for $\gamma = 1.2$ and $t = -0.2$. For any $\gamma > 0$ and $t$, the stable FS thermodynamic states are given by $r (M) < r_m = r(M_m)$ for $M > M_m > 0$, where $M_m$ corresponds to the maximum of the function $r(M)$. Functions $M_m(t)$ and $M_0(t) = (-t + \gamma^2/2)^{1/2} = \sqrt{3}M_m(t)$ are drawn in Fig. 2 for $\gamma = 1.2$. Functions $r_m(t) = 4M_m^3(t)/\gamma$ for $t < \gamma^2/2$ (the line of circles in Fig. 3) and $r_e(t) = \gamma|t|^{1/2}$ for $t < 0$ define the borderlines of stability and existence of FS. TABLE 1. Phases and their existence and stability properties \[$\theta = (\theta_2-\theta_1)$, $k = 0, \pm 1,...$\].\ Phase order parameter existence stability domain ------------- ------------------------------------------------------------------ ----------------- -------------------------- N $\phi_j = M = 0$ always $t > 0, r > 0$ FM $\phi_j = 0$, $M^2 = -t$ $t < 0$ $r>0$, $r^2 > \gamma^2t$ SC1 $\phi_1=M=0$, $\phi^2 = -r$ $r<0$ unstable SC2 $\phi^2 = -r$, $\theta = \pi k$, $M = 0$ $r<0$ $(t > 0)^*$ SC3 $\phi_1=\phi_2=M=0$, $\phi^2_3 = -r$ $r<0$ $r<0$, $t>0$ CO1 $\phi_1= \phi_2=0$, $\phi^2_3 = -r$, $M^2=-t$ $r<0$, $t<0$ $r<0$, $t<0$ CO2 $\phi_1=0$, $\phi^2 = -r$, $\theta=\theta_2=\pi k$, $M^2=-t$ $r<0$, $t<0$ unstable FS $2\phi_1^2 = 2\phi_2^2 = \phi^2 = -r + \gamma M$, $\phi_3 = 0$ $\gamma M > r$ $3M^2>(-t +\gamma^2/2)$ $\theta= 2\pi(k - 1/4) $, $\gamma r = (\gamma^2-2t)M - 2M^3$ $M > 0$ FS$^{\ast}$ $2\phi_1^2 = 2\phi_2^2 = \phi^2 = -(r + \gamma M)$, $\phi_3 = 0$ $-\gamma M > r$ $3M^2>(-t +\gamma^2/2)$ $\theta= 2\pi(k + 1/4) $, $\gamma r = (2t -\gamma^2)M + 2M^3$ $M < 0$ [**4.2. Phase diagram**]{} We have outlined the domain in the ($t$, $r$) plane where the FS phase exists and is a minimum of the free energy. For $r < 0$ the cubic equation for $M$ (see Table 1) and the existence and stability conditions are satisfied for any $M \geq 0$ provided $t \geq \gamma^2 $. For $ t < \gamma^2$ the condition $M \geq M_0$ have to be fulfilled, here the value $M_0 = (-t + \gamma^2/2)^{1/2}$ of $M$ is obtained from $r(M_0) = 0$. Thus for $r = 0$ the N-phase is stable for $t \geq \gamma^2/2$, on the other hand FS is stable for $t \leq \gamma^2/2$. For $r > 0$, the requirement for the stability of FS leads to the inequalities $$\label{eq15} max\left(\frac{r}{\gamma}, M_m\right) < M < M_0\;,$$ where $M_m = (M_0/\sqrt{3})$ and $M_0$ should be the positive solution of the cubic equation of state from Table 1; $M_m > 0$ gives a maximum of the function $r(M)$; see also Figs. 1 and 2. \ The further analysis leads to the existence and stability domain of FS below the line AB given by circles (see Fig. 3). In Fig. 3 the curve of circles starts from the point A with coordinates ($\gamma^2/2$, $0$) and touches two other (solid and dotted) curves at the point B with coordinates ($-\gamma^2/4$, $\gamma^2/2$). Line of circles represents the function $r(M_m) \equiv r_m(t)$ where $$\label{eq16} r_m(t) = \frac{4}{3\sqrt{3}\gamma} \left (\frac{\gamma^2}{2} - t\right)^{3/2}.$$ Dotted line is given by $r_e(t) = \gamma\sqrt{|t|}$. The inequality $r < r_m(t)$ is a condition for the stability of FS, whereas the inequality $r \leq r_e(t)$ for $ (-t) \geq \gamma^2/4$ is a condition for the existence of FS as a solution of the respective equation of state. This existence condition for FS has been obtained from $\gamma M > r$ (see Table 1). In the region on the left of the point B in Fig. 
3, the FS phase satisfies the existence condition $\gamma M > r$ only below the dotted line. In the domain confined between the lines of circles and the dotted line on the left of the point B the stability condition for FS is satisfied but the existence condition is broken. The inequality $r \geq r_e(t)$ is the stability condition of FM for $ 0 \leq (-t) \leq \gamma^2/4$. For $(-t) > \gamma^2/4$ the FM phase is stable for all $r \geq r_e(t)$. In the region confined by the line of circles AB, the dotted line for $ 0 < (-t) < \gamma^2/4$, and the $t-$axis, the phases N, FS and FM have an overlap of stability domains. The same is valid for FS, the SC phases and CO1 in the third quadrant of the plane ($t$, $r$). The comparison of the respective free energies for $r < 0$ shows that the stable phase is FS whereas the other phases are metastable within their domains of stability. The part of the $t$-axis given by $r=0$ and $t > \gamma^2/2$ is a phase transition line of second order which describes the N-FS transition. The same transition for $0 < t < \gamma^2/2$ is represented by the solid line AC which is the equilibrium transition line of a first order phase transition. This equilibrium transition curve is given by the function $$\label{eq17} r_{eq}(t) = \frac{1}{4}\left[3\gamma - \left(\gamma^2 + 16t \right))^{1/2}\right]M_{eq}(t),$$ where $$\label{eq18} M_{eq}(t) = \frac{1}{2\sqrt{2}}\left[\gamma^2 - 8t + \gamma\left(\gamma^2 + 16t \right)^{1/2}\right]^{1/2}$$ is the equilibrium value (jump) of the magnetization. The order of the N-FS transition changes at the tricritical point A. The domain above the solid line AC and below the line of circles for $ t > 0$ is the region of a possible overheating of FS. The domain of overcooling of the N-phase is confined by the solid line AC and the axes ($t > 0$, $r >0$). At the triple point C with coordinates \[0, $r_{eq}(0) = \gamma^2/4$\] the phases N, FM, and FS coexist. For $t < 0$ the straight line $$\label{eq19} r_{eq}^* (t) = \frac{\gamma^2}{4} + |t|,\;\;\;\;\;\; -\gamma^2/4 < t < 0,$$ describes the extension of the equilibrium phase transition line of the N-FS first order transition to negative values of $t$. For $t < (-\gamma^2/4)$ the equilibrium phase transition FM-FS is of second order and is given by the dotted line on the left of the point B (the second tricritical point in this phase diagram). Along the first order transition line $r_{eq}^{\ast}(t)$ given by  Eq. (\[eq19\]) the equilibrium value of $M$ is $M_{eq} =\gamma/2$, which implies an equilibrium order parameter jump at the FM-FS transition equal to ($\gamma/2 - \sqrt{|t|}$). On the dotted line of the second order FM-FS transition the equilibrium value of $M$ is equal to that of the FM phase ($M_{eq} = \sqrt{|t|}$). Note, that the FM phase does not exists below $T_s$ and this seems to be a disadvantage of the model (\[eq12\]) with $\gamma_1 = 0$. The equilibrium phase transition lines of the FM-FS and N-FS transition lines in Fig. 3 can be expressed by the respective equilibrium phase transition temperatures $T_{eq}$ defined by the equations $r_e = r(T_{eq})$, $r_{eq} = r(T_{eq})$, $r^{\ast}_{eq} = r(T_{eq})$, and with the help of the relation $M_{eq} = M(T_{eq})$. This leads to some limitations on the possible variations of the parameters of the theory. 
For example, the critical temperature ($T_{eq} \equiv T_c$) of the FM-FS transition of second order ($\gamma^2/4 < -t$) is obtained in the form $T_{c} = (T_s + 4\pi J{\cal{M}}/\alpha_s)$, or, using ${\cal{M}} = (-a_f/\beta)^{1/2}$, $$\label{eq20} T_{c} = T_s -\frac{T^{\ast}}{2} + \left[ \left(\frac{T^{\ast}}{2}\right)^2 + T^{\ast}(T_f-T_s)\right]^{1/2}\;,$$ where $T_f > T_s$, and $T^{\ast} = (4\pi J)^2\alpha_f/\alpha_s^2\beta$ is a characteristic temperature of the model (\[eq12\]) with $\gamma_1=w=v=0$. The investigation of the conditions for the validity of Eq. (\[eq20\]) leads to the conclusion that the FM-FS continuous phase transition (at $\gamma^2 < -t)$ is possible only if the following condition is satisfied: $$\label{eq21} T_{f} - T_s > \ = (\varsigma + \sqrt{\varsigma})T^{\ast}\;,$$ where $\varsigma = \beta\alpha_s^2/4b\alpha_f^2$. This means that the second order FM-FS transition should disappear for a sufficiently large $\gamma$–coupling. Such a condition does not exist for the first order transitions FM-FS and N-FS. Taking into account the gradient term (4) in the free energy (\[eq2\]) should lead to a depression of the equilibrium transition temperature. As the magnetization increases with the decrease of the temperature, the vortex state should occur at temperatures which are lower than the equilibrium temperature $T_{eq}$ of the homogeneous (Meissner) state. For example, the critical temperature ($\tilde{T}_c$) corresponding to the inhomogeneous (vortex) phase of FS-type has been evaluated [@Walker:2002] to be lower than the critical temperature  (\[eq20\]): $(T_c - \tilde{T}_c) = 4\pi \mu_B{\cal{M}}/\alpha_s$ ($\mu_B = |e|\hbar/2mc$ - Bohr magneton). For $J \gg \mu_B$, we have $T_c \approx \tilde{T}_c$. For $ r > 0$, namely, for temperatures $T > T_s$ the superconductivity is triggered by the magnetic order through the $\gamma$-coupling. The superconducting phase for $T > T_s$ is entirely in the $(t,r)$ domain of the ferromagnetic phase. Therefore, the uniform supeconducting phase can occur for $T > T_s$ only through a coexistence with the ferromagnetic order. In the next Sections we shall focus on the temperature range $T > T_s$ which seems to be of main practical interest. We shall not dwell on the superconductivity in the fourth quadrant $(t >0,r<0)$ of the $(t,r)$ diagram where pure superconductivity phases are possible in systems with $T_s > T_f$ (this is not the case for UGe$_2$, URhGe, and ZrZn$_2$). Besides, we shall not discuss the possible metastable phases in the third quadrant $(t<0,r<0)$ of the $(t,r)$ diagram. [**4.3. Magnetic susceptibility**]{} Consider the longitudinal magnetic susceptibility $\chi_1 = (\chi_{\mbox{\scriptsize V}}/V)$ per unit volume [@Shopova3:2003]. The external magnetic field $\mbox{\boldmath$H$} = (0,0,H)$ with $ H = \left(\partial f/\partial {\cal{M}}\right)$ has the same direction as the magnetization $\mbox{\boldmath$M$}$. We shall calculate the quantity $\chi = \sqrt{\beta_f}\chi_1$ for the equilibrium thermodynamic states $\mu_0$ given by Eq. (13). 
Having in mind the relations (11) between $M$ and ${\cal{M}}$, and between $\psi$ and $\varphi$ we can write $$\label{eq22} \chi^{-1} = \frac{d}{d M_0}\left[\left(\frac{\partial f} {\partial M}\right)_{T,\varphi_j} \right]_{\mu_0}\:,$$ where the equilibrium magnetization $M_0$ and equilibrium superconducting order parameter components $\varphi_{0j}$ should be taken for the respective equilibrium phase (see Table 1, where the suffix “0” of $\phi$, $\theta$, and $M$ has been omitted; hereafter the same suffix will be often omitted, too). Note that the value of the equilibrium magnetization $M$ in FS is the maximal nonnegative root of the cubic equation in $M$ given in Table 1. Using Eq. (22) we obtain the susceptibility $\chi$ of the FS phase in the form $$\label{eq23} \chi^{-1} = -\gamma^2 + 2t + 6M^2\;.$$ The susceptibility of the other phases has the usual expression $$\label{eq24} \chi^{-1} = 2t + 6M^2\;.$$ Eq. (24) yields the known results for the paramagnetic susceptibility ($\chi_P = 1/2t$; $t>0$) , corresponding to the normal phase, and for the ferromagnetic susceptibility ($\chi_F = 1/4|t|$; $t <0$), corresponding to FM. These susceptibilities can be compared with the susceptibility $\chi$ of FS. As the susceptibility $\chi$ of FS cannot be analytically calculated for the whole domain of stability of FS, we shall consider the close vicinity of the N-FS and FM-FS phase transition lines. Near the second order phase transition line on the left of the point $B$ ($t < -\gamma^2/4$), the magnetization has a smooth behaviour and the magnetic susceptibility does not exhibit any singularities (jump or divergence). For $t > \gamma^2/2$, the magnetization is given by $M = (s_- + s_+)$, where $$\label{eq25} s_{\pm} =\left\{- \frac{\gamma r}{4} \pm \left[ \frac{(t-\gamma^2/2)^3}{27} + \left( \frac{\gamma r}{4}\right)^2\right]^{1/2} \right\}^{1/3}\;.$$ For $r = 0$, $M = 0$, whereas for $|\gamma r| \ll (t - \gamma^2/2)$ and $r=0$ one may obtain $M \approx -\gamma r/ (2t-\gamma^2) \ll 2t$. This means that in a close vicinity $(r < 0)$ of $r = 0$ along the second order phase transition line $(r=0, t>\gamma^2)$ the magnetic susceptibility is well described by the paramagnetic law $\chi_P = (1/2t)$. For $r< 0$ and $t \rightarrow \gamma^2/2$, we obtain $M = -(\gamma r/2)^{1/3}$ which yields $$\label{eq26} \chi^{-1} = 6\left(\frac{\gamma |r|}{2}\right)^{2/3}\:.$$ On the phase transition line $AC$ we have $$\label{eq27} M_{eq}(t) = \frac{1}{2\sqrt{2}}\left[\gamma^2 - 8t + \gamma\left(\gamma^2 + 16t \right)^{1/2}\right]^{1/2}$$ and, hence, $$\label{eq28} \chi^{-1} = -4t - \frac{\gamma^2}{4}\left[1 -3 \left( 1 + \frac{16t}{\gamma^2}\right)^{1/2}\right]\:.$$ At the tricritical point $A$ this result yields $\chi^{-1}(A) = 0$, whereas at the triple point $C$ with coordinates ($0$, $\gamma^2/4$) we have $\chi(C) = (2/\gamma^2)$. On the line $BC$ we obtain $M=\gamma/2$ and, hence, $$\label{eq29} \chi^{-1} = 2t + \frac{\gamma^2}{2}\:.$$ At the tricritical point $B$ with coordinates ($-\gamma^2/4$, $\gamma^2/2$) this result yields $\chi^{-1}(B)= 0$. In order to investigate the magnetic susceptibility tensor we shall slightly extend the framework of out treatment by considering arbitrary orientations of the vectors $\mbox{\boldmath$H$}$ and $\mbox{\boldmath$M$}$. We shall denote the spatial directions $(\mbox{\boldmath$x$},\mbox{\boldmath$y$},\mbox{\boldmath$z$})$ as $(1,2,3)$. 
The components of the inverse magnetic susceptibility tensor $$\label{eq30} \hat{\chi}^{-1}_1 =\hat{\chi}^{-1}\sqrt{b_f} = \left\{\chi^{-1}_{ij}\right\} \sqrt{b_f}$$ can be represented in the form $$\label{eq31} \chi^{-1}_{ij} = 2(t + M^2)\delta_{ij} + 4M_iM_j + i\gamma\frac{\partial}{\partial M_j}(\varphi\times\varphi^{\ast})_i\:,$$ where $M$ and $\varphi_j$ are to be taken at their equilibrium values: $M_0$, $\varphi_{0j}$, $\theta_{0j}$. The last term in the r.h.s. of Eq. (28) is equal to zero for all phases in Table 1 except for FS (and FS$^{\ast})$. When the last term in Eq. (29) is equal to zero we obtain the known result the susceptibility tensor for second order phase transitions (see, e.g., Ref. [@Uzunov:1993]). Consider the FS phase, where $\phi_{j}$ depends on $M_j$. Now we can choose again $\mbox{\boldmath$M$} = (0,0,M)$ and use our results for the equilibrium values of $\phi_j$, $\theta$ and $M$ (see Table 1). Then the components $\chi^{-1}_{ij}$ corresponding to FS are given by $$\label{eq32} \chi^{-1}_{ij} = 2(t + M^2)\delta_{ij} + 4M_iM_j -\gamma^2\delta_{i3}\:.$$ Thus we have $\chi^{-1}_{i\neq j}= 0$, $$\label{eq33} \chi^{-1}_{11} = \chi^{-1}_{22} = 2(t + M^2)\:,$$ and $\chi^{-1}_{33}$ coincides with the inverse longitudinal susceptibility $\chi^{-1}$ given by Eq. (23). [**4.4. Entropy and specific heat**]{} The entropy $S(T) \equiv (\tilde{S}/V) = -V\partial (f/\partial T) $ and the specific heat $C(T) \equiv (\tilde{C}/V) = T(\partial S/\partial T)$ per unit volume $V$ are calculated in a standard way [@Uzunov:1993]. We are interested in the jumps of these quantities on the N-FM, FM-FS, and N-FS transition lines. The behaviour of $S(T)$ and $C(T)$ near the N-FM phase transition and near the FM-FS phase transition line of second order on the left of the point $B$ (Fig. 3) is known from the standard theory of critical phenomena (see, e.g., Ref. [@Uzunov:1993] and for this reason we focus our attention on the phase transitions of type FS-FM and FS-N for $(t>-\gamma^2/4$), i.e., on the right of the point $B$ in Fig. 3. Using the equations for the order parameters $\psi$ and $M$ (Table 1) and applying the standard procedure for the calculation of $S$, we obtain the general expression $$\label{eq34} S(T) = - \frac{\alpha_s}{\sqrt{b}}\phi^2 - \frac{\alpha_f}{\sqrt{\beta}}M^2\:.$$ The next step is to calculate the entropies $S_{ FS}(T)$ and $S_{FM}$ of the ordered phases FS and FM. Note, that use the usual convention $F_{N} = Vf_{N}=0$ for the free energy of the N-phase and, hence, we must set $S_{N}(T)=0$. Consider the second order phase transition line ($r=0$, $t>\gamma^2/2$). Near this line $S_{FS}(T)$ is a smooth function of $T$ and has no jump but the specific heat $C_{FS}$ has a jump at $T=T_s$, i.e. for $r=0$. This jump is given by $$\label{eq35} \Delta C_{FS}(T_s) = \frac{\alpha_s^2T_s}{b}\left[ 1 - \frac{1}{1 - 2t(T_s)/\gamma^2}\right]\:.$$ The jump $\Delta C_{FS}(T_s)$ is higher than the usual jump $\Delta C(T_c) = T_c\alpha^2/b$ known from the Landau theory of standard second order phase transitions [@Uzunov:1993]. The entropy jump $\Delta S_{AC}(T) \equiv S_{FS}(T) $ on the line $AC$ is obtained in the form $$\label{eq36} \Delta S_{AC}(T) = -M_{eq}\left\{\frac{\alpha_s\gamma}{4\sqrt{b}}\left[1 + \left(1 + \frac{16t}{\gamma^2}\right)^{1/2}\right] - \frac{\alpha_f}{\sqrt{\beta}}M_{eq}\right\}\:,$$ where $M_{eq}$ is given by Eq. (18). From Eqs. (18) and (36), we have $\Delta S(t=\gamma^2/2) = 0$, i.e., $\Delta S(T)$ becomes equal to zero at the tricritical point $A$. 
Besides we find from Eqs. (18) and (36) that at the triple point $C$ the entropy jump is given by $$\label{eq37} \Delta S(t=0) = -\frac{\gamma^2}{4}\left(\frac{\alpha_s}{\sqrt{b}} + \frac{\alpha_f}{\sqrt{\beta}} \right)\:.$$ On the line $BC$ the entropy jump is defined by $\Delta S_{BC}(T) = [S_{FS}(T)-S_{FM}(T)]$. We obtain $$\label{eq38} \Delta S_{BC}(T) = \left( |t| -\frac{\gamma^2}{4}\right)\left(\frac{\alpha_s}{\sqrt{b}} + \frac{\alpha_f}{\sqrt{\beta}} \right)\:.$$ At the tricritical point $B$ this jump is equal to zero as it should be. The calculation of the specific heat jump on the first order phase transition lines $AC$ and $BC$ is redundant for two reasons. Firstly, the jump of the specific heat at a first order phase transition differs from the entropy by a factor of order of unity. Secondly, in caloric experiments where the relevant quantity is the latent heat $Q = T \Delta S(T)$, the specific heat jump can hardly be distinguished. [**4.5. Note about a simplified theory**]{} The consideration in this Section as well as in Sections 5 and 6 can be performed within an approximate scheme, known from the theory of improper ferroelectrics (see, e.g., Ref. [@Cowley:1980]). The idea of the approximation is in the supposition that the order parameter $M$ is small enough so that one can neglect $M^4$-term in the free energy. Within this approximation one easily obtains from the data for FS presented in Table 1 or by a direct calculation of the respective reduced free energy that the order parameters $\phi$ and $M$ of FS are described by the simple equalities $r = (\gamma M -\phi^2)$ and $M = (\gamma/2t)\phi^2$. Of course, one may perform this simple analysis from the very beginning. For ferroelectrics this approximation gives a substantial departure of theory from experiment [@Cowley:1980]. In general, the domain of reliability of such an approximation should be the close vicinity of the ferromagnetic phase transition, i.e. temperatures near to the critical temperature $T_f$. On the other hand, this discussion is worthwhile only if the “primary” order parameter also exists in the same (narrow) temperature domain ($\phi > 0$). Therefore this approximation has some application in systems, where $T_s \ge T_f$. For $T_s<T_f$, one may simplify our thorough analysis by a supposition for a relatively small value of the modulus $\phi$ of the superconducting order parameter. This approximation should be valid in some narrow temperature domain near the line of second order phase transition from FM to FS. [**5. Effect of symmetry conserving coupling**]{} Here we consider the case when both coupling parameters $\gamma$ and $\gamma_1$ are different from zero. In this way we shall investigate the effect of the symmetry conserving $\gamma_1$-term in the free energy on the thermodynamics of the system. Note that when $\gamma$ is equal to zero the analysis is quite easy and the results are known from the theory of bicritical and tetracritical points [@Uzunov:1993; @Toledano:1987; @Liu:1973; @Imry:1975]. For the problem of coexistence of conventional superconductivity and ferromagnetic order this analysis $(\gamma = 0, \gamma_1 \neq 0)$ was made in Ref. [@Vonsovsky:1982]. Once again we postpone the consideration of anisotropy effects by setting $w = v = 0$. The present analysis is much more difficult than that in Sec. 4, and cannot be performed only by analytical calculations; rather, some complementary numerical analysis is needed. 
Our investigation is based to a great extent on analytical calculations but a numerical analysis has been also performed in order to obtain concrete conclusions. [**5.1. Phases**]{} The calculations show that for temperatures $T > T_s$, i.e., for $r > 0$, we have three stable phases. Two of them are quite simple: the normal ($N$-) phase with existence and stability domains shown in Table 1, and the FM phase with the existence condition $ t<0$ as shown in Table 1, and a stability domain defined by the inequalities $r > \gamma_1t$ and $$\label{eq39} r > \gamma_1t + \gamma\sqrt{-t}\:.$$ The third stable phase for $r>0$ is a more complex variant of the mixed phase FS and its domain FS$^*$, discussed in Section 4. The symmetry of the FS phase coincides with that found in  [@Walker:2002] Let us also mention that for $r<0$ five pure superconducting ($M =0$, $\phi > 0$) phases exist. Two of these phases, $(\phi_1 > 0, \phi_2 = \phi_3 =0)$ and $(\phi_1 =0, \phi_2>0, \phi_3>0)$ are unstable. Two other phases, $(\phi_1>0, \phi_2>0, \phi_3 =0, \theta_2 = \theta_1 + \pi k)$ and $(\phi_1>0,\phi_2>0, \phi_3>0, \theta_2 = \theta_1 + \pi k, \theta_3$ – arbitrary; $k=0,\pm1,...)$ show a marginal stability for $ t > \gamma_1 r$. Only one of the five pure superconducting phases, namely, the phase SC3, given in Table 1, is stable. In the present case of $\gamma_1 \neq 0$ the values of $\phi_j$ and the existence domain of SC3 are the same as shown in Table  1 for $\gamma_1 =0$ but the stability domain is different and is given by $t > \gamma_1 r$. When the anisotropy effects are taken into account the phases exhibiting marginal stability within the present treatment may receive a further stabilization. Besides, three other mixed phases $(M \neq, \phi >0)$ exist for $r < 0$ but one of them is metastable (for $\gamma_1^2 >1, t < \gamma_1 r$, and $r < \gamma_1 t$) and the other two are absolutely unstable. Here the thermodynamic behaviour for $r < 0$ is much more abundant in phases than in the case of improper ferroelectrics with two component primary order parameter  [@Toledano:1987]. However, at this stage of experimental needs about the properties of unconventional ferromagnetic superconductors the investigation of the phases for temperatures $T < T_s$ is not of primary interest and for this reason we shall focus on the relatively higher temperature domain $r > 0$. The FS phase is described by the following equations: $$\label{eq40} \phi_1 = \phi_2=\frac{\phi}{\sqrt{2}}\:, \;\;\; \phi_3 = 0\:,$$ $$\label{eq41} \phi^2= (\pm \gamma M-r-\gamma_1 M^2)\:,$$ $$\label{eq42} (1-\gamma_1^2)M^3\pm \frac{3}{2} \gamma \gamma_1 M^2 +\left(t-\frac{\gamma^2}{2}-\gamma_1 r\right)M \pm \frac{\gamma r}{2}=0\:,$$ and $$\label{eq43} (\theta_2 - \theta_1) = \mp \frac{\pi}{2} + 2\pi k\:,$$ ($k = 0, \pm 1,...$). The upper sign in Eqs. (41) - (43) corresponds to the FS domain in which $\mbox{sin}(\theta_2-\theta_1) = -1$ and the lower sign corresponds to the FS$^{*}$ domain. Here we have a generalization of the two-domain phase FS discussed in Section 4 and for this reason we use the same notations. The analysis of the stability matrix (14) for these phase domains shows that FS is stable for $M > 0$ and FS$^{*}$ is stable for $M<0$, just like our result in Section 4. As these domains belong to the same phase, namely, have the same free energy and are thermodynamically equivalent, we shall consider one of them, for example, FS. Besides, our analysis of Eqs. 
(40) - (43) shows that FS exists and is stable in a broad domain of the $(t,r)$ diagram, including substantial regions corresponding to $r>0$. [**5.2. Phase stability and phase diagram**]{} In order to outline the phase diagram ($t,r$) we shall use the information given above for the other three phases which have their own domains of stability in the $(t,r)$ plane: N, FM, and FS. \ The phase diagram for concrete parameters of $\gamma$ and $\gamma_1$ is shown in Fig. 4. The phase transition between the normal and FS phases is of first order and goes along the equilibrium line AC. It is given by the equation: $$\label{eq44} r_{eq}(t)= \frac{M_{eq}}{(\gamma_1 M_{eq}-\gamma/2)}\left[(1-\gamma_1^2)M_{eq}^2+ \frac{3}{2} \gamma \gamma_1 M_{eq} +(t-\frac{\gamma^2}{2})\right].$$ The equilibrium value $M_{eq}$ on the line AC is found by setting the equilibrium free energy $f_{FS}(\mu_0)$ of FS equal to zero, i.e. equal to the free energy ($f_N = 0$) of the N-phase. We have obtained the equilibrium energy $f_N$ as a function of the magnetization: $$\begin{aligned} \label{eq45} f_{FS} & = & -\frac{M^2}{2(M\gamma_1-\gamma/2)^2}\\ \nonumber &&\times\left\{(1-\gamma_1^2)M^4 + \gamma\gamma_1 M^3 + 2\left[t(1-\gamma_1^2)- \frac{\gamma^2}{8}\right] M^2 - 2\gamma\gamma_1t M + t(t-\frac{\gamma^2}{2})\right\}, \end{aligned}$$ where $M \equiv M_{eq}$ (hereafter the suffix “eq” will be often omitted). The numerical analysis of the free energy (45) as a polynomial of $M$ shows that the expression in the curly brackets has one positive zero in the interval of values of $t$ from $t=\gamma^2/2$ (point A in Fig. 4) up to $t=0$, where $M_{t=0}=\gamma/2(\gamma_1+1)$. As far as the obtained values for $M$ are in the interval $0 \le$M$<(\gamma/2 \gamma_1)$ the existence condition of FS, namely, $$\label{eq46} \phi^2=\frac{M(M^2+t)}{(\gamma/2-\gamma_1 M)} \ge 0\:,$$ is also satisfied. At the triple point C with coordinates $t=0$, $r=\gamma^2/4(\gamma_1+1)$ three phases (N, FM, and FS) coexist. To find the magnetization $M$ on the equilibrium curve BC of the first order phase transition FM-FS for t$<0$, we use the equality f$_{FM}$=f$_{FS}$, or, equivalently, $$\label{eq47} \frac{(M^2+t^2)^2}{2(M\gamma_1-\gamma/2)^2}\left[\frac{\gamma}{2}- M(1+\gamma_1)\right] \left[\frac{\gamma}{2}+M(1-\gamma_1)\right]=0.$$ Then the function $r_{eq}$(t) for $t<0$ will have the form $$\label{eq48} r_{eq}(t) = \frac{\gamma^2}{4(1+\gamma_1)}-t,$$ This function describes the line BC of first order phase transition (see Fig. 4) which terminates at the tricritical point B with coordinates $$\label{eq49} t_B = -\frac{\gamma^2}{4(1+\gamma_1)^2}\:,\;\;\; r_B= \frac{\gamma^2(2+\gamma_1)}{4(1+\gamma_1)^2}\:.$$ To the left of the tricritical point the second order phase transition curve is given by the relation, $$\label{eq50} r_e(t)=\gamma\sqrt{-t}+\gamma_1 t,$$ which coincides with the stability condition (39) of FM. This line intersects t-axis for $t=(-\gamma^2/\gamma_1^2)$ and is well defined also for $r<0$. On the curve $r_e(t)$ the magnetization is $M=\sqrt{-t}$ and the superconducting order parameter is equal to zero ($\phi=0$). The function $r_e(t)$ has a maximum at the point $(t,r) = (-\gamma^2/4\gamma_1^2, \gamma^2/4\gamma_1)$; here $M=(\gamma/2\gamma_1)$. When this point is approached the second derivative of the free energy with respect to $M$ tends to infinity, but as we shall see later the inclusion of the anisotropy of triplet pairing smears this singularity. 
The result for the curves $r_{FS}(t)$ of equilibrium phase transitions (N-FS ans FM-FS) can be used to define the respective equilibrium phase transition temperatures $T_{FS}$. We shall not discuss the region, $t>0$, $r<0$, because we have supposed from the very beginning of our analysis that the transition temperature for the ferromagnetic order T$_f$ is higher then the superconducting transition temperature T$_s$, as i is for the known unconventional ferromagnetic superconductors. But this case may become of substantial interest when, as one may expect, materials with $T_f < T_s$ will be discovered. \ The stability conditions of FS can be written in the general form $$\label{eq51} \frac{-M^2+\gamma \gamma_1 M -t-\gamma^2/2}{M\gamma_1-\gamma/2}\ge0\:,$$ $$\label{eq52} \gamma M \ge 0\:,$$ $$\label{eq53} \frac{1}{M\gamma_1-\gamma/2}\left[\gamma_1(1-\gamma_1^2)M^3 - \frac{3}{4}\gamma(1-2\gamma_1^2) M^2 - \frac{3}{4}\gamma^2\gamma_1 M - \frac{\gamma}{4}(t-\gamma^2/2)\right] \ge 0\:.$$ Our consideration of the stability conditions (51) - (53) together with the existence condition Eq. (46) of the phase FS is illustrated by the picture shown in Fig. 5. For $0 \le t \le \gamma^2/2$ and $0<M<(\gamma/2\gamma_1)$ conditions (46) and (51) are satisfied. Condition (53) is a cubic equation in $M(t)$ which for the above values of the parameter $t$ has three real roots, one of them negative. The positive roots, $M(t) > 0$, as function of $t$ are drawn by circles in Fig. 5 and it is obvious that the condition (53) will be satisfied for those values of $M(t)$ that are between the two circled curves. The smaller positive root of Eq. (53) intersects $t$-axis for $t=\gamma^2/2$ (point A in Fig. 5). Note, that $M=\gamma/(2\gamma_1)$ is given by the horizontal dashed line. For $t \le -\gamma^2(2-\gamma_1^2)/4$ the stability condition (51) has two real roots shown by curves with crosses in Fig. 5. For negative values of the parameter $t$ we shall consider also the curve $M=\sqrt{-t}$ which is the solution of existence condition (46) and is depicted by solid line in Fig. 5. For $(-\gamma^2/4 \gamma_1^2)<t<0$ the FS phase exists and is stable when $\gamma /(2 \gamma_1) \ge M\ge \sqrt{-t}$. The point S in Fig. 5 with coordinates $(-\gamma^2/(4 \gamma_1^2), \gamma/(2 \gamma_1)$ is singular in sense that l.h.s. of conditions (51) and (53) go to infinity there. When $t>(-\gamma^2/4 \gamma_1^2)$ the existence condition (46) implies $\gamma/(2 \gamma_1)<M<\sqrt{-t}$. The stability condition (53) is always satisfied (two complex conjugate roots and one negative root) and condition (51) will be fulfilled for values of $M$ between the two curves denoted by crosses in Fig. 5. [**5.3. Discussion**]{} The shape of the equilibrium phase transition lines corresponding to the phase transitions N-SC, N-FS, and FM-FS is similar to that for the simpler case $\gamma_1 = 0$ and we shall not dwell on the variation of the size of the phase domains with the variations of the parameter $\gamma_1$ from zero to values constrained by the condition $\gamma_1^2 <1$. Besides one may generalize our treatment (Section 4) of the magnetic susceptibility tensor and the thermal quantities in this more complex case and to demonstrate the dependence of these quantities on $\gamma_1$. We shall not dwell on these problems. But an important qualitative difference between the equilibrium phase transition lines shown in Figs. 1 and 4 cannot be omitted. 
The second order phase transition line $r_e(t)$, shown by the dotted line on the left of point $B$ in Fig. 1, tends to large positive values of $r$ for large negative values of $t$ and remains in the “second quadrant” ($t<0, r>0)$ of the plane ($t,r$) while the respective second order phase transition line in Fig. 4 crosses the $t$-axis in the point $t=-\gamma^2/\gamma_1^2$ and is located in the third quadrant ($t<0,r<0$) for all possible values $t < -\gamma^2/\gamma_1^2$. This means that the ground state (at 0 K) of systems with $\gamma_1 =0$ will be always the FS phase whereas two types of ground states, FM and FS, are allowed for systems with $0< \gamma_1^2 < 1$. The latter seems more realistic in view of comparison of theory and experiment, especially, in ferromagnetic compounds like UGe$_2$, URhGe, and ZrZn$_2$. The neglecting of the $\gamma_1$-term does not allow to describe the experimentally observed presence of FM phase at quite low temperatures and relatively low pressure $P$. The final aim of the phase diagram investigation is the outline of the ($T,P$) diagram. Important conclusions about the shape of the $(T,P)$ diagram can be made from the form of the $(t,r)$ diagram without an additional information about the values of the relevant material parameters $(a_s$, $a_f,...$) and their dependence on the pressure $P$. One should know also the characteristic temperature $T_s$, which has a lower value than the experimentally observed [@Saxena:2000; @Huxley:2001; @Tateiwa:2001; @Pfleiderer:2001; @Aoki:2001] phase transition temperature $(T_{FS} \sim 1 K)$ to the mixed (FS) phase. A supposition about the dependence of the parameters $a_s$ and $a_f$ on the pressure $P$ was made in Ref. [@Walker:2002]. Our results for $T_f \gg T_s$ show that the phase transition temperature $T_{FS}$ varies with the variation of the system parameters $(\alpha_s, \alpha_f,...)$ from values which are much higher than the charactestic temperature $T_s$ up to zero temperature. This is seen from Fig. 4. [**6. Anisotropy effects**]{} When the anisotropy of the Cooper pairs is taken in consideration, there will be not drastic changes in the shape the phase diagram for $r>0$ and the order of the respective phase transitions. Of course, there will be some changes in the size of the phase domains and the formulae for the thermodynamic quantities. The parameter $w$ will also insert a slight change in the values of the thermodynamic quantities like the magnetic susceptibility and the entropy and specific heat jumps at the phase transition points. Besides, and this seems to be the main anisotropy effect, the $w$- and $v$-terms in the free energy lead to a stabilization of the order along the main crystal directions which, in other words, means that the degeneration of the possible ground states (FM, SC, and FS) is considerably reduced. This means also a smaller number of marginally stable states which are encountered by the analysis of the definiteness of the stability matrix (14). All anisotropy effects can be verified by the investigation of the free energy (12) which includes the $w$- and $v$-terms. We have made the above general conclusions on the basis of a detailed analysis of the effect of the Cooper pair anisotropy ($w$-) term, as well as on the basis of a preliminary analysis of the total free energy (12), where the crystal anisotropy ($v$-) term is also taken into account. 
Here we shall present our basic results for the effect of the Cooper pair anisotropy on the FS phase; the crystal anisotropy is neglected ($v=0$). The dimensionless anisotropy parameter $w=\bar u/(u + \bar u)$ can be either positive or negative depending on the sign of $\bar u$. Obviously when $\bar u > 0$, the parameter $w$ will be positive too ($0<$ w$<1$). We shall illustrate the influence of Cooper-pair anisotropy in this case. The order parameters ($M$, $\phi_j$, $\theta_j$) are given by Eqs. (40), (43), $$\label{eq54} \phi^2=\frac{\pm \gamma M-r-\gamma_1 M^2}{(1-w)} \ge 0\:,$$ and $$\label{eq55} (1- w - \gamma_1^2)M^3 \pm \frac{3}{2} \gamma \gamma_1 M^2 +\left[t(1-w)-\frac{\gamma^2}{2}-\gamma_1 r\right]M \pm \frac{\gamma r}{2}=0\:,$$ where the meaning of the upper and lower sign is the same as explained just below Eq. (43). We consider the FS domain corresponding to the upper sign in the Eq. (54) and (55). The stability conditions for FS read, $$\label{eq56} \frac{ (2-w)\gamma M- r -\gamma_1M^2}{1-w} \ge 0\:,$$ $$\label{eq57} \frac{1-2w}{1-w}(\gamma M -wr-w \gamma_1 M^2) > 0\:,$$ and $$\label{eq58} \frac{1}{1-w}\left[3(1-w-\gamma_1^2) M^2 + 3 \gamma \gamma_1 M + t(1-w)-\frac{\gamma^2}{2} -\gamma_1 r \right]\geq 0\:.$$ For $M\ne (\gamma/2 \gamma_1)$ we can express the function $r(M)$ defined by Eq. (54), substitute the obtained expression for $r(M)$ in the existence and stability conditions (54)-(57) and do the analysis in the same way as for $w=0$. The calculations show that in the domain $r>0$, FS is stable for $w<0.5$, when $w=0.5$ there is a marginal stability, and for $w>0.5$ the FS-phase is unstable ($0<w<1$). The results can be used to outline the phase diagram and calculate the thermodynamic quantities. This is performed in the way explained in the preceding Sections. [**7. Conclusion**]{} We have done an investigation of the M-trigger effect in unconventional ferromagnetic superconductors. This effect due to the $M\psi_1\psi_2$-coupling term in the GL free energy consists of bringing into existence of superconductivity in a domain of the phase diagram of the system that is entirely in the region of existence of the ferromagnetic phase. This form of coexistence of unconventional superconductivity and ferromagnetic order is possible for temperatures above and below the critical temperature $T_s$, which corresponds to the standard phase transition of second order from normal to Meissner phase – usual uniform superconductivity in a zero external magnetic field, which appears outside the domain of existence of ferromagnetic order. Our investigation has been mainly intended to clarify the thermodynamic behaviour at temperatures $T_s< T < T_f$, where the superconductivity cannot appear without the mechanism of M-triggering. We have described the possible ordered phases (FM and FS) in this most interesting temperature interval. The Cooper pair and crystal anisotropies have also been investigated and their main effects on the thermodynamics of the triggered phase of coexistence have been established. In discussions of concrete real material one should take into account the respective crystal symmetry but the variation of the essential thermodynamic properties with the change of the type of this symmetry is not substantial when the low symmetry and low order (in both $M$ and $\psi$) $\gamma$-term is present in the free energy. 
Below the superconducting critical temperature $T_s$ a variety of pure superconducting and mixed phases of coexistence of superconductivity and ferromagnetism exists and the thermodynamic behavior at these relatively low temperatures is more complex than in known cases of improper ferroelectrics. The case $T_f < T_s$ also needs a special investigation. Our results are referred to the possible uniform superconducting and ferromagnetic states. Vortex and other nonuniform phases need a separate study. The relation of the present investigation to properties of real ferromagnetic compounds, such as UGe$_2$, URhGe, and ZrZn$_2$, has been discussed throughout the text. In these real compounds the ferromagnetic critical temperature is much larger than the superconducting critical temperature $(T_f \gg T_s)$ and that is why the M-triggering of the spin-triplet superconductivity is very strong. Moreover, the $\gamma_1$-term is important to stabilize the FM order up to the absolute zero (0 K), as is in the known spin-triplet ferromagnetic superconductors. The neglecting [@Walker:2002] of the symmetry conserving $\gamma_1$-term prevents the description of the known real substances of this type. More experimental information about the values of the material parameters ($a_s, a_f, ...$) included in the free energy (12) is required in order to outline the thermodynamic behavior and the phase diagram in terms of thermodynamic parameters $T$ and $P$. In particular, a reliable knowledge about the dependence of the parameters $a_s$ and $a_f$ on the pressure $P$, the value of the characteristic temperature $T_s$ and the ratio $a_s/a_f$ at zero temperature are of primary interest. [**Acknowledgments:**]{} DIU thanks the hospitality of MPI-PKS-Dresden. Financial support by SCENET (Parma) and JINR (Dubna) is also acknowledged. [ll]{} L. P. Pitaevskii, [*Zh. Eksp. Teor. Fiz.*]{} [**37**]{} (1959) 1794 \[[*Sov. Phys. JETP*]{}, [**10**]{} (1960) 1267\]. A. J. Leggett, [*Rev. Mod. Phys.*]{} [**47**]{} (1975) 331. D. Vollhardt and P. Wölfle, [*The Superfluid Phases of Helium 3*]{} (Taylor $\&$ Francis, London, 1990). G. E. Volovik, [*The Universe in a Helium Droplet*]{} (Oxford University Press, Oxford, 2003). G. R. Stewart, [*Rev. Mod. Phys.*]{} [**56**]{} (1984) 755. M. Sigrist and K. Ueda, [*Rev. Mod. Phys.*]{} [**63**]{} (1991) 239. V. P. Mineev, K. V. Samokhin, [*Introduction to Unconventional Superconductivity*]{} (Gordon and Breach, Amsterdam, 1999). M. Sigrist and T. M. Ruce, [*Z. Phys. B. - Condensed Matter*]{} [**68**]{} (1987) 9. J. F. Annett, M. Randeria, and S. R. Renn, [*Phys. Rev.*]{} [**B 38**]{} (1988) 4660. G. E. Volovik, [*JETP Lett.*]{} [**48**]{} (1988) 41 \[[*Pis’ma Zh. Eksp. Teor. Fiz.*]{} [**48**]{} (1988) 39\]. E. J. Blagoeva, G. Busiello, L. De Cesare, Y. T. Millev, I. Rabuffo, and D. I. Uzunov, [*Phys. Rev.*]{} [**B42**]{} (1990) 6124. D. I. Uzunov, in: [*Advances in Theoretical Physics*]{}, ed. by E. Caianiello (World Scientific, Singapore, 1990), p. 96. D. I. Uzunov, [*Theory of Critical Phenomena*]{} (World Scientific, Singapore, 1993). J. F. Annett, [*Contemp. Physics*]{} [**36**]{} (1995) 323. D. J. Van Harlingen, [*Rev. Mod. Phys.*]{} [**67**]{} (1995) 515. C. C. Tsuei and J. R. Kirtly, [*Rev. Mod. Phys.*]{} [**72**]{} (2000) 969. G. E. Volovik and L. P. Gor’kov, [*JETP Lett.*]{} [**39**]{} (1984) 674 \[[*Pis’ma Zh. Eksp. Teor. Fiz.*]{} [**39**]{} (1984) 550\]. G. E. Volovik and L. P. Gor’kov, [*Sov. Phys. JETP*]{} [**61**]{} (1985) 843 \[[*Zh. Eksp. Teor. 
Fiz.*]{} [**88**]{} (1985) 1412\]. K. Ueda and T. M. Rice, [*Phys. Rev.*]{} [**B31**]{} (1985) 7114. E. I. Blount, [*Phys. Rev.*]{} [**B 32**]{} (1985) 2935. M. Ozaki, K. Machida, and T. Ohmi, [*Progr. Theor. Phys.*]{} [**74**]{} (1985) 221. M. Ozaki, K. Machida, and T. Ohmi, [*Progr. Theor. Phys.*]{} [**75**]{} (1986) 442. S. A. Antonenko and A. I. Sokolov, [*Phys. Rev.*]{} [B94]{} (1994) 15901. G. Busiello, L. De Cesare, Y. T. Millev, I. Rabuffo, and D. I. Uzunov, [*Phys. Rev.*]{} [**B43**]{} (1991) 1150. G. Busiello, and D. I. Uzunov, [*Phys. Rev.*]{} [**B42**]{} (1990) 1018. S. S. Saxena, P. Agarwal, K. Ahilan, F. M. Grosche, R. K. W. Haselwimmer, M.J. Steiner, E. Pugh, I. R. Walker, S.R. Julian, P. Monthoux, G. G. Lonzarich, A. Huxley. I. Sheikin, D. Braithwaite, and J. Flouquet, [*Nature*]{} [**406**]{} (2000) 587. A. Huxley, I. Sheikin, E. Ressouche, N. Kernavanois, D. Braithwaite, R. Calemczuk, and J. Flouquet, [*Phys. Rev.*]{} [**B63**]{} (2001) 144519-1. N. Tateiwa, T. C. Kobayashi, K. Hanazono, A. Amaya, Y. Haga. R. Settai, and Y. Onuki, [*J. Phys. Condensed Matter*]{}, [**13**]{} (2001) L17. P. Coleman, [*Nature*]{} [**406**]{} (2000) 580. C. Pfleiderer, M. Uhlatz, S. M. Hayden, R. Vollmer, H. v. Löhneysen, N. R. Berhoeft, and G. G. Lonzarich, [*Nature*]{} [**412**]{} (2001) 58. D. Aoki, A. Huxley, E. Ressouche, D. Braithwaite, J. Flouquet, J-P.. Brison, E. Lhotel, and C. Paulsen, [*Nature*]{} [**413**]{} (2001) 613. S. V. Vonsovsky, Yu. A. Izyumov, and E. Z. Kurmaev, [*Superconductivity of Transition Metals*]{} (Springer Verlag, Berlin, 1982). M. B. Maple and F. Fisher (eds), [*Superconductivity in Ternary Compounds*]{}, Parts I and II, (Springer Verlag, Berlin, 1982). S. K. Sinha, in: [*Superconductivity in Magnetic and Exotic Materials*]{}, ed. by T. Matsubara and A. Kotani (Springer Verlag, Berlin, 1984). A. Kotani, in: [*Superconductivity in Magnetic and Exotic Materials*]{}, ed. by T. Matsubara and A. Kotani (Springer Verlag, Berlin, 1984). S. S. Saxena and P. B. Littlewood, [*Nature*]{} [**412**]{} (2001) 290. K. Shimizu, T. Kikura, S. Furomoto, K. Takeda, K. Kontani, Y. Onuki, K. Amaya, [*Nature*]{} [**412**]{} (2001) 316. A. A. Abrikosov and L. P. Gor’kov, [*Zh. Eksp. Teor. Fiz.*]{} [**39**]{} (1960) 1781 \[[*Sov. Phys. JETP*]{} [**12**]{} (1961) 1243\]. V. L. Ginzburg, [*Zh. Eksp. Teor. Fiz.*]{} [**31**]{} (1956) 202 \[[*Sov. Phys. JETP*]{} [**4**]{} (1957) 153\]. A. I. Buzdin, L. N. Bulaevskii, S. S. Krotov, [*Zh.Eksp. Teor. Fiz.*]{} [**85**]{} (1983) 678 \[[*Sov. Phys. JETP*]{} [**58**]{} (1983) 395\]. K. Machida and H. Nakanishi, [*Phys. Rev.*]{} [**B 30**]{} (1984) 122. K. Machida and T. Ohmi, [*Phys. Rev. Lett.*]{} 86 (2001) 850. M. B. Walker and K. V. Samokhin, [*Phys. Rev. Lett.*]{} [**88**]{} (2002) 207001-1. D. V. Shopova, and D. I. Uzunov, [*Phys. Lett. A*]{} [**313**]{} (2003) 139. D. V. Shopova, and D. I. Uzunov, [*J. Phys. Studies*]{} [**7**]{} (2003) No 4 (in press); see also, [*cond-mat*]{}/0305602 D. V. Shopova, and D. I. Uzunov, [*Compt. Rend. Acad. Bulg. Sciences*]{}, [**56**]{} (2003) 35; see also, a corrected version in: [*cond-mat*]{}/0310016. Yu. M. Gufan and V. I. Torgashev, [*Sov. Phys. Solid State*]{} [**22**]{} (1980) 951 \[[*Fiz. Tv. Tela*]{} [**22**]{} (1980) 1629\]. Yu. M. Gufan and V. I. Torgashev, Sov. Phys. Solid State 23 (1981) 1129. L. T. Latush, V. I. Torgashev, and F. Smutny, [*Ferroelectrics Letts.*]{} [**4**]{} (1985) 37. J-C. Tolédano and P. 
Tolédano, [*The Landau Theory of Phase Transitions*]{} (World Scientific, Singapore, 1987). Yu. M. Gufan, [*Structural Phase Transitions*]{} (Nauka, Moscow, 1982); in Russian. R. A. Cowley, [*Adv. Phys.*]{} [**29**]{} (1980) 1. Q. Gu, [*Phys. Rev.*]{} [**A68**]{} (2003) 025601. E. I. Blount, and C. M. Varma, [*Phys. Rev. Lett.*]{} [**42**]{} (1979) 1079. H. S. Greenside, E. I. Blount, and C. M. Varma, [*Phys. Rev. Lett.*]{} [**46**]{} (1980) 49. T. K. Ng, and C. M. Varma, [*Phys. Rev. Lett.*]{} [**78**]{} (1997) 330. C. G. Kuper, M. Revzen, and A. Ron, [*Phys. Rev. Lett.*]{} [**44**]{} (1980) 1545. A. A. Abrikosov, [*Zh. Eksp. Teor. Fiz.*]{} [**32**]{} (1957) 1442 \[[*Sov. Phys. JETP*]{}, [**5**]{} (1957) 1174\]. E. M. Lifshitz and L. P. Pitaevskii, [*Statistical Physics, II Part*]{} (Pergamon Press, London, 1980) \[[*Landau-Lifshitz Course in Theoretical Physics, Vol. IX*]{}\]. K. S. Liu, and M. E. Fisher, [*J. Low Temp. Phys.*]{} [**10**]{} (1973) 655. Y. Imry, [*J. Phys. C: Cond. Matter Phys.*]{} [**8**]{} (1975) 567.
{ "pile_set_name": "ArXiv" }
--- abstract: 'We present measurements of the large-scale cosmic-ray anisotropies in right ascension, using data collected by the surface detector array of the Pierre Auger Observatory over more than 14 years. We determine the equatorial dipole component, $\vec{d}_\perp$, through a Fourier analysis in right ascension that includes weights for each event so as to account for the main detector-induced systematic effects. For the energies at which the trigger efficiency of the array is small, the “East-West” method is employed. Besides using the data from the array with detectors separated by 1500 m, we also include data from the smaller but denser sub-array of detectors with 750 m separation, which allows us to extend the analysis down to $\sim 0.03$ EeV. The most significant equatorial dipole amplitude obtained is that in the cumulative bin above 8 EeV, $d_\perp=6.0^{+1.0}_{-0.9}$%, which is inconsistent with isotropy at the 6$\sigma$ level. In the bins below 8 EeV, we obtain 99% CL upper-bounds on $d_\perp$ at the level of 1 to 3 percent. At energies below 1 EeV, even though the amplitudes are not significant, the phases determined in most of the bins are not far from the right ascension of the Galactic center, at $\alpha_{\rm GC}=-94^\circ$, suggesting a predominantly Galactic origin for anisotropies at these energies. The reconstructed dipole phases in the energy bins above 4 EeV point instead to right ascensions that are almost opposite to the Galactic center one, indicative of an extragalactic cosmic ray origin.' author: - 'A. Aab, P. Abreu, M. Aglietta, I.F.M. Albuquerque, J.M. Albury, I. Allekotte, A. Almela, J. Alvarez Castillo, J. Alvarez-Muñiz, G.A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, C. Aramo, P.R. Araújo Ferreira, H. Asorey, P. Assis, G. Avila, A.M. Badescu, A. Bakalova, A. Balaceanu, F. Barbato, R.J. Barreira Luz, K.H. Becker, J.A. Bellido, C. Berat, M.E. Bertaina, X. Bertou, P.L. Biermann, T. Bister, J. Biteau, A. Blanco, J. Blazek, C. Bleve, M. Boháčová, D. Boncioli, C. Bonifazi, L. Bonneau Arbeletche, N. Borodai, A.M. Botti, J. Brack, T. Bretz, F.L. Briechle, P. Buchholz, A. Bueno, S. Buitink, M. Buscemi, K.S. Caballero-Mora, L. Caccianiga, L. Calcagni, A. Cancio, F. Canfora, I. Caracas, J.M. Carceller, R. Caruso, A. Castellina, F. Catalani, G. Cataldi, L. Cazon, M. Cerda, J.A. Chinellato, K. Choi, J. Chudoba, L. Chytka, R.W. Clay, A.C. Cobos Cerutti, R. Colalillo, A. Coleman, M.R. Coluccia, R. Conceição, A. Condorelli, G. Consolati, F. Contreras, F. Convenga, C.E. Covault, S. Dasso, K. Daumiller, B.R. Dawson, J.A. Day, R.M. de Almeida, J. de Jesús, S.J. de Jong, G. De Mauro, J.R.T. de Mello Neto, I. De Mitri, J. de Oliveira, D. de Oliveira Franco, V. de Souza, J. Debatin, M. del Río, O. Deligny, N. Dhital, A. Di Matteo, M.L. Díaz Castro, C. Dobrigkeit, J.C. D’Olivo, Q. Dorosti, R.C. dos Anjos, M.T. Dova, J. Ebr, R. Engel, I. Epicoco, M. Erdmann, C.O. Escobar, A. Etchegoyen, H. Falcke, J. Farmer, G. Farrar, A.C. Fauth, N. Fazzini, F. Feldbusch, F. Fenu, B. Fick, J.M. Figueira, A. Filipčič, M.M. Freire, T. Fujii, A. Fuster, C. Galea, C. Galelli, B. García, A.L. Garcia Vegas, H. Gemmeke, F. Gesualdi, A. Gherghel-Lascu, P.L. Ghia, U. Giaccari, M. Giammarchi, M. Giller, J. Glombitza, F. Gobbi, G. Golup, M. Gómez Berisso, P.F. Gómez Vitale, J.P. Gongora, N. González, I. Goos, D. Góra, A. Gorgi, M. Gottowik, T.D. Grubb, F. Guarino, G.P. Guedes, E. Guido, S. Hahn, R. Halliday, M.R. Hampel, P. Hansen, D. Harari, V.M. Harvey, A. Haungs, T. Hebbeker, D. Heck, G.C. 
Hill, C. Hojvat, J.R. Hörandel, P. Horvath, M. Hrabovský, T. Huege, J. Hulsman, A. Insolia, P.G. Isar, J.A. Johnsen, J. Jurysek, A. Kääpä, K.H. Kampert, B. Keilhauer, J. Kemp, H.O. Klages, M. Kleifges, J. Kleinfeller, M. Köpke, G. Kukec Mezek, A. Kuotb Awad, B.L. Lago, D. LaHurd, R.G. Lang, M.A. Leigui de Oliveira, V. Lenok, A. Letessier-Selvon, I. Lhenry-Yvon, D. Lo Presti, L. Lopes, R. López, A. López Casado, R. Lorek, Q. Luce, A. Lucero, A. Machado Payeras, M. Malacari, G. Mancarella, D. Mandat, B.C. Manning, J. Manshanden, P. Mantsch, A.G. Mariazzi, I.C. Mariş, G. Marsella, D. Martello, H. Martinez, O. Martínez Bravo, M. Mastrodicasa, H.J. Mathes, J. Matthews, G. Matthiae, E. Mayotte, P.O. Mazur, G. Medina-Tanco, D. Melo, A. Menshikov, K.-D. Merenda, S. Michal, M.I. Micheletti, L. Miramonti, D. Mockler, S. Mollerach, F. Montanet, C. Morello, G. Morlino, M. Mostafá, A.L. Müller, M.A. Muller, S. Müller, R. Mussa, M. Muzio, W.M. Namasaka, L. Nellen, M. Niculescu-Oglinzanu, M. Niechciol, D. Nitz, D. Nosek, V. Novotny, L. Nožka, A Nucita, L.A. Núñez, M. Palatka, J. Pallotta, M.P. Panetta, P. Papenbreer, G. Parente, A. Parra, M. Pech, F. Pedreira, J. Pȩkala, R. Pelayo, J. Peña-Rodriguez, L.A.S. Pereira, J. Perez Armand, M. Perlin, L. Perrone, C. Peters, S. Petrera, T. Pierog, M. Pimenta, V. Pirronello, M. Platino, B. Pont, M. Pothast, P. Privitera, M. Prouza, A. Puyleart, S. Querchfeld, J. Rautenberg, D. Ravignani, M. Reininghaus, J. Ridky, F. Riehn, M. Risse, P. Ristori, V. Rizi, W. Rodrigues de Carvalho, J. Rodriguez Rojo, M.J. Roncoroni, M. Roth, E. Roulet, A.C. Rovero, P. Ruehl, S.J. Saffi, A. Saftoiu, F. Salamida, H. Salazar, G. Salina, J.D. Sanabria Gomez, F. Sánchez, E.M. Santos, E. Santos, F. Sarazin, R. Sarmento, C. Sarmiento-Cano, R. Sato, P. Savina, C. Schäfer, V. Scherini, H. Schieler, M. Schimassek, M. Schimp, F. Schlüter, D. Schmidt, O. Scholten, P. Schovánek, F.G. Schröder, S. Schröder, S.J. Sciutto, M. Scornavacche, R.C. Shellard, G. Sigl, G. Silli, O. Sima, R. Šmída, P. Sommers, J.F. Soriano, J. Souchard, R. Squartini, M. Stadelmaier, D. Stanca, S. Stanič, J. Stasielak, P. Stassi, A. Streich, M. Suárez-Durán, T. Sudholz, T. Suomijärvi, A.D. Supanitsky, J. Šupík, Z. Szadkowski, A. Taboada, O.A. Taborda, A. Tapia, C. Timmermans, P. Tobiska, C.J. Todero Peixoto, B. Tomé, G. Torralba Elipe, A. Travaini, P. Travnicek, C. Trimarelli, M. Trini, M. Tueros, R. Ulrich, M. Unger, M. Urban, L. Vaclavek, J.F. Valdés Galicia, I. Valiño, L. Valore, A. van Vliet, E. Varela, B. Vargas Cárdenas, A. Vásquez-Ramírez, D. Veberič, C. Ventura, I.D. Vergara Quispe, V. Verzi, J. Vicha, L. Villaseñor, J. Vink, S. Vorobiov, H. Wahlberg, A.A. Watson, M. Weber, A. Weindl, L. Wiencke, H. Wilczyński, T. Winchen, M. Wirtz, D. Wittkowski, B. Wundheiler, A. Yushkov, E. Zas, D. Zavrtanik, M. Zavrtanik, L. Zehrer, A. Zepeda, M. Ziolkowski, F. Zuccarello' title: | Cosmic-ray anisotropies in right ascension\ measured by the Pierre Auger Observatory --- Introduction ============ The distribution of cosmic-ray (CR) arrival directions is expected to provide essential clues to understanding the CR origin. Being charged particles, they are significantly deflected by the magnetic fields present in our galaxy [@hav15] and, for those arriving from outside it, also by the extragalactic magnetic fields [@fere]. Since the deflections get smaller for increasing rigidities, it is only at the highest energies that one may hope to observe localized flux excesses associated with individual CR sources. 
On the other hand, as the energies lower and the deflections become large, the propagation eventually becomes diffusive and it is likely that only large-scale patterns, such as a dipolar flux modulation, may be detectable. However, the small amplitudes of these anisotropies make their observation quite challenging. Due to the Earth’s rotation, cosmic-ray observatories running for long periods of time have an almost uniform exposure in right ascension. This enables them to achieve a high sensitivity to the modulation of the flux in this angular coordinate. In particular, for a dipolar cosmic-ray flux the first-harmonic modulation in right ascension provides a direct measurement of the projection of the dipole in the equatorial plane, $\vec{d}_\perp$. The possible sources of systematic uncertainties that could affect these measurements, such as those from remaining non-uniformities of the exposure or those related to the effects of atmospheric variations, can often be accounted for. Even when this is not possible, as can happen when the trigger efficiency of the array is small, methods that are insensitive to these systematic effects can be adopted to reconstruct $\vec{d}_\perp$, although they have a somewhat reduced sensitivity to the modulations. On the other hand, at low energies the number of events detected is large, what tends to enhance the statistical sensitivity of the measurements. The projection of the dipole along the Earth rotation axis $d_z$ can, in principle, be reconstructed from the study of the azimuthal modulation of the CR fluxes. This requires accounting in detail for the effects of the geomagnetic field on the air showers, which can affect the reconstruction of the CR energies in an azimuthally dependent way. Also, the presence of a tilt of the array can induce a spurious contribution to $d_z$. When the trigger efficiency of the array is small, these effects may lead to systematic uncertainties that cannot be totally corrected for, particularly given the azimuthal dependence of the trigger efficiency arising from the actual geometry of the surface detector array of the Pierre Auger Observatory. Due to these limitations, we will here restrict our analysis to the determination of $\vec{d}_\perp$ through the study of the distribution in right ascension of the events recorded in different energy bins. We note that the determination of $d_z$ for energies $E\geq4$ EeV, for which that detector has full efficiency for zenith angles up to 80$^\circ$, was discussed in detail in @lsa2015 [@science; @uhedip]. At $E\geq8$ EeV, a significant first-harmonic modulation in right ascension, corresponding to an amplitude $d_\perp \sim 6$%, has been detected by the Pierre Auger Observatory [@science]. The reconstructed direction of the three-dimensional dipole suggests a predominant extragalactic origin of the CR anisotropies at energies above 4 EeV, and the dipolar amplitudes obtained in different bins show a growing trend with increasing energies [@science; @uhedip]. The phase in right ascension of the dipolar modulation of the flux determined above 8 EeV is $\alpha_d\simeq 100^\circ$. This is nearly opposite to the phases measured at PeV energies by IceCube and IceTop [@ic12; @ic16], which lie not far from the Galactic center direction which is at $\alpha_{\rm GC} = -94^\circ$. 
Also the KASCADE-Grande measurements, involving CR energies from few PeV up to few tens of PeV, lead to phases lying close to the right ascension of the Galactic center, even though the measured amplitudes are not statistically significant [@KG].[^1] All this is in agreement with the expectation that for energies above that of the knee of the CR spectrum, which corresponds to the steepening taking place at $\sim 4$ PeV, the outward diffusive escape of the CRs produced in the Galaxy should give rise to a dipolar flux component having its maximum not far from the Galactic center direction. Also at energies above few EeV, where the propagation would become more rectilinear, a continuous distribution of Galactic sources should give rise to a dipolar component not far from the GC direction [@uhedip]. Departures from these behaviors could however result if the CR source distribution is not symmetric with respect to the Galactic center (such as in the presence of a powerful nearby CR source), in the presence of drift motions caused by the regular Galactic magnetic field components [@ptus], or when the contribution from the extragalactic component becomes sizable. Note that the expected direction of a dipole of extragalactic origin will depend on the (unknown) distribution of the CR sources and on the effects of the deflections caused by the Galactic magnetic field, as was discussed in detail in @uhedip. The change from a Galactic CR origin towards a predominantly extragalactic origin is expected to take place somewhere above the knee. More precise measurements of the large-scale anisotropies, filling the gap between the IceCube/IceTop or KASCADE-Grande measurements and the dipole determined by the Pierre Auger Observatory above 8 EeV, should provide information about this transition. In fact, although at energies below 8 EeV the reported dipolar amplitudes are not significant, indications that a change in the phase of the anisotropies in right ascension takes place around few EeV are apparent in the Pierre Auger Observatory measurements [@App11; @LSA2012; @LSA2013; @ICRC13; @ICRC15]. One has to keep in mind in this discussion that the energy at which the total anisotropy becomes of predominantly extragalactic origin may be different from the energy at which the CR flux becomes of predominantly extragalactic origin, since the intrinsic anisotropies of each component are likely different. We present here an update of the measurements of the large-scale anisotropies that are sensitive to the equatorial component of a dipole, for the whole energy range from $\sim 0.03$ EeV up to $\geq32$ EeV, covering more than three decades of energy. The results above 4 EeV are an update of those presented in @uhedip, including two more years of data, corresponding to an increase in the exposure by 20%. At lower energies, we provide a major update of the latest published results [@ICRC15], with 50% more exposure for the SD1500 array and twice as much for the SD750 array. At energies below 2 EeV, possible systematic effects related to the reduced trigger efficiency could be significant. To study the modulation in right ascension in this regime we have then to resort to the “East-West” method, which has larger associated uncertainties but is not affected by most of the systematic effects [@na89; @ew]. At energies below 0.25 EeV, it proves convenient to use the data from the sub-array of detectors with 750 m spacing which, although being much smaller, can detect a larger number of events at these energies. 
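To make explicit why the first harmonic in right ascension measures $\vec{d}_\perp$, it is worth recalling the standard first-order relation (the short derivation below is generic, assuming an exposure that depends only on declination, and is not meant to reproduce the detailed treatment used in the analyses cited above). For a flux $\Phi(\hat{n})=\Phi_0\left(1+\vec{d}\cdot\hat{n}\right)$ one can write $\vec{d}\cdot\hat{n}=d_\perp\cos\delta\cos(\alpha-\alpha_d)+d_z\sin\delta$; integrating over the declination range covered by the detector, the $d_z$ term contributes no $\alpha$ dependence and, to first order in the anisotropy, $$\frac{{\rm d}N}{{\rm d}\alpha}\propto 1+\langle\cos\delta\rangle\, d_\perp\cos(\alpha-\alpha_d)\,,$$ where $\langle\cos\delta\rangle$ is the mean cosine of the declination of the observed events. The amplitude and phase of the first harmonic in right ascension thus measure $r\simeq\langle\cos\delta\rangle\, d_\perp$ and $\alpha_d$, which is the conversion adopted in what follows.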
The Observatory and the dataset =============================== The Pierre Auger Observatory [@NIM2015], located near the city of Malargüe in western Argentina (at latitude $35.2^\circ$ South), is the largest existing CR observatory. Its surface detector array (SD) consists of water-Cherenkov detectors having each one 12 tonnes of ultra-pure water viewed by three 9 inch phototubes. The main array, SD1500, consists of detectors distributed on a triangular grid with separations of 1,500 m that span an area of 3,000 km$^2$. A smaller sub-array, SD750, covers an area of 23 km$^2$ with detectors separated by 750 m, making it sensitive also to smaller CR energies. These arrays sample the secondary particles of the air showers reaching ground level. In addition, the fluorescence detector (FD) consists of 27 telescopes that overlook the SD array. The FD can determine the longitudinal development of the air showers by observing the UV light emitted by atmospheric nitrogen molecules excited by the passage of the charged particles of the shower. This fluorescence light can be detected during clear moonless nights, with a corresponding duty cycle of about 15% [@NIM2015]. The SD arrays have instead a continuous operation, detecting events with a duty cycle close to 100%. They also have a more uniform (and simpler to evaluate) exposure. This is why the studies of the large-scale anisotropies that we perform here are based on the much larger number of events recorded by the surface arrays. For the SD1500 array, the dataset considered in this work includes events with energies above 0.25 EeV that were detected from 2004 January 1 up to 2018 August 31. For energies below 4 EeV, it includes events with zenith angles up to $60^\circ$, allowing coverage of 71% of the sky, and the quality trigger applied requires that all the six detectors surrounding the one with the largest signal be active at the time the event is detected. For energies above 4 EeV, more inclined events can be reliably reconstructed [@inclined] and hence the zenith-angle range is extended up to $80^\circ$, allowing coverage of 85% of the sky. Moreover, given that at these energies the number of detectors triggered by each shower is large (4 or more detectors for more than 99% of the events), we also include in this case events passing a relaxed trigger condition, allowing that one of the six detectors that are neighbors to the one with the largest signal be missing or not functioning, provided that the reconstructed shower core be contained inside a triangle of nearby active detectors [@science]. The integrated exposure of the array for $\theta \le 60^\circ$ and using the strict trigger selection is 60,700 km$^2$sryr, while that for $\theta \le 80^\circ$ and relaxing the trigger is 92,500km$^2$sryr. The CR arrival directions are reconstructed from the timing of the signals in the different triggered stations, and the angular resolution is better than 1.6$^\circ$ [@NIM2015], so that it has negligible impact on the reconstruction of the dipole. The energies of the events with $\theta\leq 60^\circ$ are assigned in terms of the reconstructed signals at a reference distance from the shower core of 1000 m. They are corrected for atmospheric effects, accounting for the pressure and air density variations following @jinst17, as well as for geomagnetic effects, following @geo. 
The inclined events, whose signals are dominantly produced by the muonic component of the showers, have a negligible dependence on atmospheric variations, while geomagnetic effects are already taken into account in their reconstruction [@inclined]. Their energies are assigned in terms of the estimated muon number at ground level. The SD1500 array has full trigger efficiency for $E\geq 2.5$ EeV if one considers events with $\theta\leq 60^\circ$, and for $E\geq 4$ EeV for events with $\theta\leq 80^\circ$. The energies of the CRs are calibrated using the hybrid events measured simultaneously by the SD and FD detectors, in the regimes of full trigger efficiency. For lower energies, in which case we consider events with $\theta\leq 60^\circ$, the energy assignment is performed using the extrapolation of the corresponding calibration curve. The energy resolution for events with $\theta\leq 60^\circ$ is about 7% above 10 EeV, and degrades for lower energies, reaching about 20% at 1 EeV, while the systematic uncertainty in the energy scale is 14% (see @spectrum for details). The more inclined events have an energy resolution of 19%, with a similar systematic uncertainty [@inclined]. For energies below 0.25 EeV, and down to $\sim 0.03$ EeV (below which the trigger efficiency is tiny), we use the events from the denser and smaller SD750 array, since the accumulated statistics is larger. The dataset comprises events with zenith angles up to $55^\circ$ detected from 2012 January 1 up to 2018 August 31. The trigger applied requires that all six detectors around the one with the largest signal be functioning and the associated exposure is 234 km$^2$sryr. The energies are assigned in terms of the reconstructed signals at a reference distance from the shower core of 450 m. They are corrected for atmospheric effects following @jinst17. The SD750 array has full trigger efficiency for $E\geq 0.3$ EeV if one considers events with $\theta\leq 55^\circ$ [@NIM2015]. The energies are calibrated with hybrid events observed in the regime of full trigger efficiency and below that threshold the energy assignment is performed on the basis of the extrapolation of the corresponding calibration curve. At 0.3 EeV the energy resolution is about 18% [@coleman]. The analysis method =================== The weighted first-harmonic analysis in the right ascension angle $\alpha$, often referred to as Rayleigh analysis, provides the Fourier coefficients as $$\label{eq:fcoef} a=\frac{2}{\mathcal{N}}\sum_{i=1}^N w_i\cos \alpha_i ,~~~~~~ b=\frac{2}{\mathcal{N}}\sum_{i=1}^N w_i\sin \alpha_i,$$ where the sums run over all $N$ detected events. The weights $w_i$, which are of order unity, account for the effects of the non-uniformities in the exposure as a function of time, with the normalization factor being ${\mathcal{N}}\equiv \sum_i w_i$. The amplitude and phase of the first-harmonic modulation are given by $r=\sqrt{a^2+b^2}$ and $\varphi=\arctan(b/a)$. The probability to obtain an amplitude larger than the one measured as a result of a fluctuation from an isotropic distribution is $P(\ge r)=\exp(-\mathcal{N}r^2/4)$. To obtain the weights, we permanently monitor the number of active unitary detector cells, corresponding to the number of active detectors that are surrounded by an hexagon of working detectors or, when considering the relaxed trigger condition above 4 EeV, we also account for detector configurations with only five active detectors around the central one. 
We obtain from this the exposure of the Auger Observatory in bins of right ascension of the zenith of the array, $\alpha^0$. This angle is given by $\alpha^0(t_i)\equiv 2\pi t_i/T_s ~ (\mathrm{mod} ~ 2\pi)$, with the origin of time being taken such that $\alpha^0(0)=0$. The sidereal-time period, $T_s\simeq 23.934$ h, corresponds to one extra cycle per year with respect to the solar frequency. The fraction of the total exposure that is associated to a given $\alpha^0$ bin, taken to be of 1.25$^\circ$ width (5 minutes), is proportional to the total number of unitary cells in that bin, $N_{\mathrm{cell}}(\alpha^0)$. The weights $w_i$ account for the relative variations of $N_{\mathrm{cell}}$ as a function of $\alpha^0$, i.e. $$w_i =\left(\frac{N_{\mathrm{cell}}(\alpha^0(t_i))}{\langle N_{\mathrm{cell}}\rangle}\right)^{-1},$$ with $\langle N_{\mathrm{cell}}\rangle=1/(2\pi)\int_0^{2\pi} \mathrm{d}\alpha^0~ N_{\mathrm{cell}}(\alpha^0)$. Including these weights in the Fourier coefficients eliminates the spurious contribution to the amplitudes associated to the non-uniform exposure in right ascension. We note that if one were to consider periods of only a few months, the resulting modulation of $N_{\mathrm{cell}}(\alpha^0)$ could amount to an effect of a few percent on the modulation in right ascension of the event rates. However, after considering several years, the modulations that appear on shorter time scales tend to get averaged out, with the surviving effects being now typically at the level of about $\pm 0.5$%. The effects of the tilt of the SD array [@LSA2012], which is inclined on average by $\sim 0.2^\circ$ towards $\phi\simeq-30^\circ$ (i.e. towards the South-East), can also be accounted for by adding an extra factor in the weights [@uhedip]. However, this is actually only relevant when performing the Fourier analysis in the azimuth variable $\phi$, something we will not perform here. When the triggering of the array is not fully efficient, there are additional systematic effects related to the interplay between the atmospheric effects in the air-shower development and the energy-dependent trigger efficiency. In particular, changes in the air density modify the Molière radius determining the lateral spread of the electromagnetic component of the showers. The fall-off of the signal at ground level is preferentially harder under hot weather conditions and steeper under cold ones. The detection efficiency of the SD is thus expected to follow these variations to some extent, being on average larger when the weather is hot than when it is cold. As a consequence, one could expect that, at energies below full trigger efficiency, a spurious modulation could appear at the solar frequency. Moreover, we have found that the amplitude of the modulation of the rates at the antisidereal frequency, which is that corresponding to one cycle less per year than the solar frequency, suggests that spurious unaccounted effects become relevant below 2 EeV. In particular, the Fourier amplitude corresponding to the antisidereal time period $T_{\rm as}=24.066$ h in the bin \[1, 2\] EeV is $r=0.005$. This has a probability of arising as a fluctuation of less than 0.1%. A non-negligible antisidereal amplitude could for instance appear in the presence of daily and seasonal systematic effects which are not totally accounted for. 
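The weighted first-harmonic analysis described above is straightforward to implement. The following minimal Python sketch (an illustration only: the function name, the interface and the toy exposure histogram are our own choices and not part of the Auger analysis software) computes the coefficients $a$ and $b$, the amplitude $r$, the phase $\varphi$ and the chance probability $P(\geq r)$ from the event right ascensions, their sidereal angles $\alpha^0(t_i)$ and a histogram of active unitary cells in $1.25^\circ$ bins of $\alpha^0$:

```python
import numpy as np

def weighted_rayleigh(alpha, alpha0, n_cell_hist, bin_width_deg=1.25):
    """First-harmonic (Rayleigh) analysis in right ascension with exposure weights.

    alpha       : event right ascensions [rad]
    alpha0      : sidereal angles alpha^0(t_i) of the events [rad]
    n_cell_hist : number of active unitary cells per alpha^0 bin
                  (length 360/bin_width_deg; a toy input here)
    """
    nbins = int(round(360.0 / bin_width_deg))
    # weight of each event: inverse of the relative exposure of its alpha^0 bin,
    # w_i = <N_cell> / N_cell(alpha^0(t_i))
    idx = ((np.degrees(alpha0) % 360.0) / bin_width_deg).astype(int) % nbins
    w = n_cell_hist.mean() / n_cell_hist[idx]

    norm = w.sum()                                   # calN = sum_i w_i
    a = 2.0 / norm * np.sum(w * np.cos(alpha))
    b = 2.0 / norm * np.sum(w * np.sin(alpha))
    r = np.hypot(a, b)                               # first-harmonic amplitude
    phi = np.arctan2(b, a)                           # first-harmonic phase
    p_iso = np.exp(-norm * r**2 / 4.0)               # chance probability P(>= r)
    return a, b, r, phi, p_iso
```

Using `np.arctan2` rather than $\arctan(b/a)$ keeps the phase in the correct quadrant, which matters when splitting the amplitude into the components $d_x$ and $d_y$ discussed later.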
Since in this case comparable spurious amplitudes could be expected in the sidereal and antisidereal sidebands [@fast], we only use the Rayleigh method described before in the bins above 2 EeV. We have checked that in the bins above 2 EeV the amplitudes at both the solar and antisidereal frequencies are consistent with being just due to fluctuations, so that there are no signs indicating that surviving systematic effects could be present at the sidereal frequency at these energies (see Table \[tab:sol-asid\] in the Appendix).[^2] Alternatively, one can use for the energies below 2 EeV the differential [East-West]{} (EW) method [@ew], which is based on the difference between the counting rates of the events measured from the East sector and those from the West sector. Since the exposure is the same for events coming from the East and for those coming from the West[^3], and also the spurious modulations due to the atmospheric effects are the same in both sectors, the relative difference between both rates, $(E-W)/(E+W)$, is not sensitive to these experimental and atmospheric systematic effects. This allows one to reconstruct in a clean way the modulation of the rate itself, without the need to apply any correction but at the expense of a reduced sensitivity to the amplitude of the CR flux modulations. In this approach [@ew], the first-harmonic amplitude and phase are calculated using a slightly modified Fourier analysis that accounts for the subtraction of the Western sector from the Eastern one. The Fourier coefficients are defined as $$a_{\rm EW}=\frac{2}{N}\sum_{i=1}^N \cos(\alpha^0(t_i)-\xi_i),~~~~~~b_{\rm EW}=\frac{2}{N}\sum_{i=1}^N \sin(\alpha^0(t_i)-\xi_i),$$ where $\xi_i=0$ for events coming from the East and $\xi_i=\pi$ for those coming from the West, so as to easily implement the subtraction of data from the two hemispheres. In the case in which the dominant contribution to the flux modulation is purely dipolar, the amplitude $r_{\rm EW}=\sqrt{a_{\rm EW}^2+b_{\rm EW}^2}$ and phase $\varphi_{\rm EW}=\arctan(b_{\rm EW}/a_{\rm EW})$ obtained with this method are related to the ones from the Rayleigh formalism through $r=\frac{\pi\langle\cos\delta\rangle}{2\langle\sin\theta\rangle}r_{\rm EW}$ and $\varphi=\varphi_{\rm EW}+\pi/2$, where $\langle\cos\delta\rangle$ is the average of the cosine of the declination of the events and similarly $\langle\sin\theta\rangle$ is the average of the sine of their zenith angles [@ew]. The probability to obtain an amplitude larger than the one measured as a result of a fluctuation from an isotropic distribution is $P(\ge r_{\rm EW})=\exp(-Nr_{\rm EW}^2/4)$. The amplitude of the equatorial dipole component is related to the amplitude of the first-harmonic modulation through $d_\perp\simeq r/\langle\cos\delta\rangle$, and its phase $\alpha_d$ coincides with the first-harmonic phase $\varphi$. Right ascension modulation from 0.03 EeV up to $E\geq 32$ EeV ============================================================= In Table \[tab:dper\], we report the results for the reconstructed equatorial dipole in different energy bins, covering the range from $\sim 0.03$ EeV up to $E\geq 32$ EeV. The energies defining the boundaries of the bins are $2^n$ EeV, with $n=-5,-4,...,4,5$. As mentioned previously, the results are obtained from the study of the right ascension modulation using different methods and datasets. 
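The East-West variant can be sketched in the same style (again only an illustration with assumed variable names, not the analysis code actually used; the conversion to $r$, $\varphi$ and $d_\perp$ holds under the purely dipolar assumption stated in the previous section):

```python
import numpy as np

def east_west_dipole(alpha0, is_east, cos_dec, sin_zen):
    """Differential East-West first-harmonic analysis.

    alpha0  : sidereal angles alpha^0(t_i) of the events [rad]
    is_east : True for events arriving from the East sector, False for the West
    cos_dec : cos(declination) of the events
    sin_zen : sin(zenith angle) of the events
    """
    n = len(alpha0)
    xi = np.where(is_east, 0.0, np.pi)          # xi_i = 0 (East), pi (West)
    a_ew = 2.0 / n * np.sum(np.cos(alpha0 - xi))
    b_ew = 2.0 / n * np.sum(np.sin(alpha0 - xi))
    r_ew = np.hypot(a_ew, b_ew)
    phi_ew = np.arctan2(b_ew, a_ew)

    # conversion to the Rayleigh amplitude/phase and to the equatorial dipole
    r = np.pi * np.mean(cos_dec) / (2.0 * np.mean(sin_zen)) * r_ew
    phi = phi_ew + np.pi / 2.0                  # right-ascension phase alpha_d
    d_perp = r / np.mean(cos_dec)
    p_iso = np.exp(-n * r_ew**2 / 4.0)          # chance probability P(>= r_EW)
    return d_perp, phi, p_iso
```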
We use the weighted Rayleigh analysis in the energy bins above 2 EeV, for which the systematic effects associated with the non-saturated detector efficiency and to the effects related to atmospheric variations are well under control. When this is not the case, we report the results of the East-West method which, although having larger uncertainties, is quite insensitive to most sources of systematic effects in the right ascension distribution. For energies above 0.25 EeV, we report the results obtained with the data from the SD1500 array, while for lower energies we use the dataset from the SD750 array since, having a lower threshold, it leads to a larger number of events despite the reduced size of the array. In that case, given that the SD750 array is not fully efficient below 0.3 EeV, we just use the East-West method. ![Reconstructed equatorial-dipole amplitude (left) and phase (right). The upper limits at 99% CL are shown for all the energy bins in which the measured amplitude has a chance probability greater than 1%. The gray bands indicate the amplitude and phase for the energy bin $E\geq 8$ EeV. Results from other experiments are shown for comparison [@ic12; @ic16; @KG].[]{data-label="fig:dper"}](dper_0818_all_ul-eps-converted-to.pdf "fig:") ![Reconstructed equatorial-dipole amplitude (left) and phase (right). The upper limits at 99% CL are shown for all the energy bins in which the measured amplitude has a chance probability greater than 1%. The gray bands indicate the amplitude and phase for the energy bin $E\geq 8$ EeV. Results from other experiments are shown for comparison [@ic12; @ic16; @KG].[]{data-label="fig:dper"}](dper_pha_0818_all-eps-converted-to.pdf "fig:") For each energy bin, we report in Table \[tab:dper\] the number of events $N$, the amplitude $d_\perp$, the uncertainty $\sigma_{x,y}$ of the components $d_x$ or $d_y$, the right ascension phase of the dipolar modulation $\alpha_d$, the chance probability $P( \ge d_\perp)$ and, when the measured amplitude has a probability larger than 1%, we also report the 99% CL upper limit on the amplitude of the equatorial dipole $d_\perp^{\rm\small UL}$. The upper limits on the first-harmonic amplitude at a given confidence level CL (${\rm CL}=0.99$ for 99% CL) are derived from the distribution for a dipolar anisotropy of unknown amplitude, marginalized over the dipole phase, requiring that $$\int_{0}^{r^{\rm\small UL}}\mathrm{d}r\,\frac{r}{\sigma^2}\exp\left[-\frac{r^2+s^2}{2\sigma^2}\right]I_0\left(\frac{rs}{\sigma^2}\right) = {\rm CL},$$ with $I_0(x)$ the zero-order modified Bessel function, $s$ the measured amplitude and the dispersion being $\sigma=\sqrt{2/\mathcal{N}}$ for the Rayleigh analysis while $\sigma=(\pi\langle\cos\delta\rangle/2\langle\sin\theta\rangle)\sqrt{2/N}$ for the East-West method. These bounds on the first-harmonic amplitude are then converted into the corresponding upper limit for the amplitude of the equatorial dipole using that $d_\perp^{\rm\small UL}=r^{\rm\small UL}/\langle\cos\delta\rangle$. For the uncertainties in the phase, we use the two-dimensional distribution marginalized instead over the dipole amplitude $r$ [@linsley]. In Table \[tab:dperew\] in the Appendix we also report the results obtained above 2 EeV with the East-West method, which are consistent with those obtained with the Fourier analysis in Table \[tab:dper\] but have larger uncertainties. Fig. 
\[fig:dper\] shows the equatorial dipole amplitude (left panel) and phase (right panel) that were determined in all the energy bins considered, as reported in Table \[tab:dper\]. Also shown are the results obtained by the IceCube, IceTop and KASCADE-Grande experiments in the 1–30 PeV range [@ic12; @ic16; @KG]. We also show the 99% CL upper limit $d_\perp^{\rm UL}$ in the cases in which the measured amplitude has more than 1% probability to be a fluctuation from an isotropic distribution. The results for the integral bin with $E\geq8$ EeV, that was considered in @science, is shown as a gray band. A trend of increasing amplitudes for increasing energies is observed, with values going from $d_\perp\simeq 0.1$% at PeV energies, to $\sim 1$% at EeV energies and reaching $\sim 10$% at 30 EeV. Regarding the phases, a transition between values lying close to the right ascension of the Galactic center, $\alpha_d\simeq \alpha_{\rm GC}$, towards values in a nearly opposite direction, $\alpha_d\simeq 100^\circ$, is observed to take place around a few EeV. ![Components of the dipole in the equatorial plane for different energy bins above 0.25 EeV (left panel) and below 1 EeV (right panel). The horizontal axis corresponds to the component along the direction $\alpha=0$ while the vertical axis to that along $\alpha=90^\circ$. The radius of each circle corresponds to the 1$\sigma$ uncertainty in $d_x$ and $d_y$. The Galactic center direction is also indicated. The measurements from IceCube (IC) and IceTop (IT) at PeV energies are also indicated in the right panel [@ic12; @ic16].[]{data-label="fig:circles"}](dperpxy1_c.pdf "fig:") ![Components of the dipole in the equatorial plane for different energy bins above 0.25 EeV (left panel) and below 1 EeV (right panel). The horizontal axis corresponds to the component along the direction $\alpha=0$ while the vertical axis to that along $\alpha=90^\circ$. The radius of each circle corresponds to the 1$\sigma$ uncertainty in $d_x$ and $d_y$. The Galactic center direction is also indicated. The measurements from IceCube (IC) and IceTop (IT) at PeV energies are also indicated in the right panel [@ic12; @ic16].[]{data-label="fig:circles"}](dperpxy2_c.pdf "fig:") The overall behavior of the amplitudes and phases in the $d_x$–$d_y$ plane is depicted in Fig. \[fig:circles\]. The left panel includes the energies above 0.25 EeV while the right panel those below 1 EeV. In these plots, the right ascension $\alpha_d$ is the polar angle, measured anti-clockwise from the x-axis (so that $d_x=d_\perp \cos\alpha_d$ and $d_y=d_\perp\sin\alpha_d$). The circles shown have a radius equal to the 1$\sigma$ uncertainties $\sigma_{x,y}$ in the dipole components $d_{x,y}$ (reported in the Table 1), effectively including $\sim 39$% of the two-dimensional confidence region. One can appreciate in this plot how the amplitudes decrease for decreasing energies, and how the phases change as a function of the energy, pointing almost in the opposite direction of the Galactic center above 4 EeV and not far from it below 1 EeV. The values of the anisotropy parameters obtained above are based, by construction, on the event content in the energy intervals under scrutiny. The finite resolution on the energies induces bin-to-bin migration of events. Due to the steepness of the energy spectrum, the migration happens especially from lower to higher energy bins. This influences the energy dependence of the recovered parameters. 
However, given that the size of the energy bins chosen here is much larger than the resolution, the migration of events remains small enough to avoid significant distortions for the recovered values above full efficiency. For instance, given the energy resolution of the SD1500 array [@spectrum] and assuming a dipole amplitude scaling as $E^{0.8}$, as was found in @uhedip to approximately hold above 4 EeV, the impact of the migrations remains an order of magnitude smaller than the statistical uncertainties associated with the recovered parameters. In the energy range below full efficiency, additional systematic effects enter into play on the energy estimate. We note that forward-folding simulations of the response function effects into an injected anisotropy show that the recovered parameters are not impacted by more than their current statistical uncertainties. A complete unfolding of these effects is left for future studies. It requires an accurate knowledge of the response function of the SD arrays down to low energies, which is not available at the moment. Discussion and conclusions ========================== We have updated the searches for anisotropies on large angular scales using the cosmic rays detected by the Pierre Auger Observatory. The analysis covered more than three orders of magnitude in energy, including events with $E\geq 0.03$ EeV and hence encompassing the expected transition between Galactic and extragalactic origins of the cosmic rays. This was achieved by studying the first-harmonic modulation in right ascension of the CR fluxes determined with the SD1500 and the SD750 surface detector arrays. This allowed us to determine the equatorial component of a dipolar modulation, $\vec{d}_\perp$, or otherwise to set strict upper bounds on it. For the inclusive bin above 8 EeV, the first-harmonic modulation in right ascension leads to an equatorial dipole amplitude $d_\perp= 0.060^{+0.010}_{-0.009}$, which has a probability to arise by chance from an isotropic distribution of $1.4\times 10^{-9}$, corresponding to a two-sided Gaussian significance of 6$\sigma$. The phase of the maximum of this modulation is at $\alpha_d= 98^\circ\pm 9^\circ$, indicating an extragalactic origin for these CRs. When splitting the bin above 8 EeV, as originally done in @uhedip, one finds indications of an increasing amplitude with increasing energies, and the direction of the dipole suggests that it has an extragalactic origin in all three bins considered. A growing dipole amplitude for increasing energies could for instance be associated with the larger relative contribution to the flux that arises at high energies from nearby sources, which are more anisotropically distributed than the integrated flux from the distant ones. A suppression of the more isotropic contribution from distant sources is expected to result from the strong attenuation of the CR flux that should take place at the highest energies as a consequence of their interactions with the background radiation [@gzk1; @gzk2]. At energies below 8 EeV, none of the amplitudes are significant, and we set 99% CL upper bounds on $d_\perp$ at the level of 1 to 3%. The phases measured in most of the bins below 1 EeV are not far from the direction towards the Galactic center. All this suggests that the origin of these dipolar anisotropies changes from a predominantly Galactic one to an extragalactic one somewhere in the range between 1 EeV and a few EeV. 
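As a small numerical cross-check of the numbers quoted in this section (our own illustration using scipy, not part of the published analysis), the chance probability of the $E\geq 8$ EeV amplitude can be converted into a two-sided Gaussian significance, and the 99% CL upper-limit construction given earlier is simply a quantile of a Rice distribution, since the integrand in its defining equation is the Rice probability density with $\sigma=\sqrt{2/\mathcal{N}}$:

```python
import numpy as np
from scipy import stats

# two-sided Gaussian significance of the chance probability quoted for E >= 8 EeV
p_chance = 1.4e-9
z = stats.norm.isf(p_chance / 2.0)
print(f"significance ~ {z:.1f} sigma")            # ~ 6.1 sigma, i.e. about 6 sigma

def r_upper_limit(s, n_eff, cl=0.99):
    """CL upper limit on the first-harmonic amplitude for a measured
    amplitude s and an effective number of (weighted) events n_eff."""
    sigma = np.sqrt(2.0 / n_eff)
    return stats.rice.ppf(cl, b=s / sigma, scale=sigma)

# toy inputs (illustrative only, not taken from the tables); <cos delta> ~ 0.78
s, n_eff, cos_delta = 0.005, 300_000, 0.78
print(f"d_perp upper limit ~ {100 * r_upper_limit(s, n_eff) / cos_delta:.1f} %")
```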
The small size of the dipolar amplitudes in this energy range, combined with the indications that the composition is relatively light [@compo], disfavor a predominant flux component of Galactic origin at $E>1$ EeV [@LSA2013]. Models of Galactic CRs relying on a mixed mass composition, with rigidity dependent spectra, have been proposed to explain the knee (at $\sim 4$ PeV) and second-knee (at $\sim 0.1$ EeV) features in the spectrum [@candia]. The predicted anisotropies depend on the details of the Galactic magnetic field model considered and, below 0.5 EeV, they are consistent with the upper bounds we obtained. An extrapolation of these models, considering that there is no cutoff in the Galactic component, would predict dipolar anisotropies at the several percent level beyond the EeV, in tension with the upper bounds in this range. The conflict is even stronger for Galactic models [@kusenko] having a light CR composition that extends up to the ankle energy (at $\sim 5$ EeV). The presence of a more isotropic extragalactic component making a significant contribution already at EeV energies could dilute the anisotropy of Galactic origin, so as to be consistent with the bounds obtained. Note that even if the extragalactic component were completely isotropic in some reference frame, the motion of the Earth with respect to that system could give rise to a dipolar anisotropy through the Compton-Getting effect [@cg]. For instance, for a CR distribution that is isotropic in the CMB rest frame, the resulting Compton-Getting dipole amplitude would be about 0.6% [@cg2]. This amplitude depends on the relative velocity and on the CR spectral slope, but not directly on the particle charge. The deflections of the extragalactic CRs caused by the Galactic magnetic field are expected to further reduce this amplitude, and also to generate higher harmonics, in a rigidity dependent way, so that the exact predictions are model dependent. The Compton-Getting extragalactic contribution to the dipolar anisotropy is hence below the upper limits obtained. More data, as well as analyses exploiting the discrimination between the different cosmic-ray mass components that will become feasible with the upgrade of the Pierre Auger Observatory currently being implemented [@augerprime], will be crucial to understand in depth the origin of the cosmic rays at these energies and to learn how their anisotropies are produced. Acknowledgments {#acknowledgments .unnumbered} =============== The successful installation, commissioning, and operation of the Pierre Auger Observatory would not have been possible without the strong commitment and effort from the technical and administrative staff in Malargüe. We are very grateful to the following agencies and organizations for financial support: Argentina – Comisión Nacional de Energía Atómica; Agencia Nacional de Promoción Científica y Tecnológica (ANPCyT); Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET); Gobierno de la Provincia de Mendoza; Municipalidad de Malargüe; NDM Holdings and Valle Las Leñas; in gratitude for their continuing cooperation over land access; Australia – the Australian Research Council; Brazil – Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq); Financiadora de Estudos e Projetos (FINEP); Fundação de Amparo à Pesquisa do Estado de Rio de Janeiro (FAPERJ); São Paulo Research Foundation (FAPESP) Grants No. 2010/07359-6 and No. 
1999/05404-3; Ministério da Ciência, Tecnologia, Inovações e Comunicações (MCTIC); Czech Republic – Grant No. MSMT CR LTT18004, LO1305, LM2015038 and CZ.02.1.01/0.0/0.0/16013/0001402; France – Centre de Calcul IN2P3/CNRS; Centre National de la Recherche Scientifique (CNRS); Conseil Régional Ile-de-France; Département Physique Nucléaire et Corpusculaire (PNC-IN2P3/CNRS); Département Sciences de l’Univers (SDU-INSU/CNRS); Institut Lagrange de Paris (ILP) Grant No. LABEX ANR-10-LABX-63 within the Investissements d’Avenir Programme Grant No. ANR-11-IDEX-0004-02; Germany – Bundesministerium für Bildung und Forschung (BMBF); Deutsche Forschungsgemeinschaft (DFG); Finanzministerium Baden-Württemberg; Helmholtz Alliance for Astroparticle Physics (HAP); Helmholtz-Gemeinschaft Deutscher Forschungszentren (HGF); Ministerium für Innovation, Wissenschaft und Forschung des Landes Nordrhein-Westfalen; Ministerium für Wissenschaft, Forschung und Kunst des Landes Baden-Württemberg; Italy – Istituto Nazionale di Fisica Nucleare (INFN); Istituto Nazionale di Astrofisica (INAF); Ministero dell’Istruzione, dell’Universitá e della Ricerca (MIUR); CETEMPS Center of Excellence; Ministero degli Affari Esteri (MAE); México – Consejo Nacional de Ciencia y Tecnología (CONACYT) No. 167733; Universidad Nacional Autónoma de México (UNAM); PAPIIT DGAPA-UNAM; The Netherlands – Ministry of Education, Culture and Science; Netherlands Organisation for Scientific Research (NWO); Dutch national e-infrastructure with the support of SURF Cooperative; Poland -Ministry of Science and Higher Education, grant No. DIR/WK/2018/11; National Science Centre, Grants No. 2013/08/M/ST9/00322, No. 2016/23/B/ST9/01635 and No. HARMONIA 5–2013/10/M/ST9/00062, UMO-2016/22/M/ST9/00198; Portugal – Portuguese national funds and FEDER funds within Programa Operacional Factores de Competitividade through Fundação para a Ciência e a Tecnologia (COMPETE); Romania – Romanian Ministry of Research and Innovation CNCS/CCCDI-UESFISCDI, projects PN-III-P1-1.2-PCCDI-2017-0839/19PCCDI/2018 and PN18090102 within PNCDI III; Slovenia – Slovenian Research Agency, grants P1-0031, P1-0385, I0-0033, N1-0111; Spain – Ministerio de Economía, Industria y Competitividad (FPA2017-85114-P and FPA2017-85197-P), Xunta de Galicia (ED431C 2017/07), Junta de Andalucía (SOMM17/6104/UGR), Feder Funds, RENATA Red Nacional Temática de Astropartículas (FPA2015-68783-REDT) and María de Maeztu Unit of Excellence (MDM-2016-0692); USA – Department of Energy, Contracts No. DE-AC02-07CH11359, No. DE-FR02-04ER41300, No. DE-FG02-99ER41107 and No. DE-SC0011689; National Science Foundation, Grant No. 0450696; The Grainger Foundation; Marie Curie-IRSES/EPLANET; European Particle Physics Latin American Network; and UNESCO. In Table \[tab:sol-asid\] we report the amplitudes and probabilities obtained with the SD1500 array at the solar and antisidereal frequencies, in all bins above 2 EeV for which the Rayleigh analysis was applied at the sidereal frequency. One can see that all these amplitudes are consistent with being fluctuations, showing then no signs of remaining systematic effects. We also report in Table \[tab:dperew\] the equatorial dipole amplitudes and phases obtained with the East-West method above 2 EeV, and compare them with the results for the same datasets that were obtained with the Rayleigh method (reported in Table \[tab:dper\]). 
The inferred equatorial dipole amplitudes turn out to be consistent, although the statistical uncertainty obtained with the East-West method is larger by a factor $\pi\langle \cos\delta\rangle/2\langle \sin\theta \rangle$ [@ew]. Given that above full trigger efficiency one has that $\langle\sin\theta\rangle\simeq 0.58$ when considering $\theta<60^\circ$, as we do for $E<4$ EeV, or $\langle\sin\theta\rangle\simeq 0.65$ when considering $\theta<80^\circ$, as we do for $E\geq 4$ EeV, and that $\langle\cos\delta\rangle\simeq 0.78$ in both zenith ranges, the statistical uncertainties obtained in the East-West analysis are larger by a factor of about 2.1 than those obtained with the Rayleigh analysis for $\theta<60^\circ$, or by a factor of about 1.9 for $\theta<80^\circ$, as can be seen in Table \[tab:dperew\]. -------------- --------- --------------------- ------------- --------------------- ------------- $ E$ \[EeV\] $N$ $r$ \[%\] $P(\geq r)$ $r$ \[%\] $P(\geq r)$ 2 - 4 283,074 $0.6^{+0.3}_{-0.2}$ 0.07 $0.5^{+0.3}_{-0.2}$ 0.20 4 - 8 88,325 $0.8^{+0.5}_{-0.3}$ 0.24 $0.5^{+0.5}_{-0.2}$ 0.59 8 - 16 27,271 $0.6^{+1.1}_{-0.2}$ 0.79 $0.5^{+1.1}_{-0.1}$ 0.83 16 - 32 7,664 $1.1^{+2.0}_{-0.3}$ 0.79 $3.1^{+1.9}_{-1.1}$ 0.16 $\geq 32$ 1,993 $1.5^{+4.4}_{-0.1}$ 0.90 $1.3^{+4.6}_{-0.0}$ 0.92 $\geq 8$ 36,928 $0.3^{+1.1}_{-0.0}$ 0.93 $1.0^{+0.8}_{-0.4}$ 0.39 -------------- --------- --------------------- ------------- --------------------- ------------- : Fourier amplitudes at the solar and antisidereal frequencies, and the probabilities to get larger values from statistical fluctuations of an isotropic distribution, for the different energy bins above 2 EeV.[]{data-label="tab:sol-asid"} [ c c | c c c c | c c c]{} & & &\ $E$ \[EeV\] & $N$ & $d_\perp$ \[%\] & $\sigma_{x,y}$ \[%\] & $\alpha_d [^\circ]$ & $P(\ge d_\perp)$ & $d_\perp$ \[%\] &$\sigma_{x,y}$ \[%\] & $\alpha_d [^\circ]$\ ------------------------------------------------------------------------ 2 - 4 & 283,074 & $0.2^{+0.9}_{-0.2}$ & 0.72 & $-16 \pm 167$ & 0.94 & $0.5^{+0.4}_{-0.2}$ & 0.34 & $-11 \pm 55$\ 4 - 8 & 88,325 & $1.7^{+1.3}_{-0.7}$ &1.1 & $41 \pm 38$ & 0.33 & $1.0^{+0.7}_{-0.4}$ & 0.61 & $69 \pm 46$\ 8 - 16 & 27,271 & $6.4^{+2.3}_{-1.7}$ &2.1 & $147 \pm 18$ & $8.3\times 10^{-3}$ & $5.6^{+1.2}_{-1.0}$ &1.1 & $ 97\pm 12$\ 16 - 32 & 7,664 & $9.3^{+4.5}_{-3.0}$ & 3.9& $67 \pm 24$ & $5.8\times 10^{-2}$ & $7.5^{+2.3}_{-1.8}$ & 2.1 &$ 80\pm 17$\ $\geq 32$ & 1,993 & $25^{+9}_{-6}$ & 7.6 & $151 \pm 17$ & $4.1\times 10^{-3}$ & $13^{+5}_{-3}$ & 4.1 & $152 \pm 19$\ ------------------------------------------------------------------------ $\geq 8$ & 36,928 & $6.6^{+2.0}_{-1.5}$ & 1.8& $132 \pm 15$ & $8.6\times 10^{-4}$ & $6.0^{+1.0}_{-0.9}$ & 0.94 & $98 \pm 9$\ Ahlers, M. 2019, , 886, L18 Al Samarai, I. for the Pierre Auger Collaboration 2016, [PoS ICRC2015]{}, 372 Bonino, R. et al. 2011, , 738, 67 Candia, J., Mollerach, S. and Roulet, E. 2003, , 05, 003 Calvez, A., Kusenko, A. and Nagataki, S. 2010, , 105, 091101 Castellina, A. for the Pierre Auger Collaboration 2019, [EPJ Web Conf.]{}, 210, 06002 Coleman, A. for the Pierre Auger Collaboration 2019, [PoS ICRC2019]{}, 225 Compton, A.H. and Getting, I.A. 1935, PhRv, 47, 817 Farley, F.J.M. and Storey, J.R. 1954, [Proc. Phys. Soc.]{} A, 67, 996 Feretti, L. et al. 2012, , 20, 54 Greisen, K. 1966, , 16, 748 Haverkorn, M. 2015, [Astrophys. Space Sci. Library]{}, 407, 483 IceCube Collaboration 2012, , 746, 33 IceCube Collaboration 2016, , 826, 220 Kachelriess, M. and Serpico, P.D. 2006, [Phys. 
Lett.]{} B, 640, 225 KASCADE-Grande Collaboration 2019, , 870, 91 Linsley, J. 1975, , 34, 1530 Nagashima, K. et al. 1989, [Il Nuovo Cimento]{} C, 12, 695 Ptuskin, V.S. et al. 1993, , 268, 726 Sidelnik, I. for the Pierre Auger Collaboration 2013, [Proceeding of the 33rd ICRC]{}, arXiv:1307.5059 The Pierre Auger Collaboration 2011a, [Astropart. Phys.]{}, 34, 627 The Pierre Auger Collaboration 2011b, , 11, 022 The Pierre Auger Collaboration 2012, , 203, 34 The Pierre Auger Collaboration 2013, , 762, L13 The Pierre Auger Collaboration 2014a, , 90, 122006 The Pierre Auger Collaboration 2014b, , 08, 019 The Pierre Auger Collaboration 2015a, , 802, 111 The Pierre Auger Collaboration 2015b, [NIM]{} A, 798, 172 The Pierre Auger Collaboration 2017a, [Science]{}, 357, 1266 The Pierre Auger Collaboration 2017b, [JINST]{}, 12, P02006 The Pierre Auger Collaboration  2018, , 868, 4 Verzi, V. for the Pierre Auger Collaboration 2019, [PoS ICRC2019]{}, 450 Zatsepin, G.T. and Kuzmin, V.A. 1966, [JETP Lett.]{}, 4, 78 [^1]: Hints of anisotropies on smaller angular scales were also found recently in a reanalysis of KASCADE-Grande data [@ahlers]. [^2]: Given that, for events with zenith angles smaller than 60$^\circ$, the trigger efficiency is larger than $\sim 95$% above 2 EeV, the efficiency related systematic effects are negligible above this threshold. [^3]: A possible tilt of the array in the East-West direction, giving just a constant term in the East-West rate difference, does not affect the determination of the first-harmonic modulation.
{ "pile_set_name": "ArXiv" }
--- abstract: 'For a graph $G$ let $L(G)$ and $l(G)$ denote the size of the largest and smallest maximum matching of a graph obtained from $G$ by removing a maximum matching of $G$. We show that $L(G)\leq 2l(G),$ and $L(G)\leq \frac{3}{2}l(G)$ provided that $G$ contains a perfect matching. We also characterize the class of graphs for which $L(G)=2l(G)$. Our characterization implies the existence of a polynomial algorithm for testing the property $L(G)=2l(G)$. Finally, we show that it is $NP$-complete to test whether a graph $G$ containing a perfect matching satisfies $L(G)=\frac{3}{2}l(G)$.' address: - | Department of Informatics and Applied Mathematics,\ Yerevan State University, Yerevan, 0025, Armenia - | Institute for Informatics and Automation Problems,\ National Academy of Sciences of Republic of Armenia, 0014, Armenia author: - 'Artur Khojabaghyan [^1] and Vahan V. Mkrtchyan [^2]' title: On upper bounds for parameters related to construction of special maximum matchings --- Introduction ============ In this paper, graphs are assumed to be finite, undirected, without loops or multiple edges. Let $V(G)$ and $E(G)$ denote the sets of vertices and edges of a graph $G$, respectively. If $v\in V(G)$ and $e\in E(G)$, then $e$ is said to cover $v$ if $e$ is incident to $v$. For $V'\subseteq V(G)$ and $E'\subseteq E(G)$ let $G\backslash V'$ and $G\backslash E'$ denote the graphs obtained from $G$ by removing $V'$ and $E'$, respectively. Moreover, let $V(E')$ denote the set of vertices of $G$ that are covered by an edge from $E'$. A subgraph $H$ of $G$ is said to be spanning for $G$, if $V(E(H))=V(G)$. The length of a path (cycle) is the number of its edges. A $k$-path ($k$-cycle) is a path (cycle) of length $k$. A $3$-cycle is called a triangle. A set $V'\subseteq V(G)$ ($E'\subseteq E(G)$) is said to be independent, if $V'$ ($E'$) contains no adjacent vertices (edges). An independent set of edges is called a matching. A matching of $G$ is called perfect, if it covers all vertices of $G$. Let $\nu (G)$ denote the cardinality of a largest matching of $G$. A matching of $G$ is maximum, if it contains $\nu (G)$ edges. For a positive integer $k$ and a matching $M$ of $G$, a $(2k-1)$-path $P$ is called $M$-augmenting, if the $2^{nd}$, $4^{th}$, $6^{th}$,..., $(2k-2)^{th}$ edges of $P$ belong to $M$, while the endvertices of $P$ are not covered by an edge of $M$. The following theorem of Berge gives a necessary and sufficient condition for a matching to be maximum: (Berge [@Harary]) A matching $M$ of $G$ is maximum if and only if $G$ contains no $M$-augmenting path. For two matchings $M$ and $M'$ of $G$ consider the subgraph $H$ of $G$, where $V(H)=V(M\triangle M')$ and $E(H)=M\triangle M'$. The connected components of $H$ are called $M\triangle M'$-alternating components. Note that $M\triangle M'$-alternating components are always paths or cycles of even length. For a graph $G$ define: $L(G)\equiv \max \{\nu (G\backslash F):F$ is a maximum matching of $G\},$ $l(G)\equiv \min \{\nu (G\backslash F):F$ is a maximum matching of $G\}.$ It is known that $L(G)$ and $l(G)$ are $NP$-hard to compute even for connected bipartite graphs $G$ with maximum degree three [@Complexity], though there are polynomial algorithms which construct a maximum matching $F$ of a tree $G$ such that $\nu (G\backslash F)=L(G)$ and $\nu (G\backslash F)=l(G)$ (to be presented in [@Algorithm]). 
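Although computing $L(G)$ and $l(G)$ is $NP$-hard in general, the definitions are easy to check by brute force on very small graphs. The following Python sketch (our own illustration, using networkx; it enumerates all maximum matchings explicitly and is therefore exponential) can be used to experiment with the parameters:

```python
from itertools import combinations
import networkx as nx

def nu(G):
    """Size nu(G) of a maximum matching of G."""
    return len(nx.max_weight_matching(G, maxcardinality=True))

def L_and_l(G):
    """Brute-force L(G) and l(G); feasible only for very small graphs."""
    size = nu(G)
    values = []
    # enumerate all maximum matchings as edge subsets of size nu(G)
    for F in combinations(G.edges(), size):
        covered = [v for e in F for v in e]
        if len(covered) == len(set(covered)):      # F is a matching
            H = G.copy()
            H.remove_edges_from(F)
            values.append(nu(H))
    return max(values), min(values)

# The path on five vertices has nu = 2. Removing the maximum matching formed by
# its two end edges leaves two adjacent edges (nu = 1), while removing an
# alternating maximum matching leaves two disjoint edges (nu = 2), so L = 2 and
# l = 1, attaining the bound L(G) = 2 l(G).
print(L_and_l(nx.path_graph(5)))   # -> (2, 1)
```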
In the same paper [@Algorithm] it is shown that $L(G)\leq 2l(G).$ In the present paper we re-prove this inequality, and also show that $L(G)\leq \frac{3}{2}l(G)$ provided that $G$ contains a perfect matching. A naturally arising question is the characterization of graphs $G$ with $L(G)= 2l(G)$ and the graphs $G$ with a perfect matching that satisfy $L(G)= \frac{3}{2}l(G)$. In this paper we solve these problems by giving a characterization of graphs $G$ with $L(G)= 2l(G)$ that implies the existence of a polynomial algorithm for testing this property, and by showing that it is $NP$-complete to test whether a bridgeless cubic graph $G$ satisfies $L(G)=\frac{3}{2}l(G)$. Recall that by Petersen's theorem any bridgeless cubic graph contains a perfect matching. Terms and concepts that we do not define can be found in [@Diestel; @Harary; @Lov; @West]. Some auxiliary results ====================== We will need the following: \[Ratios\] Let $G$ be a graph. Then: (a) for any two maximum matchings $F,F'$ of $G$, we have $\nu (G\backslash F')\leq 2\nu (G\backslash F)$; (b) $L(G)\leq 2l(G)$; (c) if $L(G)=2l(G)$, $F_{L},F_{l}$ are two maximum matchings of the graph $G$ with $\nu (G\backslash F_{L})=L(G),$ $\nu (G\backslash F_{l})=l(G)$, and $H_{L}$ is **any** maximum matching of the graph $G\backslash F_{L}$, then: (c1) $F_{l}\backslash F_{L}\subset H_{L};$ (c2) $H_{L}\backslash F_{l}$ is a maximum matching of $G\backslash F_{l}$; (c3) $F_{L}\backslash F_{l}$ is a maximum matching of $G\backslash F_{l}$; (d) if $G$ contains a perfect matching, then $L(G)\leq \frac32l(G)$. (a) Let $H'$ be any maximum matching in the graph $G\backslash F'$. Then: $$\begin{aligned} \nu (G\backslash F') =|H'|=|H'\cap F|+|H'\backslash F|\leq |F\backslash F'|+\nu(G\backslash F)= |F'\backslash F|+\nu (G\backslash F)\leq 2\nu (G\backslash F).\end{aligned}$$ (b) follows from (a). (c) Consider the proof of (a) and take $F'=F_{L}$, $H'=H_L$ and $F=F_l$. Since $L(G)=2l(G)$, we must have equalities throughout, thus properties (c1)-(c3) should be true. (d) Let $F_{L},F_{l}$ be two perfect matchings of the graph $G$ with $\nu (G\backslash F_{L})=L(G),$ $\nu (G\backslash F_{l})=l(G)$, and assume $H_{L}$ to be a maximum matching of the graph $G\backslash F_{L}$. Define: $$\begin{aligned} X =\{e=(u,v)\in F_{L}:u\text{ and }v\text{ are incident to an edge from }H_{L}\cap F_{l}\}, \\ x =|X|, k=|H_{L}\cap F_{l}|;\end{aligned}$$ Clearly, $(H_{L}\backslash F_{l})\cup X$ is a matching of the graph $G\backslash F_{l}$, therefore, taking into account that $(H_{L}\backslash F_{l})\cap X=\emptyset,$ we deduce $$\begin{aligned} l(G) =\nu (G\backslash F_{l})\geq |H_{L}\backslash F_{l}| +|X| =|H_{L}| -|H_{L}\cap F_{l}| +|X| =L(G)-k+x.\end{aligned}$$ Since $F_{L}$ is a perfect matching, it covers the set $V(H_{L}\cap F_{l})\backslash V(X)$, which contains $$|V(H_{L}\cap F_{l})\backslash V(X)| =2| (H_{L}\cap F_{l})| -2|X| =2k-2x$$ vertices. Define the set $E_{F_{L}}$ as follows: $$E_{F_{L}}=\{e\in F_{L}:e\text{ covers a vertex from }V(H_{L}\cap F_{l})\backslash V(X)\}.$$ Clearly, $E_{F_{L}}$ is a matching of $G\backslash F_{l}$, too, and therefore $$l(G)=\nu (G\backslash F_{l})\geq |E_{F_{L}}|=2k-2x.$$ Let us show that $$\max \{L(G)-k+x,2k-2x\}\geq \frac{2L(G)}{3}.$$ Note that\ if $x\geq k-\frac{L(G)}{3}$ then $L(G)-k+x\geq L(G)-k+k-\frac{L(G)}{3}=\frac{2L(G)}{3}$;\ if $x\leq k-\frac{L(G)}{3}$ then $2k-2x\geq \frac{2L(G)}{3}$,\ thus in both cases we have $l(G)\geq \frac{2L(G)}{3}$, that is, $$\frac{L(G)}{l(G)}\leq \frac{3}{2}.$$ The proof of Theorem \[Ratios\] is completed. 
$\square$ \[1-1Case\] (Lemma 2.20, 2.41 of [@Algorithm]) Let $G$ be a graph, and assume that $u$ and $v$ are vertices of degree one sharing a neighbour $w\in V(G)$. Then: $$L(G)=L(G\backslash \{u,v,w\})+1,l(G)=l(G\backslash \{u,v,w\})+1.$$ \[L=2l1-1case\] Let $G$ be a graph with $L(G)=2l(G)$. Then there are no vertices $u,v$ of degree one, that are adjacent to the same vertex $w$. Suppose not. Then lemma \[1-1Case\] and (b) of theorem \[Ratios\] imply $$\begin{aligned} L(G) &=&1+L(G-\{u,v,w\})\leq 1+2l(G-\{u,v,w\})= \\ &=&1+2(l(G)-1)=2l(G)-1<2l(G)\end{aligned}$$a contradiction. $\square$ Characterization of graphs $G$ satisfying $L(G)=2l(G)$ ====================================================== Let $T$ be the set of all triangles of $G$ that contain at least two vertices of degree two. Note that any vertex of degree two lies in at most one triangle from $T$. From each triangle $t\in T$ choose a vertex $v_t$ of degree two, and define $V_1(G)$ as follows: $$V_1(G)=\{v:d_G(v)=1\}\cup \{v_t:t\in T\}$$ \[Characterization\] Let $G$ be a connected graph with $|V(G)|\geq3$. Then $L(G)=2l(G)$ if and only if (1) : $G\backslash V_1(G)$ is a bipartite graph with a bipartition $(X,Y)$; (2) : $|V_1(G)|=|Y|$ and any $y\in Y$ has exactly one neighbour in $V_1(G)$; (3) : the graph $G\backslash V_{1}(G)$ contains $|X|$ vertex disjoint $2$-paths. Sufficiency. Let $G$ be a connected graph with $|V(G)|\geq3$ satisfying the conditions (1)-(3). Let us show that $L(G)=2l(G)$. For each vertex $v$ with $d(v)=1$ take the edge incident to it and define $F_1$ as the union of all these edges. For each vertex $v_t\in V_{1}(G)$ take the edge that connects $v_t$ to a vertex of degree two, and define $F_2$ as the union of all those edges. Set: $$F=F_1\cup F_2.$$ Note that $F$ is a matching with $|F|=|V_{1}(G)|=|Y|$. Moreover, since $G$ is bipartite and $|V_{1}(G)|=|Y|$, the definitions of $F_1$ and $F_2$ imply that there is no $F$-augmenting path in $G$. Thus, by Berge theorem, $F$ is a maximum matching of $G$, and $$\nu(G)=|F|=|V_{1}(G)|=|Y|.$$Observe that the graph $G\backslash F$ is a bipartite graph with $\nu(G\backslash F)\leq |X|$, thus $$l(G)\leq \nu(G\backslash F)\leq |X|.$$ Now, consider the $|X|$ vertex disjoint $2$-paths of the graph $G\backslash V_{1}(G)$ guaranteed by (3). (2) implies that these $2$-paths together with the $|F|=|V_{1}(G)|=|Y|$ edges of $F$ form $|X|$ vertex disjoint $4$-paths of the graph $G$. Consider matchings $M_{1}$ and $M_{2}$ of $G$ obtained from these $4$-paths by adding the first and the third, the second and the fourth edges of these $4$-paths to $M_{1}$ and $M_{2}$, respectively. Define: $$F'=(F\backslash M_{2})\cup (M_{1}\backslash F).$$ Note that $F'$ is a matching of $G$ and $|F'|=|F|$, thus $F'$ is a maximum matching of $G$. Since $F'\cap M_2=\emptyset$, we have$$L(G)\geq \nu (G\backslash F')\geq |M_{2}|=2|X|\geq 2l(G).$$ \(b) of theorem \[Ratios\] implies that $L(G)=2l(G)$. Necessity. Now, assume that $G$ is a connected graph with $|V(G)|\geq 3$ and $L(G)=2l(G)$. By proving a series of claims, we show that $G\backslash V_1(G)$ satisfies the conditions (1)-(3) of the theorem. \[SpanningSubgraph\] For any maximum matchings $F_{L},F_{l}$ of the graph $G$ with $\nu (G\backslash F_{L})=L(G),$ $\nu (G\backslash F_{l})=l(G)$, $F_{L}\cup F_{l}$ induces a spanning subgraph, that is $V(F_{L})\cup V(F_{l})=V(G)$. Suppose that there is a vertex $v\in V(G)$ that is covered neither by $F_{L}$ nor by $F_{l}$. 
Since $F_{L}$ and $F_{l}$ are maximum matchings of $G$, for each edge $e=(u,v)$ the vertex $u$ is incident to an edge from $F_{L}$ and to an edge from $F_{l}$. Case 1: there is an edge $e=(u,v)$ such that $u$ is incident to an edge from $F_{L}\cap F_{l}$. Note that $\{e\}\cup (F_{L}\backslash F_{l})$ is a matching of $G\backslash F_{l}$ which contradicts (c3) of the theorem \[Ratios\]. Case 2: for each edge $e=(u,v)$ $u$ is incident to an edge $f_{L}\in F_{L}\backslash F_{l}$ and to an edge $f_{l}\in F_{l}\backslash F_{L}$. Let $H_{L}$ be any maximum matching of $G\backslash F_{L}$. Due to (c1) of theorem \[Ratios\] $f_{l}\in H_{L}$. Define: $$H'_{L}=(H_{L}\backslash \{f_{l}\})\cup \{e\}.$$ Note that $H'_{L}$ is a maximum matching of $G\backslash F_{L}$ such that $F_{l}\backslash F_{L}$ is not a subset of $H'_{L}$ contradicting (c1) of theorem \[Ratios\]. $\square$ \[AltComp2paths\] For any maximum matchings $F_{L},F_{l}$ of the graph $G $ with $\nu (G\backslash F_{L})=L(G),$ $\nu (G\backslash F_{l})=l(G)$, the alternating components $F_{L}\triangle F_{l}$ are $2$-paths. It suffices to show that there is no edge $f_{L}\in F_{L}$ that is adjacent to two edges from $F_{l}$. Suppose that some edge $f_{L}\in F_{L}$ is adjacent to edges $f_{l}^{\prime }$ and $f_{l}^{\prime \prime }$ from $F_{l}$. Let $H_{L}$ be any maximum matching of $G\backslash F_{L}$. Due to (c1) of theorem \[Ratios\] $f'_{l},f''_{l}\in H_{L}$. This implies that $\{f_{L}\}\cup (H_{L}\backslash F_{l})$ is a matching of $G\backslash F_{l}$ which contradicts (c2) of theorem \[Ratios\]. $\square$ \[DegreeRequirements\]For any maximum matchings $F_{L},F_{l}$ of the graph $G$ with $\nu (G\backslash F_{L})=L(G),$ $\nu (G\backslash F_{l})=l(G)$ 1. if $u\in V(F_{l})\backslash V(F_{L})$ then $d(u)=1$ or $d(u)=2$. Moreover, in the latter case, if $v$ and $w$ denote the two neighbours of $u$, where $(u,w)\in F_l$, then $d(w)=2$ and $(v,w)\in F_L$. 2. if $u\in V(F_{L})\backslash V(F_{l})$ then $d(u)\geq 2.$ \(a) Assume that $u$ is covered by an edge $e_{l}\in F_{l}$ and $u\notin V(F_{L})$. Suppose that $d(u)\geq 2$, and there is an edge $e=(u,v)$ such that $e\notin F_{l}$. Taking into account the claim \[SpanningSubgraph\], we need only to consider the following four cases: Case 1: $v\in V(F_{l})\backslash V(F_{L})$. This is impossible, since $F_{L}$ is a maximum matching. Case 2: $v$ is covered by an edge $f\in F_{L}\cap F_{l}$; Let $H_{L}$ be any maximum matching of $G\backslash F_{L}$. Due to (c1) of theorem \[Ratios\] $e_{l}\in H_{L}$, thus $e\notin H_{L}$. Define: $$F_{L}^{\prime }=(F_{L}\backslash \{f\})\cup \{e\}.$$Note that $F_{L}^{\prime }$ is a maximum matching, and $H_{L}$ is a matching of $G\backslash F_{L}^{\prime }$. Moreover, $$\nu (G\backslash F_{L}^{\prime })\geq \left\vert H_{L}\right\vert =\nu (G\backslash F_{L})=L(G),$$ thus $H_{L}$ is a maximum matching of $G\backslash F_{L}^{\prime }$ and $\nu (G\backslash F_{L}^{\prime })=L(G)$. This is a contradiction because $F_{L}^{\prime }\triangle F_{l}$ contains a component which is not a $2$-path contradicting claim \[AltComp2paths\]. Case 3: $v$ is incident to an edge $f_{L}\in F_{L},$ $f_{l}\in F_{l}$ and $f_{L}\neq $ $f_{l}$. Let $H_{L}$ be any maximum matching of $G\backslash F_{L}$. Due to (c1) of theorem \[Ratios\], $e_{l},f_{l}\in H_{L}$. Define: $$F_{L}^{\prime }=(F_{L}\backslash \{f_{L}\})\cup \{e\}.$$Note that $F_{L}^{\prime }$ is a maximum matching, and $H_{L}$ is a matching of $G\backslash F_{L}^{\prime }$. 
Moreover, $$\nu (G\backslash F_{L}^{\prime })\geq \left\vert H_{L}\right\vert =\nu (G\backslash F_{L})=L(G),$$thus $H_{L}$ is a maximum matching of $G\backslash F_{L}^{\prime }$ and $\nu (G\backslash F_{L}^{\prime })=L(G)$. This is a contradiction because $F_{L}^{\prime }\triangle F_{l}$ contains a component which is not a $2$-path contradicting claim \[AltComp2paths\]. Case 4: $v$ is covered by an edge $e_{L}\in F_{L}$ and $v\notin V(F_{l}).$ Note that if $e_{L}$ is not adjacent to $e_{l}$ then the edges $e,e_{L}$ and the edge $\tilde{e}\in F_{l}\backslash F_{L}$ that is adjacent to $e_{L}$ would form an augmenting $3$-path with respect to $F_{L}$, which would contradict the maximality of $F_{L}$. Thus it remains to consider the case when $e_{L}$ is adjacent to $e_{l}$ and $d(u)=2$. Let $w$ be the vertex adjacent to both $e_{l}$ and $e_{L}$. Let us show that $d(w)=2$. Let $H_{L}$ be any maximum matching of $G\backslash F_{L}$. Due to (c1) of theorem \[Ratios\], $e_{l}\in H_{L}$. Define: $$F'_{L}=(F_{L}\backslash \{e_{L}\})\cup \{e\}.$$Note that $F'_{L}$ is a maximum matching, and $H_{L}$ is a matching of $G\backslash F'_{L}$. Moreover, $$\nu (G\backslash F'_{L})\geq |H_{L}| =\nu (G\backslash F_{L})=L(G),$$thus $H_{L}$ is a maximum matching of $G\backslash F'_{L}$ and $\nu (G\backslash F'_{L})=L(G)$. Since $d(w)\geq 3$ there is a vertex $w'\neq u,v$ such that $(w,w')\in E(G)$ and $w'$ satisfies one of the conditions of cases 1,2 and 3 with respect to $F'_{L}$ and $F_{l}$. A contradiction. Thus $d(w)=2$. Clearly, $(v,w)=e_L\in F_L$. \(b) This follows from (a) of claim \[DegreeRequirements\] and corollary \[L=2l1-1case\]. $\square$ \[IntersectionEdges\] Let $F_{L},F_{l}$ be any maximum matchings of the graph $G$ with $\nu (G\backslash F_{L})=L(G),$ $\nu (G\backslash F_{l})=l(G)$. Then for any maximum matching $H_{L}$ of the graph $G\backslash F_{L}$ there is no edge of $F_{L}\cap F_{l}$ which is adjacent to two edges from $H_{L}$. Due to (c3) of theorem \[Ratios\] any edge from $H_{L}$ that is incident to a vertex covered by an edge of $F_{L}\cap F_{l}$ is also incident to a vertex from $V(F_{L})\backslash V(F_{l})$. If there were an edge $e\in F_{L}\cap F_{l}$ which is adjacent to two edges $h_{L},h_{L}^{\prime }\in H_{L}$, then due to (c1) of theorem [Ratios]{} and (a) of claim \[DegreeRequirements\] we would have an augmenting $7$-path with respect to $F_{L}$, which would contradict the maximality of $F_{L}$. $\square$ \[ChoiceClaim\] 1. for any maximum matchings $F_{L},F_{l}$ of the graph $G$ with $\nu (G\backslash F_{L})=L(G),$ $\nu (G\backslash F_{l})=l(G)$, we have $(V(F_L)\backslash V(F_l))\cap V_1(G)=\emptyset$; 2. there is a maximum matching $F_{l}$ of $G$ with $\nu (G\backslash F_{l})=l(G)$ and a maximum matching $F_{L}$ of the graph $G$ with $\nu(G\backslash F_{L})=L(G),$ such that $V_1(G)\subseteq V(F_L\cap F_l)\cup (V(F_l) \backslash V(F_L))$. \(1) On the opposite assumption, consider a vertex $x\in (V(F_L)\backslash V(F_l))\cap V_1(G)$. Since $x\in V_1(G)$ then $d(x)\leq 2$. On the other hand, (b) of claim \[DegreeRequirements\] implies that $d(x)\geq 2$, thus $d(x)=2$. Then there are vertices $y,z$ such that $(x,z)\in F_L$, $(z,y)\in F_l$. Note that due to (a) of claim \[DegreeRequirements\], we have $d(y)\leq 2$. Let us show that $d(y)=1$. Suppose that $d(y)=2$. Then due to (a) of claim \[DegreeRequirements\], we have that $d(z)=2$, thus $G$ is the triangle, which is a contradiction, since $G$ does not satisfy $L(G)=2l(G)$. Thus $d(y)=1$. 
Since $x\in V_1(G)$, we conclude that there is a vertex $w$ with $d(w)=2$ such that $w,x,z$ form a triangle. Note that $w$ is covered neither by $F_L$ nor by $F_l$, which contradicts claim \[SpanningSubgraph\]. \(2) Let $e_t$ be an edge of a triangle $t\in T$ connecting the vertex $v_t\in V_1(G)$ to a vertex of degree two. Let us show that there is a maximum matching $F_{l}$ of $G$ with $\nu (G\backslash F_{l})=l(G)$ such that $e_t\in F_l$ for each $t\in T$. Choose a maximum matching $F_{l}$ of $G$ with $\nu (G\backslash F_{l})=l(G)$ that contains as many edges $e_t$ as possible. Let us show that $F_l$ contains all edges $e_t$. Suppose that there is $t_0\in T$ such that $e_{t_0}\notin F_l$. Define: $$F'_{l}=(F_{l}\backslash \{e\})\cup \{e_{t_0}\},$$where $e$ is the edge of $F_l$ that is adjacent to $e_{t_0}$. Note that $$\nu(G\backslash F'_l)\leq \nu(G\backslash F_l)=l(G),$$thus $F'_{l}$ is a maximum matching of $G$ with $\nu (G\backslash F'_{l})=l(G)$. Note that $F'_{l}$ contains more edges $e_t$ than $F_l$ does, which contradicts the choice of $F_{l}$. Thus, there is a maximum matching $F_{l}$ of $G$ with $\nu (G\backslash F_{l})=l(G)$ such that $e_t\in F_l$ for all $t\in T$. Now, for this maximum matching $F_{l}$ of $G$ choose a maximum matching $F_{L}$ of the graph $G$ with $\nu(G\backslash F_{L})=L(G)$ such that $V(F_L\cap F_l)\cup (V(F_l) \backslash V(F_L))$ covers the maximum number of vertices from $V_1(G)$. Let us show that $V_1(G)\subseteq V(F_L\cap F_l)\cup (V(F_l) \backslash V(F_L))$. Suppose that there is a vertex $x\in V_1(G)$ such that $x\notin V(F_L\cap F_l)\cup (V(F_l) \backslash V(F_L))$. Note that, due to claim \[SpanningSubgraph\] and (b) of claim \[DegreeRequirements\], any vertex of degree one is either incident to an edge from $F_L\cap F_l$ or belongs to $V(F_l) \backslash V(F_L)$. Thus, due to the definition of $V_1(G)$, $d(x)=2$ and, if $y$ and $z$ denote the two neighbors of $x$, then $d(y)=2$ and $(y,z)\in E(G)$. Since $x\notin V(F_L\cap F_l)$, we have that $(x,y)\notin F_L$, and since $x\notin (V(F_l) \backslash V(F_L))$, we have that $(y,z)\notin F_L$, thus $(x,z)\in F_L$, as $F_L$ is a maximum matching. Let $H_L$ be any maximum matching of $G\backslash F_L$. As $L(G)=2l(G)$, we have $(x,y)\in H_L$ ((c1) of theorem \[Ratios\]). Define: $$F'_{L}=(F_{L}\backslash \{(x,z)\})\cup \{(y,z)\}.$$Note that $F'_{L}$ is a maximum matching of $G$ and $H_L$ is a matching of $G\backslash F'_L$, thus $$\nu(G\backslash F'_L)\geq |H_L|=\nu(G\backslash F_L)=L(G).$$Therefore $F'_{L}$ is a maximum matching of $G$ with $\nu(G\backslash F'_L)=L(G)$. Now, observe that $V(F'_L\cap F_l)\cup (V(F_l) \backslash V(F'_L))$ covers more vertices than $V(F_L\cap F_l)\cup (V(F_l) \backslash V(F_L))$ does, which contradicts the choice of $F_L$. The proof of the claim \[ChoiceClaim\] is completed. $\square$ \[IndependenceClaim\] For any maximum matchings $F_{L},F_{l}$ of the graph $G$ with $\nu(G\backslash F_{L})=L(G),$ $\nu (G\backslash F_{l})=l(G)$, we have 1. $V(F_L)\backslash V(F_l)$ is an independent set; 2. no edge of $G$ connects two vertices that are covered by both $F_L\backslash F_l$ and $F_l\backslash F_L$; 3. no edge of $G$ is adjacent to two different edges from $F_L\cap F_l$; 4. no edge of $G$ connects a vertex covered by $F_L\cap F_l$ to a vertex covered by both $F_L\backslash F_l$ and $F_l\backslash F_L$; 5. if $(u,v)\in F_L\cap F_l$ then either $u\in V_1(G)$ or $v\in V_1(G)$. \(1) There is no edge of $G$ connecting two vertices from $V(F_{L})\backslash V(F_{l})$ since $F_{l}$ is a maximum matching.
\(2) follows from (c1) and (c2) of theorem \[Ratios\]. \(3) follows from (c3) of theorem \[Ratios\]. \(4) Suppose that there is an edge $e=(y_{1},y_{2})$, such that $y_1$ is covered by $F_L\cap F_l$ and $y_2$ is covered by both $F_L\backslash F_l$ and $F_l\backslash F_L$. Consider a maximum matching $H_{L}$ of the graph $G\backslash F_{L}.$ Note that $y_{1}$ must be incident to an edge from $H_{L}$, as otherwise we could replace the edge of $H_{L}$ that is adjacent to $e$ and belongs also to $F_{l}\backslash F_{L}$ ((c1) of theorem \[Ratios\]) by the edge $e$ to obtain a new maximum matching $H_{L}^{\prime }$ of the graph $G\backslash F_{L}$ which would not satisfy (c1) of theorem \[Ratios\]. So let $y_{1}$ be incident to an edge $h_{L}\in H_{L}$, which connects $y_{1}$ with a vertex $x\in V(F_{L})\backslash V(F_{l})$. Note that due to claim \[IntersectionEdges\], (c1) of theorem \[Ratios\] and (a) of claim \[DegreeRequirements\], the edge $h_{L}$ lies on an $H_{L}-F_{L}$ alternating $4$-path $P$. Define: $$\begin{aligned} F_{L}^{\prime } &=&(F_{L}\backslash E(P))\cup (H_{L}\cap E(P)), \\ H_{L}^{\prime } &=&(H_{L}\backslash E(P))\cup (F_{L}\cap E(P)).\end{aligned}$$Note that $F_{L}^{\prime }$ is a maximum matching of $G$, $H_{L}^{\prime }$ is a matching of $G\backslash F_{L}^{\prime }$ of cardinality $\left\vert H_{L}\right\vert $, and $$\nu (G\backslash F_{L}^{\prime })\geq \left\vert H_{L}^{\prime }\right\vert =\left\vert H_{L}\right\vert =\nu (G\backslash F_{L})=L(G),$$ thus $H_{L}^{\prime }$ is a maximum matching of $G\backslash F_{L}^{\prime }$ and $\nu (G\backslash F_{L}^{\prime })=L(G)$. This is a contradiction since the edge $e$ connects two vertices which are covered by $F_{L}^{\prime }\backslash F_{l}$ and $F_{l}\backslash F_{L}^{\prime }$ ((2) of claim \[IndependenceClaim\]). (5)Suppose that $e=(u,v)\in F_{L}\cap F_{l}$. Since $G$ is connected and $\left\vert V\right\vert \geq 3$, we, without loss of generality, may assume that $d(v)\geq 2$, and there is $w\in V(G),w\neq u$ such that $(w,v)\in E(G)$. Consider a maximum matching $H_{L}$ of the graph $G\backslash F_{L}.$ Note that $v$ must be incident to an edge from $H_{L}$, as otherwise we could replace the edge of $H_{L}$ that is incident to $w$ ($H_{L}$ is a maximum matching of $G\backslash F_{L}$) by the edge $(w,v)$ to obtain a new maximum matching $H_{L}^{\prime }$ of the graph $G\backslash F_{L}$ such that $v$ is incident to an edge from $H_{L}^{\prime }$. So we may assume that there is an edge $(v,q)\in H_{L}$, $q\neq u$. Note that due to claim \[IntersectionEdges\], (c1) of theorem [Ratios]{} and (a) of claim \[DegreeRequirements\] the edge $(q,w)$ lies on an $H_{L}-F_{L}$ alternating $4$-path $P$. Define: $$\begin{aligned} F_{L}^{\prime } &=&(F_{L}\backslash E(P))\cup (H_{L}\cap E(P)), \\ H_{L}^{\prime } &=&(H_{L}\backslash E(P))\cup (F_{L}\cap E(P)).\end{aligned}$$Note that $F_{L}^{\prime }$ is a maximum matching of $G$, $H_{L}^{\prime }$ is a matching of $G\backslash F_{L}^{\prime }$ of cardinality $\left\vert H_{L}\right\vert $, and $$\nu (G\backslash F_{L}^{\prime })\geq \left\vert H_{L}^{\prime }\right\vert =\left\vert H_{L}\right\vert =\nu (G\backslash F_{L})=L(G),$$thus $H_{L}^{\prime }$ is a maximum matching of $G\backslash F_{L}^{\prime }$ and $\nu (G\backslash F_{L}^{\prime })=L(G)$. Since $u\in V(F_{l})\backslash V(F_{L}^{\prime })$ (a) of claim \[DegreeRequirements\] implies that either $d(u)=1$ and therefore $u\in V_1(G)$, or $d(u)=d(v)=2$ and therefore either $u\in V_1(G)$ or $v\in V_1(G)$. 
The proof of claim \[IndependenceClaim\] is completed. $\square$ We are ready to complete the proof of the theorem. Take any maximum matchings $F_{L},F_{l}$ of the graph $G$ guaranteed by (2) of claim \[ChoiceClaim\] and consider the following partition of $V(G\backslash V_1(G))=V(G)\backslash V_1(G)$: $$X=X(F_{L},F_{l})=V(F_{L})\backslash V(F_{l}),\qquad Y=Y(F_{L},F_{l})=V(G)\backslash (V_1(G)\cup X).$$ Claim \[IndependenceClaim\] implies that $X$ and $Y$ are independent sets of vertices of $G\backslash V_1(G)$, thus $G\backslash V_1(G)$ is a bipartite graph with a bipartition $(X,Y)$. The choice of the maximum matchings $F_{L},F_{l}$, (a) of claim \[DegreeRequirements\], (5) of claim \[IndependenceClaim\] and the definition of the set $Y$ imply (2) of theorem \[Characterization\]. Let us show that $G$ satisfies (3), too. Consider the alternating $2$-paths of $$(H_{L}\backslash F_{l})\triangle (F_{L}\backslash F_{l}),$$ where $H_{L}$ is any maximum matching of $G\backslash F_{L}$. Conditions (c2) and (c3) of theorem \[Ratios\] and the definition of the set $X$ imply that there are $|X|$ such $2$-paths. Moreover, these $2$-paths are in fact $2$-paths of the graph $G\backslash V_{1}(G)$. Thus $G$ satisfies (3) of the theorem. The proof of theorem \[Characterization\] is completed. $\square$ \[PolynomialAlgorithm\] The property $L(G)=2l(G)$ of a graph $G$ can be tested in polynomial time. First of all, note that the property $L(G)=2l(G)$ is additive, that is, a graph satisfies this property if and only if all its connected components do. Thus we can concentrate only on connected graphs. All connected graphs with $|V(G)|\leq 2$ satisfy the equality $L(G)=2l(G)$, thus we can assume that $|V(G)|\geq 3$. Next, we construct the set $V_1(G)$, which can be done in linear time. Now, we need to check whether the graph $G\backslash V_1(G)$ satisfies the conditions (1)-(3) of theorem \[Characterization\]. It is well known that the properties (1) and (2) can be checked in polynomial time, so we will consider only the testing of (3). From the graph $G\backslash V_{1}(G)$ with a bipartition $(X,Y)$ we construct a network $\vec{G}$ with new vertices $s$ and $t$. The arcs of $\vec{G}$ are defined as follows: - connect $s$ to every vertex of $X$ with an arc of capacity $2$; - connect every vertex of $Y$ to $t$ by an arc of capacity $1$; - for every edge $(x,y)\in E(G)$, $x\in X$, $y\in Y$, add an arc of capacity $1$ connecting the vertex $x$ to the vertex $y$. Note that - the value of the maximum $s-t$ flow in $\vec{G}$ is no more than $2\left\vert X\right\vert $ (the capacity of the cut $(S,\bar{S})$, where $S=\{s\}$, $\bar{S}=V(\vec{G})\backslash S$, is $2\left\vert X\right\vert $); - the value of the maximum $s-t$ flow in $\vec{G}$ is $2\left\vert X\right\vert $ if and only if the graph $G\backslash V_{1}(G)$ contains $\left\vert X\right\vert $ vertex-disjoint $2$-paths; thus, (3) can also be tested in polynomial time. $\square$ Recently, Monnot and Toulouse [@Monnot] proved that the $2$-path partition problem remains $NP$-complete even for bipartite graphs of maximum degree three. Fortunately, in theorem \[Characterization\] we are dealing with a special case of this problem, which enables us to present a polynomial algorithm in corollary \[PolynomialAlgorithm\].
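For concreteness, the flow test of condition (3) can be set up directly from the bipartition $(X,Y)$. The following snippet is a minimal sketch (in Python, using the networkx max-flow routine); it assumes that $V_1(G)$ and the bipartition have already been computed, and the function name and toy data are illustrative rather than part of the construction above.

```python
# Minimal sketch of the max-flow test for condition (3).  The bipartition
# (X, Y) of G \ V_1(G) is assumed to be known; X/Y vertices are wrapped in
# tuples so that they cannot clash with the auxiliary source "s" and sink "t".
import networkx as nx

def has_vertex_disjoint_2paths(X, Y, edges):
    """True iff G \\ V_1(G) contains |X| vertex-disjoint 2-paths centred at
    the vertices of X, i.e. iff the maximum s-t flow equals 2|X|."""
    D = nx.DiGraph()
    for x in X:
        D.add_edge("s", ("X", x), capacity=2)        # s -> x, capacity 2
    for y in Y:
        D.add_edge(("Y", y), "t", capacity=1)        # y -> t, capacity 1
    for x, y in edges:                               # edge (x, y), x in X, y in Y
        D.add_edge(("X", x), ("Y", y), capacity=1)   # x -> y, capacity 1
    flow_value, _ = nx.maximum_flow(D, "s", "t")
    return flow_value == 2 * len(X)

# toy example: X = {1}, Y = {2, 3}, edges (1,2), (1,3) -> the single 2-path 2-1-3
print(has_vertex_disjoint_2paths({1}, {2, 3}, [(1, 2), (1, 3)]))  # True
```

The construction simply mirrors the three arc rules above, so the whole test reduces to one maximum-flow computation.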
$NP$-completeness of testing $L(G)=\frac{3}{2}l(G)$ in the class of bridgeless cubic graphs =========================================================================================== The reader may think that a result analogous to corollary \[PolynomialAlgorithm\] can be proved for the property $L(G)=\frac{3}{2}l(G)$ in the class of graphs containing a perfect matching. Unfortunately this fails already in the class of bridgeless cubic graphs, which by the well-known theorem of Petersen are known to possess a perfect matching. It is $NP$-complete to test the property $L(G)=\frac{3}{2}l(G)$ in the class of bridgeless cubic graphs. Clearly, the problem of testing the property $L(G)=\frac{3}{2}l(G)$ for graphs containing a perfect matching is in $NP$, since if we are given perfect matchings $F_{L},F_{l}$ of the graph $G$ with $\nu (G\backslash F_{L})=L(G),$ $\nu (G\backslash F_{l})=l(G)$ then we can calculate $L(G)$ and $l(G)$ in polynomial time. We will use the well-known $3$-edge-coloring problem ([@Holyer]) to establish the NP-completeness of our problem. Let $G$ be a bridgeless cubic graph. Consider a bridgeless cubic graph $G_{\bigtriangleup }$ obtained from $G$ by replacing every vertex of $G$ by a triangle. We claim that $G$ is $3$-edge-colorable if and only if $L(G_{\bigtriangleup })=\frac{3}{2}l(G_{\bigtriangleup })$. Suppose that $G$ is $3$-edge-colorable. Then $G_{\bigtriangleup }$ is also $3 $-edge-colorable, which means that $G_{\bigtriangleup }$ contains two edge disjoint perfect matchings $F$ and $F^{\prime }$. This implies that $$L(G_{\bigtriangleup })\geq \nu (G_{\bigtriangleup }\backslash F)\geq \left\vert F^{\prime }\right\vert =\frac{\left\vert V(G_{\bigtriangleup })\right\vert }{2},$$On the other hand, the set $E(G)$ forms a perfect matching of $G_{\bigtriangleup }$, and $$l(G_{\bigtriangleup })\leq \nu (G_{\bigtriangleup }\backslash E(G))=\frac{\left\vert V(G_{\bigtriangleup })\right\vert }{3},$$since every component of $G_{\bigtriangleup }\backslash E(G)$ is a triangle. Thus: $$\frac{L(G_{\bigtriangleup })}{l(G_{\bigtriangleup })}\geq \frac{3}{2},$$(d) of theorem \[Ratios\] implies that $\frac{L(G_{\bigtriangleup })}{l(G_{\bigtriangleup })}=\frac{3}{2}$. Now assume that $\frac{L(G_{\bigtriangleup })}{l(G_{\bigtriangleup })}=\frac{3}{2}$. Note that for every perfect matching $F$ of the graph $G_{\bigtriangleup }$ the graph $G_{\bigtriangleup }\backslash F$ is a 2-factor, therefore $$\begin{aligned} L(G_{\bigtriangleup }) &=&\frac{\left\vert V(G_{\bigtriangleup })\right\vert -w(G_{\bigtriangleup })}{2}, \\ l(G_{\bigtriangleup }) &=&\frac{\left\vert V(G_{\bigtriangleup })\right\vert -W(G_{\bigtriangleup })}{2}\end{aligned}$$where $w(G_{\bigtriangleup })$ and $W(G_{\bigtriangleup })$ denote the minimum and maximum number of odd cycles in a $2$-factor of $G_{\bigtriangleup }$, respectively. Since $\frac{L(G_{\bigtriangleup })}{l(G_{\bigtriangleup })}=\frac{3}{2}$ we have $$W(G_{\bigtriangleup })=\frac{\left\vert V(G_{\bigtriangleup })\right\vert +2w(G_{\bigtriangleup })}{3}.$$Taking into account that $W(G_{\bigtriangleup })\leq \frac{\left\vert V(G_{\bigtriangleup })\right\vert }{3}$, we have: $$\begin{aligned} W(G_{\bigtriangleup }) &=&\frac{\left\vert V(G_{\bigtriangleup })\right\vert }{3}, \\ w(G_{\bigtriangleup }) &=&0.\end{aligned}$$Note that $w(G_{\bigtriangleup })=0$ means that $G_{\bigtriangleup }$ is $3$-edge-colorable, which in its turn implies that $G$ is $3$-edge-colorable. The proof of the theorem is completed. $\square$ [99]{} R. 
Diestel, Graph theory, Springer-Verlag, Heidelberg/New York, 1997, 2000, 2005. F. Harary, Graph Theory, Addison-Wesley, Reading, MA, 1969. I. Holyer, The NP-completeness of edge coloring, SIAM J. Comput. 10 (4) (1981), pp. 718-720 (available at: http://cs.bris.ac.uk/ian/graphs). R. R. Kamalian, V. V. Mkrtchyan, On complexity of special maximum matchings constructing, Discrete Mathematics 308 (2008), pp. 1792-1800. R. R. Kamalian, V. V. Mkrtchyan, Two polynomial algorithms for special maximum matching constructing in trees, preprint, http://arxiv.org/abs/0707.2295. L. Lovász, M. D. Plummer, Matching theory, Ann. Discrete Math. 29 (1986). J. Monnot, S. Toulouse, The path partition problem and related problems in bipartite graphs, Operations Research Letters 35 (2007), pp. 677-684. D. B. West, Introduction to Graph Theory, Prentice-Hall, Englewood Cliffs, 1996. [^1]: email: arturkhojabaghyan@gmail.com [^2]: email: vahanmkrtchyan2002@{ysu.am, ipia.sci.am, yahoo.com}
{ "pile_set_name": "ArXiv" }
--- author: - 'Shao-Ping Li,' - 'Xin-Qiang Li,' - 'Xin-Shuai Yan' - 'and Ya-Dong Yang' bibliography: - 'reference.bib' title: 'Freeze-in Dirac neutrinogenesis: thermal leptonic CP asymmetry' --- Introduction {#sec:intro} ============ Recent developments in particle physics and cosmology, especially those related to neutrino masses, dark matter, and the baryon asymmetry of the Universe (BAU), have highlighted the importance of feeble couplings. Actually, feeble couplings are already present in the Yukawa couplings of the light charged fermions within the Standard Model (SM); *e.g.*, the SM predicts an electron Yukawa coupling with $y_e\simeq 10^{-6}$. If one also accepts feeble Yukawa couplings of the Dirac neutrinos, the smallness of neutrino masses can then be simply addressed via the Higgs-like mechanism with three right-handed Dirac neutrino singlets. Feeble couplings can also play an important role in the early Universe. For example, the feebleness allows a freeze-in production of the dark matter abundance, which can be effectively protected from large annihilation [@Hall:2009bx; @Bernal:2017kxu]. Moreover, the feebleness gives rise to a new type of leptogenesis, named Dirac neutrinogenesis (DN) [@Dick:1999je], in which the out-of-equilibrium condition for generating the lepton-number ($L$) asymmetry can be guaranteed and the baryon-number ($B$) asymmetry is generated via thermal sphaleron transitions [@Kuzmin:1985mm], even in a theory with $B-L=0$ initially. In typical versions of the DN mechanism [@Dick:1999je; @Murayama:2002je; @Cerdeno:2006ha; @Gu:2007mc; @Bechinger:2009qk; @Narendra:2017uxl], the lepton-number asymmetry is generated by the decays of heavy particles with a non-thermal distribution. In addition, when discussing the loop correction that provides a nonzero kinetic phase (i.e., the absorptive part of the decay amplitude), one usually focuses on the new-particle sector, as is also the case in seesaw-based leptogenesis [@Giudice:2003jh; @Buchmuller:2004nz; @Davidson:2008bu], while the contribution from the wavefunction correction in the lepton-doublet sector has, to the best of our knowledge, not yet been considered. There could be two possible reasons for having neglected the lepton-doublet wavefunction contribution. On the one hand, the charged-lepton flavors are widely assumed to populate the diagonal basis, and thus the leptonic CP asymmetry cannot be generated from self-energy diagrams in the lepton-doublet sector. Interestingly, however, it has been pointed out earlier that the well-known tri-bimaximal (TB) mixing pattern [@Harrison:2002er], with a minimal correction from the charged-lepton or neutrino sector, can produce compatible neutrino oscillation data while retaining its compelling predictions [@Albright:2008rp; @He:2011gb]. In this respect, a nontrivial combination of the charged-lepton and neutrino mixings is preferred to produce the oscillation observables. On the other hand, even with a non-diagonal charged-lepton Yukawa matrix, there is no on-shell cut in the self-energy loop in the zero-temperature regime, and hence no CP asymmetry either. Nevertheless, it has been illustrated in ref. [@Giudice:2003jh] and later implemented in ref. [@Hambye:2016sby] that, in the high-temperature regime where thermal effects come into play, the zero-temperature cutting rules should be superseded by thermal cuts [@Das1997], consequently allowing nonzero contributions to the leptonic CP asymmetry that would otherwise vanish in the vacuum regime.
Therefore, as will be exploited in this paper, when both thermal effects and nontrivial mixings in the charged-lepton and neutrino sectors are taken into account, one can expect the leptonic CP asymmetry at finite temperature to carry a nonzero imaginary piece, *i.e.*, $\text{Im}[(Y_\nu Y^\dagger_\nu)(Y_\ell Y^\dagger_\ell)]\neq0$, where $Y_{\ell}$ and $Y_{\nu}$ denote respectively the charged-lepton and neutrino Yukawa matrices that are responsible for their respective masses and mixings. This enables us to exploit a direct interplay between the BAU and the neutrino oscillation observables in a minimal setup, without tuning additional Yukawa couplings beyond $Y_{\ell, \nu}$. Furthermore, since the feeble neutrino Yukawa couplings essentially prompt an out-of-equilibrium condition (*i.e.*, the right-handed Dirac neutrinos undergo a freeze-in production in the early Universe), there is no need to invoke much heavier dynamical degrees of freedom (d.o.f), and the evolution of the lepton-number asymmetry can be much simplified as well. The remainder of this paper is organized as follows. We begin in section \[sec:2\] with a brief overview of the DN mechanism, and then calculate the leptonic CP asymmetry with two different thermal cuts in a model-independent way. The Boltzmann equation for the evolution of the lepton-number asymmetry in the freeze-in regime is also derived here. In section \[sec:3\], we discuss the nontrivial mixings in the charged-lepton and neutrino sectors by focusing on minimal corrections to the TB mixing pattern. In section \[sec:4\], we identify the scalars participating in the out-of-equilibrium decay and perform our detailed numerical analyses. Our conclusions are finally drawn in section \[sec:con\]. Thermal leptonic CP asymmetry and evolution {#sec:2} =========================================== Dirac neutrinogenesis --------------------- The basic idea of DN [@Dick:1999je] can be summarized as follows. In a theory without a lepton-number-violating Lagrangian, due to the feeble neutrino Yukawa couplings that prevent the left- and right-handed Dirac neutrinos from equilibration (dubbed “L-R equilibration” from now on), the leptonic CP asymmetry from a heavy-particle decay in the early Universe can result in a net lepton-number asymmetry stored in the right-handed Dirac neutrinos and lepton doublets. As the sphaleron transitions act only on the left-handed particles, the net lepton-number asymmetry stored in the lepton doublets will be partially converted to the baryon-number asymmetry via rapid sphaleron processes, while the portion stored in the right-handed Dirac neutrinos remains intact. After the sphaleron processes freeze out around $T\simeq \mathcal{O}(100)$ GeV, a net baryon-number (as well as lepton-number) asymmetry survives until today. In the thermal sphaleron epoch, $10^2~\text{GeV}<T<10^{12}~\text{GeV}$, all the SM species (except for the right-handed Dirac neutrinos) are kept in chemical equilibrium, and the conversion factor between the lepton- and baryon-number asymmetries is given by [@Harvey:1990qw] $$\begin{aligned} \label{sphaleron relation} Y_{\Delta B}=c\,Y_{\Delta (B-L)}=-c\,Y_{\Delta L},\end{aligned}$$ where $c=(8N_f+4N_H)/(22N_f+13N_H)$, with $N_{f}$ and $N_{H}$ denoting the numbers of fermion generations and Higgs doublets, respectively.
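As a small numerical aside, the conversion factor $c$ can be evaluated directly from the formula above; the snippet below is a minimal sketch that does so for $N_f=3$ with one and with two Higgs doublets (the cases relevant in the remainder of this paper), and involves no inputs beyond the formula itself.

```python
# Evaluate c = (8 N_f + 4 N_H) / (22 N_f + 13 N_H) for N_f = 3.
from fractions import Fraction

def sphaleron_c(Nf, NH):
    """Conversion factor c in Y_DB = -c * Y_DL."""
    return Fraction(8 * Nf + 4 * NH, 22 * Nf + 13 * NH)

for NH in (1, 2):
    c = sphaleron_c(3, NH)
    print(NH, c, float(c))
# -> 28/79 ~ 0.354 for one Higgs doublet, 8/23 ~ 0.348 for two
```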
To prompt the necessary out-of-equilibrium condition, so as to keep the generated lepton-number asymmetry from being washed out by the L-R equilibration, the lepton-number-violating thermal decay rate must be sufficiently smaller than the expansion rate of the Universe, which typically requires the Dirac neutrino Yukawa couplings to be $Y_\nu\lesssim \mathcal{O}(10^{-8})$. Such feeble couplings, despite their non-aesthetic nature, are generically present if the sub-eV Dirac neutrino masses are generated by the Higgs-like mechanism with a vacuum expectation value (VEV) around the electroweak scale. On the other hand, such a mechanism of neutrino mass generation is often criticized on account of *naturalness*, and dynamical explanations of the smallness of neutrino masses, obtained by enlarging the Yukawa space and/or introducing sufficiently heavy particles, are therefore often preferred; such constructions have also been considered in explicit realizations of the DN [@Murayama:2002je; @Cerdeno:2006ha; @Gu:2007mc; @Bechinger:2009qk; @Narendra:2017uxl]. Nonetheless, for these dynamical explanations with overabundant Yukawa parameters, reliable phenomenological predictions rely on particular bases and values of the unknown Yukawa couplings beyond those that can be directly fixed by the lepton flavor spectrum. In particular, a simple connection between the BAU and the low-energy neutrino oscillation observables cannot be established if the DN realization has nothing to do with the Yukawa couplings that are directly responsible for the lepton masses and mixings. Furthermore, it is difficult, if not impossible, to detect the additional sufficiently heavy particles at colliders. In this paper, since an underlying theory explaining the feebleness of the Yukawa couplings of both the light charged fermions and the neutrinos, if it exists, is still unknown, we shall take the feeble neutrino couplings as a starting point. In this context, we provide a new DN realization in which the decaying particle that generates the lepton-number asymmetry is in thermal equilibrium. For this purpose, we consider a Higgs-like doublet which has a vacuum mass around $\mathcal{O}(10^2)$ GeV and feeble couplings to the right-handed Dirac neutrinos. The feeble Yukawa couplings ensure that the right-handed Dirac neutrinos never reach equilibrium with the thermal bath. On top of that, the leptonic CP asymmetry is induced by the self-energy correction in the lepton-doublet sector due to thermal effects. This realization allows us to establish a simple connection between the BAU and the low-energy neutrino oscillation observables and, at the same time, renders the detection of the scalars at colliders *at least* possible in principle. Theoretical setup ----------------- In this subsection, we shall adopt the real-time formalism of thermal field theory to calculate the thermal leptonic CP asymmetry. To appreciate the subtlety that arises in calculating the CP asymmetry in thermal field theory as compared with non-equilibrium quantum field theory (QFT), we shall use two different thermal cuts and compare the corresponding consequences, which arise from the different dependence on the distribution functions. The evolution of the lepton-number asymmetry will be determined by a simplified Boltzmann equation in the freeze-in regime. ### Thermal field theory: real-time formalism There are two equivalent approaches in thermal field theory: the real-time and the imaginary-time formalisms [@Landsman:1986uw; @Das1997].
Within the real-time formalism, we do not need to perform analytic continuation for the physical region, but there is a doubling of d.o.f dual to each field presented in vacuum QFT. As a result, the interaction vertices are doubled, and the thermal propagators have a $2\times 2$ structure. In the following, we shall adopt this formalism to calculate the thermal leptonic CP asymmetry. ![\[thermal FeynRule\] Circling rules in doubled interaction vertices specified by different thermal indices $\pm$. Here $\mathcal{L}_{Y}$ can be either the Yukawa Lagrangian of the SM extended by a neutrino term or of the neutrinophilic two-Higgs-doublet model (2HDM), to be discussed later.](figures/vertices.pdf){width="82.00000%"} ![\[thermal FeynRule2\] Circling rules in thermal propagators. Here $\alpha$ and $\beta$ take the thermal indices $\pm$. Note that the propagator indices in $(c)$ and $(d)$ are completely determined by the uncircled ones, with $\dot{\alpha}$ and $\dot{\beta}$ taking the opposite signs of $\alpha$ and $\beta$, respectively. ](figures/GreenFun.pdf) Within the real-time formalism, while both the closed-time path formulation and the thermo-field dynamics can be used, we shall follow here the former [@Das1997]. In this formulation, the circling rules necessary for evaluating the absorptive part of the decay amplitude are given in figures \[thermal FeynRule\] (for interaction vertices) and \[thermal FeynRule2\] (for thermal propagators). In order to get a compact expression for the thermal propagators and a unified rule in writing the amplitude for each vertex, we adopt a convention in which the numerator factor $\slashed p \pm m$ of the fermion propagator is decomposed into a spin summation $\sum_s u^s \bar u^s (v^s \bar v^s)$, where the Dirac spinors would then be attached to each vertex. Thus, the thermal propagators, with the subscript indices $\pm$ specifying the corresponding matrix elements, can be written, explicitly, as $$\begin{aligned} G_{++}(p)&=\frac{i}{p^2-m^2+i \epsilon}\pm 2\pi f_{B/F}(\vert p^0 \vert)\delta(p^2-m^2),\\[0.2cm] G_{--}(p)&=\left(G_{++}(p)\right)^*,\\[0.2cm] G_{+-}(p)&=2\pi \left[\pm f_{B/F}(\vert p^0\vert)+\theta(-p^0)\right]\delta(p^2-m^2),\\[0.2cm] G_{-+}(p)&=2\pi \left[\pm f_{B/F}(\vert p^0\vert)+\theta(p^0)\right]\delta(p^2-m^2),\end{aligned}$$ where $f_{B/F}(E)=(e^{E/T}\mp 1)^{-1}$ are the standard distribution functions, with $B$ and $F$ referring to the bosons and fermions, respectively. $\theta(p^0)$ denotes the Heaviside step function. Note that the circled indices in figures \[thermal FeynRule2\]$(c)$ and \[thermal FeynRule2\]$(d)$ are completely determined by the uncircled ones, with $\dot{\alpha}$ and $\dot{\beta}$ taking the opposite signs of $\alpha$ and $\beta$, respectively. For example, the propagator in figure \[thermal FeynRule2\]$(c)$ with an uncircled thermal index $\alpha=+$ is given by $G_{+-}(p)$. ### Leptonic CP asymmetry: model-independent approach As a generic model-independent discussion, let us consider the neutrino Yukawa term, $$\begin{aligned} \label{Yukawa} -\mathcal{L}_\nu=Y_\nu \bar{L} \tilde{\Phi}\nu_R+ {\rm h.c.},\end{aligned}$$ added to the SM Lagrangian. Here we denote the lepton doublet by $L$, and assume that the Higgs doublet $\tilde{\Phi}\equiv i \sigma_2 \Phi^\ast$, with $\sigma_2$ being the Pauli matrix, does not populate well above the electroweak scale. It could be the SM Higgs doublet or a second Higgs doublet which may or may not couple to quarks. 
To forbid the appearance of Majorana neutrino mass term and, at the same time, to realize the DN, the right-handed neutrinos must carry a non-zero lepton number under some global $U(1)$ symmetry. After the Higgs doublet develops a non-vanishing VEV, $\langle\Phi\rangle=\left(0,v_\Phi/\sqrt{2}\right)^{T}$, the vacuum neutrino mass is then given by $m_\nu=v_\Phi Y_\nu/\sqrt{2}$. ![\[CPasym\] Leptonic CP asymmetry generated in $\Phi\to L\bar\nu$ decay at $\mathcal{O}(Y_\nu^2 Y_\ell^2)$, where the left and the right diagram represent the tree-level and the one-loop contribution, respectively.](figures/CPasym.pdf){width="86.00000%"} Now, let us consider the leptonic CP asymmetry generated in $\Phi \to L \bar\nu$ decay. Since $Y_\nu \ll Y_\ell$ is a generic condition for realizing the DN, we shall not consider the CP asymmetry at $\mathcal{O}(Y_\nu^4)$, which is the case in seesaw-based leptogenesis [@Buchmuller:2004nz]. Instead, we shall determine the CP asymmetry at $\mathcal{O}(Y_\nu^2 Y_\ell^2)$. At this order, the absorptive part of the decay amplitude could arise from self-energy diagrams in the lepton-doublet sector, as well as from vertex diagrams if $\Phi$ also couples to the right-handed charged leptons. Here, let us concentrate on the former. Note that the contribution from vertex diagrams is found to be of similar size as that from the self-energy diagrams in the SM Higgs case, and is even absent in the neutrinophilic 2HDM, as will be detailed in section \[sec:4\]. Then, the CP asymmetry may arise from the interference between tree and one-loop diagrams shown in figure \[CPasym\]. At zero-temperature regime, $T=0$, there is no on-shell cut for an electroweak scalar running in the loop. At high-temperature regime, however, due to thermal bath corrections, the propagators can be on shell, producing therefore a nonzero absorptive part in the amplitude [@Giudice:2003jh; @Hambye:2016sby]. The amplitude for $\Phi \to L \bar\nu$ decay can be defined as $i \mathcal{M}\equiv c_0 I_0+c_1 I_1$, where the coupling constants have been factored out into $c_{0,1}$, while all the other factors are contained in $I_{0,1}$, with the subscripts $0$ and $1$ referring respectively to the contributions from tree and one-loop diagrams shown in figure \[CPasym\]. The thermal leptonic CP asymmetry is then given by $$\begin{aligned} \label{CP definition} \epsilon_D\equiv \frac{\Gamma(\Phi\to L \bar\nu)-\Gamma(\bar\Phi\to \bar L \nu)}{\Gamma(\Phi\to L \bar\nu)+\Gamma(\bar\Phi\to \bar L \nu)} \simeq -2\frac{\text{Im}(c_0^* c_1)}{\vert c_0\vert^2} \frac{\text{Im}(I_0^*I_1)}{\vert I_0\vert^2},\end{aligned}$$ where the second equation is obtained in the rest frame of $\Phi$. As all the charged-lepton (neutrino) flavors are in (out of) L-R equilibration before the sphaleron transition decouples, an implicit summation over all final lepton flavors is assumed in the decay. With the flavor indices being specified in figure \[CPasym\], we have $c_0=Y_{\nu,ij}$ and $c_1=Y_{\nu, kj} Y^*_{\ell,kl} Y_{\ell, il}$. It can be seen that diagonal $Y_\nu$ or $Y_\ell$ would lead to $\text{Im}(c_0^* c_1)=0$. 
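This statement about the coupling structure can be illustrated numerically. The snippet below is a rough check with randomly generated complex Yukawa matrices (placeholder textures, not fits to the lepton data): the combination $(Y_\nu Y_\nu^\dagger)_{ki}(Y_\ell Y_\ell^\dagger)_{ik}$ for a fixed flavor pair $k\neq i$ generically carries a nonzero imaginary part, it vanishes when either Yukawa matrix is diagonal, and the flavor-summed trace is always real, anticipating the need for the flavor-dependent kinematic function derived below.

```python
# Illustrative check with random complex Yukawa textures (placeholders only).
import numpy as np

rng = np.random.default_rng(0)
rand = lambda: rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

def im_ki(Ynu, Yl, k=1, i=0):
    """Im[(Y_nu Y_nu^†)_{ki} (Y_l Y_l^†)_{ik}] for one flavor pair k != i."""
    A = Ynu @ Ynu.conj().T
    B = Yl @ Yl.conj().T
    return np.imag(A[k, i] * B[i, k])

Ynu, Yl = rand(), rand()
print(im_ki(Ynu, Yl))                              # generic textures: non-zero
print(im_ki(Ynu, np.diag(np.diag(Yl))))            # diagonal Y_l:  0
print(im_ki(np.diag(np.diag(Ynu)), Yl))            # diagonal Y_nu: 0

# the flavor-summed trace is real by hermiticity, which is why the
# flavor-dependent kinematics discussed below is essential
A, B = Ynu @ Ynu.conj().T, Yl @ Yl.conj().T
print(np.imag(np.trace(A @ B)))                    # ~ 0 (machine precision)
```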
Taking into account the thermal masses and neglecting the small neutrino masses, we obtain the tree-level amplitude squared as $$\begin{aligned} \vert I_0\vert^2=M_\Phi^2(T)-m_{L_i}^2(T).\end{aligned}$$ Here we should mention that thermal corrections to fermions would modify the Dirac equation for a spinor $\psi$ to $[(1+a)\slashed p+b\slashed u] \psi=0$ [@Weldon:1982bn], where $u$ is the four-velocity of the thermal bath, and $a,b$ are temperature-dependent functions (see *e.g.*, ref. [@Giudice:2003jh]). The $a,b$ functions would also modify the fermion propagators and hence the dispersion relations, making the expressions for spin summation and propagator poles quite lengthy and involved. Nevertheless, as illustrated in ref. [@Giudice:2003jh], to a good approximation, both the dispersion relation and the modified Dirac equations can be simplified by replacing the vacuum mass with the thermal one, *i.e.*, $p^2\simeq m^2(T)$. We shall confine ourselves to adopt this approximation in the subsequent calculations. ### Thermal effects: time-ordered and retarded/advanced cuts {#sec:TO&RAcuts} To calculate the leptonic CP asymmetry arising purely from thermal effects, one can either use non-equilibrium QFT or thermal field theory outlined above. However, it was originally noticed that there exists a discrepancy between these two approaches in calculating the CP asymmetry in seesaw-based leptogenesis, *i.e.*, in the heavy Majorana neutrino decay $N\to H L$ [@Garny:2009rv; @Garny:2009qn]. In ref. [@Garny:2010nj], on the other hand, it was demonstrated that both approaches can yield the same result for this process, if the conventional time-ordered (TO) cut [@Kobes:1986za; @Kobes:1990ua; @Gelis:1997zv; @Giudice:2003jh] is replaced by the retarded/advanced product (dubbed as retarded/advanced (RA) cut hereafter for comparison) [@Kobes:1990kr; @Kobes:1990ua]. In this paper, we shall stick to the thermal real-time formalism, but take the subtlety into account by using both the TO and RA cuts. ![\[thermalcuts\] Thermal TO cuts (circlings) for producing a non-vanishing CP asymmetry in $\Phi\to L_i \bar\nu_{j}$ decay. The external thermal indices are fixed to $+$, while the internal index is summed over $\pm$.](figures/CPasym-Cut-1.pdf "fig:"){width="45.00000%"} ![\[thermalcuts\] Thermal TO cuts (circlings) for producing a non-vanishing CP asymmetry in $\Phi\to L_i \bar\nu_{j}$ decay. The external thermal indices are fixed to $+$, while the internal index is summed over $\pm$.](figures/CPasym-Cut-2.pdf "fig:"){width="45.00000%"} With our definitions of the amplitudes $I_{0,1}$, the imaginary (absorptive) part of the product $I_0^* I_1$ in eq.  can be written as $$\begin{aligned} \label{imagianry amp} \text{Im}(I_0^* I_1)=\frac{1}{2i}I_0^* \sum_{\text{circling}}I_1.\end{aligned}$$ With the TO-cutting scheme, there are two circling diagrams contributing to the CP asymmetry, as shown in figure \[thermalcuts\]. 
Summing the internal thermal index over $\pm$, while fixing the external indices to $+$, we can write the corresponding amplitudes as $$\begin{aligned} I_1^{(a)} &= i\int\frac{d^4 k}{(2\pi)^4} \left(\bar{u}_{L_i} P_R\, u_{e_l} \right)\left(\bar{u}_{e_l} P_L u_{L_k} \right)\left(\bar{u}_{L_k} P_R v_{\nu_j} \right)\,\nonumber\\[0.1cm] &\hspace{0.5cm} \times G^F_{++}(p_i)\,G^F_{+-}(k)\,G^B_{+-}(p_i-k), \\[0.2cm] I_1^{(b)} & = -i\int\frac{d^4 k}{(2\pi)^4} \left(\bar{u}_{L_i} P_R\, u_{e_l} \right)\left(\bar{u}_{e_l} P_L u_{L_k} \right)\left(\bar{u}_{L_k} P_R v_{\nu_j} \right)\,\nonumber\\[0.1cm] &\hspace{0.5cm} \times G^F_{--}(p_i)\,G^F_{-+}(k)\,G^B_{-+}(p_i-k),\end{aligned}$$ where the superscripts $B$ and $F$ denote the bosonic and fermionic propagators, respectively. The absorptive part for the TO cut is then determined to be $$\begin{aligned} \label{ImIoI1} \text{Im}(I_0^* I_1)^{ \text{TO}} &= \frac{1}{2 (2\pi)^2}\int d\omega\; \vert\boldsymbol{k}\vert^2 d \vert\boldsymbol{k}\vert\; d\cos\theta \; d\varphi \times \text{Tr} \nonumber \\[0.1cm] &\times \frac{1}{p_i^2-m_{L_k}^2} \times \delta[k^2-m_{e_l}^2]\times \delta[(p_i-k)^2-M_H^2] \nonumber \\[0.1cm] &\times \Big\{ \left[\theta(-\omega)-f_F(\vert \omega\vert)\right]\cdot \left[\theta(-(E_i-\omega))+f_B(\vert E_i-\omega\vert)\right] \nonumber \\[0.1cm] &\hspace{0.4cm} + \left[\theta(\omega)-f_F(\vert \omega\vert)\right]\cdot\left[\theta(E_i-\omega)+f_B(\vert E_i-\omega\vert)\right] \Big\},\end{aligned}$$ where the four-momenta $k$ and $p_i$ are decomposed, respectively, as $k=(\omega,\boldsymbol{k})$ and $p_i=(E_i, \boldsymbol{p_i})$, while $\cos\theta\equiv \boldsymbol{p_i}\cdot \boldsymbol{k}/\vert\boldsymbol{p_i}\vert \vert\boldsymbol{k}\vert$. The trace from spin summation is given by $$\begin{aligned} \text{Tr}=(k\cdot p_i) \left(4 q\cdot p_i-2m_{L_i}^2\right)-2m_{L_i}^2 (k\cdot q).\end{aligned}$$ To perform the integration in eq. , a convenient way is to integrate firstly over $\cos\theta$ via $\delta[(p_i-k)^2-M_H^2]$, then over $\vert\boldsymbol{k}\vert$ via $\delta[k^2-m_{e_l}^2]$, and finally over $\omega$. Due to the appearance of Heaviside step functions in eq. , however, we must determine the sign of $\omega$ before performing the integration over $\omega$. To this end, remembering that $M_{H}$ is much larger than $m_{L_i}$ and $m_{e_l}$, the presence of $\delta[(p_i-k)^2-M_H^2]$, together with $\delta[k^2-m_{e_l}^2]$, implies that $$\begin{aligned} \label{deltamil} \Delta m_{il}^2\equiv m_{L_i}^2+m_{e_l}^2-M_{H}^2=2p_i\cdot k=2(E_i \omega -\vert\boldsymbol{p_i}\vert \vert\boldsymbol{k}\vert \cos\theta)<0. \end{aligned}$$ With $-1\leqslant\cos\theta \leqslant 1$, it can then be found that $$\begin{aligned} \label{oemga sign} \omega<0,\quad E_i-\omega>0. \end{aligned}$$ As a consequence, the overall dependence of eq.  
on the distribution functions is now simplified as $$\begin{aligned} \label{quadratic dependence} N(\omega)\equiv f_B(\vert E_i-\omega\vert)-f_F(\vert \omega\vert)-2f_F(\vert \omega \vert)f_B(\vert E_i-\omega\vert).\end{aligned}$$ The final integration region of $\omega$ is determined by $$\begin{aligned} -1\leqslant\frac{\Delta m_{il}^2-2E_i \omega}{-2\vert\boldsymbol{p_i}\vert \vert\boldsymbol{k}\vert}\leqslant 1,\end{aligned}$$ where $\vert\boldsymbol{k}\vert$ obeys the approximate dispersion relation $\vert\boldsymbol{k}\vert =\sqrt{\omega^2-m_{e_l}^2}$, therefore resulting in $\omega_{min}\leqslant\omega\leqslant\omega_{max}$, with $$\begin{aligned} \omega_{min(max)}=\frac{1}{4\,M_{\Phi}\,m_{L_i}^2}\left[\Delta m_{il}^2\, (M_{\Phi}^2+m_{L_i}^2)\mp (M_\Phi^2-m_{L_i}^2)\,\sqrt{\Delta m_{il}^4-4m_{e_l}^2 m_{L_i}^2}\,\right],\end{aligned}$$ in the limit of vanishing neutrino masses. Our final expression for the CP asymmetry is then given by $$\begin{aligned} \label{CP result} \epsilon_D = -2\,\frac{1}{\sum\limits_i (Y_\nu Y^\dagger_\nu)_{ii}(M_\Phi^2-m_{L_i}^2)} \sum\limits_{i\neq k}\text{Im}[(Y_\nu Y_\nu^\dagger)_{ki}(Y_{\ell,il}Y^\dagger_{\ell,lk})]\, \mathcal{F}(M_\Phi^2,m_{L_i}^2,m_{L_k}^2,m_{e_l}^2),\end{aligned}$$ where the scalar function is defined as $$\begin{aligned} \label{scalar fun} \mathcal{F}(M_\Phi^2,m_{L_i}^2,m_{L_k}^2,m_{e_l}^2)&=\frac{1}{8\pi}\frac{M_\Phi^2}{(M_\Phi^2-m_{L_i}^2)(m_{L_i}^2-m_{L_k}^2)} \nonumber \\[0.1cm] &\times \int_{\omega_{min}}^{\omega_{max}} d\omega \left(\Delta m_{il}^2 M_\Phi-2m_{L_i}^2\omega\right) N(\omega).\end{aligned}$$ It should be emphasized here that the dependence of $\mathcal{F}$ on the index $l$ comes from the charged-lepton Yukawa coupling contribution to the thermal lepton mass $m_{e_l}^2$ present in $\Delta m_{il}^2$ (see eq. ). As $\Delta m_{il}^2$ is dominated by contributions from the gauge and top-quark Yukawa couplings (the thermal masses will be discussed later in section \[sec:4\]), it can be inferred that the $l$ dependence is very weak, so that the CP asymmetry effectively carries the imaginary piece $\text{Im}[(Y_\nu Y_\nu^\dagger)_{ki}(Y_{\ell}Y^\dagger_{\ell})_{ik}]$, as already mentioned in the Introduction. Furthermore, the $i$ dependence coming from $M_\Phi^2-m_{L_i}^2$ is also overwhelmed by the contributions from the gauge couplings, as well as the possibly sizable scalar potential parameters and quark Yukawa couplings. However, as the dominant contributions from the gauge couplings cancel out in $m_{L_i}^2-m_{L_k}^2$, the $i,k$ dependence coming from $m_{L_i}^2-m_{L_k}^2$ cannot be neglected. Thus, the CP asymmetry given by eq.  displays a nontrivial dependence on the indices $i,k$. Using the TO cut, we have obtained a quadratic dependence of the CP asymmetry on the distribution functions, as can be seen from eq. . Such a quadratic dependence was also derived for the thermal Higgs decay $H\to NL$ in ref. [@Giudice:2003jh]. Within the non-equilibrium QFT framework [@Frossard:2012pc], however, the dependence was found to be linear in $f_H+f_L$ for the same decay [@Hambye:2016sby]. Following the argument made in ref. [@Garny:2010nj], we now turn to use the RA cut to determine the absorptive part, and check whether such a linear dependence can be reproduced in our case.
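As a practical aside, the scalar function $\mathcal{F}$ defined above is straightforward to evaluate numerically for either choice of statistical factor. The snippet below is a minimal sketch that assumes the $\Phi$ rest frame with a massless neutrino, so that $E_i=(M_\Phi^2+m_{L_i}^2)/(2M_\Phi)$, and treats the thermal masses (including the loop-scalar mass $M_H$ entering $\Delta m_{il}^2$) as external inputs; the function name and all numerical values are illustrative placeholders, not part of the analysis of section \[sec:4\].

```python
# Minimal numerical sketch of the scalar function F (Phi rest frame, massless
# neutrino); thermal masses in GeV are passed in by hand.  The statistical
# factor can be switched between the time-ordered N(omega) above and the
# linear RA combination f_B + f_F discussed below.
import numpy as np
from scipy.integrate import quad

f_B = lambda E, T: 1.0 / (np.exp(E / T) - 1.0)   # Bose-Einstein
f_F = lambda E, T: 1.0 / (np.exp(E / T) + 1.0)   # Fermi-Dirac

def calF(MPhi, mLi, mLk, mel, MH, T, cut="TO"):
    Ei = (MPhi**2 + mLi**2) / (2.0 * MPhi)        # lepton-doublet energy
    dm2 = mLi**2 + mel**2 - MH**2                 # Delta m_il^2 (< 0)
    root = np.sqrt(dm2**2 - 4.0 * mel**2 * mLi**2)
    wmin = (dm2 * (MPhi**2 + mLi**2) - (MPhi**2 - mLi**2) * root) / (4 * MPhi * mLi**2)
    wmax = (dm2 * (MPhi**2 + mLi**2) + (MPhi**2 - mLi**2) * root) / (4 * MPhi * mLi**2)

    def N(w):                                     # statistical factor
        fB, fF = f_B(abs(Ei - w), T), f_F(abs(w), T)
        return fB + fF if cut == "RA" else fB - fF - 2.0 * fF * fB

    integral, _ = quad(lambda w: (dm2 * MPhi - 2.0 * mLi**2 * w) * N(w), wmin, wmax)
    return MPhi**2 * integral / (8.0 * np.pi * (MPhi**2 - mLi**2) * (mLi**2 - mLk**2))

# placeholder thermal masses of order the couplings times T, at T = 1 TeV
print(calF(MPhi=600., mLi=200., mLk=210., mel=130., MH=580., T=1000., cut="TO"))
print(calF(MPhi=600., mLi=200., mLk=210., mel=130., MH=580., T=1000., cut="RA"))
```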
With our convention, the imaginary amplitude is given by $$\begin{aligned} \text{Im}(I_0^*I_1)^{\text{R/A}}=\mp \frac{1}{2i}I_0^* \sum^{RA\;\text{cut}}_{\text{circling}}I_1,\end{aligned}$$ where $\mp$ correspond to the results with the retarded/advanced cut, respectively. In this context, only the circling diagram shown in figure \[thermalcuts\](a) contributes, leading to $$\begin{aligned} \label{RAcut_ImI0I1} \text{Im}(I_0^*I_1)^{\text{R/A}}&=\mp \frac{1}{2i}\int \frac{d^4 k}{(2\pi)^4} \left(\bar{u}_{L_i} P_R v_{\nu_j} \right)^* \left(\bar{u}_{L_i} P_R u_{e_l} \right)\left(\bar{u}_{e_l} P_L u_{L_k} \right)\left(\bar{u}_{L_k} P_R v_{\nu_j} \right) \nonumber \\[0.1cm] &\hspace{0.5cm} \times \left[D_{L_k}(p_i)D^+_{e_l}(k)D_{H}^+(p_i-k)-D_{L_k}(p_i)D^-_{e_l}(k)D_{H}^-(p_i-k)\right] \nonumber \\[0.2cm] &=\pm\frac{1}{2(2\pi)^2}\int d^4k \frac{1}{m_{L_i}^2-m_{L_k}^2}\delta[k^2-m_{e_l}^2]\delta[(p_i-k)^2-M_H^2] \times \text{Tr} \nonumber \\[0.1cm] &\hspace{0.5cm} \times \left[f_F(-\omega)+f_B(E_i-\omega)\right],\end{aligned}$$ where the thermal propagators with the RA cut are given by $$\begin{aligned} D(p)=G_{++}(p), \quad D^-(p)=G_{+-}(p), \quad D^+(p)=G_{-+}(p),\end{aligned}$$ and eq.  has been used. It can be clearly seen from eq.  that the RA-cutting scheme does lead to a linear dependence on the distribution function $f_B+f_F$. At the same time, the imaginary amplitude, $\text{Im}(I_0^*I_1)^{\text{TO}}$, obtained with the TO-cutting scheme can be reconciled to match the retarded amplitude, $\text{Im}(I_0^*I_1)^{\text{R}}$, via the following replacement for the distribution functions: $$\begin{aligned} \label{distribution dependence} f_B-f_F-2f_B f_F\to f_B+f_F. \end{aligned}$$ It should be mentioned that the linear dependence obtained in ref. [@Hambye:2016sby] is also based on a retarded self-energy cut, though within the non-equilibrium QFT framework. In conclusion, using the real-time formalism in thermal field theory, a quadratic dependence on the distribution functions is obtained under the conventional TO-cutting scheme, while a linear dependence appears in the RA-cutting scheme. It is also found that, albeit with a different dependence on the distribution functions, the retarded amplitude can be simply obtained from the TO result with the replacement specified by eq. , which has also been observed in ref. [@Garny:2010nj]. ### Simplified Boltzmann equation: freeze-in evolution With the CP asymmetry in hand, we now proceed to determine the evolution of the lepton-number asymmetry based on the Boltzmann equation. The general Boltzmann equation for species $X$ participating in the process $A+B\rightleftarrows C+X$ reads $$\begin{aligned} \label{Boltzmann eq} \dot{n}_X+3H n_X & = \int d\Pi_X\, d\Pi_A\, d\Pi_B\, d\Pi_C\, (2\pi)^4\,\delta^{(4)}(p_A+p_B-p_C-p_X) \\[0.1cm] &\hspace{-2.0cm} \times \Big[\vert\mathcal{M}_{A+B\to C+X}\vert^2\,f_A f_B (1\pm f_C)(1\pm f_X) -\vert \mathcal{M}_{C+X\to A+B}\vert^2 f_C f_X (1\pm f_A)(1\pm f_B) \Big],\nonumber\end{aligned}$$ where the phase-space factor is given by $d\Pi_i=d^3p_i/(2\pi)^32E_i$. The Dirac delta function $\delta^{(4)}(p_A+p_B-p_C-p_X)$ enforces the four-momentum conservation in collisions. The amplitude squared is obtained by summing over the initial- and final-state spins but without average. The factors $1\pm f_i$ correspond to the Bose enhancement and the Pauli blocking effect, respectively. 
The Hubble parameter at radiation-dominated flat Universe is given by $H=1.66\sqrt{g^\rho_\ast}\,T^2/M_{Pl}$, where $g^\rho_*$ denotes the relativistic d.o.f at temperature $T$, and $M_{Pl}=1.2\times 10^{19}$ GeV is the Planck mass. In the early Universe, right-handed neutrinos are produced only through feeble Yukawa interactions. Thus, the production is essentially out-of-equilibrium and the particle number density is therefore negligibly small. Such a freeze-in production mechanism [@Hall:2009bx] effectively prevents large washout effects in the neutrino number density (as well as the lepton-number asymmetry generated therein) from inverse decay and annihilation scattering. In this context, the Boltzmann equation for the lepton-number asymmetry generated in $\Phi\to L \bar \nu$ decay can be much simplified by neglecting the inverse decay and annihilation scattering, because these processes are proportional to the negligible particle-number density. As a consequence, the lepton-number asymmetry can be accumulated as the right-handed neutrinos are produced and converted to the baryon-number asymmetry via rapid sphaleron transitions. Since the lepton-number asymmetry stored in the right-handed neutrinos is equal but with an opposite sign to that in the lepton doublets, we can determine it in either sector. For the lepton doublet, the Boltzmann equation can be simplified as $$\begin{aligned} \dot{n}_L+3H n_L =\int d\Pi_{\Phi} f^{eq}_\Phi \int d\Pi_{\nu} d\Pi_L\, (2\pi)^4\delta^{(4)}(p_\Phi-p_{\nu}-p_L)\,\vert\mathcal{M}(\Phi\to L \bar\nu)\vert^2,\end{aligned}$$ where, as an approximation, we have set the quantum statistic factors $1\pm f\approx1$. In addition, as we consider a thermal particle with vacuum mass around $\mathcal{O}(10^2)$ GeV, for a simple estimation, we shall use $f^{eq}_{\Phi}=e^{-E/T}$ for the phase-space integration. With our definition of the CP asymmetry $\epsilon_D$ (see eq. ), the evolution of the lepton-number asymmetry $n_{\Delta L}\equiv n_L-n_{\bar L}$ can be written as $$\begin{aligned} \label{lepton-asymmetry Boltzmann } \dot{n}_{\Delta L}+3Hn_{\Delta L}&\approx \int d\Pi_\Phi f_{\Phi}^{eq}\, 2\,g_{\Phi} \,M_{\Phi}\,\left[\Gamma(\Phi \to L \bar\nu)-\Gamma(\bar\Phi\to \bar L \nu)\right] \nonumber\\[0.1cm] &=\int d\Pi_{\Phi}\,f_{\Phi}^{eq} \times 2\,g_{\Phi}\,M_{\Phi} \times 2\,\epsilon_D \times \Gamma(\Phi\to L \bar\nu),\end{aligned}$$ where $g_{\Phi}=4$ accounts for the four d.o.f of the Higgs doublet $\Phi$. In the sphaleron-active epoch, the relativistic d.o.f $g_*^s$ present in the entropy density, $s=T^3 g_*^s 2\pi^2/45$, can be treated as a constant, and thus we can use the yield definition $Y=n/s$, with $\dot{s}=-3Hs$, and $\dot{T}=-H T$, to rewrite the above equation as $$\begin{aligned} \label{lepton-asymmetry Boltzmann2} \frac{dY_{\Delta L}}{dT}=-\frac{1}{sH}\,\frac{1}{\pi^2}\,g_{\Phi}\,\epsilon_D\,M_{\Phi}^2\, K_1(M_{\Phi}/T)\,\Gamma(\Phi\to L\bar\nu).\end{aligned}$$ Here $K_1$ denotes the first modified Bessel function of the second kind. In the rest frame of $\Phi$, the decay width is given by $$\begin{aligned} \label{decay rate} \Gamma(\Phi \to L \bar\nu)&=\sum\limits_{i=e,\mu,\tau}\frac{1}{8\pi}\,(Y_\nu Y_\nu^\dagger)_{ii}\,\frac{\vert \boldsymbol{p_i}\vert}{M_{\Phi}^2}\,(M_{\Phi}^2-m_{Li}^2),\end{aligned}$$ where $\vert\boldsymbol{p_i}\vert=(M_{\Phi}^2-m_{Li}^2)/(2M_{\Phi})$ is the momentum of the two final-state particles, and the neutrino masses have been neglected. 
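Since the right-hand side of the simplified Boltzmann equation above does not depend on $Y_{\Delta L}$ itself, the asymmetry at the sphaleron decoupling temperature follows from a single quadrature over $T$. The snippet below is a toy sketch in which $M_\Phi$, $\epsilon_D$ and $\Gamma(\Phi\to L\bar\nu)$ are frozen at placeholder values purely for illustration (in the actual analysis they depend on temperature through the thermal masses and couplings); it is not a substitute for the numerical treatment of section \[sec:4\].

```python
# Toy quadrature of dY_DL/dT; all inputs below are placeholders, not fits.
import numpy as np
from scipy.special import kn          # modified Bessel function K_n

M_Pl = 1.2e19                         # GeV
g_rho = g_s = 106.75                  # relativistic d.o.f.
g_Phi = 4                             # d.o.f. of the Higgs doublet

def dY_dT(T, M_Phi, eps_D, Gamma):
    s = 2 * np.pi**2 / 45 * g_s * T**3              # entropy density
    H = 1.66 * np.sqrt(g_rho) * T**2 / M_Pl         # Hubble rate
    return -g_Phi * eps_D * M_Phi**2 * kn(1, M_Phi / T) * Gamma / (np.pi**2 * s * H)

T = np.logspace(12, 2, 4000)                        # 10^12 GeV down to 10^2 GeV
y = dY_dT(T, M_Phi=1.0e2, eps_D=1.0e-6, Gamma=1.0e-16)   # placeholder inputs
Y_DL = np.sum(0.5 * (y[1:] + y[:-1]) * (T[1:] - T[:-1])) # trapezoid, T decreasing
Y_DB = -(28.0 / 79.0) * Y_DL                        # sphaleron relation, SM value of c
print(Y_DB)
```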
Up to now, we have obtained a nonzero kinetic phase contained in $\text{Im}(I_0^*I_1)$ within the framework of thermal field theory. To generate a non-vanishing lepton-number asymmetry, however, non-diagonal Yukawa couplings $Y_{\ell,\nu}$ are also required. In the next section, we turn to exploit the Yukawa structures that can generate a nonzero coupling phase contained in $\text{Im}(c_0^* c_1)$ and, at the same time, produce compatible neutrino oscillation data. Non-diagonal Yukawa textures in the lepton sector {#sec:3} ================================================= When the lepton-number asymmetry is generated via the freeze-in production of right-handed neutrinos, the washout effects from inverse decay and annihilation scattering are negligible. Therefore, the flavor effects encoded in these washout processes, *e.g.*, $\Phi \bar\nu_{i}\rightleftharpoons \bar \Phi \nu_{j}$, would not play a significant role in the Boltzmann evolution. However, as mentioned already below eq. , the dependence of $\mathcal{F}$ on the indices $i, k$ from the lepton-doublet propagator cannot be neglected. In fact, such a dependence is crucial to induce a non-vanishing CP asymmetry $\epsilon_D$, because otherwise the imaginary coupling sector would vanish, *i.e.*, $\text{Im}[\text{tr}(Y_\nu Y^\dagger_\nu\,Y_\ell Y^\dagger_\ell)]=0$. Furthermore, a nonzero $\epsilon_D$ also requires the Yukawa matrices $Y_{\ell,\nu}$ to be non-diagonal. Therefore, after summing over the lepton flavors, the CP asymmetry is still texture dependent, which is a generic feature of leptogenesis [@Branco:2002kt]. As a consequence, the freeze-in DN considered here essentially puts us towards the flavor puzzle in particle physics, on which no consensus has been hitherto reached. Especially, it is not known *a priori* whether a flavor basis, in which the charged-lepton (or neutrino) Yukawa matrix is diagonal while the other is not, should be used as a natural setup, even though the charged-lepton flavors are often assumed to populate in the diagonal space for most flavor studies. On the other hand, the SM is often extended by introducing sufficient dynamical fields and free parameters, which might break the freeze-in DN or result in the BAU explanation via other avenues. In order not to spoil the formulation and results obtained up to now in this paper, we shall consider the lepton Yukawa textures from a phenomenological point of view. With such a perspective in mind, we make a general assumption: the non-trivial Yukawa textures are induced by some underlying flavor mechanism, such that it forbids us to choose the otherwise arbitrary flavor basis, in particular, the basis in which either the charged-lepton or the neutrino Yukawa matrix is diagonal. This can be realized, *e.g.*, by some symmetry-based ansatz in which a preferred basis is constructed under the symmetry invariance. In addition, we shall postulate that the observed pattern of the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix [@Pontecorvo:1957cp; @Maki:1962mu] is induced by the TB mixing with a minimal correction from the charged-lepton or neutrino sector [@Albright:2008rp]. 
The well-known TB mixing pattern has a mass-independent form [@Harrison:2002er] $$\begin{aligned} V_{TB} =\left( \begin{array}{ccc} \sqrt{\frac{2}{3}}\,\, & \frac{1}{\sqrt{3}}\,\, & 0 \\[0.2cm] -\frac{1}{\sqrt{6}}\,\, & \frac{1}{\sqrt{3}}\,\, & \frac{1}{\sqrt{2}} \\[0.2cm] -\frac{1}{\sqrt{6}}\,\, & \frac{1}{\sqrt{3}}\,\, & -\frac{1}{\sqrt{2}} \\ \end{array} \right).\end{aligned}$$ Although the pure TB mixing is already excluded by the observed nonzero reactor angle $\theta_{13}$ (see *e.g.*, refs. [@Esteban:2016qun; @deSalas:2017kay; @Esteban:2018azc; @Capozzi:2018ubv] for recent global analyses), it has been pointed out that, with a minimal correction from the charged-lepton or neutrino sector, this mixing pattern can readily produce the compatible neutrino oscillation data, while retaining the predictability and testability of the relations among the mixing angles [@Albright:2008rp; @He:2011gb]. As demonstrated in ref. [@He:2011gb], there exist four possible minimal corrections to the TB mixing pattern that are still compatible with the current PMNS data at $3\sigma$ level. According to which column or row of the TB matrix is invariant under the minimal corrections, the modified patterns can be classified as $\text{TM}_i$ (invariance of the $i$-th column) and $\text{TM}^i$ (invariance of the $i$-th row) [@Albright:2008rp]. Thus, on account of the observations made in refs. [@Albright:2008rp; @He:2011gb], we have $$\begin{aligned} \label{four patterns} \text{TM}_1:\;\;U&=V_{TB}R_{23},\qquad \text{TM}_2:\;\;U=V_{TB}R_{13}, \nonumber \\[0.2cm] \text{TM}^2:\;\;U&=R_{13}V_{TB},\qquad \text{TM}^3:\;\;U=R_{12}V_{TB},\end{aligned}$$ with the unitary Euler rotation matrices given, respectively, by $$\begin{aligned} R_{12}(\theta)&=\left( \begin{array}{ccc} \cos\theta\; & \sin\theta e^{i\varphi}\; & 0\\[0.1cm] -\sin\theta e^{-i\varphi }\; & \cos\theta\; & 0\\[0.1cm] 0\; & 0\; & 1\\ \end{array} \right),\quad R_{13}(\theta)=\left( \begin{array}{ccc} \cos\theta\; & 0\; & \sin\theta e^{i\varphi }\\[0.1cm] 0\; & 1\; & 0\\[0.1cm] -\sin\theta e^{-i\varphi}\; & 0\; & \cos\theta \\ \end{array} \right), \nonumber \\[0.2cm] R_{23}(\theta )&=\left( \begin{array}{ccc} 1\;& 0\; & 0\\[0.1cm] 0\;& \cos\theta\; & \sin\theta e^{i\varphi}\\[0.1cm] 0\;& -\sin\theta e^{-i\varphi }\; & \cos\theta\\ \end{array} \right),\end{aligned}$$ where $0\leq\theta\leq\pi$ and $0\leq\varphi\leq2\pi$. With the convention $M_f=V_f^{L\dagger} \hat{M}_f V_f^R$, where $M_f$ and $\hat{M}_f$ represent the mass matrix before and after the diagonalization, the flavor (primed) and the mass (un-primed) eigenstates are transformed to each other via the relations $f_{L(R)}^\prime =V_f^{L(R)\dagger} f_{L(R)}$, and the PMNS matrix present in the $W$-mediated charged current $\bar{\ell}_L U \gamma_\mu \nu_L$ is given by $U=V_\ell^L V_\nu ^{L \dagger}$. Then, in accordance with eq. , the charged-lepton (neutrino) mixing matrix in patterns $\text{TM}_{1,2}$ would be given by $V_\ell^L=V_{TB}$ ($V_\nu^L=R_{23}^\dagger, R_{13}^\dagger$), while the matrix for neutrinos (charged leptons) turns out to be $V_\nu^L=V_{TB}^\dagger$ ($V_\ell^L=R_{13}, R_{12}$) in patterns $\text{TM}^{2,3}$. The product of Yukawa matrices can also be rewritten in terms of the mixing and physical mass matrices as $$\begin{aligned} Y_f Y_f^\dagger =\frac{2}{v_f^2}V_f^{L \dagger} \hat{M}_f^2 V^L_f,\end{aligned}$$ where $v_f$ denote the VEVs developed by the Higgs doublets responsible for generating the charged-lepton and neutrino masses, respectively. 
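As an illustration of these patterns, the snippet below is a minimal sketch that constructs the $\text{TM}_1$ matrix $U=V_{TB}R_{23}(\theta,\varphi)$ for arbitrary example angles, so that its moduli can be compared with the measured ranges of $\vert U\vert$ and its CP-violating invariant with the analytic expressions given below; the angle values are placeholders, not fit results.

```python
# Build U = V_TB R_23(theta, phi) (the TM_1 pattern) for example angles.
import numpy as np

V_TB = np.array([[ np.sqrt(2/3), 1/np.sqrt(3),  0           ],
                 [-1/np.sqrt(6), 1/np.sqrt(3),  1/np.sqrt(2)],
                 [-1/np.sqrt(6), 1/np.sqrt(3), -1/np.sqrt(2)]])

def R23(theta, phi):
    c, s, e = np.cos(theta), np.sin(theta), np.exp(1j * phi)
    return np.array([[1, 0,              0    ],
                     [0, c,              s * e],
                     [0, -s * np.conj(e), c   ]])

def jarlskog(U):
    # invariant from the (e, mu; 1, 2) plaquette of the mixing matrix
    return np.imag(U[0, 0] * U[1, 1] * np.conj(U[0, 1]) * np.conj(U[1, 0]))

theta, phi = 0.25, 1.5                 # illustration only
U = V_TB @ R23(theta, phi)
print(np.abs(U))                       # compare with the measured |U| ranges
print(jarlskog(U),                     # equals the TM_1 expression below
      -np.sin(2 * theta) * np.sin(phi) / (6 * np.sqrt(6)))
```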
Given that the leptonic CP asymmetry $\epsilon_D$ is approximately proportional to $\text{Im}[(Y_\nu Y_\nu^\dagger)_{ki}(Y_{\ell}Y^\dagger_{\ell})_{ik}]$ (see eq. ), it can be seen that, after fixing the kinematics, $\epsilon_D$ will depend on the two mixing parameters $(\theta, \varphi)$, both of which are also directly responsible for producing the current neutrino oscillation data. Besides the requirement that eq.  should produce the observed moduli of the PMNS matrix, $\vert U \vert$, a basis-independent and rephasing-invariant measure of the low-energy CP violation, defined as [@Jarlskog:1985ht] $$\begin{aligned} \mathcal{J}\sum\limits_\gamma \epsilon_{\alpha \beta \gamma} \sum\limits_k \epsilon_{ijk}=\text{Im}[U_{\alpha i}U^*_{\alpha j}U^*_{\beta i}U_{\beta j}],\end{aligned}$$ is also crucial for assessing how successfully the freeze-in DN can be realized, in particular through the sign of $\mathcal{J}$. In the standard convention of the PMNS matrix [@Tanabashi:2018oca; @Xing:2019vks], the Jarlskog invariant $\mathcal{J}$ is given by $$\begin{aligned} \mathcal{J}=\frac{1}{8}\cos\theta_{13}\sin(2\theta_{12})\sin(2\theta_{23}) \sin(2\theta_{13})\sin\delta.\end{aligned}$$ Recently, a global fit of neutrino oscillation parameters has obtained a strong preference for values of the Dirac CP phase $\delta$ in the range $[\pi,2\pi]$ [@deSalas:2017kay]. Given the observed mixing angles, this implies a negative CP measure, $\mathcal{J}<0$. In particular, the best-fit value favors $\delta\simeq 3\pi/2$ [@Esteban:2016qun], leading to (at $3\sigma$ level) $$\begin{aligned} \mathcal{J}^{max}_{CP}=-\left(0.0329^{+0.0021}_{-0.0024}\right),\end{aligned}$$ where the uncertainties come from the determination of the mixing angles. Corresponding to the four mixing patterns specified by eq. , the Jarlskog invariant $\mathcal{J}$ is given, respectively, by $$\begin{aligned} \label{Jarlskog} \text{TM}_1&:~\mathcal{J}=-\frac{\sin(2\theta)\sin\varphi}{6\sqrt{6}}, \qquad &\text{TM}_2&:~\mathcal{J}=-\frac{\sin(2\theta)\sin\varphi}{6\sqrt{3}}, \nonumber \\[0.5mm] \text{TM}^2&:~\mathcal{J}=\frac{\sin(2\theta)\sin\varphi}{12}, \qquad &\text{TM}^3&:~\mathcal{J}=-\frac{\sin(2\theta)\sin\varphi}{12}.\end{aligned}$$ If maximal CP violation, $\mathcal{J}=\mathcal{J}_{CP}^{max}$, is assumed, and the values of the mixing parameters $(\theta,\varphi)$ are taken to reproduce the $3\sigma$ ranges of the PMNS matrix moduli $\vert U\vert$, we can then establish whether such low-energy maximal CP violation can prompt a successful DN. To this end, we first need to specify the decaying particle responsible for generating the leptonic CP asymmetry, which will be explored at length in the next section. Thermal scalar implementation {#sec:4} ============================= As an alternative to most DN applications, in which the lepton-number asymmetry is generated by non-thermal heavy-particle decays [@Dick:1999je; @Murayama:2002je; @Cerdeno:2006ha; @Gu:2007mc; @Bechinger:2009qk; @Narendra:2017uxl], we have considered the case where the asymmetry is accumulated via the freeze-in production of right-handed neutrinos from thermal scalar decay. In this section, we shall specify the minimal Higgs doublet for implementing the freeze-in DN described in section \[sec:2\], and consider, in particular, the CP asymmetry obtained by using both the TO and RA cuts. SM Higgs case ------------- If the DN were realized by the SM Higgs, the neutrino mass and BAU would then be simultaneously addressed by simply adding the missing neutrino Yukawa interactions (eq.
) to the SM. The only price to pay is to accept the non-aesthetic, feeble neutrino Yukawa couplings, which are of $\mathcal{O}(10^{-14})$ for $\mathcal{O}(10^{-2})$ eV neutrino masses. Since the sphaleron-active epoch, $10^2~\text{GeV}<T<10^{12}~\text{GeV}$, is considered, we shall use the thermal masses of the SM Higgs and leptons that are given by [@Weldon:1982bn; @Giudice:2003jh] $$\begin{aligned} M_H^2 &\simeq \left(\frac{3}{16}g_2^2+\frac{1}{16}g_1^2+\frac{1}{4}y_t^2+\frac{1}{2}\lambda\right)T^2, \label{thermal Higgs masses} \\[0.2cm] m_{L_i}^2 &= \left(\frac{3}{32}g_2^2+\frac{1}{32}g_1^2+\frac{1}{16}(Y_{\ell}Y^\dagger_{\ell})_{ii}\right)T^2, \label{thermal L-doublet masses} \\[0.2cm] m_{e_l}^2 &= \left(\frac{1}{8}g_1^2+\frac{1}{8}(Y_{\ell}Y^\dagger_{\ell})_{ll}\right)T^2, \label{thermal L-singlet masses}\end{aligned}$$ where $g_{2}$ ($g_{1}$) is the $\text{SU(2)}_L$ ($\text{U(1)}_Y$) gauge coupling, and $\lambda$ is the SM Higgs potential parameter satisfying the tadpole equation, $m_h^2=\lambda v^2$, with $h$ being the physical SM Higgs boson. We have only kept the dominant top-Yukawa contribution to $M_{H}^2$ by assuming a diagonal Yukawa matrix in the up-type quark sector. A general charged-lepton Yukawa matrix is, however, retained for the thermal masses of leptons, because a non-diagonal $Y_\ell$ is crucial for generating a non-vanishing lepton-number asymmetry. While the renormalization-group running of the coupling constants should in principle be taken into account at a scale $\mu\simeq 2\pi T$ [@Giudice:2003jh], which would prompt additional $T$-dependent sources, as a simple estimation, we shall use here the vacuum values of these coupling constants. Based on the analysis of index dependence of the CP asymmetry made below eq. , we can neglect the small Yukawa contributions to $M_\Phi^2 \pm m_{L_i}^2$ and $\Delta m_{il}^2$, because they are basically overwhelmed by the contributions from gauge, potential parameters, and top-quark Yukawa couplings. Under this approximation, the integration of eq.  over the sphaleron-active regime induces a semi-analytic expression for the baryon-number asymmetry. It is found that, for the mixing patterns $\text{TM}_{1,2}$, the baryon-number asymmetry is estimated to be $$\begin{aligned} \label{asymmetry in SM Higgs} \text{TM}_1&:\;\;Y_{\Delta B}^{\text{TO}}\simeq -Y_{\Delta B}^{\text{R}} \simeq \mathcal{O}(10^{-15}) \sin(2\theta) \sin\varphi, \\[0.2cm] \text{TM}_2&:\;\;Y_{\Delta B}^{\text{TO}}\simeq -Y_{\Delta B}^{\text{R}}\simeq -\mathcal{O}(10^{-16}) \sin(2\theta) \sin\varphi,\end{aligned}$$ where $Y_{\Delta B}^{\text{TO}}$ and $Y_{\Delta B}^{\text{R}}$ are obtained with the input of the CP asymmetry determined in the TO- and the retarded-cutting scheme, respectively. For patterns $\text{TM}^{2,3}$, the baryon-number asymmetry is found to be even smaller, with $$\begin{aligned} \text{TM}^2&:\;\;Y_{\Delta B}^{\text{TO}}\simeq -Y_{\Delta B}^{\text{R}} \simeq -\mathcal{O}(10^{-17}) \tan(2\theta)\sin\varphi, \\[0.2cm] \text{TM}^3&:\;\;Y_{\Delta B}^{\text{TO}}\simeq -Y_{\Delta B}^{\text{R}}\simeq -\mathcal{O}(10^{-17}) \tan(2\theta)\sin\varphi. \label{asymmetry in SM Higgs2}\end{aligned}$$ To obtain the numerical factors, we have used $g_*^\rho=g_*^s=106.75$. In addition, we have adopted a normal-ordering neutrino mass hierarchy, as suggested by the recently global analyses [@Capozzi:2018ubv; @Esteban:2018azc], and neglected the lightest neutrino mass. 
Explicitly, the input values of neutrino masses are given by $m_1\simeq 0$, $m_2\simeq \sqrt{\Delta m_{21}^2}$, and $m_3\simeq \sqrt{\Delta m_{31}^2}$, with the mass-squared differences taken from ref. [@Esteban:2016qun]. It can be seen from eqs. – that the dependence of $Y_{\Delta B}$ on the trigonometric functions is different between $\text{TM}_{1,2}$ and $\text{TM}^{2,3}$. This is because an additional $\theta$ dependence appears in the thermal fermion masses for patterns $\text{TM}^{2,3}$. It is also found that the baryon-number asymmetries induced by the TO- and retarded-cutting CP asymmetries have basically the same size but with an opposite sign. As will be discussed in the next subsection, such a sign difference becomes important in generating a positive baryon-number asymmetry, together with a negative CP measure in neutrino oscillations. Compared with the observed baryon-number asymmetry of the Universe at present day [@Aghanim:2018eyx], $$\begin{aligned} \label{BAU today} Y_{\Delta B}=(8.75\pm 0.23)\times 10^{-11},\end{aligned}$$ the amount of asymmetry induced by the minimal SM Higgs is negligible. Although we have followed here a phenomenological perspective, $Y_{\Delta B}^{\text{TO,R}}$ given by eqs. – are primarily controlled by the neutrino Yukawa couplings $Y_\nu\simeq \mathcal{O}(10^{-14})$, and thus the orders of magnitude estimated therein are quite reasonable. This can also be justified by noting that, even though the neutrino Yukawa couplings may be canceled in the imaginary coupling sector, the decay rate $\Gamma(H\to L \bar\nu)$ involves the couplings at $\mathcal{O}(Y_\nu^2)$. In addition, as the SM Higgs also couples to the right-handed charged leptons, an additional contribution to the leptonic CP asymmetry can be induced by the vertex correction. It is, however, expected that such an amount of asymmetry would be similar to that generated by the wavefunction correction, as no quasi-degenerate mass spectrum could resonantly enhance the latter within the SM. Based on these observations, the SM Higgs implementation should be therefore dismissed, and we are driven to consider new scalars beyond the minimal SM. Neutrinophilic two-Higgs-doublet model {#sec:4.2} -------------------------------------- A direct enhancement of the lepton-number asymmetry can be achieved by invoking a sufficiently large neutrino Yukawa coupling, while retaining the out-of-equilibrium condition. This can be realized by introducing another Higgs doublet which develops a smaller VEV. As a minimal extension of the SM, let us focus on the neutrinophilic 2HDM [@Davidson:2009ha], in which the second Higgs doublet couples neither to quarks nor to right-handed charged leptons. In such a neutrinophilic 2HDM, both the right-handed Dirac neutrinos and the new Higgs doublet possess an additional $Z_2$ parity. The model Lagrangian has a soft $Z_2$-breaking scalar potential [@Davidson:2009ha] $$\begin{aligned} V&=m_1^2 H_1^{\dagger}H_1 +m_2^2H_2^{\dagger} H_2 -(m_{12}^2H_1^{\dagger} H_2 + \text{h.c.}) +\frac{\lambda_1}{2}(H_1^{\dagger} H_1)^2+ \frac{\lambda_2}{2}(H_2^{\dagger}H_2)^2 \nonumber \\ &+\lambda_3(H_1^{\dagger} H_1) (H_2^{\dagger}H_2) +\lambda_4(H_1^{\dagger} H_2) (H_2^{\dagger} H_1) + \left[ \dfrac{\lambda_5}{2}(H_1^{\dagger} H_2)^2+ \text{h.c.} \right]. 
\label{Higgs potential}\end{aligned}$$ For a real and positive soft-breaking term $m_{12}^2\ll v^2$, with $v_1^2+v_2^2=v^2=(246~\text{GeV})^2$, the tadpole equations, $\partial V/\partial H_i=0$, would induce a seesaw-like relation $$\begin{aligned} v_1\simeq v, \qquad v_2\simeq \frac{m_{12}^2v}{ \lambda_{345} v^2+m_{2}^2},\end{aligned}$$ with $\lambda_{345}\equiv(\lambda_3+\lambda_4+\lambda_5)/2$. In the conventional neutrinophilic 2HDM [@Davidson:2009ha] (see also some phenomenological studies of the model performed in refs. [@Machado:2015sha; @Bertuzzo:2015ada]), the value of $v_2$ is tuned at eV scale so as to have $\mathcal{O}(1)$ neutrino Yukawa couplings. Apparently, when $Y_\nu\simeq \mathcal{O}(1)$, neutrinos would establish the L-R equilibration in the sphaleron-active epoch, and thus no net lepton-number asymmetry would be stored. Here we assume, instead, $Y_\nu\lesssim \mathcal{O}(10^{-8})$ to guarantee the out-of-equilibrium generation of the lepton-number asymmetry. Such an assumption is justified by the requirement $v_2\gtrsim \mathcal{O}(10^{-3})$ GeV, which in turn indicates that $m_{12}\gtrsim0.5$ GeV for $m_{2}\simeq \mathcal{O}(v)$. At a temperature well above the electroweak scale, we use the following thermal mass for the second Higgs doublet [@Cline:1995dg]: $$\begin{aligned} M_{H_2}^2\simeq \left(\frac{3}{16}g_2^2+\frac{1}{16}g_1^2+\frac{1}{4}\lambda_2+\frac{1}{6}\lambda_3+\frac{1}{12}\lambda_4\right)T^2,\end{aligned}$$ where marginal contributions from the soft $Z_2$-breaking term and the neutrino Yukawa couplings have been neglected. The thermal masses of $H_1$ and leptons are the same as that given by eqs. -. For $M_{H_2}$ present in eq. , we shall also include the positive mass parameter $m_2$, *i.e.*, $M_{H_2}^2=m_2^2+M_{H_2}^2(T)$. For our numerical analyses, we shall fix $m_2=500$ GeV, and assume that the effects from $\lambda_{4,5}$ are negligible. Furthermore, we shall work in the alignment limit where $H_1$ contains the SM Higgs boson. Within such a numerical setup, the Higgs mass spectrum is nearly degenerate, $M_{H^\pm}\simeq M_{H}\simeq M_{A}\simeq 500$ GeV, and $\lambda_1\simeq \lambda_2\simeq \lambda_3\simeq m_h^2/v^2$. For explicit expressions of the potential parameters $\lambda_{1-5}$ and the Higgs mass spectrum in the alignment limit, together with the theoretical and experimental constraints, the readers are referred to, *e.g.*, ref. [@Li:2018aov]. It can also be found in refs. [@Machado:2015sha; @Bertuzzo:2015ada] that such a numerical mass spectrum is phenomenologically viable. As done in the last subsection, here the numerical integration of eq.  over the sphaleron-active regime also prompts a semi-analytic expression for the baryon-number asymmetry $Y_{\Delta B}$. Explicitly, for a TO-cutting CP asymmetry, we get $$\begin{aligned} \text{TM}_1&:\; \;Y_{\Delta B}^{\text{TO}}\simeq 3.16\times 10^{-10}\;\frac{\sin(2\theta)\sin\varphi}{v_2^2},\label{baryon asym in pattern-1} \\[0.2cm] \text{TM}_2&: \;\; Y_{\Delta B}^{\text{TO}}\simeq -1.15\times 10^{-10}\;\frac{\sin(2\theta)\sin\varphi}{v_2^2},\label{baryon asym in pattern-2} \\[0.2cm] \text{TM}^2&:\; \; Y_{\Delta B}^{\text{TO}}\simeq -1.33\times 10^{-12}\;\frac{\tan(2\theta)\sin\varphi}{v_2^2}, \label{baryon asym in pattern-3} \\[0.2cm] \text{TM}^3&:\; \; Y_{\Delta B}^{\text{TO}}\simeq -1.33\times 10^{-12}\;\frac{\tan(2\theta)\sin\varphi}{v_2^2},\label{baryon asym in pattern-4}\end{aligned}$$ where $g_*^\rho=g_*^s=110.75$ has been used. 
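As a rough orientation (an illustrative sketch, not part of the original text), the semi-analytic estimates above can be inverted for the value of $v_2$ that reproduces the observed baryon asymmetry quoted in the previous subsection. It is assumed here, consistently with the GeV-valued contour labels in the figures below, that $v_2$ in these expressions is measured in GeV; the chosen $(\theta,\varphi)$ point is purely representative.

```python
import numpy as np

Y_B_OBS = 8.75e-11                      # observed asymmetry quoted above

def Y_TO(pattern, theta, phi, v2):
    """TO-cutting estimates above, with v2 assumed to be expressed in GeV."""
    s = np.sin(2 * theta) * np.sin(phi)
    t = np.tan(2 * theta) * np.sin(phi)
    pref = {"TM1": 3.16e-10 * s, "TM2": -1.15e-10 * s,
            "TM^2": -1.33e-12 * t, "TM^3": -1.33e-12 * t}[pattern]
    return pref / v2**2

theta, phi = 0.25, np.pi / 2            # representative point near the TO-TM1 region
v2 = np.sqrt(3.16e-10 * np.sin(2 * theta) * np.sin(phi) / Y_B_OBS)
print(f"TM1: v2 ~ {v2:.2f} GeV reproduces Y_B = {Y_B_OBS:.2e}")
print({p: f"{Y_TO(p, theta, phi, v2):+.2e}" for p in ("TM1", "TM2", "TM^2", "TM^3")})
```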
Again, the different dependence of $Y_{\Delta B}^{\text{TO}}$ on the trigonometric functions between $\text{TM}_{1,2}$ and $\text{TM}^{2,3}$ is also due to an additional $\theta$ dependence of the thermal fermion masses in patterns $\text{TM}^{2,3}$. ![\[TO-TM-Angle\] Allowed regions for the two mixing parameters, $(\theta, \varphi)$, under the individual constraint from a positive baryon-number asymmetry (yellow bands), a negative CP measure $\mathcal{J}=\mathcal{J}_{CP}^{max}<0$ (red regions), as well as the PMNS matrix element $\vert U_{13}\vert$ (narrow blue bands) in neutrino oscillations [@Esteban:2016qun].](figures/TO-TMLow1_Angle_re.pdf "fig:"){width="42.00000%"} ![\[TO-TM-Angle\] Allowed regions for the two mixing parameters, $(\theta, \varphi)$, under the individual constraint from a positive baryon-number asymmetry (yellow bands), a negative CP measure $\mathcal{J}=\mathcal{J}_{CP}^{max}<0$ (red regions), as well as the PMNS matrix element $\vert U_{13}\vert$ (narrow blue bands) in neutrino oscillations [@Esteban:2016qun].](figures/TO-TMLow2_Angle_re.pdf "fig:"){width="42.00000%"}\ ![\[TO-TM-Angle\] Allowed regions for the two mixing parameters, $(\theta, \varphi)$, under the individual constraint from a positive baryon-number asymmetry (yellow bands), a negative CP measure $\mathcal{J}=\mathcal{J}_{CP}^{max}<0$ (red regions), as well as the PMNS matrix element $\vert U_{13}\vert$ (narrow blue bands) in neutrino oscillations [@Esteban:2016qun].](figures/TO-TMUp2_Angle_re.pdf "fig:"){width="42.00000%"} ![\[TO-TM-Angle\] Allowed regions for the two mixing parameters, $(\theta, \varphi)$, under the individual constraint from a positive baryon-number asymmetry (yellow bands), a negative CP measure $\mathcal{J}=\mathcal{J}_{CP}^{max}<0$ (red regions), as well as the PMNS matrix element $\vert U_{13}\vert$ (narrow blue bands) in neutrino oscillations [@Esteban:2016qun].](figures/TO-TMUp3_Angle_re.pdf "fig:"){width="42.00000%"} It can be seen from eqs.  and – that, for patterns $\text{TM}_{1,2}$, the baryon-number asymmetry $Y_{\Delta B}^{\text{TO}}$ has basically the same size but with an opposite sign, while the CP measure $\mathcal{J}$ has the same sign. This indicates that, to generate a positive $Y_{\Delta B}$, the product of the trigonometric functions, $\sin(2\theta)\sin\varphi$, should be positive (negative) for $\text{TM}_1$ ($\text{TM}_2$). However, if we follow the favored Dirac CP phase $\delta=[\pi,2\pi]$, which indicates a negative $\mathcal{J}$, the same factor $\sin(2\theta)\sin\varphi$ should be positive in both patterns. Therefore, for a successful DN with a TO-cutting CP asymmetry, the pattern $\text{TM}_2$ is already disfavored by the neutrino oscillation data with a Dirac CP phase in the range $\delta=[\pi,2\pi]$. For patterns $\text{TM}^{2,3}$, on the other hand, $Y_{\Delta B}^{\text{TO}}$ has basically the same value, while $\mathcal{J}$ has the opposite sign. This implies that the pattern $\text{TM}^3$ is also disfavored by the range of Dirac CP phase in realizing a successful DN. To visualize the sign significance observed above, we show in figure \[TO-TM-Angle\] the allowed regions for the two mixing parameters, $(\theta, \varphi)$, under the individual constraint from a positive baryon-number asymmetry, a negative CP measure in neutrino oscillations, as well as the PMNS matrix element $\vert U_{13}\vert$. 
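The sign bookkeeping behind figure \[TO-TM-Angle\] can also be tabulated in a few lines. The sketch below (illustrative only) keeps just the overall signs of the TO-cutting prefactors and of the Jarlskog expressions, and checks, at a representative $\theta<\pi/4$, which patterns admit $Y_{\Delta B}>0$ and $\mathcal{J}<0$ simultaneously.

```python
import numpy as np

def compatible_patterns(theta, phi):
    """Patterns with Y_dB^TO > 0 and J < 0 at the given (theta, phi).

    Only the overall signs matter here, so the numerical prefactors are dropped."""
    s = np.sin(2 * theta) * np.sin(phi)
    t = np.tan(2 * theta) * np.sin(phi)
    Y_sign = {"TM1": +s, "TM2": -s, "TM^2": -t, "TM^3": -t}   # TO-cutting estimates
    J_sign = {"TM1": -s, "TM2": -s, "TM^2": +s, "TM^3": -s}   # Jarlskog expressions
    return [p for p in Y_sign if Y_sign[p] > 0 and J_sign[p] < 0]

for phi in (0.5 * np.pi, 1.5 * np.pi):
    print(f"theta = 0.25, phi = {phi:.2f}:", compatible_patterns(0.25, phi))
# expected outcome: TM1 survives near phi ~ pi/2 and TM^2 near phi ~ 3pi/2
```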
For the two allowed patterns, $\text{TO}\text{--}\text{TM}_1$ and $\text{TO}\text{--}\text{TM}^2$, we further investigate in detail the compatibility between the freeze-in DN and the neutrino oscillation observables in a particular quadrant with $\theta=[0,\pi/2]$, which is shown in figure \[TO-TMLow1Up2\_Contour\]. ![\[TO-TMLow1Up2\_Contour\] Compatibility between the freeze-in DN and the neutrino oscillation observables for patterns $\text{TM}_1$ and $\text{TM}^2$ with the TO-cutting scheme. The area enclosed by the black-dotted line represents the $3\sigma$ allowed range of $\vert U\vert$ and a maximal CP measure $\mathcal{J}=\mathcal{J}_{CP}^{max}<0$. The contours denote the variations of $v_2$ (in unit of GeV), with each contour corresponding to the best-fit point of $Y_{\Delta B}$ given by eq. .](figures/TO-TMLow1_Contour_re.pdf "fig:"){width="45.00000%"} ![\[TO-TMLow1Up2\_Contour\] Compatibility between the freeze-in DN and the neutrino oscillation observables for patterns $\text{TM}_1$ and $\text{TM}^2$ with the TO-cutting scheme. The area enclosed by the black-dotted line represents the $3\sigma$ allowed range of $\vert U\vert$ and a maximal CP measure $\mathcal{J}=\mathcal{J}_{CP}^{max}<0$. The contours denote the variations of $v_2$ (in unit of GeV), with each contour corresponding to the best-fit point of $Y_{\Delta B}$ given by eq. .](figures/TO-TMUp2_Contour_re.pdf "fig:"){width="45.00000%"} With the input of a retarded-cutting CP asymmetry, it is found that, compared with $Y_{\Delta B}^{\text{TO}}$ given by eqs. -, $Y_{\Delta B}^{\text{R}}$ has basically the same size but with a different sign, as is observed already in the SM Higgs case. Using the same arguments as made above, the patterns $\text{TM}_1$ and $\text{TM}^2$ would be dismissed, while both $\text{TM}_2$ and $\text{TM}^3$ are favored by a Dirac CP phase in the range $\delta=[\pi,2\pi]$. The resulting $Y_{\Delta B}^{\text{R}}$ is now given by $$\begin{aligned} \text{TM}_2&:\;\;Y_{\Delta B}^{\text{R}}\simeq 1.78\times 10^{-10}\,\frac{\sin(2\theta) \sin\varphi}{v_2^2}, \\[0.2cm] \text{TM}^3&:\;\;Y_{\Delta B}^{\text{R}}\simeq 2.05 \times 10^{-12}\,\frac{\tan(2\theta) \sin\varphi}{v_2^2}.\end{aligned}$$ For the two allowed patterns $\text{R}\text{--}\text{TM}_2$ and $\text{R}\text{--}\text{TM}^3$, the trigonometric functions for prompting a positive baryon-number asymmetry and a negative CP measure $\mathcal{J}$ have the same behavior as observed for the pattern $\text{TO}\text{--}\text{TM}_1$ shown in figure \[TO-TM-Angle\], because these three patterns share a common sign in $Y_{\Delta B}$ and $\mathcal{J}$. The compatibility between the freeze-in DN and the neutrino oscillation observables are shown in figure \[R-TMLow2Up3\_Contour\], where a particular quadrant with $\theta=[0,\pi/2]$ is selected. 
![\[R-TMLow2Up3\_Contour\] Same as in figure \[TO-TMLow1Up2\_Contour\] but for patterns $\text{TM}_2$ and $\text{TM}^3$ with the retarded-cutting scheme.](figures/R-TMLow2_Contour_re.pdf "fig:"){width="45.00000%"} ![\[R-TMLow2Up3\_Contour\] Same as in figure \[TO-TMLow1Up2\_Contour\] but for patterns $\text{TM}_2$ and $\text{TM}^3$ with the retarded-cutting scheme.](figures/R-TMUp3_Contour_re.pdf "fig:"){width="45.00000%"} As shown in figure \[TO-TMLow1Up2\_Contour\], both $\text{TM}_1$ and $\text{TM}^2$ can reproduce the $3\sigma$-allowed range of $\vert U\vert$ as well as a maximal CP measure $\mathcal{J}=\mathcal{J}_{CP}^{max}<0$ (the area enclosed by the black-dotted line), pointing to a mixing angle in the range $0.2<\theta<0.3$. It is also observed that the maximal leptonic CP asymmetry at low energy favors a maximal CP phase $\varphi$, which is necessary for generating the lepton-number asymmetry in the high-temperature regime: $\varphi\simeq \pi/2$ for $\text{TO}\text{--}\text{TM}_1$ and $\varphi\simeq 3\pi/2$ for $\text{TO}\text{--}\text{TM}^2$. For the two allowed patterns $\text{R}\text{--}\text{TM}_2$ and $\text{R}\text{--}\text{TM}^3$ shown in figure \[R-TMLow2Up3\_Contour\], on the other hand, the mixing angle is found at $\theta\simeq 0.2$, and a CP-violating phase $\varphi\simeq \pi/2$ is favored in both cases. Finally, as can be seen from figures \[TO-TMLow1Up2\_Contour\] and \[R-TMLow2Up3\_Contour\], $v_2\simeq\mathcal{O}(0.1\text{--}1)$ GeV is required by the allowed range of $\vert U\vert$ and a maximal CP measure $\mathcal{J}=\mathcal{J}_{CP}^{max}<0$. With such a range of $v_2$ as input, the Dirac neutrino Yukawa couplings are estimated to be $Y_\nu\simeq \mathcal{O}(10^{-10}\text{--}10^{-11})$ for neutrino masses at $\mathcal{O}(10^{-2})$ eV. Therefore, the feeble neutrino Yukawa couplings of $\mathcal{O}(10^{-10}\text{--}10^{-11})$ obtained in the neutrinophilic 2HDM can account for the smallness of the Dirac neutrino masses in a simple, if less aesthetic, manner. We have further shown that it is precisely this feebleness that allows the accumulated lepton-number asymmetry to be converted into the baryon-number asymmetry via rapid sphaleron transitions in the early Universe.

Conclusion {#sec:con}
==========

We have demonstrated in this paper that, when both thermal effects at high temperature and non-diagonal textures of the lepton (both charged-lepton and neutrino) Yukawa matrices are considered, it is feasible to account for the matter-antimatter asymmetry of the Universe within a minimal freeze-in DN setup. While the SM Higgs cannot generate the observed baryon-number asymmetry in such a minimal setup, the second Higgs doublet of the neutrinophilic 2HDM, when in equilibrium with the thermal bath, can realize the freeze-in DN. To establish a direct connection between the high-temperature leptonic CP asymmetry and the low-energy neutrino oscillation observables, we have considered various minimal corrections to the TB mixing pattern, and found that the patterns with a small mixing angle and a maximal CP-violating phase can produce neutrino oscillation observables compatible with a (negative) maximal CP measure and, at the same time, account for the matter-antimatter asymmetry of the Universe observed today.
The minimal setup realized in this paper is predictive, owing to the correlation between the BAU and the neutrino oscillation observables, and might also be testable at colliders through the electroweak scalars introduced to generate the neutrino masses and to implement the BAU. This work is supported by the National Natural Science Foundation of China under Grant Nos. 11675061 and 11775092, as well as by the Fundamental Research Funds for the Central Universities under Grant Nos. CCNU18TS029 and 2019YBZZ079.
{ "pile_set_name": "ArXiv" }
---
abstract: 'We study a quantum simulator scheme for the two-dimensional xy-model Hamiltonian. Quantum simulators based on coupled cavity array spin models have been explored previously, but there the coupling strength is fixed by the system parameters. In the present scheme several cavity resonators can be coupled with each other simultaneously via an ancilla qubit. In the two-dimensional Kagome lattice of the resonators the hopping of resonator photonic modes gives rise to a tight-binding Hamiltonian which in turn can be transformed to the quantum xy-model Hamiltonian. We employ the transmon as an ancilla qubit to achieve [*in situ*]{} controllable xy-coupling strength.'
author:
- Mun Dae Kim
date: 'Received: date / Accepted: date'
title: 'Quantum simulation scheme of two-dimensional xy-model Hamiltonian with controllable coupling'
---

Introduction
============

In spite of the remarkable advancements in coherent quantum operation, the realization of fully controlled quantum computing remains severely challenging in quantum information processing technology. On the other hand, significant attention has been paid to quantum spin models as a promising candidate for quantum simulation of many-body effects [@Georg; @Cirac; @Buluta]. Quantum many-body simulation may provide a variety of possibilities to study the properties of many-body systems, realize new phases of quantum matter, and eventually lead to scalable quantum computing, tasks which are hard for classical approaches. Large-scale quantum simulators consisting of many integrated qubits have been experimentally demonstrated to study quantum phenomena such as many-body dynamics and quantum phase transitions. Quantum simulators have been studied in the so-called coupled cavity array (CCA) model, where a two-level atom in the cavity interacts with its own cavity and the hopping of a photon between cavities gives rise to the cavity-cavity coupling. The CCA model has been applied to study the Jaynes-Cummings Hubbard model (JCHM) [@HartmanNP; @Xue; @Greentree; @Schmidt; @Koch; @Fan] and the Bose-Hubbard model [@Greentree; @Koch] to exhibit the phase transition between the Mott insulator and the superfluid. However, in the CCA model the cavity-cavity hopping amplitude is set by the system parameters and is thus not tunable. In recent studies of one-dimensional quantum simulators using trapped cold atoms [@Nature51] and trapped ion systems [@Nature53], the coupling strength was tunable. Previously, superconducting resonators in a two-dimensional lattice have been coupled through an interface capacitance, where the resonator-resonator coupling strength is not controllable because the capacitance is fixed [@NPreview; @AP]. For superconducting resonator cavities in circuit-quantum electrodynamics (QED) systems, the qubit is located outside the cavity [@MDK; @QIP]. Hence a qubit can interact with many resonator cavities surrounding it. By using a qubit as a mediator of coupling between many resonators, one can obtain a tunable resonator-resonator coupling, which is quite different from the coupling by direct photon hopping in the CCA model. In this study we consider a lattice model of superconducting resonator cavities coupled by ancilla qubits for simulating the quantum xy-model Hamiltonian.
The simulation for quantum xy-model has been studied in one-dimensional [@Hartmann; @Angelakis] and two-dimensional [@Koch] JCHM in the CCA model architecture. In the present model the intervening ancilla qubit which couples cavities has controllable qubit frequency. After discarding the ancilla qubit degrees of freedom by performing a coordinate transformation we show that the photon states in the resonators are described by the tight-binding Hamiltonian which, in turn, can be rewritten as the quantum xy-type interaction Hamiltonian. Consequently, the xy-coupling constant depends on the hopping amplitude of the tight-binding Hamiltonian and thus on the ancilla qubit frequency. We consider two-dimensional Kagome lattice model as well as one-dimensional chain model for the quantum simulation of xy-model Hamiltonian and show that the xy-coupling strength is [*in situ*]{} controllable. Hamiltonian of coupled $n$-resonators ===================================== In circuit-QED architectures qubits can be coupled with the transmission resonator at the boundaries of the resonator [@Blais; @Steffen; @Inomata] so that we may couple several resonators to a qubit as depicted in Fig. \[fig1\] (a). In principle, any kind of qubits are available, but in this study we employ the transmon as the ancilla qubit coupling the resonators with the advantage of controllability. The Hamiltonian of the system with $n$ resonators and an ancilla qubit in Fig. \[fig1\](a) is given by $$\begin{aligned} \label{H} H_{nR}=\frac12\omega_a\sigma^z_{a}+\sum^n_{p=1}[\omega_{rp}a^\dagger_p a_p-f_p(a^\dagger_p\sigma^-_{a}+\sigma^+_{a}a_p)],\end{aligned}$$ where $a^\dagger_p$ and $a_p$ with the frequency $\omega_{rp}$ are the creation and annihilation operators for microwave photon in $p$-th resonator, respectively, and the Pauli matrix $\sigma^z_{a}$ with the frequency $\omega_a$ represents the ancilla qubit state, and $f_p$ is the coupling amplitude between the photon mode in the $p$-th resonator and the ancilla qubit. This Hamiltonian conserves the excitation number $$\begin{aligned} \label{Ne} {\cal N}_e=\sum^n_{p=1}N_{rp}+(s_{az}+1/2),\end{aligned}$$ where $s_{az}\in \{-1/2,1/2\}$ are the eigenvalue of the operator $S_{az}=\frac12\sigma^z_{a}$ for ancilla qubit and $N_{rp}$ is the excitation number of oscillating mode in $p$-th resonator. Here, we consider the subspace that ${\cal N}_e=1$ and thus $N_{rp}\in \{0,1\}$, that is, the state of resonator is the superposition of zero and one-photon states which was generated in experiments previously [@Houck; @Hofheinz08; @Hofheinz09]. ![(a) $n$ cavities of circuit-QED resonators are coupled via an intervening ancilla qubit. (b) Effective cavity-cavity coupling, $J_3$, for $n=3$ as a function of ancilla qubit frequency $\omega_a$ and resonator-ancilla coupling $f$ with the frequency $\omega_r$ of resonator photon mode. []{data-label="fig1"}](ncavities.pdf){width="100.00000%"} In order to obtain the Hamiltonian describing the interaction between photon modes we introduce the transformation $$\begin{aligned} \label{tildeH} {\tilde H}_{nR}=U^\dagger H_{nR} U,\end{aligned}$$ where $$\begin{aligned} U=e^{-\sum^n_{p=1}\theta_p(a^\dagger_p\sigma^-_{a}-\sigma^+_{a}a_p)}.\end{aligned}$$ Here we, for simplicity, assume identical resonators and thus set $\omega_{rp}=\omega_r, f_p=f$ and $\theta_p=\theta$. 
We then expand $U=e^M$ with $M=-\sum^n_{p=1}\theta_p(a^\dagger_p\sigma^1_{a}-\sigma^+_{a}a_p)$ by using the relation $e^M=1+M+\frac{1}{2!}M^2+\frac{1}{3!}M^3+\cdots$ to obtain $$\begin{aligned} U_{pp}\!\!&=&\!\!1-\frac{1}{2!}\theta^2\!\!+\!\!\frac{1}{4!}n\theta^4\!\!-\!\!\frac{1}{6!}n^2\theta^6 \!\!+\!\!\cdots= \frac{1}{n}(n\!\!-\!\!1\!\!+\!\!\cos\sqrt{n}\theta)\\ U_{n+1,n+1}\!\!&=&\!\!1-\frac{1}{2!}n\theta^2+\frac{1}{4!}n^2\theta^4-\frac{1}{6!}n^3\theta^6+\cdots= \cos\sqrt{n}\theta\\ U_{p,n+1}\!\!&=&\!\!-\theta\!\!+\!\!\frac{1}{3!}n\theta^3\!\!-\!\!\frac{1}{5!}n^2\theta^5\!\!+\!\!\cdots = -\frac1{\sqrt{n}}\sin\sqrt{n}\theta=-U_{n+1,p}\\ U_{pq,p\neq q}\!\!&=&\!\!-\frac{1}{2!}\theta^2\!\!+\!\!\frac{1}{4!}n\theta^4\!\!-\!\!\frac{1}{6!}n^2\theta^6\!\!+\!\!\cdots= \frac{1}{n}(\cos\sqrt{n}\theta\!\!-\!\!1).\end{aligned}$$ Here $U$ is a $(n+1)\times (n+1)$ matrix in the basis of $|N_{r1},N_{r2},N_{r3}, \cdots, N_{rn},s_{az}\rangle$ and $p,q \in \{1,2,3, \cdots, n\}$. The degree of freedoms of ancilla qubit and resonator photon modes in the Hamiltonian of Eq. (\[H\]) can be decoupled by imposing the condition $$\begin{aligned} \tan2\sqrt{n}\theta=2\sqrt{n}\frac{f}{\Delta}\end{aligned}$$ which can be achieved by adjusting the detuning $\Delta\equiv \omega_a-\omega_r$ [@Blais]. The resulting transformed Hamiltonian of Eq. (\[tildeH\]) becomes $$\begin{aligned} \label{tildeHM} {\tilde H}_{nR}=\left( \begin{array}{cccccc} \epsilon^r_1 & J_n & J_n & \cdots & J_n &0 \\ J_n & \epsilon^r_2 & J_n & \cdots & J_n &0 \\ J_n & J_n & \epsilon^r_3 & \cdots & J_n &0 \\ \vdots & \vdots & \vdots & \ddots &\vdots &\vdots \\ J_n & J_n & J_n &\cdots & \epsilon^r_n &0 \\ 0 & 0 & 0& \cdots & 0 & \epsilon^a \end{array} \right),\end{aligned}$$ where $\epsilon^a$ is the energy for the state that $s_{az}=1/2$ and $N_{rp}=0$ for all $p \in \{1,2,3,\cdots,n\}$, and $\epsilon^r_p$ is the energy for the state that $s_{az}=-1/2$ and only the $p$-th resonator has one photon, $N_{rp}=1$ and $N_{rq}=0 ~(q\neq p)$. For identical resonators, $\epsilon^r_1=\epsilon^r_2=\epsilon^r_3= \cdots=\epsilon^r$ and $\epsilon^a$ are explicitly evaluated as $$\begin{aligned} \epsilon^r&=&-\frac{1}{2n}\left(\Delta+ sgn(\Delta)\sqrt{\Delta^2+4nf^2}\right)+\frac12\omega_r,\\ \epsilon^a&=&\frac12 sgn(\Delta)\sqrt{\Delta^2+4nf^2}+\frac12\omega_r,\end{aligned}$$ and the resonator-resonator coupling is given by $$\begin{aligned} \label{J} J_n=\frac{1}{2n}\left(\Delta-sgn(\Delta)\sqrt{\Delta^2+4nf^2}\right),\end{aligned}$$ where $sgn(\Delta)$ is $+1(-1)$ for $\Delta>0~(\Delta<0)$. In the subspace satisfying ${\cal N}_e=1$ the Hamiltonian ${\tilde H}_{nR}$ in Eq. (\[tildeHM\]) can be represented as $$\begin{aligned} \label{TB} {\tilde H}_{nR}&=&\frac12\sum^2_{p=1}\omega'_r(2a^\dagger_pa_p-1) +\sum^n_{p,q=1,p\neq q}J_n(a^\dagger_pa_q+a_pa^\dagger_q)\nonumber\\ &&+\frac12\omega'_a\sigma^z_{a}.\end{aligned}$$ Consequently, $\epsilon^r_p$ and $\epsilon^a$ can be rewritten as $\epsilon^r_p=\epsilon^r=-\frac{n-2}{2}\omega'_r-\frac12\omega'_a$ and $\epsilon^a=-\frac{n}{2}\omega'_r+\frac12\omega'_a$ so that we can have the relations, $\omega'_a=-(n\epsilon^r-(n-2)\epsilon^a)/(n-1)$ and $\omega'_r=-(\epsilon^r+\epsilon^a)/(n-1)$. In this tight-binding Hamiltonian the ancilla qubit operator $\sigma^z_{a}$ is decoupled from the resonator photon mode $a$, and afterward we will ignore the ancilla term. 
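As a small numerical illustration (not part of the original derivation), the entries of the transformed Hamiltonian can be evaluated directly from the closed-form expressions above and assembled into the photon block of Eq. (\[tildeHM\]); the parameter values below are arbitrary placeholders expressed in units of the resonator frequency.

```python
import numpy as np

def transformed_block(n, omega_r, omega_a, f):
    """Photon block of the transformed Hamiltonian: eps_r on the diagonal and
    J_n on the off-diagonals, together with the decoupled ancilla energy eps_a."""
    delta = omega_a - omega_r
    sgn = 1.0 if delta >= 0 else -1.0
    root = np.sqrt(delta**2 + 4 * n * f**2)
    eps_r = -(delta + sgn * root) / (2 * n) + omega_r / 2
    eps_a = sgn * root / 2 + omega_r / 2
    J = (delta - sgn * root) / (2 * n)
    H = np.full((n, n), J) + (eps_r - J) * np.eye(n)
    return H, eps_a, J

# Placeholder parameters (all in units of omega_r): 20% detuned ancilla, weak coupling
H_photon, eps_a, J3 = transformed_block(n=3, omega_r=1.0, omega_a=1.2, f=0.02)
print("J_3 =", round(J3, 5),
      " photon-block spectrum:", np.round(np.linalg.eigvalsh(H_photon), 5))
```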
The tight-binding Hamiltonian ${\tilde H}_{nR}$ can be easily transformed to the xy-spin model by introducing a pseudo spin operator $\sigma_p$ such that $2a_pa^\dagger_p-1=|1_p\rangle\langle 1_p|-|0_p\rangle\langle 0_p|=\sigma^z_{p}$ and $a^\dagger_pa_q+a_pa^\dagger_q=|1_p0_q\rangle\langle 0_p1_q|+|0_p1_q\rangle\langle 1_p0_q| =\sigma^+_{p}\sigma^-_{q}+\sigma^-_{p}\sigma^+_{q}=(1/2)(\sigma^x_{p}\sigma^x_{q}+\sigma^y_{p}\sigma^y_{q})$ as follows: $$\begin{aligned} \label{xy} H_{xy}=\frac12\sum^n_{p=1}\omega'_r\sigma^z_{p} +\frac12\sum^n_{p,q=1,p\neq q}J_n(\sigma^x_{p}\sigma^x_{q}+\sigma^y_{p}\sigma^y_{q}).\end{aligned}$$ Here, the hopping parameter $J_n$ acts as a xy coupling constant between pseudo spins. xy-model with tunable coupling =============================== Figure \[fig2\](a) shows one-dimensional lattice model by extending the structure in Fig. \[fig1\](a) for two resonators and an ancilla qubit ($n=2$). The transformation of Hamltonian ${\tilde H}_{2R}=U^\dagger H_{2R} U$ in Eq. (\[tildeH\]) can be evaluated by using the transformation matrix $U=e^M=e^{-\sum^2_{j=1}\theta_j(a^\dagger_j\sigma^--\sigma^+a_j)}$ with $$\begin{aligned} H_{2R}=\left[ {\begin{array}{ccc} \omega_{r1}-\frac12\omega_a & 0 & -f_1\\ 0 & \omega_{r2}-\frac12\omega_a & -f_2 \\ -f_1 & -f_2 & \frac12\omega_a \\ \end{array} } \right], M= \left[ {\begin{array}{ccc} 0 & 0 & -\theta_1\\ 0 & 0 & -\theta_2 \\ \theta_1 & \theta_2 & 0 \\ \end{array} } \right].\end{aligned}$$ For identical resonators such that $\omega_{r1}=\omega_{r2}=\omega_r, f_1=f_2=f$, and thus $\theta_1=\theta_2=\theta$, the transformation matrix can be calculated as $$\begin{aligned} U= \left[ {\begin{array}{ccc} \frac12(\cos\sqrt{2}\theta+1)& \frac12(\cos\sqrt{2}\theta-1) & -\frac1{\sqrt{2}}\sin\sqrt{2}\theta \\ \frac12(\cos\sqrt{2}\theta-1) & \frac12(\cos\sqrt{2}\theta+1) & -\frac1{\sqrt{2}}\sin\sqrt{2}\theta \\ \frac1{\sqrt{2}}\sin\sqrt{2}\theta & \frac1{\sqrt{2}}\sin\sqrt{2}\theta & \cos\sqrt{2}\theta \\ \end{array} } \right]\end{aligned}$$ with the basis $|N_{r1},N_{r2},s_{az}\rangle \in \{|1,0,-1/2\rangle,|0,1,-1/2\rangle,|0,0,1/2\rangle\}$, the photon number in 1st (2nd) resonator $N_{r1} (N_{r2})$ and the ancilla qubit spin $s_{az}$. ![(a) One-dimensional chain of cavity resonators coupled via ancilla qubits with the effective cavity-cavity coupling $J_2$. (b) Two-dimensional Kagome lattice of cavity resonators consisting of three triangular sublattices, $a_{i,j}$ (red), $b_{i,j}$(purple) and $c_{i,j}$(black), with effective coupling strength $J_3$.[]{data-label="fig2"}](lattice.pdf){width="70.00000%"} The transformed Hamiltonian ${\tilde H}_{2R}$ can be represented as the tight-binding Hamiltonian of Eq. (\[TB\]), $$\begin{aligned} {\tilde H}_{2R}=\frac12\sum^2_{i=1}\omega'_r(2a^\dagger_ia_i-1) +\sum^N_{i=1}J_2(a^\dagger_ia_{i+1}+a^\dagger_{i+1}a_i),\end{aligned}$$ with the hopping parameter $J_2=\frac14(\Delta-\sqrt{\Delta^2+8f^2})$, discarding the decoupled ancilla term. This tight-binding Hamiltonian describes photon hopping in the chain model of Fig. \[fig2\](a), which can be subsequently transformed to the one-dimensional xy-model Hamiltonian similar to Eq. (\[xy\]) as $$\begin{aligned} H^{1D}_{xy}=\frac12\sum^N_{i=1}\omega'_r\sigma^z_{i} +\frac12\sum^N_{i=1}J_2(\sigma^x_{i}\sigma^x_{i+1}+\sigma^y_{i}\sigma^y_{i+1}).\end{aligned}$$ Further, for $n=3$ we can construct a two-dimensional lattice model as shown in Fig. \[fig2\](b). 
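Before turning to the two-dimensional case, the one-dimensional xy Hamiltonian above can be assembled explicitly for a short open chain; the sketch below is illustrative only, and the values of $\omega'_r$ and $J_2$ are placeholders in arbitrary units.

```python
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, site, N):
    """Embed a single-site Pauli operator at `site` of an N-site chain."""
    return reduce(np.kron, [op if i == site else I2 for i in range(N)])

def H_xy_chain(N, w_r, J2):
    """(w_r/2) sum_i sz_i + (J2/2) sum_i (sx_i sx_{i+1} + sy_i sy_{i+1}), open chain."""
    H = sum(0.5 * w_r * site_op(sz, i, N) for i in range(N))
    for i in range(N - 1):
        H = H + 0.5 * J2 * (site_op(sx, i, N) @ site_op(sx, i + 1, N)
                            + site_op(sy, i, N) @ site_op(sy, i + 1, N))
    return H

H = H_xy_chain(N=4, w_r=1.0, J2=-0.1)           # placeholder parameters
print(np.round(np.linalg.eigvalsh(H)[:4], 4))   # a few lowest pseudo-spin energies
```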
In Fig. \[fig2\](b), the ancilla qubits form a hexagonal lattice, while the resonators form its dual, the Kagome lattice. The Kagome lattice has been widely studied in relation to, for example, frustrated spin models [@Mielke] and interacting boson models [@You; @Petrescu]. The Kagome lattice in Fig. \[fig2\](b) consists of three triangular sublattices denoted as $a_{i,j}, b_{i,j}$ and $c_{i,j}$. Here, two triangles consisting of, for example, $a_{i,j}, b_{i,j}, c_{i,j}, a_{i+1,j-1}$ and $c_{i+1,j}$ in Fig. \[fig2\](b), make up the unit cell, and thus the xy-model Hamiltonian in the Kagome lattice can be written as $$\begin{aligned} \label{Kagome} H^{Kagome}_{xy}&=&\frac12\sum^N_{i,j=1}\omega'_r(\sigma^z_{a,i,j}+\sigma^z_{b,i,j}+\sigma^z_{c,i,j})\nonumber\\ &&+\frac12\sum^N_{i,j=1}J_3(\sigma^x_{a,i,j}\sigma^x_{b,i,j}+\sigma^x_{b,i,j}\sigma^x_{c,i,j}+\sigma^x_{c,i,j}\sigma^x_{a,i,j}\nonumber\\ &&+\sigma^y_{a,i,j}\sigma^y_{b,i,j}+\sigma^y_{b,i,j}\sigma^y_{c,i,j}+\sigma^y_{c,i,j}\sigma^y_{a,i,j}\nonumber\\ &&+\sigma^x_{a,i+1,j-1}\sigma^x_{c,i+1,j}+\sigma^x_{b,i,j}\sigma^x_{a,i+1,j-1}+\sigma^x_{c,i+1,j}\sigma^x_{b,i,j}\nonumber\\ &&+\sigma^y_{a,i+1,j-1}\sigma^y_{c,i+1,j}+\sigma^y_{b,i,j}\sigma^y_{a,i+1,j-1}+\sigma^y_{c,i+1,j}\sigma^y_{b,i,j}).\nonumber\\\end{aligned}$$ Photons hop between resonators with amplitude $J_n$, which depends on the sign of the detuning $\Delta$ in Eq. (\[J\]). If $\Delta>0$, the hopping amplitude is negative, $J_n<0$, indicating that the hopping process reduces the total system energy and the photons hop between cavities, while for $\Delta<0$, where $J_n>0$, the hopping process costs energy and the photon state is localized in a single resonator in the ground state. Since the transmon qubit frequency is typically $\omega_a/2\pi \sim$ 10 GHz [@KochPRA; @Wallraff] and the resonator microwave photon frequency in circuit-QED schemes is $\omega_r/2\pi\sim$ 5-10 GHz [@Blais], we consider the parameter range $\Delta=\omega_a-\omega_r>0$. For three resonators coupled to an ancilla qubit ($n=3$) in Fig. \[fig1\](a) the hopping amplitude becomes $J_3=\frac{1}{6}(\Delta-\sqrt{\Delta^2+12f^2})$. Figure \[fig1\](b) shows $J_3$ as a function of the ancilla qubit frequency $\omega_a$ and the ancilla-resonator coupling strength $f$. For the resonant case, $\Delta=\omega_a-\omega_r=0$, the hopping amplitude has the maximum value, $|J_3|=f/\sqrt{3}$, and it diminishes as the detuning $\Delta$ grows, so that $J_3$ is controllable in the range $-f/\sqrt{3}<J_3 <0$. The typical value of the coupling between the transmon ancilla and a resonator is $f/2\pi\sim$ 100 MHz [@Zeytin; @Keller; @Bosman]. If we can adjust the parameters $\Delta=\omega_a-\omega_r$ and $f$, the coupling constant $J_3$ becomes tunable. The resonator frequency $\omega_r$ and the resonator-photon coupling $f$ are usually set in the experiment, whereas the ancilla qubit frequency $\omega_a$ can be tuned in situ for some qubit schemes. For the transmon, the qubit frequency is given by $\omega_a\sim \sqrt{8E_JE_C}$ with the Josephson coupling energy $E_J$ and the charging energy $E_C$ [@KochPRA]. Since the Josephson coupling energy $E_J=E_{J,max}|\cos(\pi\Phi/\Phi_0)|$ is controllable by varying the magnetic flux $\Phi$ threading a dc-SQUID loop [@KochPRA], we can adjust the frequency of the transmon qubit, $\omega_a$. In the two-dimensional xy-model Hamiltonian on the Kagome lattice, Eq. (\[Kagome\]), $J_3$, which plays the role of the coupling constant between the pseudo spins $\sigma$, therefore becomes tunable.
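For orientation, the tunability of $J_3$ can be read off directly from Eq. (\[J\]). The short sketch below uses representative numbers ($\omega_r/2\pi=6$ GHz and $f/2\pi=100$ MHz are assumed values within the ranges quoted above, not parameters fixed by the text).

```python
import numpy as np

def J_n(delta, f, n):
    """Effective cavity-cavity coupling of Eq. (J); all inputs in the same frequency unit."""
    sgn = np.where(delta >= 0, 1.0, -1.0)
    return (delta - sgn * np.sqrt(delta**2 + 4 * n * f**2)) / (2 * n)

f_ghz, omega_r = 0.1, 6.0                      # assumed f/2pi = 100 MHz, omega_r/2pi = 6 GHz
for omega_a in np.linspace(6.0, 8.0, 5):       # ancilla frequency swept via the SQUID flux
    j = J_n(omega_a - omega_r, f_ghz, n=3)
    print(f"omega_a/2pi = {omega_a:.1f} GHz -> J_3/2pi = {1e3 * j:+.1f} MHz")
print("at resonance |J_3| = f/sqrt(3):", abs(J_n(0.0, f_ghz, 3)), f_ghz / np.sqrt(3))
```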
Hence, in this way we can achieve a quantum simulator for the two-dimensional xy-model in Kagome lattice with [*in situ*]{} tunable coupling. We can measure the resonator states by attaching measurement ports to the resonators, resulting in a complex lattice design. Instead, as in a recent study [@Kollar] measurement ports can be attached at the boundary of the lattice, but the analysis of the simulation results becomes complicated. In this study we assume identical resonators with equal ancilla qubit-resonator coupling $f$ and further consider a restricted subspace with ${\cal N}_e=1$ in the Hilbert space as shown in Eq. (\[Ne\]). If the couplings $f_p$ have some fluctuations from the uniform value $f$, the transformed Hamiltonian will deviate from the exact xy-model Hamiltonian. Furthermore, multiple photons or higher harmonic modes in the resonators may be generated, giving rise to errors in the processes. The effect of these non-idealities should be considered in a future study. conclusion ========== We proposed a scheme for simulating quantum xy-model Hamiltonian in two-dimensional Kagome lattice of resonator cavities with tunable coupling. By using an intervening ancilla qubit several cavities are coupled with each other. We found that the cavity lattice formed by extending this structure can be transformed to the tight-binding lattice of photons after discarding the ancilla qubit degree of freedom. In the subspace of zero and one photon mode in the cavities this Hamiltonian can be described as the quantum xy-model Hamiltonian. We introduced the ancilla transmon qubit whose energy levels can be controlled by varying a threading magnetic flux. The coupling strength can be [*in situ*]{} tuned by adjusting the frequency of ancilla qubit intervening cavities. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2011-0023467). I. M. Georgescu, S. Ashhab and F. Nori: Quantum simulation. Rev. Mod. Phys. [**86**]{}, 153 (2014) J. I. Cirac and P. Zoller: Goals and opportunities in quantum simulation. Nature Phys. [**8**]{}, 264 (2012) I. Buluta and F. Nori: Quantum simulators. Science [**326**]{}, 108 (2009). M. J. Hartmann, F. G. S. L. Brand$\tilde{a}$o, and M. B. Plenio: Strongly interacting polaritons in coupled arrays of cavities. Nature Phys. [**2**]{}, 849 (2006). P. Xue, Z. Ficek, and B. C. Sanders: Probing multipartite entanglement in a coupled Jaynes-Cummings system. Phys. Rev. A [**86**]{}, 043826 (2012) A. D. Greentree, C. Tahan, J. H. Cole, and L. C. L. Hollenberg: Quantum phase transitions of light. Natute Phys. [**2**]{}, 856 (2006). S. Schmidt and G. Blatter: Strong Coupling Theory for the Jaynes-Cummings-Hubbard Model. Phys. Rev. Lett. [**103**]{}, 086403 (2009) J. Koch and K. Le Hur: Superfluid–Mott-insulator transition of light in the Jaynes-Cummings lattice. Phys. Rev. A [**80**]{}, 023811 (2009). J. Fan, Y. Zhang, L. Wang, F. Mei, G. Chen, and S. Jia: Superfluid-Mott-insulator quantum phase transition of light in a two-mode cavity array with ultrastrong coupling. Phys. Rev. A [**95**]{}, 033842 (2017) Bernien, H. [*et al.*]{}: Probing many-body dynamics on a 51-atom quantum simulator Nature. [**551**]{}, 579 (2017) Zhang, J. [*et al.*]{}: Observation of a many-body dynamical phase transition with a 53-qubit quantum simulator. Nature [**551**]{}, 601 (2017) Houck, A. A., T[ü]{}reci, H. 
E., Koch, J.: On-chip quantum simulation with superconducting circuits. Nature Phys. [**8**]{}, 292 (2012) S. Schmidt and J. Koch: Circuit QED lattices: Towards quantum simulation with superconducting circuits. Ann. Phys. (Berlin) [**525**]{}, 395 (2013) M. D. Kim and J. Kim: Coupling qubits in circuit-QED cavities connected by a bridge qubit. Phys. Rev. A [**93**]{} 012321 (2016) M. D. Kim and J. Kim: Scalable quantum computing model in the circuit-QED lattice with circulator function. Quantum Inf. Process. [**16**]{}, 192 (2017) M. J. Hartmann, F. G. S. L. Brand$\tilde{a}$o, and M. B. Plenio: Effective Spin Systems in Coupled Microcavities. Phys. Rev. Lett. [**99**]{}, 160501 (2007) D. G. Angelakis, M. F. Santos, and S. Bose: Photon-blockade-induced Mott transitions and XY spin models in coupled cavity arrays. Phys. Rev. A [**76**]{}, 031805(R) (2007) A. Blais, J. Gambetta, A. Wallraff, D. I. Schuster, S. M. Girvin, M. H. Devoret, and R. J. Schoelkopf: Quantum-information processing with circuit quantum electrodynamics. Phys. Rev. A [**75**]{}, 032329 (2007) M. Steffen, S. Kumar, D. P. DiVincenzo, J. R. Rozen, G. A. Keefe, M. B. Rothwell, and M. B. Ketchen: High-Coherence Hybrid Superconducting Qubit. Phys. Rev. Lett. [**105**]{}, 100502 (2010) K. Inomata, T. Yamamoto, P.-M. Billangeon, Y. Nakamura, and J. S. Tsai: Large dispersive shift of cavity resonance induced by a superconducting flux qubit in the straddling regime. Phys. Rev. B [**86**]{}, 140508(R) (2012) A. A. Houck, D. I. Schuster, J. M. Gambetta, J. A. Schreier, B. R. Johnson, J. M. Chow, L. Frunzio, J. Majer, M. H. Devoret, S. M. Girvin and R. J. Schoelkopf: Generating single microwave photons in a circuit. Nature [**449**]{}, 328 (2007) M. Hofheinz, E. M. Weig, M. Ansmann, R. C. Bialczak, E. Lucero, M. Neeley, A. D. O’Connell, H. Wang, J. M. Martinis and A. N. Cleland: Generation of Fock states in a superconducting quantum circuit. Nature [**454**]{}, 310 (2008) M. Hofheinz, H. Wang, M. Ansmann, R. C. Bialczak, E. Lucero, M. Neeley, A. D. O’Connell, D. Sank, J. Wenner, J. M. Martinis and A. N. Cleland: Synthesizing arbitrary quantum states in a superconducting resonator. Nature [**459**]{}, 546 (2009) see, for example, A. Mielke: Exact ground states for the hubbard model on the Kagome lattice. J. Phys. A., [**25**]{}, 4335 (1992) Y.-Z. You, Z. Chen, X.-Q. Sun, and H. Zhai: Superfluidity of Bosons in Kagome Lattices with Frustration. Physical Review Letters [**109**]{}, 265302 (2012) A. Petrescu, A. A. Houck, and K. L. Hur: Anomalous Hall effects of light and chiral edge modes on the Kagome lattice. Physical Review A [**86**]{}, 053804 (2012) Koch, J., Yu, T. M., Gambetta, J., Houck, A. A., Schuster, D. I., Majer, J., Blais, A., Devoret, M. H., Girvin, S. M., Schoelkopf, R. J.: Charge-insensitive qubit design derived from the Cooper pair box. Phys. Rev. A [**76**]{}, 042319 (2007) A. Wallraff, D. I. Schuster, A. Blais, L. Frunzio, R.-S. Huang, J. Majer, S. Kumar, S. M. Girvin, and R. J. Schoelkopf: Strong coupling of a single photon to a superconducting qubit using circuit quantum electrodynamics. Nature [**431**]{}, 162 (2004) S. Zeytinoğlu, M. Pechal, S. Berger, A. A. Abdumalikov Jr., A. Wallraff, and S. Filipp: Microwave-induced amplitude- and phase-tunable qubit-resonator coupling in circuit quantum electrodynamics. Phys. Rev. A [**91**]{}, 043846 (2015) A. J. Keller, P. B. Dieterle, M. Fang, B. Berger, J. M. Fink, and O. Painter: Al transmon qubits on silicon-on-insulator for quantum device integration. Appl. Phys. Lett. 
[**111**]{}, 042603 (2017) S. J. Bosman, M. F. Gely, V. Singh, D. Bothner, A. Castellanos-Gomez, and G. A. Steele: Approaching ultrastrong coupling in transmon circuit QED using a high-impedance resonator. Phys. Rev. B [**95**]{}, 224515 (2017) A. J. Kollár, M. Fitzpatrick, and A. A. Houck: Hyperbolic Lattices in Circuit Quantum Electrodynamics. arXiv:1802.09549
{ "pile_set_name": "ArXiv" }
---
abstract: 'This study constructs an integrated early warning system (EWS) that identifies and predicts stock market turbulence. Based on switching ARCH (SWARCH) filtering probabilities of the high volatility regime, the proposed EWS first classifies stock market crises according to an indicator function with thresholds dynamically selected by the two-peak method. A hybrid algorithm is then developed in the framework of a long short-term memory (LSTM) network to make daily predictions that warn of turmoil. In the empirical evaluation based on ten-year Chinese stock data, the proposed EWS yields satisfactory results, with a test-set accuracy of $96.6\%$ and an average forewarned period of $2.4$ days. The model’s stability and practical value in real-time decision-making are also demonstrated by cross-validation and back-testing.'
address:
- 'Department of Mathematical Sciences, Xi’an Jiaotong-Liverpool University.'
- '111 Ren’ai Road, Dushu Lake Science and Education Innovation District, Suzhou Industrial Park, Suzhou, Jiangsu Province, P.R. China, 215123.'
author:
- Peiwan Wang
- Lu Zong
- Ye Ma
bibliography:
- 'elsews.bib'
title: An Integrated Early Warning System for Stock Market Turbulence
---

Early warning system, LSTM, SWARCH, two-peak method, dynamic prediction

Introduction {#sec:introduction}
============

Due to the Subprime Mortgage crisis, the Shanghai Stock Exchange Composite (SSEC) index experienced one of its greatest falls at the end of 2007. In mid-2015, another Chinese stock market bubble crashed and led to extreme turbulence and instability in the domestic financial environment. As the lasting effects of stock market crises are recognized as a cause of critical societal stress and of increasing financial burdens on the government, a systematic model that monitors the economic scenarios of financial markets and generates early warning signals for potential extreme risks is in high demand. Financial early warning systems (EWSs) are designed to forecast crises by studying pre-turmoil patterns, thus allowing market participants to take early action to hedge against vital risks. In practice, the target of early warning ranges from individual financial markets, such as the banking sector, the currency and stock markets, to the entire economic system. The modeling of crises is then commonly formulated as a classification problem based on identified crisis indicators. To design an effective and reliable EWS with true warnings and limited false alarms, two matters need to be addressed with care, namely the identification of crises and the prediction mechanism. In previous studies, an EWS is primarily constructed by identifying crises on the basis of either expert opinion or an indicator function describing the market crash. The former approach is widely used in early studies of EWSs, especially those concerning banking and debt crises [@klr; @kaminsky2006currency; @reinhart2011financial; @reinhart2013banking; @caprio2002episodes; @valencia2008systemic; @laeven2010resolution; @laeven2012systemic; @detragiache2001crises; @yeyati2011elusive]. Although expert-defined crises are considered reliable for long-term predictions [@oh2006early], this paradigm fails to offer an efficient modeling solution as the observation frequency increases. On the other hand, indicator functions based on a pre-specified threshold are more frequently used to define currency or stock market crashes.
[@reinhart2011financial] define a currency crisis as an exchange rate depreciation that exceeds the threshold value of $15\%$. Alternatively, [@eichengreen1995exchange] propose to use the Financial Pressure Index (FPI) to measure the gross foreign exchange reserves of the Central Bank and the repo rate [@sevim2014developing]. Currency crises are thus identified when the FPI rises more than $1.5$ [@kibritcioglu1999leading], $2$ [@eichengreen1995exchange; @bussiere2006towards], $2.5$ [@edison2003indicators] or $3$ [@klr; @berg1999predicting; @duan2008china] standard deviations above its long-term mean. In the context of stock EWSs, market crashes are indicated by the CMAX index falling below its mean by $2$ [@coudert2008does], $2.5$ [@li2015toward], or $3$ [@fu2019predicting] standard deviations. In terms of expressing crises as indicator functions, two major practical drawbacks emerge. Although the paradigm of handling crises as crashes captures the associated acute loss, it fails to consider the extreme risk that comes along with the volatility jump. Moreover, the selection of crisis thresholds should be handled more delicately, taking into account the trade-off between missing crises and false alarms resulting from over-/under-estimated thresholds [@babecky2014banking]. In terms of the predictive model, three types of methods are commonly applied to generate early warning signals for currency, banking and debt crises, namely the logit-probit regression [@frankel; @eichengreen; @demirg; @bussiere2006towards; @beckman], the signaling approach [@klr; @kaimingrac; @berg1999predicting; @davis] and machine learning-based models [@nag; @kim2004korean; @celik; @yu2010multiscale; @giovanis2012study; @sevim2014developing]. Among the limited studies on stock markets [@fu2019predicting], [@coudert2008does] use logit and multi-logit models to predict stock and currency crises and find the leading effect of risk aversion indicators for stock early warning. [@li2015toward] shows the significance of S&P 500 futures and options in predicting stock crashes based on a logit model. By combining the logit model and Ensemble Empirical Mode Decomposition, [@fu2019predicting] recently develop an EWS for daily stock crashes using investor sentiment indicators and achieve good in-sample and test-set results. Due to the non-linear nature of financial data, machine-learning algorithms are also recognized tools in the general field of stock market prediction. In the EWS literature, artificial neural networks [@kim2004korean; @oh2006early; @kim2004usefulness; @yu2010multiscale; @sevim2014developing; @celik], fuzzy inference [@Lin2008fuzzy; @Nan2012fuzzy; @giovanis2012study; @fang2012adaptive] and support vector machines (SVM) [@hui2006research; @hu2008financial; @ahn2011usefulness] are proven accurate models for financial crisis prediction. Despite the promising accuracy demonstrated by those studies, few investigate the test-set early warning power of the model, that is, the length of the forewarned period before the crisis onset. To fill in the gaps discussed above, the objectives of this study are threefold. First, we attempt to develop a robust crisis classifier to precisely identify stock market turbulence on a daily basis. The crisis classifier consists of two key components, namely the switching ARCH (SWARCH) model [@hamil] and the two-peak (or valley-of-two-peaks) method [@rosenfeld1983histogram].
Instead of focusing on the return horizon, the proposed classifier tackles the problem from the perspective of volatility [@rodtue; @kim2013modeling; @Fink2016; @regimecopula2018; @benmim2019financial]. The switching ARCH (SWARCH) model is adopted to label crisis/non-crisis episodes with high/low volatility regimes that imply market turbulence/tranquility [@hamil; @hamlin; @susram; @susedw]. The model’s effectiveness in depicting Chinese stock crises is explicitly examined in the authors’ previous study on the contagion effect among housing, stock, interest rate and currency markets in China and the U.S. [@authors]. On the other hand, the two-peak method is an automatic thresholding approach [@jain1995machine] which selects classification thresholds based on predetermined principles in order to obtain more robust segmentation. To classify stock turbulence, the two-peak method is performed on the histogram of SWARCH filtering probabilities to determine the optimal crisis cut-off. Second, a dynamic early warning system is developed by integrating the crisis classifier with a long short-term memory (LSTM) neural network [@jordan] to warn of crisis onsets. As a predictive model, the LSTM has proven to be a state-of-the-art mechanism in the general field of financial forecasting [@chen2015lstm; @fischer2018deep; @wu2018adaboost; @cao2019financial], including volatility forecasting [@yu2018forecasting; @kim2018forecasting; @liu2019novel]. To the best of the authors’ knowledge, this study is the first to incorporate LSTM in an EWS. Last, a comprehensive evaluation of the EWS is conducted by first examining the crisis classifier and predictor separately. To be specific, we empirically study the precision and robustness of the crisis classifier in comparison to the most widely used approach, which defines stock crises according to an indicator function of CMAX. The LSTM crisis predictor is then evaluated against two baseline models, i.e. the back-propagation neural network (BPNN) and support vector regression (SVR), with respect to performance metrics including the rand accuracy, binary cross-entropy loss, receiver operating characteristic (ROC) curve, area under the curve (AUC) and the SAR score. To evaluate the effectiveness and stability of the EWS as a whole, the proposed algorithm is assessed not only on the test set but also via cross-validation and back-testing. According to the evaluation, the integrated EWS achieves state-of-the-art performance and warns of stock turbulence in the test set with $96.6\%$ accuracy and on average $2.4$ days ahead of crisis onsets. The remaining part of this paper is organized as follows. Section \[sec:data\] describes the data. Section \[structure\] introduces the structure of the EWS and the algorithm for the dynamic prediction of stock turbulence. Section \[sec:result\] evaluates the model performance, and Section \[conclusion\] concludes.

Data {#sec:data}
====

| Data | Frequency | Reflection | Source |
|------|-----------|------------|--------|
| Close price, log returns and realized volatilities of the SSEC index | Daily | Endogenous factors | WIND database |
| S&P500 Stock Price Index | Daily | US stock market | Yahoo finance |
| USD/CNY exchange rate | Daily | Currency | US Federal Reserve Board |
| Gold Price | | | World Gold Council |
| Oil Price | | | International Monetary Fund |
| Interest rate for China (IMF published), M1, M2, CPI | | | |

: Data description.
\[tab:data\]

In this study, the Shanghai Stock Exchange Composite (SSEC) index is employed to reflect the Chinese stock market oscillation. Explanatory variables that are incorporated to predict stock crises are described in Table \[tab:data\] in terms of frequency, purpose and source. Specifically, endogenous factors include the close price, log return and realized volatility[^1] of the SSEC index. The rest of the variables are exogenous factors of four types reflecting the U.S. stock market, the currency level, and the global and domestic economies, respectively. The samples span from Dec 27, 1998 to Oct 7, 2018 and are split into $70\%$ training and $30\%$ test sets. Table \[tab:stats\_summary\] shows the full-sample statistics of the explanatory variables.

|                          | Mean             | St.Dev.          | Skewness | Kurtosis | Jarque-Bera    |
|--------------------------|------------------|------------------|----------|----------|----------------|
| SSEC Close Price         | 2766.65          | 560.77           | 0.68     | 1.01     | $291.46^{***}$ |
| SSEC log return          | 0.02             | 1.49             | -0.78    | 4.86     | $2643.2^{***}$ |
| SSEC realized volatility | 1.7              | 0.31             | 1.86     | 4.05     | $3069.1^{***}$ |
| S&P500 Index             | 1682.81          | 529.84           | 0.19     | -1.04    | $124.03^{***}$ |
| USD/CNY exchange rate    | 6.49             | 0.27             | 0.06     | -1.46    | $217.38^{***}$ |
| Gold Price               | 1296.08          | 231.33           | 0.24     | -0.14    | $26.129^{***}$ |
| Oil Price                | 73.25            | 22.88            | -0.12    | -1.41    | $208.00^{***}$ |
| Interest rate for China  | 3.06             | 0.22             | 0.73     | 2.22     | $717.48^{**}$  |
| M1                       | $3.38^{\dagger}$ | $1.08^{\dagger}$ | 0.47     | -0.79    | $151.87^{***}$ |
| M2                       | $1.10^{\dagger}$ | $3.91^{\dagger}$ | 0.12     | -1.21    | $154.91^{***}$ |
| CPI                      | 95.83            | 6.78             | -0.4     | -1.04    | $174.59^{***}$ |

: Statistics of explanatory variables. St.Dev. is the standard deviation. $***$ and $**$ denote rejection of the (normal) null hypothesis at the $1\%$ and $5\%$ significance levels. $\dagger$ indicates that the unit of M1 and M2 is $10^{13}$ Chinese yuan.

\[tab:stats\_summary\]

An integrated early warning model {#structure}
=================================

Crisis identification with SWARCH and two-peak method {#subsec:EWS}
-----------------------------------------------------

### High/low volatility regimes in the stock oscillation {#subsubsec:SWARCH}

Stock crashes are inevitable results of volatility jumps. To explain this phenomenon, we propose to investigate the high/low volatility regimes of the stock return based on the SWARCH model [@hamil]. The target is to provide a reliable solution to crisis warning from the perspective of risk. Following [@hamil], the log return of the stock price with high/low volatility regimes can be formulated as an AR(1)-SWARCH(2,1) process given by: $$\begin{aligned} y_{t} &= u + \theta_{1}y_{t-1}+\epsilon_{t},\quad \epsilon_{t}|\mathcal{I}_{t-1}\sim N(0, h_{t});\label{swarch1}\\ \frac{h_{t}^{2}}{\gamma_{s_{t}}} &= \alpha_{0}+\alpha_{1}\frac{\epsilon_{t-1}^{2}}{\gamma_{s_{t-1}}}, s_{t}=\{1,2\}.\label{swarch2}\end{aligned}$$ Eq.(\[swarch1\]) describes an AR(1) process with a normal error term $\epsilon_t$ of variance $h_{t}$. The regime-switching structure of the residual variance $h_{t}$ is given by Eq.(\[swarch2\]), where the $\alpha's$ are non-negative, the $\gamma's$ are scaling parameters that capture the change in each regime, and $s_{t}$ is the state variable, with $s_{t}=1$ indicating the low volatility state and $s_{t}=2$ indicating the high volatility state. The probability law governing the switching of the stock market between the high/low volatility regimes is assumed to be given by the constant transition probabilities of a two-state Markov chain, $$\begin{aligned} p_{ij} = Prob(s_{t}=j|s_{t-1}=i), i,j = \{1,2\}.
\end{aligned}$$

The classification of high/low volatility regimes can be implemented on the basis of the filtering probability, which is a byproduct of the maximum likelihood estimation. The filtering probability based on the historical observations up to time $t$, $Y_{t}$, is written as $$\begin{aligned} \label{eq:filt} P(s_{t}=i|Y_{t};\boldsymbol{\theta}_{t})\end{aligned}$$ where $\boldsymbol{\theta}_{t}$ is the vector of model parameters to be estimated. Given that $s_{t}=2$ is the state of high volatility, $P(s_{t}=2|Y_{t};\boldsymbol{\theta}_{t})$ can be interpreted as the conditional probability of crises based on the current information at time $t$. We thus define stock turbulence as the following binary function: $$\begin{aligned} \label{eq1} \text{Crisis}_{t} = \begin{cases} 1, & P(s_{t}=2|Y_{t};\boldsymbol{\hat{\theta}}_{t}) \geq c \\ 0, & \text{otherwise.} \end{cases} \end{aligned}$$ where $\boldsymbol{\hat{\theta}}_{t}$ is the estimated parameter vector and $c$ is the crisis threshold/cutoff point. In this way, stock crisis classification is structured through the mechanism that filtering probabilities of the system being in the high volatility regime tend to increase as the stock price becomes more volatile, and there exists a threshold $c$ which identifies crises once it is exceeded. By Eq. (\[eq1\]), $c$ indicates the lowest likelihood of the high-volatility state that is still considered a crisis. Hence the determination of $c$ plays a key role in the EWS.

### Crisis thresholding: two-peak method {#subsubsec:threshold}

To balance the trade-off between sensitivity and false alarms [@babecky2014banking], this study adopts the two-peak method to automatically determine crisis thresholds. The two-peak method was developed with the general purpose of finding the optimal threshold in the context of binary classification, and has proven experimentally credible in solving image processing-related classification problems [^2]. According to the two-peak method, the optimal threshold of a binary system is the minimum value between the two peaks of the frequency density histogram [@Weszka1978A]. There are several alternative thresholding mechanisms built on the histogram, such as Otsu’s method [@Ohtsu2007A], which solves the multi-threshold problem by considering the pixel variance. In this study, we use the two-peak method as it is the most straightforward of all, and the foundation of subsequent approaches. Given that our crisis classifier has two classes, i.e. crisis (1) and non-crisis (0), the two-peak method is applied to determine the crisis cutoff based on the SWARCH filtering probabilities of the high-volatility state $P(s_{t}=2|Y_{t};\boldsymbol{\hat{\theta}}_{t})$. Specifically, we first sketch the histogram of high-volatility filtering probabilities from time $0$ to $t$. The valley bottom between the two frequency peaks is then selected as the optimal cutoff point at $t$. To further enhance the robustness of our system, the two-peak method is performed on a recursive basis to obtain dynamic thresholds as the prediction moves forward (see Algorithm \[algorithm\] in the next section).

Crisis warning with long short-term memory neural network {#subsec:LSTM}
---------------------------------------------------------

The long short-term memory (LSTM) network [@jordan] belongs to the family of recurrent neural networks (RNNs) [@hochreit] and is designed to learn both long- and short-term dependencies for sequential forecasting.
As a deep learning model, LSTM networks are nowadays widely used in the financial sector in a variety of areas, from stock prediction to risk management. As an extension of classic RNNs, LSTM retains the ability to process sequential data of arbitrary length via the hidden state vector, while at the same time enhancing the learning of long-distance dependencies by introducing the so-called memory cell. As displayed in Figure \[fig:lstm\_cell\], the inputs of an LSTM cell at time $t$, namely $a_{t-1}$ and $C_{t-1}$, are memories that contain historical information passed through from the former cell in the form of activation and peephole functions. $\mathbf{\Gamma}_{f}$, $\mathbf{\Gamma}_{u}$, $\mathbf{\Gamma}_{o}$ are sigmoid functions of the forget gate, the update gate and the output gate that determine the information to be discarded, added and reproduced, respectively. $\tilde{C}_{t}$ is the new candidate output created by the $\tanh$ layer, which is limited to the range $[-1,1]$. Finally, three outputs, $\hat{y}_{t+1}$, $a_{t}$ and $C_{t}$, are produced for the current cell at time $t$, where $a_{t}$ and $C_{t}$ are recurrently employed as the inputs of the next memory block[^3]. Note that the last sigmoid function in the upper right corner is only included in the last cell of the LSTM network, and is used to produce the network output $\hat{y}_{t+1}$ in $[0,1]$.

![The LSTM cell inner structure at time $t$.[]{data-label="fig:lstm_cell"}](lstm_cell.png){width="80.00000%"}

![LSTM with window size $l$. The LSTM cell structure in Fig. \[fig:lstm\_cell\] is the last cell of the window.[]{data-label="fig:lstm_struct"}](integ_lstm_window.png){width="\textwidth"}

For each cell of the LSTM, the formulae of the three gates, $\mathbf{\Gamma}_f, \mathbf{\Gamma}_{u}, \mathbf{\Gamma}_{o}$, and the new candidate state $\tilde{C}_{t}$ can be written as: $$\begin{aligned} \mathbf{\Gamma}_{f} &= \sigma(x_{t}U^{f} + a_{t-1}W^{f});\\ \mathbf{\Gamma}_{u}&=\sigma(x_{t}U^{u} + a_{t-1}W^{u});\\ \mathbf{\Gamma}_{o}&=\sigma(x_{t}U^{o} + a_{t-1}W^{o});\\ \tilde{C}_{t}& = \tanh(x_{t}U^{g}+a_{t-1}W^{g})\end{aligned}$$ where $\sigma$ is the sigmoid function, $x_{t}$ is the input vector, $a_{t}$ is the activation, $U$ is the weight matrix connecting the inputs to the current layer, and $W$ is the recurrent connection between the previous and current layers. Therefore, $\mathbf{\Gamma}_{f,u,o}$ indicates the amount of information that each gate passes on after balancing the previous activation against the current input. The candidate state $\tilde{C}_{t}$ is computed based on the current input and the previous hidden state, and is later added to the next cell state $C_{t}$ on the basis of $C_{t-1}$. This study applies LSTM as the predictive model and infers stock market turmoils on a daily basis using historical information of a fixed window size $l$. As Figure \[fig:lstm\_struct\] shows, each prediction is made from a network of $l$ LSTM memory blocks that sequentially process the input of both the explanatory variables $\{\boldsymbol{x}_{t-l+1},...,\boldsymbol{x}_{t}\}$ and the SWARCH filtering probability $\{P[s_{t-l+1}=2|Y_{t-l+1};\boldsymbol{\hat{\theta}}_{t-l+1}]$,...,$P[s_{t}=2|Y_{t};\boldsymbol{\hat{\theta}}_{t}]\}$ from time $t-l+1$ to $t$, for $t \geq l$. The output $\hat{y}_{t+1}$ is produced by a sigmoid function indicating the probability of the high-volatility state at $t+1$.
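To make the windowed set-up concrete, the following is a minimal sketch (not the authors' implementation) of such a predictor using the Keras API in Python. The window length $l$, the $13$ input features (explanatory variables plus the SWARCH filtering probability) and the $32$ memory cells quoted in the next paragraph are taken from the text; the optimizer choice and the helper names (`build_predictor`, `make_windows`) are ours.

```python
# Minimal sketch of the windowed LSTM crisis predictor described above
# (illustrative only; layer sizes follow the text, other choices are ours).
import numpy as np
from tensorflow import keras

def build_predictor(window: int = 5, n_features: int = 13) -> keras.Model:
    model = keras.Sequential([
        keras.Input(shape=(window, n_features)),
        keras.layers.LSTM(32),                        # 32 memory cells
        keras.layers.Dense(1, activation="sigmoid"),  # P(high volatility at t+1)
    ])
    # With 13 input features this network has 5,921 trainable parameters,
    # matching the count quoted in the text.
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

def make_windows(X: np.ndarray, y: np.ndarray, window: int):
    """Slice a (T, n_features) matrix into samples of shape
    (window, n_features) whose target is the crisis label one step ahead."""
    xs = np.stack([X[t - window:t] for t in range(window, len(X))])
    return xs, y[window:]
```

The batch size of $20$ and the $100$ training epochs reported in the next paragraph would then be passed to `model.fit`.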
Early warning signals are thus released for time $t+1$ once the value of $\hat{y}_{t+1}$ exceeds the two-peak threshold at $t$ (see Section \[subsubsec:threshold\]). The LSTM network consists of an input layer of $13$ units (the number of input variables), an LSTM layer of $32$ units and the output layer, which gives $5921$ parameters to be trained. The batch size and epoch number are $20$ and $100$, respectively. Given a sample size of $T$ days, $T-l$ predictions will be made from $t=l+1$ onward. Figure \[fig:ews\] structures the integrated EWS in terms of its three key components, i.e. the crisis classifier, the crisis predictor and the warning generator. Specifically, the crisis classifier identifies stock market turmoils according to Eq. (\[eq1\]) based on the SWARCH filtering probability and the crisis cutoff determined by the two-peak method. The output of the crisis classifier then becomes the target variable and is fed into the LSTM crisis predictor together with the other explanatory variables. Finally, early warning signals are generated when the predicted output exceeds the crisis cutoff. To make robust daily predictions, the system is performed on a dynamically-recursive basis. The procedure is described by Algorithm \[algorithm\] on the sample of size $T$:

-   calculate log returns of the SSEC index price, $\{logR_{t}, t=1,...,T\}$
-   set up the window size $l$

System evaluation {#sec:result}
=================

In this section, a comprehensive evaluation is conducted by studying first the crisis classifier and predictor (see Figure \[fig:lstm\_struct\]) separately, and then the early warning system as a whole. For the crisis classifier, which jointly uses the SWARCH model and the two-peak method, we intend to assess its precision and robustness with empirical evidence. Next, the LSTM predictor is evaluated against two baselines, i.e. the back-propagation neural network (BPNN) and support vector regression (SVR), according to performance metrics consisting of the rand accuracy [@rand], binary cross-entropy loss [@Shannon1948A], receiver operating curve (ROC), area under curve (AUC) [@roc] and the SAR score [@sar2004]. Last, the early warning power of the entire system is investigated according to its test-set performance, cross-validation and back-testing.

Evaluating the crisis classifier
--------------------------------

The credibility of an EWS is rooted in a precise and robust crisis classifier. According to Figure \[fig:lstm\_struct\] and Algorithm \[algorithm\], stock crisis cutoffs are computed dynamically for each prediction, taking into account the current market condition as well as past information. To validate the reliability of the proposed classification mechanism, we analyze the crisis identification results in terms of precision and robustness. As crisis classification is a subjective topic that depends heavily on the individual understanding of a crisis, only limited quantitative evaluation of the accuracy can be carried out due to the lack of true crisis labels. Given that the target of the proposed EWS is to predict stock market turbulence, we investigate the precision of the crisis classifier with emphasis on the empirical evidence related to volatility regimes. Figure \[fig:filtering\_prob\] and Table \[crises\] summarize the turmoils classified in the Chinese stock market by performing Algorithm \[algorithm\] on the full sample.
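For concreteness, the dynamically-recursive procedure can be sketched as follows. This is our schematic reading of Algorithm \[algorithm\], not a verbatim transcription: `swarch_filter_probs` is a placeholder for the SWARCH estimation step, while `build_predictor` and `make_windows` are the helpers sketched above and `two_peak_cutoff` is sketched further below.

```python
# Schematic sketch of the dynamically-recursive early-warning loop
# (our reading of Algorithm 1; daily refitting is shown only for clarity).
import numpy as np

def run_ews(log_returns, exog, window=5):
    """log_returns: length-T array; exog: (T, k) explanatory variables."""
    alarms = {}
    for t in range(window, len(log_returns) - 1):
        # 1. Crisis classifier: SWARCH filtering probabilities up to t,
        #    thresholded by the two-peak cutoff.
        p_high = swarch_filter_probs(log_returns[:t + 1])   # placeholder
        cutoff = two_peak_cutoff(p_high)
        labels = (p_high >= cutoff).astype(int)
        # 2. Crisis predictor: LSTM trained on the windows available so far.
        X = np.column_stack([exog[:t + 1], p_high])
        xs, ys = make_windows(X, labels, window)
        model = build_predictor(window, X.shape[1])
        model.fit(xs, ys, batch_size=20, epochs=100, verbose=0)
        # 3. Warning generator: alarm for t+1 if the predicted probability
        #    exceeds the current cutoff.
        y_hat = model.predict(X[None, t + 1 - window:t + 1], verbose=0)[0, 0]
        alarms[t + 1] = bool(y_hat >= cutoff)
    return alarms
```

In practice the predictor could be refitted less frequently than daily; the loop above only illustrates the order of the classifier, predictor and warning-generator steps.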
In Figure \[fig:filtering\_prob\], crisis periods are highlighted in both the log return plot (grey in the upper panel) and the filtering probability plot (red in the lower panel). As Figure \[fig:filtering\_prob\] suggests, the proposed hybrid algorithm captures all the recorded stock crises, which are also reflected in volatile log returns and jumps in the filtering probability. Table \[crises\] lists the starting and ending days of the detected turmoils with their associated critical events. The hybrid classifier identifies crises with promising results, capturing not only major global turmoils, including the 2008 global financial crisis and the 2010 European debt crisis, but also local stock turbulence resulting from the industrial reform in 2013, the collapse of the high-leverage bubble in 2015 and the economic slowdown since 2018.

![Log return of the SSEC stock index (upper panel) and the corresponding high-volatility filtering probability (lower panel). Turmoil periods determined by Algorithm 1 are highlighted in grey and red.[]{data-label="fig:filtering_prob"}](log_filt_fullmatch.png){width="0.95\linewidth"}

  Event   Identified Crisis Period
  ------- --------------------------
          2008/10/04 - 2009/11/06
          2009/11/16 - 2010/03/28
          2010/05/06 - 2010/09/16
          2010/10/08 - 2011/03/17
          2011/09/22 - 2012/02/17
          2013/03/04 - 2013/08/12
          2014/12/02 - 2016/04/27
          2016/05/09 - 2016/05/11
          2018/02/09 - 2018/03/06
          2018/07/02 - 2018/08/03
          2018/08/06 - 2018/08/31
          2018/09/04 - 2018/09/26

  : Turmoil periods that are identified by Algorithm 1 in the full sample and associated critical events.

\[crises\]

The robustness of a model broadly refers to its error-resisting strength and its resilience in producing results as the data change. Therefore, robust crisis classification relies on a dynamic thresholding mechanism that handles turbulence with limited influence from sample variations. Table \[tab:cutoffs\] summarizes the statistics of the crisis cutoffs that are determined in the full sample and the test set by Algorithm \[algorithm\]. The number of cutoffs in a sample is given by the difference between the number of observations $T$ and the window size $l$. With windows of size $5$ (days), this study computes $2430$ and $725$ cutoffs in the full sample and test set of lengths $2434$ and $729$ (days), respectively. As Table \[tab:cutoffs\] displays, the cutoff distributions of the full sample and the test set are both right skewed, given that the means ($0.515$, $0.429$) are greater than the medians ($0.489$, $0.396$) and modes ($0.483$, $0.355$). In other words, the positive skewness indicates that cutoffs are more likely to take values below the mean and around the median/mode. Moreover, test-set cutoffs exhibit lower values, with mean, median and mode close to $0.4$, whereas those in the full sample are closer to $0.5$. To explain this difference in the crisis cutoff distributions, Figure \[fig:cutoff\] shows the smoothed histograms of SWARCH filtering probabilities in the full (upper panel) and test (lower panel) sets. The optimal cutoffs determined at the end of Algorithm \[algorithm\] for the last-day observation are circled in blue. Although the test set exhibits a greater proportion of tranquil days, with a significantly higher right peak, the two-peak method detects the true valley at $0.35$ to threshold crises.
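For illustration, the valley-between-two-peaks rule used for these cutoffs can be sketched as below; the number of bins, the simple local-maximum test and the fall-back for degenerate histograms are our choices rather than the authors'.

```python
# Minimal sketch of the two-peak cutoff: histogram the high-volatility
# filtering probabilities and return the lowest bin between the two
# most prominent peaks as the crisis threshold.
import numpy as np

def two_peak_cutoff(filter_probs, bins: int = 30) -> float:
    counts, edges = np.histogram(filter_probs, bins=bins, range=(0.0, 1.0))
    centers = 0.5 * (edges[:-1] + edges[1:])
    # candidate peaks: local maxima of the histogram
    peaks = [i for i in range(1, bins - 1)
             if counts[i] >= counts[i - 1] and counts[i] >= counts[i + 1]]
    if len(peaks) < 2:                        # degenerate histogram (our fall-back)
        return float(np.median(filter_probs))
    # keep the two most prominent peaks and find the valley between them
    lo, hi = sorted(sorted(peaks, key=lambda i: counts[i])[-2:])
    valley = lo + int(np.argmin(counts[lo:hi + 1]))
    return float(centers[valley])
```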
  -------------------------------------- ------- ------- -------- -------- ------- -------
                                          Count   Mean    St.Dev   Median   Mode    Range
  $\text{Cutoff}_{\text{full-sample}}$    2430    0.515   0.128    0.489    0.483   1.00
  $\text{Cutoff}_{\text{test-set}}$       725     0.429   0.121    0.396    0.355   0.996
  -------------------------------------- ------- ------- -------- -------- ------- -------

  : Statistics of crisis cutoffs in the full sample and test set.[]{data-label="tab:cutoffs"}

![Cutoffs selected by the two-peak method in the full sample (upper panel) and test set (lower panel).[]{data-label="fig:cutoff"}](cutoff.png){width="0.9\linewidth"}

Following the argument that a robust classification model ought to produce stable classification results regardless of the sampled information, Table \[cmax\] compares the stock crises identified by Algorithm \[algorithm\] with those defined by the CMAX indicator[^4]. Daily classifications are computed on both the full sample and the test set for each model. To examine the level of consistency between crises identified on different samples, Table \[cmax\] lists the number (Row 3) and percentage (Row 5) of days on which the full-sample crises differ from the test-set crises during the period from 2015/10/13 to 2018/09/28 (729 days in total)[^5]. With $16$ days of deviation in a period of almost three years and a percentage of $2.19\%$[^6], the integrated EWS produces the most robust crisis classification result in comparison to the CMAX indicator over the range of parameters $\lambda=1, 1.5, 2, 2.5$.

  -------------------------------- --------------------- -------------------- ---------------------- -------------------- ----------------------
                                    Integrated EWS        CMAX$_{\lambda=1}$   CMAX$_{\lambda=1.5}$   CMAX$_{\lambda=2}$   CMAX$_{\lambda=2.5}$
  No. of crises with full sample    191                   203                  148                    3                    0
  No. of crises with test set       207                   154                  112                    115                  67
  No. of non-matching days          $\boldsymbol{16}$     49                   36                     112                  67
  Total no. of days                 729                   729                  729                    729                  729
  % of non-matching days            $\boldsymbol{2.19}$   6.27                 4.94                   15.4                 9.19
  -------------------------------- --------------------- -------------------- ---------------------- -------------------- ----------------------

  : Difference between crises identified on the full sample and test set during 2015/10/13 - 2018/09/28.

\[cmax\]

Evaluating the crisis predictor {#subsec:model_compare}
-------------------------------

We now evaluate the crisis predictor based on LSTM in comparison to the two baselines, BPNN and SVR. The associated performance metrics are discussed in Section \[subsec:perf\_measure\], and Section \[subsec:perf\_outs\] presents the results.

### Evaluation metrics {#subsec:perf_measure}

The evaluation metrics of the predictor include three classes of performance measures designed for classification models, i.e. (I) the rand accuracy [@rand] and binary cross-entropy loss [@Shannon1948A], (II) the receiver operating curve (ROC) and area under curve (AUC) [@roc], and (III) the SAR score [@sar2004]. Prior to the performance evaluation, Table \[tab:truefalse\] lists the confusion matrix that is used by the rand accuracy, ROC and SAR score.

  Actual/Predicted   1: Crisis             0: Non-crisis
  ------------------ --------------------- ---------------------
  1: Crisis          True positive (TP)    False negative (FN)
  0: Non-crisis      False positive (FP)   True negative (TN)

  : Confusion matrix for daily stock early warning.
\[tab:truefalse\]

In general, a true positive/negative corresponds to a correct prediction of turmoil/tranquility, whereas a false positive/negative corresponds to an incorrect prediction. Moreover, the true positive rate (TPR) and false positive rate (FPR) are defined as the percentage of correctly predicted crisis signals over the total number of actual crisis days, and the percentage of falsely predicted crisis signals over the total number of actual tranquil days, respectively. $$\begin{aligned} \text{TPR} = \frac{\text{TP}}{\text{TP + FN}}, \quad \text{FPR} = \frac{\text{FP}}{\text{FP + TN}}. \end{aligned}$$

Evaluation Metric I: The rand accuracy is defined as the proportion of true results over the total number of cases examined: $$\begin{aligned} \label{eq4} \text{Accuracy} = \frac{\text{TP + TN}}{\text{TP + TN + FP + FN}}.\end{aligned}$$ The binary cross-entropy loss measures the performance of classification models in terms of how far the predicted probability of the label $1$ deviates from the true label $0$ or $1$, and is expressed as: $$\begin{aligned} \label{eq5} \text{Loss} = -\frac{\sum_{i=1}^{n-l+1}\left(y_{i} \log(\hat{y}_{i})+(1-y_{i})\log(1-\hat{y}_{i})\right)}{n-l+1},\end{aligned}$$ where $y_{i}$ and $\hat{y}_{i}$ denote the true and predicted values, and $n$ is the sample size. As we set the label of crises to $True$ ($=1$), an EWS model that warns of all the crises, regardless of the number of $False$ alarms it creates, has zero loss, indicating that none of the crises is missed. According to Eqs. (\[eq4\]) and (\[eq5\]), a greater level of predictive power comes with a higher rand accuracy and a lower binary cross-entropy loss.

Evaluation Metric II: As one of the most classic performance measures, the ROC plots the FPR (x-axis) against the TPR (y-axis) for each classifier. As a higher true positive rate is always preferable at a given false positive rate, models whose ROC curves bend closer towards the upper-left corner are preferable. To offer a quantitative representation of the graphic information carried by the ROC, the AUC computes the total area under the ROC curve, with a greater AUC value indicating a better model.

Evaluation Metric III: Different from the widely used F1-score, the SAR score [@sar2004] is designed as a more holistic performance measure for situations where the correct evaluation metric is uncertain. By taking into account three distinct measures, namely the accuracy, the AUC and the root mean-squared error (RMSE), models with higher SARs are regarded as better-performing, as they produce overall high accuracy/AUC and low RMSE. $$\begin{aligned} \label{eq6} \text{SAR} = \frac{1}{3}(\text{Accuracy} + \text{AUC} + (1- \text{RMSE})). \end{aligned}$$

### Test-set performance {#subsec:perf_outs}

To evaluate the predictive power of LSTM, BPNN and SVR, Table \[tab:acc\_loss\] first lists the test-set rand accuracy and binary cross-entropy loss of the three models following Algorithm \[algorithm\][^7]. Three window sizes $l = \{22, 10, 5\}$ are considered. As Table \[tab:acc\_loss\] suggests, LSTM with window size $l=5$ produces the optimal crisis prediction, yielding the highest accuracy ($0.952$) and the lowest loss ($0.270$) among all cases examined. Among the three predictive models, LSTM consistently demonstrates the strongest forecasting power for stock crises across the different window sizes.
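Before moving to the remaining comparisons, the metrics of Section \[subsec:perf\_measure\] can be computed directly with scikit-learn, as in the brief sketch below (illustrative only). Whether the RMSE entering the SAR score is taken on the predicted probabilities or on the binarised signals is not stated explicitly, so probabilities are assumed here.

```python
# Sketch of the evaluation metrics used in this section for daily
# crisis predictions (rand accuracy, cross-entropy loss, AUC, SAR).
import numpy as np
from sklearn.metrics import (accuracy_score, log_loss,
                             mean_squared_error, roc_auc_score)

def evaluate(y_true, y_prob, cutoff):
    y_prob = np.asarray(y_prob, dtype=float)
    y_pred = (y_prob >= cutoff).astype(int)
    acc = accuracy_score(y_true, y_pred)
    loss = log_loss(y_true, y_prob)                    # binary cross-entropy
    auc = roc_auc_score(y_true, y_prob)
    rmse = np.sqrt(mean_squared_error(y_true, y_prob))  # assumption: RMSE on probabilities
    sar = (acc + auc + (1.0 - rmse)) / 3.0               # SAR score
    return {"accuracy": acc, "loss": loss, "auc": auc, "sar": sar}
```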
Moreover, it is observed that with the last five days of information, all three models achieve their best results (except for the accuracy of SVR) in comparison to the predictions made with $22$ and $10$ days of information. Therefore, the remainder of the evaluation is conducted with window size $5$.

  --------------------------- ----------- ------- -------
                               LSTM        BPNN    SVR
  Window size $l=22$                               
  Accuracy                     0.930       0.882   0.927
  Binary cross-entropy loss    0.380       0.439   0.407
  Window size $l=10$                               
  Accuracy                     0.941       0.865   0.920
  Binary cross-entropy loss    0.326       0.305   0.405
  Window size $l=5$                                
  Accuracy                     **0.952**   0.899   0.912
  Binary cross-entropy loss    **0.270**   0.369   0.423
  --------------------------- ----------- ------- -------

  : Test-set rand accuracy and binary cross-entropy loss based on LSTM, BPNN and SVR with varying window sizes[]{data-label="tab:acc_loss"}

Figure \[fig:roc\] further shows the test-set ROC and SAR curves. In particular, Panel (a) shows the ROC curves and AUC values generated from the test-set predictions. As the ROC-oriented metric reflects a model’s ability to classify the binary states, LSTM outperforms BPNN and SVR with its outstanding capacity to distinguish turbulence from tranquility, achieving the best ROC curve and an AUC value of $0.997$. Panels (b)-(d) plot the SAR score against the crisis cutoff for the three predictive models. According to Algorithm \[algorithm\], the test-set score of each model is highlighted as the blue point in each panel, corresponding to the last-day cutoff obtained from the dynamic crisis classifier, whereas the red point is the highest score obtained by the predictive model regardless of the optimal cutoff. From the perspective of model scores, LSTM retains its dominant position with the highest test-set score (blue) of $0.9$, whereas BPNN and SVR score $0.74$ and $0.77$, respectively. Moreover, LSTM appears to be the model least sensitive to cutoff variations, as the scores remain relatively high over a prolonged range shaped as a flat peak in Panel (b). With a similar shape in Panel (c), BPNN produces a SAR curve with reduced scores and a smaller peak, where the test-set score of $0.74$ deviates substantially from the best score of $0.86$. Although SVR produces scores close to those of BPNN, the sharp peak in Panel (d) suggests the model’s instability in predicting with varying cutoffs.

Crisis early warning {#subsec:model_comp}
--------------------

In this section, we examine the integrated EWS in terms of its early warning power with respect to the forewarned period ahead of the actual crisis onsets. Keeping BPNN and SVR as baselines, test-set forecasting, cross-validation and back-testing are implemented. In this way, we hope to gain a comprehensive understanding of the system’s crisis forecasting capacity, stability and effectiveness.

### Test-set performance {#test-set-performance}

Figure \[fig:pred\] shows the signals predicted by the integrated EWS against the true crisis labels ($1$ for crisis and $0$ otherwise) given by the SWARCH model. As Figure \[fig:pred\] displays, crisis onsets in the test set mainly occur in 2016, as a result of the lasting effect of the 2015 stock market crash, and in 2018, due to the financial instability in China. Overall, the proposed EWS with LSTM predictions depicts the test-set crises in a relatively precise manner, with the first alarms (red line) before the actual onsets (blue dashed line).
When the predictive model is replaced by BPNN, the EWS tends to delay the first crisis signal despite its ability to capture ongoing crises. In contrast to LSTM and BPNN, SVR appears to suffer from both delayed warnings and false alarms in Figure \[fig:pred\].

![Test-set early warning signals []{data-label="fig:pred"}](lstm_predsignal.png "fig:"){width="70.00000%"}\
![Test-set early warning signals []{data-label="fig:pred"}](bpnn_predsignal.png "fig:"){width="70.00000%"}\
![Test-set early warning signals []{data-label="fig:pred"}](svr_predsignal.png "fig:"){width="70.00000%"}

To support the preceding claims with evidence, Table \[tab:correct\_pred\] summarizes the numerical results of the test-set forecasting. The test set consists of $729$ days with $207$ crisis days (Row 2, Table \[tab:correct\_pred\]) and $6$ crisis onsets (Row 6, Table \[tab:correct\_pred\]). With respect to Table \[tab:correct\_pred\], the EWS with LSTM demonstrates a promising capability of warning of stock turbulence, reflected by its dominant results in all aspects examined. In particular, the LSTM-based EWS improves on the baselines with $200$ days of correct predictions, which yields a rate of $96.6\%$. On average, the model alerts stock turbulence $2.4$ days ahead of the actual crises and successfully warns of $83.3\%$ of the onsets with $0\%$ false alarms. It is worth mentioning that the missed onset occurs two days after its preceding crisis on July 25, 2018 and lasts for one day only. In line with the observations made from Figure \[fig:pred\], the major weakness of the BPNN-based EWS is its delay in generating crisis signals, as suggested by a relatively high rate of correct daily predictions ($94.6\%$) combined with a low rate of successfully predicted onsets ($33.3\%$). Besides the delays, the high percentage of false alarms ($30\%$) makes SVR the least reliable model for the early warning task in comparison to LSTM and BPNN.

  Model                            LSTM       BPNN   SVR
  ------------------------------- ---------- ------ ------
  Total crises                     207        207    207
  Correct predictions              **200**    196    184
  % of correct predictions         **96.6**   94.6   88.9
  Total onsets                     6          6      6
  Predicted onsets                 **5**      2      2
  % of correct predicted onsets    **83.3**   33.3   33.3
  % of false onset alarms          0.0        0.0    30.0
  Avg. days-ahead onsets           **2.4**    1.5    2.0

  : Summary of test-set forecasting. % of correct predictions is the percentage of correctly predicted crisis signals; % of correct predicted onsets is the percentage of correctly forewarned onsets.

\[tab:correct\_pred\]

### Cross validation

To analyze the stability of the EWS, a $k$-fold cross validation is further conducted on the test set with varying values $k=3,5,8$[^8]. The rand accuracy and cross-entropy loss are used as the performance measures.

  ---------------------------------- ----------- ------- -------
                                      LSTM        BPNN    SVR
  $k=3$                                                   
  Accuracy (avg.)                     0.919       0.896   0.909
  Binary cross-entropy loss (avg.)    **0.165**   0.314   0.658
  $k=5$                                                   
  Accuracy (avg.)                     **0.951**   0.911   0.923
  Binary cross-entropy loss (avg.)    0.218       0.288   0.454
  $k=8$                                                   
  Accuracy (avg.)                     0.913       0.858   0.884
  Binary cross-entropy loss (avg.)    0.168       0.476   0.389
  ---------------------------------- ----------- ------- -------

  : Average test-set rand accuracy and binary cross-entropy loss from the $k$-fold cross validation

\[tab:cv\_acc\_loss\]

The leading performance of the LSTM-based EWS proves robust in the cross validation. For all the different $k$ values, LSTM invariably produces the greatest accuracy and lowest loss in comparison to the baselines.
In particular, the EWS with LSTM achieves the best test-set accuracy of $95.1\%$ in the $5$-fold validation. Even with the $3$-fold validation, LSTM obtains an accuracy of $91.9\%$ and a loss of $16.5\%$ on the test set.

### Back-testing

In the back-testing, a simple trading strategy is applied to the SSEC stock index with the aim of verifying the effectiveness of the proposed EWS from a practical perspective. Assuming symmetric information between the market and investors with a fair level of risk aversion, a market portfolio of the SSEC index is constructed and held until the EWS signals a crisis, and repurchased when the EWS suggests tranquility. Table \[tab:backtest\] summarizes the expectation and standard deviation of returns together with the Sharpe ratios in the full sample and test set. In the absence of an early warning mechanism, the market portfolio yields expected returns of $2.3\%$ and $-0.5\%$ and standard deviations of $1.480$ and $1.156$ in the full sample and test set, respectively. The corresponding Sharpe ratios are $1.6\%$ and $-0.4\%$. By exiting the market position in response to early-warned turbulence, the strategy significantly reduces the systematic risk (indicated by $\sigma$), which naturally results in a higher Sharpe ratio, regardless of the predictive model. More importantly, the back-testing once more verifies that the LSTM-based EWS outperforms the baselines and holds the greatest effectiveness and stability. Specifically, the effectiveness of LSTM is proven by its dominant Sharpe ratios, which improve on the market portfolio by $3.8\%$ and $2.4\%$ in the full sample and test set, respectively. Meanwhile, its stability is suggested by its consistently positive impact on the market portfolio across the three portfolio measures in the risk-return dimension. Despite the moderate improvements achieved by BPNN (Sharpe ratios of $4.6\%$ and $0.2\%$ in the full sample and test set) and SVR (Sharpe ratios of $-0.1\%$ and $0.5\%$), the two models exhibit limitations due to their weaker and fluctuating results.

  ------------------ ------------ -------------- ---------------
                      $E[R_{p}]$   $\sigma_{p}$   $SharpeRatio$
  full-sample                                     
  market portfolio    0.023        1.480          0.016
  EWS-LSTM            0.039        0.718          **0.054**
  EWS-BPNN            0.045        0.983          0.046
  EWS-SVR             -0.001       0.687          -0.001
  test set                                        
  market portfolio    -0.005       1.156          -0.004
  EWS-LSTM            0.012        0.610          **0.020**
  EWS-BPNN            0.004        0.625          0.002
  EWS-SVR             0.003        0.594          0.005
  ------------------ ------------ -------------- ---------------

  : Back-testing in the full sample and test set. $E[R_{p}]$ is the expected return rate, $\sigma_{p}$ is the standard deviation and $SharpeRatio$ is given by $SharpeRatio=\frac{E[R_{p}]-R_{f}}{\sigma_{p}}$, where $R_{f}$ denotes the risk-free interest rate and is set to zero in our study.

\[tab:backtest\]

Conclusions {#conclusion}
===========

In this study, a novel EWS with a dynamic architecture integrating the SWARCH model, two-peak thresholding and LSTM is developed to identify and predict stock market turbulence. According to the models’ performance on the ten-year sample of the Shanghai Stock Exchange Composite index, the following concluding remarks emerge.

1. As one of the most powerful models for handling sequential data, LSTM retains its outstanding position in the daily prediction of stock crises.
To be specific, the reliability of LSTM in this study is reflected not only by the high accuracy of $96.6\%$ and the forewarned period of $2.4$ days on average, but also by its stability in outperforming the baselines throughout the evaluation process in the test set, cross-validation and back-testing.

2. In addition to a high-performing predictive model, a precise and robust crisis identification mechanism also plays a central role in facilitating the effectiveness and reliability of an EWS. By adopting the two-peak method to determine crisis cutoffs, the proposed EWS provides a constructive alternative to existing approaches, and yields promising crisis classifications for the Chinese stock market in comparison to the classic indicator function based on CMAX.

3. Stock market turbulence described by the SWARCH volatility regimes proves to be a good crisis indicator in both theory and practice, as the proposed EWS depicts all the recorded major stock crises in the sample with back-testing results significantly better than those of the market portfolio.

For future study, we plan to further investigate the proposed EWS structure in terms of other crisis thresholding and prediction mechanisms. At the same time, we are interested in applying the integrated EWS to predict other types of financial crises, e.g. currency or banking crises, in different frequency domains.

Acknowledgement {#acknowledgement .unnumbered}
===============

We acknowledge support from the 2016 Jiangsu Science and Technology Programme: Young Scholar Programme (No. BK20160391).

[^1]: The realized volatility at time $t$ is defined as $\sigma_{rv}=\sqrt{\frac{1}{N_{t}}\sum_{t=1}^{N_{t}}(p_{t}-\bar{p}_{t})^{2}}$, where $N_{t}$ is the count of days after time $t$, $p_{t}$ is the log return at $t$ and $\bar{p}_{t}$ is the average of the log returns up to $t$.

[^2]: [@Prewitt1966] first introduced the two-peak method in cell image analysis to distinguish the gray-level difference between the background and the object. The performance of the method was further verified in [@rosenfeld1983histogram] by analyzing the histogram’s concavity structure.

[^3]: The initial values of $C_{0}$ and $a_{0}$ are both zero.

[^4]: The CMAX index is the most widely used crisis indicator in the literature concerning stock market early warning [@coudert2008does; @li2015toward; @fu2019predicting]. It defines stock crashes with the indicator function $1_{\{CMAX_{t}<\mu_{t}-\lambda\sigma_{t}\}}$, where $\mu_t$ and $\sigma_t$ are the mean and standard deviation of $CMAX_t$, and $\lambda$ is a market-dependent constant [@klr]. In this study, we consider four cases with $\lambda=1, 1.5, 2, 2.5$ as they give reasonable results for Chinese stock market crises.

[^5]: This is the period in which the full sample and the test set intersect.

[^6]: We believe that the percentage deviation of $2.19\%$ could be further reduced with a larger test sample and cross validation. Relevant analyses on this aspect will be conducted in future studies.

[^7]: To obtain the baseline results, Algorithm \[algorithm\] is implemented by replacing the LSTM in line 16 with BPNN and SVR.

[^8]: Given that the selection of $k$ deals with the trade-off between bias and variance, the cross validation is conducted with up to $8$ folds in order to ensure that the size of the test set is large enough to be statistically representative of the model’s forecasting power.
{ "pile_set_name": "ArXiv" }
--- abstract: 'We present the results of resonant photoemission spectroscopy experiments on the Mo$_{1-x}$Re$_{x}$ alloy compositions spanning over two electronic topological transitions (ETT) at the critical concentrations $x_{C1}$ = 0.05 and $x_{C2}$ = 0.11. The photoelectrons show an additional resonance ($R3$) in the constant initial state (CIS) spectra of the alloys along with two resonances ($R1$ and $R2$) which are similar to those observed in molybdenum. All the resonances show Fano-like line shapes. The asymmetry parameter $q$ of the resonances $R1$ and $R3$ of the alloys is observed to be large and negative. Our analysis suggests that the origin of large negative q is associated with phonon assisted inter band scattering between the Mo-like states and the narrow band that appeared due to the ETT.' address: - 'Free Electron Laser Utilization Laboratory, Raja Ramanna Centre for Advanced Technology, Indore, Madhya Pradesh - 452 013, India' - 'Free Electron Laser Utilization Laboratory, Raja Ramanna Centre for Advanced Technology, Indore, Madhya Pradesh - 452 013, India' - 'Free Electron Laser Utilization Laboratory, Raja Ramanna Centre for Advanced Technology, Indore, Madhya Pradesh - 452 013, India' author: - 'L. S. Sharath Chandra' - Shyam Sundar - Soma Banik - 'SK. Ramjan' - 'M. K. Chattopadhyay' - 'S. N. Jha' - 'S. B. Roy' nocite: '[@*]' title: 'Localization of electronic states resulting from electronic topological transitions in the Mo$_{1-x}$Re$_x$ alloys: A photoemission study' --- Introduction ============ Excellent mechanical properties at elevated temperatures and room temperature workability of the Mo$_{1-x}$Re$_{x}$ alloys find widespread applications in medical fields, aerospace and defense industries and welding production [@war93; @man03; @mao08]. These alloys were also found to be promising materials for superconducting applications [@shy13; @and89; @vib14]. This is due to the occurrence of the electronic topological transitions (ETT) which improves the mechanical and superconducting properties when rhenium is added to molybdenum [@shy15; @shy15a; @ign80; @dav70; @smi76]. An ETT is a transition where pockets of Fermi surface appear or disappear when an external parameter such as composition, pressure, and/or magnetic field is varied [@lif60; @vol17]. The ETT was theoretically predicted first by I.M. Lifshitz [@lif60] for pure metals subjected to elastic strains. The first experimental evidence for the ETT was given by Brandt et. al., by analyzing the pressure induced changes in the superconducting properties of the Tl-Hg alloys [@bra66]. In the Mo$_{1-x}$Re$_x$ alloys, the superconducting transition temperature ($T_C$) increases non-uniformly from 0.90 K for $x$ = 0 to about 12.6 K for the $x$ = 0.40 alloy [@ign80; @shy15a] without a change in the crystal structure. The range of compositions where the sharp change in $T_C$ is observed is associated with two ETTs at the critical concentrations $x_{C1}$ = 0.05 and $x_{C2}$ = 0.11 [@oka13; @ign07; @vel86; @gor91; @sko94; @ign02; @sko98]. Earlier, we have shown that the ETTs and the superconducting properties are coupled in the Mo$_{1-x}$Re$_x$ alloys [@shy15a; @shy16]. Our previous studies [@shy15; @shy16] revealed that the appearance of Re 5$d$ like states at the Fermi level above $x > x_{C2}$ leads to multi-band superconductivity [@tar19]. We have also shown that the scattering of $s$ like electrons to Re 5$d$ like states by the soft phonon modes is responsible for the enhancement of $T_C$ for $x > x_{C2}$ [@shy15a]. 
The observation of the fact that the stress required to generate a fixed amount of strain $>$ 3% is minimum around 7 at.% rhenium in molybdenum [@dav70] is due to the softening of phonons. The phonon softening improves the ductility of these alloys [@wad86]. Smith et al., have observed that the phonons soften along the N-H direction of the Brillouin zone when these alloys undergo ETT [@smi76]. This is the same location of the Brillouin zone where a pocket of the Fermi surface appears [@oka13] when more than 5 at.% of rhenium is added to molybdenum. These features are associated with the changes in its electronic structure due to the ETT and the resulting localization of the electronic states for a small group of carriers that appear against the background of a continuous electronic spectrum [@ign07]. This localization of electronic states responsible for the ETT occurs due to the random potential introduced in the system when the composition is changed [@mot90; @and58]. Localization of electronic states associated with the ETT also occurs when the pressure or magnetic field is a control parameter. This is due to the localization of electronic states at the band edges [@mot90]. However, the localization effects in the vicinity of the ETT is quite small and the detection requires very sensitive techniques [@ign07]. The helium ion channeling experiments on Mo$_{1-x}$Re$_x$ alloys revealed that the effective mass of the electrons in the states of the new Fermi surfaces formed during the ETT are higher than the other electrons [@dik06]. Ignat’eva and co-workers have also observed large oscillations in the pressure dependence of $T_C$ and in certain temperature derivatives of the normal state thermoelectric power and resistivity against a specific background which is related to the ETT in the Mo$_{1-x}$Re$_x$ alloys [@ign07]. They argued that the localization of electrons filling the new states in the new Fermi surfaces appearing during the ETT cause the observed oscillations in the physical quantities [@ign07]. Here, we provide a direct experimental evidence of the localization of these electrons in the newly formed Fermi surfaces appearing during the ETT. The localized states against a background continuum give rise to Fano resonance in many of the observables [@fan61; @mir10; @mis15]. The photoelectrons from the valance band state with orbital angular momentum $l_v$ can show a Fano resonance when the photons of energies ($E_P$) corresponding to an inner core shell having an orbital angular momentum $l_i$ = $l_v$-1 are used for the photoemission spectroscopy (PES) measurement [@dav86; @all92]. The interference between the electrons from the direct emission and those from the autoionization due to the super-Coster-Kronig transition can be explained by the Fano resonance [@dav86] as $$I(E_P) = I_{nr}(E_P) + I_0(E_P )\frac{(\epsilon +q)^2}{1+\epsilon^2}.$$ Here $I$($E_P$) is the photoemission intensity at a given binding energy of the valance band. The intensities $I_0$($E_P$) and $I_{nr}$($E_P$) correspond respectively to the transitions to the states of continuum which do and do not interact with the discrete autoionizing state. The $\epsilon$ = 2($E_P$-$E_R$)/$\Gamma$ is the reduced energy with resonance energy $E_R$ and width $\Gamma$. 
The asymmetry parameter $q$ of the Fano resonance depends on the ratio of probabilities of transition to a discrete state and transition to the continuum as well as on the hybridization between the discrete and the continuum [@dav86; @all92; @mir10; @mis15; @som16; @som17]. When $|q| >>$ 1, the transition to the continuum is very weak and the line shape is determined only by the discrete state. The $|q|\approx$ 1 indicates a strong hybridization between the discrete and continuum states and $|q|$ = 0 indicates that the states belong to the continuum [@dav86; @all92]. Therefore, the resonant PES technique (RPES) can be used to study the interaction of localized states with the continuum [@dav86; @all92; @som16; @som17]. We therefore study the RPES of the bcc Mo$_{1-x}$Re$_x$ alloys around $x$ = $x_{Ci}$. Recently, we have shown that multi-band superconductivity manifests in the Mo$_{1-x}$Re$_x$ alloys with $x > x_{C2}$ [@shy15]. The $T_C$, the Sommerfeld coefficient of specific heat ($\gamma$) [@shy15a] and the elastic constant ($C_{11}$) [@sha18] are observed to change abruptly across $x_{C2}$. Our studies revealed that the origin of the above observations is related to the phonon assisted inter-band $s$-$d$ scattering between the band that newly appeared at the Fermi level ($E_f$) due to the ETT and the rest of the bands [@shy15a]. We have also shown from RPES studies on the alloys with $x >> x_{C2}$ that the newly appeared band has Re5$d$ like character [@shy16]. Further analysis by Evans and Dowben revealed stronger Mo-Re orbital hybridization for $x >> x_{C2}$ in comparison with the Mo-Mo or Re-Re bonding [@eva17]. In this article, we present the appearance of a distinct Fano like resonance in the constant initial state (CIS) intensities in the spectra of RPES measurements on the Mo$_{1-x}$Re$_x$ alloys for the density of states (DOS) at the binding energy ($E_B$) $\approx$ -2 eV below the $E_f$ when $x \geq x_{C1}$ = 0.05. Our analysis suggests that the observation of a large negative $q$ is associated with the localization of electron like states in the newly appeared Fermi pocket as well as with the phonon assisted inter-band $s$-$d$ interaction. Experimental Details ==================== The arc melted polycrystalline samples of Mo$_{1-x}$Re$_{x}$ ($x$ = 0-0.15) alloys formed in the body centred cubic (bcc) phase (space group: Im$\bar{3}$m) [@shy15a]. The resonant photoemission measurements were performed at the Angle Resolved Photoelectron Spectroscopy beamline of Indus-1 Synchrotron, India. Base vacuum during resonant photoemission measurement was 3 $\times$ 10$^{-10}$ mbar. The samples were cleaned in situ by sputtering. The absence of carbon 1s peak at 284 eV and oxygen 1s peak at 531 eV was ensured before the measurements. The valence band photoemission spectra were recorded using Phoibos 150 electron energy analyser (SPECS) with a typical resolution of 135 meV in the range $E_P$ = 23 eV to 70 eV. In this energy range, the photoemission spectra is more bulk sensitive [@shy16]. Core levels were studied using X-ray photoemission spectroscopy (XPS) with Mg $K_\alpha$ source (XR 50, SPECS). Results and Discussions ======================= Figure 1 shows the XPS spectra of the Mo$_{1-x}$Re$_{x}$ ($x$ = 0-0.15) alloys corresponding to the Re5$p$, Mo4$p$ and Re4$f$ inner core shells [@shy16; @fuk80]. All these shells show spin-orbit (SO) splitting. The quantitative analysis is carried out using the XPSPEAK4.1 software. 
For elemental molybdenum, the positions of the Mo4$p_{3/2}$ and Mo4$p_{1/2}$ peaks are respectively at $E_B$ = -35.8 eV and -38 eV. The Re4$f$ inner core shell of the $x$ = 0.05 alloy shows two SO split peaks at $E_B$ = -41.8 eV and -44.2 eV respectively for the Re4$f_{7/2}$ and Re4$f_{5/2}$ states. The $E_B$ of Re4$f$ core shell moves towards $E_f$ with increasing $x$. The features corresponding to Re5$p_{3/2}$ and Re5$p_{1/2}$ inner core shells [@fuk80] at $E_B$ = -31 eV and -33 eV are weakly visible for the alloys. As $x$ increases, the intensity of the Re4$f$ shell becomes predominant while that of the Mo4$p$ shell reduces and becomes feeble at $x$ = 0.15. Thus, the contributions of molybdenum and rhenium to the valance band of Mo$_{1-x}$Re$_{x}$ ($x$ = 0-0.15) alloys across the ETT can be studied through RPES in the range $E_P$ = 20-70 eV. In this range, the resonant enhancement of the valance band states of the Mo$_{1-x}$Re$_{x}$ alloys can be obtained from the interference of electronic wave functions from (a) direct photoemission and (b) Auger emissions for the following transitions [@shy16]: \(i) Mo 4$p$-5$s$ transition via $$\begin{aligned} \begin{aligned} 4p^64d^55s^1 + h\nu~&\rightarrow(a)~4p^64d^55s^0 + e^-\\ &\rightarrow(b)~4p^54d^55s^2~\rightarrow~4p^64d^55s^0 + e^-\\ \end{aligned}\end{aligned}$$ (ii) Mo 4$p$-4$d$ transition via $$\begin{aligned} \begin{aligned} 4p^64d^55s^1 + h\nu~&\rightarrow(a)~4p^64d^45s^1 + e^-\\ &\rightarrow(b)~4p^54d^65s^1~\rightarrow~4p^64d^45s^1 + e^-\\ \end{aligned}\end{aligned}$$ and\ (iii) Re $5p$-$5d$ transition via $$\begin{aligned} \begin{aligned} 5p^65d^56s^2 + h\nu~&\rightarrow(a)~ 5p^65d^46s^2 + e^-\\ &\rightarrow(b)~ 5p^55d^66s^2~\rightarrow~5p^65d^46s^2 + e^-\\ \end{aligned}\end{aligned}$$ Figure 2 shows the valance band photoemission spectra of the Mo$_{1-x}$Re$_{x}$ ($x$ = 0-0.15) alloys as a function of $E_P$. The Fermi energy ($E_f$) is taken as $E_B$ = 0. The valance band spectra of molybdenum (Fig. 2(a) and 2(b)) show two broad features at about $E_B$ $\approx$ -2 eV and $E_B$ $\approx$ -6 eV. As the photon energy is increased from $E_P$ = 23 eV, the intensities of both the features are decreased. The intensity of the feature at $E_B$ $\approx$ -6 eV increases sharply and shows a resonance ($R1$) when the $E_P$ reaches the threshold energy (35 eV) for the Mo4$p$ inner core shell. The feature at $E_B$ $\approx$ -2 eV do not show appreciable resonance at $E_P \approx$ 35 eV. Above $E_P$ = 40 eV, the intensity of the feature at $E_B$ $\approx$ -2 eV increases slowly and shows a broad resonance ($R2$) around 45 eV. The intensity of the feature at $E_B$ $\approx$ -6 eV again increases above 56 eV as $E_P$ approaches the Mo4$s$ threshold. These results are consistent with our previous studies on Mo [@shy16]. In Fig. 2(b) we have marked the positions ($E_B$(DFT)) of the Mo$4d$, Mo$5s$ and Mo$5p$ states where the allowed states in the valence band are expected from the density functional theory [@shy16]. From the comparison of position of $E_B$ of the resonances with the $E_B$(DFT), we can conclude that the resonance $R1$ corresponds to the 4$p$ to 5$s$ transition [@shy16] while the delayed resonance $R2$ corresponds to the 4$p$ to 4$d$ transition. In comparison with the elemental molybdenum, substantial changes in the valance band photoemission of $x$ = 0.05 (Fig. 2(c) and 2(d)), 0.1(Fig. 2(e) and 2(f)) and 0.15 (Fig. 2(g) and 2(h)) alloys are observed in the $E_P$ range 23-38 eV. 
The resonance $R1$ becomes broader with increasing $x$ (compare Fig.2 (a), (c) and (e)) and the valance band states around $E_B$ = -2 eV show additional resonance (marked as $R3$ in Fig. 2(c)) at about $E_P \approx$ 30 eV. The sharpness of the resonance $R3$ increases with increasing $x$, and hence, is related to the Re partial DOS. We have also marked the $E_B$(DFT) of the Re$5d$, Re$6s$ and Re$6p$ sub bands [@shy16] in the Fig. 2(d). As considerable part of the Re$5d$ sub-band is centred around $\approx$ -2 eV below $E_f$, the resonance $R3$ can be assigned to Re $5p$-$5d$ transition. For $E_P >$ 38 eV, the valance band spectra of the alloys are similar to that of molybdenum. Figure 3 shows the Fano line shape fitting of the constant initial state (CIS) intensities at (a) $E_B$$\approx$ -2 eV and (b) $E_B$$\approx$ -6 eV of valance band photoemission of the Mo$_{1-x}$Re$_{x}$ alloys. The parameters $E_R$, $\Gamma$ and $q$ of the fitting for ($x$ = 0, 0.05, 0.10 and 0.15) are presented in table I. The CIS plot of molybdenum for $E_B$$\approx$ -2 eV shows a weak peak ($R1'$) at $E_P \approx$ 34 eV (Fig.3(a)). The strength of this resonance increases with increasing $E_B$ and is maximum for $E_B$$\approx$ -6 eV ($R1$ in Fig. 3(b)). We found that molybdenum has only 5$s$ states at $E_B$$\approx$ -6 eV and the resonance $R1$ results from the 4$p$ to 5$s$ transition. The $q$ of $R1$ is about 3 over the entire valance band which indicates that the hybridization is weak between 4$d$ and 5$s$ states in the $E_B$ range -5 eV to $E_f$. The enhanced intensity of $R1$ at about $E_B$ = -6 eV indicates that most of the $5s$ states are present in this narrow energy range. The shift of the $R2$ resonance of the $4p$-$4d$ transition at $E_B \approx$ -2 eV from the Mo$4p$ threshold to $E_P \approx$ 45 eV indicates the presence of electron-electron correlations in the Mo$4d$ band [@dav86]. The $q$ of the $R2$ is $<$1 which indicates that the Mo4$d$ states have nearly free electron like character. In the alloys, the resonance $R2$ becomes sharper ($\Gamma$($x \neq$0)$<$ 0.5$\Gamma$(0)) with a $q \approx$ 1.2 due to the preferential Mo-Re bonding over the Mo-Mo or Re-Re bonding [@eva17]. The shape of the resonance $R1$ in the alloys is quite different from that of molybdenum. This indicates that the Re states contribute to $R1$ of the alloys. The $q$ of $R1$ in the alloys is negative and the $|q|$ increases sharply with increasing $x$. The value of $|q|$ for $x$ = 0.15 is tending to infinity indicating the localization of discrete Re states associated with the $R1$ resonance. ------ -------------- --------------- -------------- $x$ $E_P$ (eV) $\Gamma$ (eV) $q$ $R1$ 0 35.1$\pm$0.2 3.1$\pm$0.4 2.9$\pm$0.4 0.05 34.9$\pm$0.4 13.5$\pm$0.5 -2.4$\pm$0.3 0.10 35$\pm$0.3 12.6$\pm$0.5 -3.8$\pm$0.5 0.15 35.6$\pm$0.2 8.8$\pm$0.5 $-\infty$ $R2$ 0 41.8$\pm$1 25.3$\pm$1.2 0.89$\pm$0.1 0.05 43.4$\pm$1 14.8$\pm$3.3 1.4$\pm$0.4 0.10 44.4$\pm$0.8 8.8$\pm$0.9 1.7$\pm$0.6 0.15 45.3$\pm$0.4 10.5$\pm$0.6 1.6$\pm$0.2 $R3$ 0.05 34.8$\pm$0.4 8.8$\pm$1.6 -1.1$\pm$0.2 0.10 34.6$\pm$0.2 5.6$\pm$0.7 -1.1$\pm$0.1 0.15 33.3$\pm$0.2 6$\pm$0.7 -2.9$\pm$0.4 ------ -------------- --------------- -------------- : \[tab:table1\]The parameters corresponding to the fitting of the constant initial state spectra of the Mo$_{1-x}$Re$_{x}$ ($x$ = 0-0.15) alloys using eq.1. The fitting is shown in Fig. 3. -0.5 cm The resonance $R3$ is observed only in the alloys and is quite different from the other resonances discussed above. 
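The parameters in Table 1 are obtained by fitting Eq. (1) to the CIS intensities. As an illustration only, such a least-squares fit can be sketched as below; treating $I_{nr}$ and $I_0$ as constants over the fitted photon-energy range, and the initial guesses, are simplifying assumptions of ours rather than the procedure actually used for Fig. 3.

```python
# Minimal sketch of fitting a Fano line shape, Eq. (1), to a CIS spectrum
# I(E_P); the constant background I_nr is a simplification of ours.
import numpy as np
from scipy.optimize import curve_fit

def fano(E, I_nr, I0, E_R, Gamma, q):
    eps = 2.0 * (E - E_R) / Gamma        # reduced energy
    return I_nr + I0 * (eps + q) ** 2 / (1.0 + eps ** 2)

def fit_cis(E_P, intensity, E_R_guess=35.0, Gamma_guess=5.0, q_guess=1.0):
    E_P = np.asarray(E_P, dtype=float)
    intensity = np.asarray(intensity, dtype=float)
    p0 = [intensity.min(), np.ptp(intensity), E_R_guess, Gamma_guess, q_guess]
    popt, pcov = curve_fit(fano, E_P, intensity, p0=p0, maxfev=10000)
    names = ["I_nr", "I0", "E_R", "Gamma", "q"]
    return dict(zip(names, popt)), np.sqrt(np.diag(pcov))
```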
We see that the $E_R$ of $R3$ decreases with increasing $x$ and approaches the threshold of the Re$5p$ core shell for $x$ = 0.15. The values of $q$ of $R1$ and $R3$ of the alloys are large and negative. The large value of $q$ suggest that the bands responsible for the resonances are narrow and localized. The sign of the $q$ depends on the nature of interaction (mixing) between the discrete states and the continuum [@mis15; @cha78]. The Fano line shapes with negative (positive) values of $q$ have been reported in the Raman lines originating from the discrete optical phonon mode and an electron (holes) continuum in $n$-Si ($p$-Si) [@cha78; @sax17]. The change in the sign of $q$ between $p$-Si to $n$-Si arises from the difference in the nature of electron-phonon and hole-phonon interactions. The elemental molybdenum is a compensated metal with higher mobility of holes in comparison with the electrons [@cox72; @cox73]. The addition of rhenium (with one extra electron in comparison with molybdenum) increases the number of electrons with an additional band crossing the Fermi level for $x > x_{C1}$ [@vel86; @gor91; @sko94; @ign02; @ign07; @sko98; @oka13]. We have shown that the scattering of electrons between this Re5$d$ like band and the rest of the bands is through the electron-phonon interaction (phonon assisted interband scattering (PAIS))[@shy15a]. Hence, the origin of negative $q$ observed for $R3$ in the alloys can be assigned to the phonon assisted $s-d$ interaction. We have also shown that the enhancement in the $T_C$ of the Mo$_{1-x}$Re$_{x}$ alloys is due to the enhancement of the electron phonon coupling constant corresponding to PAIS [@shy15a]. Therefore, the enhanced $T_C$ along with the two-gap superconductivity in the Mo$_{1-x}$Re$_{x}$ alloys are due to the localization of electrons in the new Fermi pockets formed due to the ETT. Conclusions =========== The addition of rhenium to molybdenum is known to improve the ductility (“The Rhenium Effect”). It is observed that the stress required to produce a fixed amount of strain higher than 3% is minimum around $x$ = 0.07 [@dav70]. Smith et al. have observed that the phonons soften along the N-H direction of the Brillouin zone of the Mo$_{1-x}$Re$_{x}$ alloys when these alloys undergo ETT [@smi76]. This is the same location of the Brillouin zone where a pocket of the Fermi surface appears when $x >$ 0.05. Earlier, we have shown that the large enhancement of $T_C$ for $x >$ 0.05 is due to the changes in the electronic structure across the ETT [@shy15]. We have now shown that the resonant photoemission technique may be effectively used to distinguish between the narrow localized states and the delocalized continuum states of the Mo$_{1-x}$Re$_{x}$ alloys resulting from the ETT at $x_{C1}$ = 0.05 and $x_{C2}$ = 0.11. The states that crosses the Fermi level due to the ETT show an additional resonance as compared to those observed for elemental molybdenum. The $q$ parameter of this resonance is large and negative indicating that these states are electron like and are localized. By comparing the present results with previous studies [@shy15; @ign07], the enhanced superconducting transition temperature and other functional properties of the Mo$_{1-x}$Re$_{x}$ alloys are linked to the interaction between these localized states with the rest of the delocalized states and the associated changes in the electron-phonon interaction. 
The authors thank Babita Vinayak Salaskar for her help during experiments, Tapas Ganguli for his interest in this work, and Pankaj Sagdeo, IIT Indore, for helpful discussion.

See, e.g., papers by J. Wardsworth and J. P. Wittenauer and by R. L. Heenstand, [*Evolution of Refractory Metals and Alloys*]{}, eds. E. N. C. Dalder, T. Grobstein and C. S. Olson (Warrendale: The Minerals, Metals and Materials Society, 1993).

Mannheim R L and Garin J L 2003 [*J. Mater. Process. Technol.*]{} [**143-144**]{} 533

Mao P, Han K and Xin Y 2008 [*J. Alloys and Comp.*]{} [**464**]{} 190

Shyam Sundar, L. S. Sharath Chandra, V. K. Sharma, M. K. Chattopadhyay, and S. B. Roy, AIP Conf. Proc. [**1512**]{} 1092 (2013).

A. Andreone, A. Barone, A. Di Chiara, F. Fontana, G. Mascolo, V. Palmieri, G. Peluso, G. Pepe, and U. Scotti Di Uccio, J. Supercond. [**2**]{} 493 (1989).

V. Singh, B. H. Schneider, S. J. Bosman, E. P. J. Merkx, and G. A. Steele, Appl. Phys. Lett. [**105**]{} 222601 (2014).

Shyam Sundar, L. S. Sharath Chandra, M. K. Chattopadhyay and S. B. Roy, J. Phys. Condens. Mater [**27**]{}, 045701 (2015).

Shyam Sundar, L. S. Sharath Chandra, M. K. Chattopadhyay, S. K. Pandey, D. Venkateshwarlu, R. Rawat, V. Ganesan, and S. B. Roy, New J. Phys. [**17**]{}, 053003 (2015).

Ignat’eva T A and Cherevan’ Yu A 1980 [*Pis’ma Zh. Eksp. Teor. Fiz.*]{} [**31**]{} 389

Davidson D L and Brotzen F R 1970 [*Acta Metallurgica*]{} [**18**]{} 463

Smith H G, Wakabayashi N and Mostoller M 1976 *Proceedings of Second Rochester Conference on Superconductivity in d- and f-band metals* ed. Douglass D H (Plenum Press, New York) p. 223

I. M. Lifshitz, J. Exptl. Theoret. Phys. [**11**]{}, 1130 (1960) \[Zh. Eksp. Teor. Fiz. [**38**]{}, 1569 (1960)\].

G. E. Volovik, Low Temp. Phys. [**43**]{} 47 (2017).

N. B. Brandt, N. I. Ginzburg, B. G. Lazarev, L. S. Lazareva, V. I. Makarov, and T. A. Ignat’eva, J. Exptl. Theoret. Phys. [**22**]{}, 61 (1966) \[Zh. Eksp. Teor. Fiz. [**49**]{}, 85 (1965)\].

T. A. Ignat’eva, Phys. Solid State [**49**]{}, 403 (2007) \[Fiz. Tve. Tela [**49**]{}, 389 (2007)\].

M. Okada, E. Rotenberg, S. D. Kevan, J. Schafer, B. Ujfalussy, G. M. Stocks, B. Genatempo, E. Bruno, and E. W. Plummer, New J. Phys. [**15**]{}, 093010 (2013).

A. N. Velikodny, N. V. Zavaritskii, T. A. Ignat’eva, and A. A. Yurgens, Pis’ma Zh. Eksp. Teor. Fiz. [**43**]{}, 597 (1986).

Y. N. Gornsoostyrev, M. I. Katsnelson, G. V. Peschanskikh, and A. V. Trefilov, Phys. Stat. Sol. (b) [**164**]{}, 185 (2011).

N. V. Skorodumova, S. I. Simak, Ya M. Blanter, and Yu Kh. Vekilov, Pis’ma Zh. Eksp. Teor. Fiz. [**60**]{}, 549 (1994).

T. A. Ignat’eva, and A. N. Velikodny, Low Temp. Phys. [**28**]{}, 403 (2002).

N. V. Skorodumova, S. I. Simak, I. A. Abrikosov, B. Johansson, and Yu. Kh. Vekilov, Phys. Rev. B [**57**]{}, 14673 (1998).

Shyam Sundar, Soma Banik, L. S. Sharath Chandra, M. K. Chattopadhyay, Tapas Ganguli, G. S. Lodha, S. K. Pandey, D. M. Phase, and S. B. Roy, J. Phys. Condens. Mater [**28**]{}, 315502 (2016).

V. Tarenkov, A. Dyachenko, V. Krivoruchko, A. Shapovalov and M. Belogolovskii, J. Supercond. Novel Mag. (2019) (DOI: 10.1007/s10948-019-05297-0).

J. Wadsworth, T. G. Nieh, and J. J. Stephens, Scripta Metallurgica [**20**]{}, 637 (1986).

See, e.g., N. F. Mott, [*Metal-Insulator Transitions*]{} (2nd Ed., Taylor & Francis, London, 1990).

P. W. Anderson, Phys. Rev. [**109**]{}, 1492 (1958).

N. P. Dikiy, and T. A. Ignatyeva, Phys. Solid State [**48**]{}, 25 (2006).

U. Fano, Phys. Rev. [**124**]{}, 1866 (1961).

A. E. Miroshnichenko, S. Flach, Y. S. Kivshar, Rev. Mod. Phys. [**82**]{}, 2257 (2010).

O. V. Misochko, and M. V. Lebedeva, J. Exptl. Theoret. Phys. [**120**]{}, 651 (2015) \[Zh. Eksp. Teor. Fiz. [**147**]{}, 750 (2015)\].

L. C. Devis, J. Appl. Phys. [**59**]{}, R25 (1986).

J. W. Allen, in [*Ch. 6 of Synchrotron Radiation Research: Advances in surface and interface science*]{} (Plenum Press, New York) [**1**]{}, pp. 253-326 (1992).

D. Mondal, Soma Banik, C. Kamal, M. Nand, S. N. Jha, D. M. Phase, A. K. Sinha, A. Chakrabarti, A. Banerjee, and T. Ganguli, J. Alloy. Comp. [**688**]{}, 187 (2016).

Soma Banik, P. K. Das, A. Bendounan, I. Vobornik, A. Arya, N. Beaulieu, J. Fujii, A. Thamizhavel, P. U. Sastry, A. K. Sinha, D. M. Phase, and S. K. Deb, Sci. Rep. [**7**]{}, 4120 (2017).

L. S. Sharath Chandra, Shyam Sundar, M. K. Chattopadhyay, and S. B. Roy, (unpublished (2018)).

P. Evans, and P. A. Dowben, J. Phys. Condens. Mater [**29**]{}, 098001 (2017).

Y. Fukuda, F. Honda, and J. W. Rabalais, Surf. Sci. [**93**]{}, 338 (1980).

M. Chandrasekhar, J. B. Renucci, and M. Cardona, Phys. Rev. B [**17**]{}, 1623 (1978).

S. K. Saxena, P. Yogi, S. Mishra, H. M. Rai, V. Mishra, K. Warshi, S. Roy, P. Mondal, P. R. Sagdeo, and R. Kumar, Phys. Chem. Chem. Phys. [**19**]{}, 3178895 (2017).

W. R. Cox, and F. R. Brotzen, J. Phys. Chem. Solids [**33**]{}, 2311 (1972).

W. R. Cox, D. J. Hayes, and F. R. Brotzen, Phys. Rev. B [**7**]{}, 3580 (1973).
{ "pile_set_name": "ArXiv" }
---
author:
- 'Matthieu B[é]{}thermin'
- Emanuele Daddi
- Georgios Magdis
- Claudia Lagos
- Mark Sargent
- Marcus Albrecht
- Hervé Aussel
- Frank Bertoldi
- Véronique Buat
- Maud Galametz
- Sébastien Heinis
- Olivier Ilbert
- Alexander Karim
- Anton Koekemoer
- Cedric Lacey
- 'Emeric Le Floc’h'
- Felipe Navarrete
- Maurilio Pannella
- Corentin Schreiber
- Vernesa Smolčić
- Myrto Symeonidis
- Marco Viero
bibliography:
- 'biblio.bib'
date: 'Received 19 September 2014 / Accepted 11 November 2014'
title: Evolution of the dust emission of massive galaxies up to z=4 and constraints on their dominant mode of star formation
---

Introduction
============

Galaxy properties evolve rapidly across cosmic time. In particular, various studies have shown that the mean star formation rate (SFR) at fixed stellar mass increases by a factor of about 20 between z=0 and z=2 [e.g., @Noeske2007; @Elbaz2007; @Daddi2007; @Pannella2009; @Magdis2010; @Karim2011; @Elbaz2011; @Rodighiero2011; @Whitaker2012; @Heinis2014; @Pannella2014]. This very high SFR can be explained by either larger reservoirs of molecular gas or a higher star formation efficiency (SFE). Large gas reservoirs have been found in massive galaxies at high redshift [e.g., @Daddi2008; @Tacconi2010; @Daddi2010a; @Tacconi2013; @Aravena2013], which could imply high SFRs with an SFE similar to that of normal star-forming galaxies in the local Universe. On the other hand, follow-up of bright submillimeter galaxies (SMGs) revealed that their very intense SFR ($\sim$1000M$_\odot$/yr) is also driven by an SFE boosted by a factor of 10 with respect to normal star-forming galaxies in the local Universe [e.g., @Greve2005; @Frayer2008; @Daddi2009a; @Daddi2009b], likely induced by a major merger. This difference can be understood if we consider that galaxies are driven by two types of star formation activity: smooth processes fed by large reservoirs of gas in normal star-forming galaxies and boosted star formation in gas-rich mergers [@Daddi2010b; @Genzel2010].\
Using models based on the existence of a main sequence of star-forming galaxies, i.e., a tight correlation between SFR and stellar mass, and of outliers of this sequence with boosted sSFRs (SFR/M$_\star$), hereafter called starbursts, @Sargent2012 showed that the galaxies with the highest SFR mainly correspond to starbursts, while the bulk of the star formation budget ($\sim$85%) is hosted in normal star-forming galaxies. This approach allows us to better understand the heterogeneous characteristics of distant objects concerning their gas fraction and their SFE [@Sargent2014]. The quick rise of the sSFR would thus be explained by larger gas reservoirs in main-sequence galaxies. However, the most extreme SFRs observed in high-redshift starbursts would be caused by an SFE boost induced by major mergers.\
At high redshift, the gas mass is difficult to estimate. Two main methods are used. The first is based on the measurement of the intensity of rotational transitions (generally with J$_{\rm upper}<3$) of $^{12}$CO and an assumed CO-to-H$_2$ conversion factor [@Daddi2008; @Tacconi2010; @Saintonge2013; @Tacconi2013]. The main limitation of this method is the uncertainty on this conversion factor, which is expected to be different from the local calibrations in high-redshift galaxies with strongly sub-solar metallicities [@Bothwell2010; @Engel2010; @Genzel2012; @Tan2013; @Genzel2014].
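In practice, this first, CO-based method amounts to converting an observed line flux into a line luminosity and multiplying it by an assumed conversion factor. The sketch below illustrates the bookkeeping only; it uses the standard $L'_{\rm CO}$ relation of Solomon & Vanden Bout (2005) and a Milky-Way-like $\alpha_{\rm CO}$, and all function names and numerical values are illustrative assumptions, not taken from this paper.

```python
# Minimal sketch: CO line flux -> molecular gas mass (method 1 above).
# Assumes the standard L'_CO relation (Solomon & Vanden Bout 2005) and an
# illustrative Milky-Way-like alpha_CO; real applications must also correct
# low-J lines to CO(1-0) and adopt a metallicity-dependent alpha_CO.
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # cosmology adopted in this paper

def lprime_co(s_co_dv_jykms, nu_obs_ghz, z):
    """Line luminosity L'_CO in K km/s pc^2 from the velocity-integrated
    flux S_CO dv (Jy km/s) and the observed frequency (GHz)."""
    d_l = cosmo.luminosity_distance(z).to("Mpc").value
    return 3.25e7 * s_co_dv_jykms * nu_obs_ghz**-2 * d_l**2 * (1 + z)**-3

def m_mol(s_co_dv_jykms, nu_obs_ghz, z, alpha_co=4.4):
    """Molecular gas mass (M_sun) for an assumed alpha_CO (incl. helium)."""
    return alpha_co * lprime_co(s_co_dv_jykms, nu_obs_ghz, z)

# Example: a hypothetical CO(2-1) detection at z = 1.5
# (230.538 GHz rest frequency redshifted to ~92.2 GHz).
print(f"M_mol ~ {m_mol(0.5, 230.538 / 2.5, 1.5):.2e} M_sun")
```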
The second method is based on the estimate of the dust mass, which is then converted into gas mass using the locally-calibrated relation between the gas-to-dust ratio and the gas metallicity [e.g., @Munoz-Mateos2009; @Leroy2011; @Remy2014]. The main weakness of this method is the need for an accurate estimate of the gas metallicity and the possible evolution in normalization and scatter of the relation between gas-to-dust ratio and gas metallicity. This method was applied to individual galaxies at high redshift by @Magdis2011 [@Magdis2012b] and @Scoville2014, but also to mean spectral energy distributions (SEDs) measured through a stacking analysis [@Magdis2012b; @Santini2014]. This method has not been applied at redshifts higher than $\sim$2. The aim of this paper is to extend the studies of dust emission and gas fractions derived from dust masses to z$\sim$4 and to analyze possible differences in trends as redshift increases.\
In this paper, we combine the information provided by the *Herschel* data and a mass-selected sample of galaxies built from the UltraVISTA data [@Ilbert2013] in COSMOS to study the mean dust emission of galaxies up to z=4 (Sect.\[data\]). We measure the mean SED of galaxies on the main sequence and of strong starbursts using a stacking analysis. We then deduce the mean intensity of the radiation field and the mean dust mass in these objects using the @Draine2007 model (Sect.\[stackfit\]). We discuss the observed evolution of these quantities in Sect.\[results\] and the consequences for the nature of star formation processes at high redshift in Sect.\[discussion\]. Throughout this paper, we adopt a $\Lambda$CDM cosmology with $\Omega_m = 0.3$, $\Omega_\Lambda = 0.7$, $H_0 = 70$km/s/Mpc and a @Chabrier2003 initial mass function (IMF).\

Data
====

Stellar mass and photometric redshift catalog using UltraVISTA data {#masscat}
-------------------------------------------------------------------

Deep Y, J, H, and K$_{\rm s}$ data (m$_{\rm AB, 5\sigma} \sim$ 25 for the Y band and 24 for the others) were produced by the UltraVISTA survey [@McCracken2012]. The photometric redshift and the stellar mass of the detected galaxies were estimated using Le PHARE [@Arnouts1999; @Ilbert2006] as described in @Ilbert2013. The precision of the photometric redshifts at 1.5$<$z$<$4 is $\sigma_{\rm \Delta z / (1+z)}$ = 0.03. According to @Ilbert2013, this catalog is complete down to $10^{10.26}$M$_\odot$ at z$<$4. X-ray detected active galactic nuclei (AGNs) are also removed from our sample of star-forming galaxies, since the mid-infrared emission of these objects could be strongly affected by the AGN. Luminous X-ray obscured AGNs might still be present in the sample. However, their possible presence appears to have a limited impact on our work, as no mid-infrared excess is observed in the average SEDs measured by stacking (see Fig.\[fig:sedms\] and \[fig:sedsb\] and Sect.\[results\]).\
Since this paper studies star-forming galaxies, we focus only on objects selected as star-forming following the method of @Ilbert2010, based on the rest-frame $\rm NUV-r^{+}$ versus $\rm r^{+}-J$ colors and similar to the UVJ criterion of [@Williams2009]. The flux densities in each rest-frame band are extrapolated from the closest observer-frame band to minimize potential biases induced by the choice of template library. At z$>$1.5, 40-60% of the objects classified as passive by this color criterion have an sSFR$>10^{-11}$yr$^{-1}$ according to the SED fitting of the optical/near-IR data [@Ilbert2013 their Fig.3].
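As an illustration of this rest-frame color selection, the sketch below applies an NUVrJ-type cut. The boundary values are the ones quoted by Ilbert et al. (2013) for separating quiescent from star-forming galaxies, but the array names and the exact form of the cut should be treated as an assumption rather than the exact implementation used here.

```python
import numpy as np

def is_star_forming(m_nuv, m_r, m_j):
    """Rest-frame NUV-r+ versus r+-J selection of star-forming galaxies.

    Galaxies are flagged as quiescent when they lie in the red clump, i.e.
    M_NUV - M_r > 3 (M_r - M_J) + 1  and  M_NUV - M_r > 3.1
    (boundary values as in Ilbert et al. 2013); everything else is kept as
    star-forming.  All magnitudes are rest-frame absolute magnitudes.
    """
    nuv_r = np.asarray(m_nuv) - np.asarray(m_r)
    r_j = np.asarray(m_r) - np.asarray(m_j)
    quiescent = (nuv_r > 3.0 * r_j + 1.0) & (nuv_r > 3.1)
    return ~quiescent

# Example: two mock galaxies, one blue (star-forming) and one red (passive).
print(is_star_forming([2.0, 4.5], [0.5, 1.0], [0.0, 0.6]))  # [ True False]
```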
However, the sSFRs obtained by SED fitting are highly uncertain, because of the degeneracies with the dust attenuation. These peculiar objects are at least 10 times less numerous than the color-selected star-forming sample in all redshift bins. Including them or not in the sample has a negligible impact ($\sim0.25$$\sigma$) on the mean SEDs measured by stacking (see Sect.\[stackfit\]). We thus based our study only on the color-selected population for simplicity.\ *Spitzer*/MIPS data ------------------- The COSMOS field (2 deg$^2$) was observed by *Spitzer* at 24$\mu$m with the multiband imaging photometer (MIPS). A map and a catalog combined with the optical and near-IR data was produced from these observations [@Le_Floch2009]. The 1$\sigma$ point source sensitivity is $\sim$15$\mu$Jy and the full width at half maximum (FWHM) of the point spread function (PSF) is $\sim$6".\ *Herschel*/PACS data -------------------- The PACS (photodetecting array camera and spectrometer, @Poglitsch2010) evolutionary probe survey (PEP, @Lutz2011) mapped the COSMOS field with the *Herschel*[^1] space observatory [@Pilbratt2010] at 100 and 160$\mu$m with a point-source sensitivity of 1.5 mJy and 3.3 mJy and a PSF FWHM of 7.7“ and 12”, respectively. Sources and fluxes of the PEP catalog were extracted using the position of 24$\mu$m sources as a prior. This catalog is used only to select strong starbursts up to z$\sim$3. The 24$\mu$m prior should not induce any incompleteness of the strong-starburst sample, since their minimum expected 24$\mu$m flux is at least 2 times larger than the detection limit at this wavelength [^2].\ *Herschel*/SPIRE data --------------------- We also used *Herschel* data at 250$\mu$m, 350$\mu$m, and 500$\mu$m taken by the spectral and photometric imaging receiver (SPIRE, @Griffin2010) as part of the *Herschel* multitiered extragalactic survey (HerMES, @Oliver2012). The FWHM of the PSF is 18.2“, 24.9”, and 36.3", the 1$\sigma$ instrumental noise is 1.6, 1.3, and 1.9mJy/beam, and the 1$\sigma$ confusion noise is 5.8, 6.3, and 6.8mJy/beam [@Nguyen2010] at 250$\mu$m, 350$\mu$m, and 500$\mu$m, respectively. In this paper, we used the sources catalog extracted using as a prior the positions, the fluxes, the redshifts, and mean colors measured by stacking of 24$\mu$m sources, as described in @Bethermin2012b.\ LABOCA data ----------- The COSMOS field was mapped at 870$\mu$m by the large APEX bolometer Camera (LABOCA) mounted on the Atacama Pathfinder Experiment (APEX) telescope[^3] (PI: Frank Bertoldi, Navarrete et al. in prep.). We retrieved the raw data from the ESO Science Archive facility and reduced them with the publicly available CRUSH (version 2.12–2) pipeline [@Kovacs2006; @Kovacs2008]. We used the algorithm settings optimized for deep field observations[^4]. The output of CRUSH includes an intensity map and a noise map. The mapped area extends over approximately 1.4 square degrees with a non-uniform noise that increases toward the edges of the field. In this work we use the inner $\sim$0.7deg$^2$ of the map where a fairly uniform sensitivity of $\sim$4.3 mJy/beam is reached (Pannella et al. in prep.) with a smoothed beam size of $\sim$27.6". Contrary to SPIRE data, which are confusion limited, LABOCA data are noise limited and the maps are thus beam-smoothed to minimize their RMS. AzTEC data ---------- An area of 0.72deg$^2$ was scanned by the AzTEC bolometer camera mounted on the Atacama submillimeter telescope experiment (ASTE). 
The sensitivity in the center of the field is 1.23mJy RMS and the PSF FWHM after beam-smoothing is 34" [@Aretxaga2011].\

Methods {#stackfit}
=======

Sample selection
----------------

![\[fig:massdistr\] Stellar mass distribution of our samples of star-forming galaxies in the various redshift bins we used. Only galaxies more massive than our cut of $3\times 10^{10}\,\rm M_\odot$ are represented. The first bin contains fewer objects than the second one because our cut falls in the middle of the first one. The arrows indicate the mean stellar mass in each redshift bin.](Mass_distrib.eps)

In this paper, we base our analysis on mass-selected samples of star-forming galaxies (see Sect.\[masscat\]). We chose the same stellar mass cut of $3 \times 10^{10}$M$_\odot$ at all redshifts to be complete up to z$\sim$4. We could have used a lower mass cut at lower redshifts, but we chose this single cut for all redshifts to be able to interpret the observed evolution of the various physical parameters of the galaxies in our sample more easily. This cut is slightly higher than the 90% completeness limit at z$\sim$4 cited in @Ilbert2013 [1.8$\times$10$^{10}$M$_\odot$] and implies a high completeness of our sample, which limits potential biases induced by the input catalog on the results of our stacking analysis [e.g., @Heinis2013]. The exact choice of our stellar mass cut has a negligible impact on the mean SEDs measured by stacking: we tested a mass cut of $2 \times 10^{10}$M$_\odot$ and $5 \times 10^{10}$M$_\odot$ and found that, after renormalization at the same L$_{\rm IR}$, the SEDs are similar ($\chi_{\rm red}^2$ = 0.57 and 0.79, respectively). These results agree with @Magdis2012b, who did not find any evidence of a dependence of the main-sequence SED on stellar mass at fixed redshift. The mass distribution of star-forming galaxies does not vary significantly with redshift, except in normalization (@Ilbert2013 and Fig.\[fig:massdistr\]). The average stellar mass at all redshifts is between $10^{10.75}$M$_\odot$ and $10^{10.80}$M$_\odot$ (Fig.\[fig:massdistr\] and Table\[tab:physpar\]).\
Star-forming galaxies whose stellar mass is larger than our cut do not correspond to the same populations at z=4 and z=0. The massive objects at z=4 are formed in dense environments, corresponding to the progenitors of today’s clusters and massive groups [e.g., @Conroy2009; @Moster2010; @Behroozi2012a; @Bethermin2013; @Bethermin2014]. Most of these objects are quenched between z=4 and z=0 [e.g., @Peng2010]. In contrast, our mass cut at z=0 corresponds to Milky-Way-like galaxies. At all redshifts, this cut is just below the mass corresponding to the maximal efficiency of star formation inside halos (defined here as the ratio between stellar mass and halo mass, @Moster2010 [@Behroozi2010; @Bethermin2012b; @Wang2013; @Moster2013]).\
Our stellar mass cut is slightly below the knee of the mass function of star-forming galaxies [@Ilbert2013]. The population we selected thus hosts the majority ($>$50%) of the stellar mass in star-forming galaxies. Since there is a correlation between stellar mass and SFR, we are probing the population responsible for a large fraction of the star formation (40-65% depending on the redshift according to the @Bethermin2012b model, see also @Karim2011). Our approach is thus different from @Santini2014 who explore in detail how the SEDs evolve at z$<$2.5 in the SFR-M$_\star$ plane using a combination of UV-derived and 24$\mu$m-derived SFRs.
We aim to push our analysis to higher redshifts and we thus use this more simple and redshift-invariant selection to allow an easier interpretation and to limit potential selection biases. In addition to this mass selection, we divide our sample by intervals of redshift. The choice of their size is a compromise between large intervals to have a good signal-to-noise ratio at each wavelength and small intervals to limit the broadening of the SEDs because of redshift evolution within the broad redshift bin.\ We also removed strong starbursts from our sample (sSFR$>$10 sSFR$_{\rm MS}$) and studied them separately. These objects are selected using the photometric catalogs described in Sect.\[data\]. For the sources which are detected at 5$\sigma$ at least in two *Herschel* bands, we fitted the SEDs with the template library of @Magdis2012b allowing the mean intensity of the radiation field $\langle U \rangle$ to vary by $\pm$0.6dex (3$\sigma$ of the scatter used in the @Bethermin2012c model). These criteria of two detections at different wavelengths and the high reliability of the detections prevent biasing of the starbursts towards positive fluctuations of the noise in the maps and limit the flux boosting effect. We then estimated the SFR from the infrared luminosity, L$_{\rm IR}$, using the @Kennicutt1998 relation. We performed a first analysis using the same evolution of the main-sequence (sSFR$_{\rm MS}$ versus z) as in @Bethermin2012c to select sSFR$>$10 sSFR$_{\rm MS}$ objects. We then fit the measured evolution of the main-sequence found by a first stacking analysis (see Sect.\[sect:stacking\] and Sect.\[sect:sedfit\]) to prepare the final sample for our analysis. We could have chosen a lower sSFR cut corresponding to 4 times the value at the center of the main-sequence as in @Rodighiero2011, but the sample would be incomplete at z$>$1 because of the flux limit of the infrared catalogs.\ ![\[SBcomp\] The thick red solid line represents the luminosity limit corresponding to a criterion of a 5$\sigma$ detection in at least two *Herschel* bands. The other solid lines are the limits for a detection at only one given wavelength (purple for 100$\mu$m, blue for 160$\mu$m, turquoise for 250$\mu$m, green for 350$\mu$m, orange for 500$\mu$m). The dashed, dot-dash, and three-dot-dash lines indicate the infrared luminosity of a galaxie of $3 \times 10^{10}$M$_\odot$ (our mass cut) at the center of the main sequence, a factor of 4 above it, and a factor of 10 above it, respectively.](LIRlim.eps) Fig.\[SBcomp\] shows the luminosity limit corresponding to a detection at 5$\sigma$ at two wavelengths or more. This was computed using both the starburst and the main-sequence templates of the @Magdis2012b SED library. This library contains different templates for main-sequence and starburst galaxies. The main-sequence template evolves with redshift, but not the starburst one. The lines correspond to the highest luminosity limit found using these two templates for each wavelength, which is the most pessimistic case. We also computed the infrared luminosity associated with a galaxy of $3\times 10^{10}\,\rm M_\odot$, i.e., our mass limit, on the main sequence (dashed line), a factor of 4 above it (dot-dash line), and a factor of 10 above it (three-dot-dash line). All the M$_\star > 3\times 10^{10}\,\rm M_\odot$ strong starbursts (sSFR$>$10 sSFR$_{\rm MS}$) should thus be detected in two or more *Herschel* bands below z=4. There is only one starburst detected in the 3$<$z$<$4 bin. 
We thus do not analyze this bin, because of its lack of statistical significance. The other bins contain 3, 6, 6, and 8 strong starbursts, respectively, by increasing redshift.\ The sample of main-sequence galaxies is contaminated by the starbursts which have sSFR$<$10 sSFR$_{\rm MS}$ . We expect that this contamination is negligible, since the contribution of all starbursts to the infrared luminosity density is lower than 15% [@Rodighiero2011; @Sargent2012]. To check this hypothesis, we statistically corrected for the contribution of the remaining starbursts with sSFR$<$10 sSFR$_{\rm MS}$ based on the @Bethermin2012b counts model. We assumed both the SED library used for the model and the average SED of strong starbursts found in this study. We found that this statistical subtraction only affected our measurements at most at the 0.2$\sigma$ level. Consequently, we have neglected this contamination in the rest of our study.\ [ccccccccc]{} Redshift & S$_{24}$ & S$_{100}$ & S$_{160}$ & S$_{250}$ & S$_{350}$ & S$_{500}$ & S$_{850}$ & S$_{1100}$\ & $\mu$Jy & mJy& mJy& mJy& mJy& mJy& mJy & mJy\ \ 0.25$<$z$<$0.50 & 410$\pm$23 & 11.87$\pm$0.76 & 23.30$\pm$1.49 & 12.54$\pm$0.97 & 6.43$\pm$0.53 & 2.64$\pm$0.32 & -0.18$\pm$0.23 & 0.21$\pm$0.08\ 0.50$<$z$<$0.75 & 247$\pm$13 & 6.37$\pm$0.43 & 13.82$\pm$0.86 & 9.45$\pm$0.72 & 5.88$\pm$0.46 & 2.57$\pm$0.25 & 0.54$\pm$0.15 & 0.18$\pm$0.06\ 0.75$<$z$<$1.00 & 221$\pm$10 & 4.19$\pm$0.26 & 9.79$\pm$0.60 & 7.75$\pm$0.59 & 5.92$\pm$0.45 & 3.06$\pm$0.25 & 0.53$\pm$0.19 & 0.30$\pm$0.06\ 1.00$<$z$<$1.25 & 144$\pm$7 & 3.31$\pm$0.23 & 8.22$\pm$0.50 & 6.93$\pm$0.53 & 5.78$\pm$0.46 & 3.00$\pm$0.25 & 0.21$\pm$0.15 & 0.30$\pm$0.05\ 1.25$<$z$<$1.50 & 96$\pm$5 & 2.36$\pm$0.14 & 6.70$\pm$0.42 & 5.99$\pm$0.45 & 5.46$\pm$0.41 & 3.17$\pm$0.25 & 0.44$\pm$0.13 & 0.32$\pm$0.04\ 1.50$<$z$<$1.75 & 110$\pm$6 & 1.80$\pm$0.12 & 4.81$\pm$0.33 & 4.79$\pm$0.38 & 4.64$\pm$0.36 & 3.00$\pm$0.25 & 0.54$\pm$0.11 & 0.34$\pm$0.04\ 1.75$<$z$<$2.00 & 113$\pm$5 & 1.31$\pm$0.10 & 3.51$\pm$0.25 & 4.10$\pm$0.32 & 4.11$\pm$0.33 & 2.94$\pm$0.24 & 0.72$\pm$0.12 & 0.32$\pm$0.04\ 2.00$<$z$<$2.50 & 101$\pm$5 & 1.16$\pm$0.08 & 3.28$\pm$0.22 & 4.17$\pm$0.32 & 4.38$\pm$0.34 & 3.25$\pm$0.25 & 0.73$\pm$0.12 & 0.48$\pm$0.04\ 2.50$<$z$<$3.00 & 59$\pm$3 & 0.79$\pm$0.07 & 2.59$\pm$0.22 & 3.41$\pm$0.29 & 3.85$\pm$0.31 & 3.03$\pm$0.26 & 0.87$\pm$0.17 & 0.55$\pm$0.05\ 3.00$<$z$<$3.50 & 47$\pm$5 & 0.61$\pm$0.10 & 2.28$\pm$0.33 & 2.90$\pm$0.30 & 3.65$\pm$0.35 & 2.95$\pm$0.31 & 0.56$\pm$0.18 & 0.44$\pm$0.07\ 3.50$<$z$<$4.00 & 29$\pm$7 & 0.22$\pm$0.20 & 1.68$\pm$0.55 & 2.60$\pm$0.45 & 3.01$\pm$0.51 & 2.52$\pm$0.50 & 0.24$\pm$0.33 & 0.30$\pm$0.14\ \ 0.50$<$z$<$1.00 & 1241$\pm$329 & 57.48$\pm$15.98 & 86.33$\pm$18.31 & 41.57$\pm$7.83 & 16.52$\pm$3.53 & 9.64$\pm$4.73 & 6.91$\pm$5.92 & 2.40$\pm$1.57\ 1.00$<$z$<$1.50 & 264$\pm$77 & 30.59$\pm$3.26 & 64.44$\pm$6.97 & 38.44$\pm$4.92 & 24.79$\pm$3.98 & 13.90$\pm$4.97 & 0.12$\pm$2.62 & 1.36$\pm$0.78\ 1.50$<$z$<$2.00 & 912$\pm$179 & 23.51$\pm$5.04 & 62.46$\pm$13.80 & 42.47$\pm$8.02 & 30.99$\pm$9.27 & 21.46$\pm$7.09 & 2.10$\pm$3.37 & 3.90$\pm$1.16\ 2.00$<$z$<$3.00 & 629$\pm$193 & 13.15$\pm$4.91 & 39.56$\pm$7.77 & 32.25$\pm$4.37 & 35.72$\pm$5.40 & 28.52$\pm$5.20 & 7.98$\pm$2.97 & 5.08$\pm$1.02\ [cccccccc]{} Redshift & log(M$_\star$) & log(L$_{\rm IR}$) & SFR & log(M$_{\rm dust}$) & $\langle U \rangle$ & log(M$_{\rm mol}$) & f$_{\rm mol}$\ & log(M$_\odot$) & log(L$_\odot$) & M$_\odot$/yr & log(M$_\odot$) & & log(M$_\odot$) &\ \ 0.25$<$z$<$0.50 & 10.77 & 10.92$_{-0.04}^{+0.03}$ & 
8.3$_{-0.7}^{+0.6}$ & 8.09$_{-0.16}^{+0.12}$ & 5.50$_{-1.50}^{+3.10}$ & 10.04$_{-0.22}^{+0.19}$ & 0.16$_{-0.06}^{+0.07}$\ 0.50$<$z$<$0.75 & 10.76 & 11.19$_{-0.04}^{+0.08}$ & 15.6$_{-1.5}^{+3.3}$ & 8.24$_{-0.15}^{+0.19}$ & 7.23$_{-2.47}^{+3.82}$ & 10.23$_{-0.21}^{+0.24}$ & 0.23$_{-0.07}^{+0.11}$\ 0.75$<$z$<$1.00 & 10.75 & 11.45$_{-0.09}^{+0.07}$ & 27.9$_{-5.4}^{+4.7}$ & 8.44$_{-0.24}^{+0.16}$ & 7.80$_{-2.69}^{+5.44}$ & 10.48$_{-0.28}^{+0.22}$ & 0.35$_{-0.13}^{+0.12}$\ 1.00$<$z$<$1.25 & 10.77 & 11.56$_{-0.04}^{+0.10}$ & 36.4$_{-3.3}^{+9.7}$ & 8.29$_{-0.11}^{+0.28}$ & 15.05$_{-6.68}^{+5.74}$ & 10.34$_{-0.18}^{+0.32}$ & 0.27$_{-0.07}^{+0.16}$\ 1.25$<$z$<$1.50 & 10.76 & 11.69$_{-0.04}^{+0.07}$ & 48.6$_{-4.2}^{+8.9}$ & 8.37$_{-0.10}^{+0.22}$ & 16.52$_{-6.47}^{+5.45}$ & 10.46$_{-0.18}^{+0.26}$ & 0.33$_{-0.08}^{+0.15}$\ 1.50$<$z$<$1.75 & 10.77 & 11.77$_{-0.05}^{+0.05}$ & 58.9$_{-5.9}^{+7.5}$ & 8.45$_{-0.21}^{+0.18}$ & 16.96$_{-6.15}^{+10.90}$ & 10.55$_{-0.26}^{+0.23}$ & 0.37$_{-0.13}^{+0.13}$\ 1.75$<$z$<$2.00 & 10.79 & 11.81$_{-0.03}^{+0.05}$ & 64.4$_{-4.3}^{+8.2}$ & 8.49$_{-0.25}^{+0.18}$ & 16.96$_{-6.15}^{+15.24}$ & 10.63$_{-0.29}^{+0.23}$ & 0.41$_{-0.15}^{+0.13}$\ 2.00$<$z$<$2.50 & 10.79 & 11.99$_{-0.02}^{+0.03}$ & 97.4$_{-5.3}^{+7.7}$ & 8.53$_{-0.19}^{+0.13}$ & 22.58$_{-6.27}^{+14.42}$ & 10.81$_{-0.24}^{+0.20}$ & 0.51$_{-0.13}^{+0.11}$\ 2.50$<$z$<$3.00 & 10.80 & 12.11$_{-0.04}^{+0.03}$ & 130.0$_{-12.6}^{+10.7}$ & 8.48$_{-0.11}^{+0.23}$ & 33.75$_{-14.29}^{+12.85}$ & 10.88$_{-0.18}^{+0.27}$ & 0.55$_{-0.10}^{+0.15}$\ 3.00$<$z$<$3.50 & 10.77 & 12.25$_{-0.05}^{+0.05}$ & 178.5$_{-18.4}^{+22.4}$ & 8.48$_{-0.12}^{+0.10}$ & 48.99$_{-11.32}^{+23.99}$ & 10.99$_{-0.19}^{+0.18}$ & 0.62$_{-0.11}^{+0.09}$\ 3.50$<$z$<$4.00 & 10.80 & 12.34$_{-0.12}^{+0.07}$ & 219.0$_{-54.4}^{+40.2}$ & 8.39$_{-0.50}^{+0.33}$ & 72.98$_{-36.98}^{+167.95}$ & 11.06$_{-0.52}^{+0.36}$ & 0.65$_{-0.29}^{+0.16}$\ \ 0.50$<$z$<$1.00 & 10.57 & 12.25$_{-0.08}^{+0.08}$ & 179.1$_{-150.5}^{+215.0}$ & 8.65$_{-0.04}^{+0.19}$ & 29.80$_{-11.77}^{+9.60}$ & 10.04$_{-0.24}^{+0.30}$ & 0.29$_{-0.10}^{+0.16}$\ 1.00$<$z$<$1.50 & 10.60 & 12.55$_{-0.05}^{+0.03}$ & 350.8$_{-314.5}^{+376.4}$ & 8.99$_{-0.01}^{+0.09}$ & 26.92$_{-6.92}^{+2.88}$ & 10.23$_{-0.23}^{+0.25}$ & 0.45$_{-0.13}^{+0.14}$\ 1.50$<$z$<$2.00 & 10.64 & 12.93$_{-0.18}^{+0.07}$ & 860.1$_{-567.4}^{+1006.8}$ & 9.24$_{-0.09}^{+0.62}$ & 37.68$_{-28.40}^{+11.32}$ & 10.48$_{-0.25}^{+0.66}$ & 0.58$_{-0.14}^{+0.28}$\ 2.00$<$z$<$3.00 & 10.69 & 13.10$_{-0.24}^{+0.07}$ & 1260.0$_{-728.1}^{+1487.1}$ & 9.64$_{-0.47}^{+0.37}$ & 22.22$_{-12.94}^{+50.77}$ & 10.34$_{-0.52}^{+0.44}$ & 0.75$_{-0.28}^{+0.14}$\ ![image](SED_obs.eps) ![image](SED_zslice_fit_SBcomp.eps) ![image](SED_zslice_fit_SB10.eps) Stacking analysis {#sect:stacking} ----------------- We use a similar stacking approach as in @Magdis2012b to measure the mean SEDs of our sub-samples of star-forming galaxies from the mid-infrared to the millimeter domain. Different methods are used at the various wavelength to optimally extract the information depending if the data are confusion or noise limited. At 24$\mu$m, 100$\mu$m, and 160$\mu$m, we produced stacked images using the IAS stacking library [@Bavouzet2008; @Bethermin2010a]. The flux is then measured using aperture photometry with the same parameters and aperture corrections as @Bethermin2010a at 24$\mu$m. At 100$\mu$m and 160$\mu$m, we used a PSF fitting technique. 
A correction of 10% is applied to take into account the effect of the filtering of the data on the photometric measurements of faint, non-masked sources [@Popesso2012]. At 250$\mu$m, 350$\mu$m, and 500$\mu$m, the photometric uncertainties are not dominated by instrumental noise but by the confusion noise caused by neighboring sources [@Dole2003; @Nguyen2010]. We thus measured the mean flux of the sources computing the mean flux in the pixels centered on a stacked source following @Bethermin2012b. This method minimizes the uncertainties and a potential contamination caused by the clustering of galaxies [@Bethermin2010b]. Finally, we used the same method, but on the beam-convolved map, for LABOCA and AzTEC data as they are noise limited and lower uncertainties are obtained after this beam smoothing. LABOCA and AzTEC maps do not cover the whole area. We thus only stack sources in the covered region to compute the mean flux densities of our various sub-samples. The source selection criteria being exactly the same inside and outside the covered area, this should not introduce any bias.\ These stacking methods can be biased if the stacked sources are strongly clustered or very faint. This bias is caused by the greater probability of finding a source close to another one in the stacked sample compared to a random position. This effect has been discussed in detail by several authors [e.g., @Bavouzet2008; @Bethermin2010b; @Kurczynski2010; @Bethermin2012b; @Bourne2012; @Viero2013b]. In @Magdis2012b, the authors estimated that this bias is lower than the 1$\sigma$ statistical uncertainties and was not corrected. The number of sources to stack in COSMOS compared to the GOODS fields used by @Magdis2012b is much larger and hence the signal-to-noise ratio is much better. The bias caused by clustering is thus non-negligible in COSMOS. Because of the complex edge effects caused by the absence of data around bright stars, the methods using the position of the sources to deblend the contamination caused by the clustering cannot be applied [@Kurczynski2010; @Viero2013b]. Consequently, we developed a method based on realistic simulations of the *Spitzer*, *Herschel*, LABOCA, and AzTEC maps to correct this effect, which induces biases up to 50% at 500$\mu$m around z$\sim$2. The technical details and discussion of these corrections are presented in Appendix\[Annexestacking\].\ The uncertainties on the fluxes are measured using a bootstrap technique [@Jauzac2011]. This method takes into account both the errors coming from the instrumental noise, the confusion, and the sample variance of the galaxy population [@Bethermin2012b]. These uncertainties are combined quadratically with those associated with the calibration and the clustering correction.\ Mean physical properties from SED fitting {#sect:sedfit} ----------------------------------------- We interpreted our measurements of the mean SEDs using the @Draine2007 model as in @Magdis2012b. This model, developed initially to study the interstellar medium in the Milky Way and in nearby galaxies, takes into account the heterogeneity of the intensity of the radiation field. The redshift slices we used have a non-negligible width. To account for this, we convolve the model by the redshift distribution of the galaxies before fitting the data. The majority of the redshifts in our sample are photometric. We thus sum the probability distribution function (PDF) of the redshifts of all the sources in a sub-sample to estimate its intrinsic redshift distribution. 
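To make this last step concrete, the sketch below shows one way to weight a rest-frame model SED by the stacked redshift distribution (the summed photo-z PDFs) before comparing it with the stacked photometry. The template and N(z) used here are placeholders, and only the K-correction-like bookkeeping reflects the procedure described above; it is a sketch, not the exact implementation used in this paper.

```python
# Sketch: predict the mean observed flux density of a stacked sample by
# weighting a rest-frame template with the redshift distribution N(z)
# obtained from the summed photometric-redshift PDFs (placeholder inputs).
import numpy as np
from astropy.cosmology import FlatLambdaCDM
from astropy import units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

def observed_flux(l_nu_rest, lam_obs_um, z):
    """Observed flux density (Jy) of a template L_nu(lambda_rest) [erg/s/Hz]
    seen at redshift z through an idealized, delta-function band."""
    lam_rest_um = lam_obs_um / (1.0 + z)
    d_l = cosmo.luminosity_distance(z).to(u.cm).value
    f_nu = (1.0 + z) * l_nu_rest(lam_rest_um) / (4.0 * np.pi * d_l**2)
    return f_nu / 1e-23  # erg/s/cm^2/Hz -> Jy

def mean_flux_over_nz(l_nu_rest, lam_obs_um, z_grid, n_z):
    """Average the predicted flux over the normalized N(z) of the bin."""
    weights = np.asarray(n_z) / np.sum(n_z)
    return np.sum([w * observed_flux(l_nu_rest, lam_obs_um, z)
                   for z, w in zip(z_grid, weights)])

# Toy example: a featureless placeholder template and a narrow N(z).
toy_template = lambda lam_um: 1e30 * (lam_um / 100.0) ** -1.5
z_grid = np.linspace(1.8, 2.2, 21)
n_z = np.exp(-0.5 * ((z_grid - 2.0) / 0.1) ** 2)
print(f"S_500um ~ {mean_flux_over_nz(toy_template, 500.0, z_grid, n_z):.3e} Jy")
```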
The uncertainties on the physical parameters are estimated using the same Monte Carlo method as in @Magdis2012b. The uncertainties on each parameter takes into account the potential degeneracies with the others, i.e., they are the marginalized uncertainties on each individual parameters. Our good sampling of the dust SEDs (8 photometric points between 24$\mu$m and 1.1mm including at least six detections) allows us to break the degeneracy between the dust temperature and the dust mass which is found if only (sub-)mm datapoints are used.\ Instead of using the three parameters describing the distribution of the intensity of the radiation field U of the @Draine2007 model (the minimal radiation field $\rm U_{min}$, the maximal one $\rm U_{max}$, and the slope of the assumed power-law distribution between these two values $\alpha$), we considered only the mean intensity of the radiation field $\langle U \rangle$ for simplicity. The other parameters derived from the fit and used in this paper are the bolometric infrared luminosity integrated between 8 and 1000$\mu$m (L$_{\rm IR}$) and the dust mass (M$_d$). The SFR is derived from L$_{\rm IR}$ using the @Kennicutt1998 conversion factor ($1 \times 10^{-10}$$\rm M_\odot \, yr^{-1} \, L_\odot^{-1}$ after conversion from Salpeter to Chabrier IMF), since the dust-obscured star formation vastly dominates the unobscured component at z$<$4 given the mass-scale considered [@Heinis2013; @Heinis2014; @Pannella2014]. The sSFR is computed using the later SFR and the mean stellar mass extracted from the @Ilbert2013 catalog. The uncertainties on the derived physical parameters presented in the various figures and tables of this paper are the uncertainties on the average values. The dispersion of physical properties inside a population is difficult to measure by stacking and we did not try to compute it in this paper (see Sect.\[discussion\]).\ The residuals of these fits are presented in Appendix\[sect:residuals\]. Tables\[tab:fluxes\] and \[tab:physpar\] summarize the average photometric measurements and the recovered physical parameters, respectively.\ Results ======= Evolution of the mean SED of star-forming galaxies -------------------------------------------------- Figure\[fig:sedobs\] summarizes the results of our stacking analysis. For the main-sequence sample, the flux density varies rapidly with redshift in the PACS 100$\mu$m band, while it is almost constant in the SPIRE 500$\mu$m band. The peak of the flux density distribution in the rest frame moves from $\sim$120$\mu$m to 70$\mu$m between z=0 and z=4. This shift with redshift was already observed at z$\lesssim$2 for mass-selected stacked samples [@Magdis2012b] or a *Herschel*-detected sample [@Lee2013; @Symeonidis2013]. We found no evidence of an evolution of the position of this peak ($\sim$70$\mu$m) for the sample of strong starbursts.\ Figure\[fig:sedms\] and \[fig:sedsb\] show the mean intrinsic luminosity (in $\nu$L$_\nu$ units, the peak of the SEDs is thus shifted toward shorter wavelengths compared with L$_\nu$ units) of our samples of massive star-forming galaxies (since this sample is dominated by main-sequence galaxies, hereafter we call it main-sequence sample) and the fit by the @Draine2007 model. We also observe a strong evolution of the position of the peak of the thermal emission of dust in main-sequence galaxies from $\sim$80$\mu$m at z$\sim$0.4 to $\sim$30$\mu$m at z$\sim$3.75 in $\nu$L$_\nu$ units. 
The SEDs of strong starbursts have a much more modest evolution (from 50$\mu$m at to 30$\mu$m). The mean luminosity of the galaxies also increases very rapidly with redshift for both main-sequence and strong starburst galaxies.\ At z$>$2, we find that the peak of the dust emission tends to be broader than at lower redshift. The broadening of the mean SEDs induced by the size of the redshift bins has a major impact only on the mid-infrared, where the polycyclic aromatic hydrocarbon (PAH) features are washed out (see black and blue lines in Fig.\[fig:sedms\] and \[fig:sedsb\]), and cannot fully explain why the far-IR peak is broader at higher redshifts. The @Draine2007 model reproduces this broadening by means of a higher $\gamma$ coefficient, i.e., a stronger contribution of regions with a strong heating of the dust. This is consistent with the presence of giant star-forming clumps in high-redshift galaxies [e.g., @Bournaud2007; @Genzel2006]. The best-fit models at high z presents two breaks around 30$\mu$m and 150$\mu$m, which could be artefacts caused by the sharp cuts of the U distribution at its extremal values in the @Draine2007 model.\ Evolution of the specific star formation rate --------------------------------------------- From the fit of the SEDs, we can easily derive the evolution of the mean specific star formation rate of our mass-selected sample with redshift. The results are presented in Fig.\[fig:ssfr\]. The strong starbursts lie about a factor of 10 above the main-sequence, demonstrating that this population is dominated by objects just above our cut of 10 sSFR$_{\rm MS}$. Our results can be fitted by an evolution in redshift as (0.061$\pm$0.006Gyr$^{-1}$)$\times$(1+z)$^{2.82\pm0.12}$ at z$<$2 and as (1+z)$^{2.2\pm0.3}$ at z$>$2. We compared our results with the compilation of measurements of @Sargent2014 at M$_\star = 5 \times 10^{10}$M$_\odot$. At z$<$1.5, our results agree well with the previous measurements. Between z=1.5 and z=3.5, our new measurements follow the lower envelop of the previous measurements. This mild disagreement could have several causes.\ First of all, the clustering effect was not taken into account by the previous analyses based on stacking. This effect is stronger at high redshift, because the bias[^5] of both infrared and mass-selected galaxies increases with redshift [e.g., @Bethermin2013]. In addition, the SEDs peak at a longer wavelength, where the bias is stronger owing to beam size (see Sect.\[sect:simu\]). The tension with the results based on UV-detected galaxies could be explained by a slight incompleteness of the UV-detected samples at low sSFR or a small overestimate of the dust corrections. There could also be effects caused by the different techniques and assumptions used to determine the stellar masses in the various fields (star formation histories, PSF-homogenized photometry or not, presence of nebular emission in the highest redshift bins, template libraries, etc.). Finally, this difference could also be an effect of the variance. These discrepancies on the estimates of sSFRs will be discussed in detail in @Schreiber2014.\ \[fig:sSFR\] ![\[fig:ssfr\] Evolution of the mean sSFR in main-sequence galaxies (blue triangles) and strong starbursts (red squares). The gray diamonds are a compilation of measurements at the same mass performed by @Sargent2014. 
The blue line is the best fit to our data.](sSFR_z.eps) Evolution of the mean intensity of the radiation field {#sect:U} ------------------------------------------------------ ![\[fig:U\] Evolution of the mean intensity of the radiation field $\langle U \rangle$ in main-sequence galaxies (blue triangles) and strong starbursts (red squares). The black diamonds are the measurements presented in @Magdis2012b based on a similar analysis but in the GOODS fields. The orange asterisk is the mean value found for the local ULIRG sample of @Da_Cunha2008b (see also @Magdis2012b). The black circle is the average value in HRS galaxies [@Ciesla2014]. The solid and dashed lines represent the evolutionary trends expected for a broken and universal FMR, respectively (see Sect.\[sect:U\]). The blue dotted line is the best fit of the evolution of the main-sequence galaxies ($(3.0\pm1.1) \times (1+z)^{1.8 \pm 0.4}$) and the red dotter line the best fit of the strong starburst data by a constant ($31\pm3$).](U_z.eps) The evolution of the mean intensity of the radiation field has different trends in main-sequence galaxies than in strong starbursts (see Fig.\[fig:U\]). This quantity is strongly correlated to the temperature of the dust. We found a rising $\langle U \rangle$ with increasing redshift in main-sequence galaxies up to z=4 ($(3.0\pm1.1) \times (1+z)^{1.8\pm0.4}$), confirming and extending the finding of @Magdis2012b at higher redshift. Other studies [e.g., @Magnelli2013; @Genzel2014] found an increase of the dust temperature with redshift in mass-selected samples.\ The evolution of $\langle U \rangle$ we found can be understood from a few simple assumptions on the evolution of the gas metallicity and the star-formation efficiency (SFE) of galaxies. As shown by @Magdis2012b, $\langle U \rangle$ is proportional to L$_{\rm IR}$/M$_{\rm dust}$. We can also assume that $$L_{\rm IR} \propto \textrm{SFR} \propto M_{\rm mol}^{1/s},$$ where the left-side of the proportionality is the well-established @Kennicutt1998 relation. The right-side of the proportionality is the integrated version of the Schmidt-Kennicutt relation which links the SFR to the mass of molecular gas in a galaxy (M$_{\rm mol}$). @Sargent2014 found a best-fit value for $s$ of 0.83 compiling a large set of public data about low- and high-redshift main-sequence galaxies. The molecular gas mass can also be connected to the gas metallicity Z and the dust mass [e.g., @Leroy2011; @Magdis2012b], $$M_{\rm dust} \propto Z(M_\star, \textrm{SFR}) \times M_{\rm mol},$$ where $Z(M_\star, \textrm{SFR})$ is the gas metallicity which can be connected to M$_\star$ and SFR through the fundamental metallicity relation (FMR, @Mannucci2010). There is recent evidence showing that this relation breaks down at high redshifts. For instance, @Troncoso2014 measured a $\sim$0.5dex lower normalization at z$\sim$3.4 compared to the functional form of the FMR at low redshift. @Amorin2014 found the same offset in a lensed galaxy at z = 3.417. At z$\sim$2.3, @Steidel2014 [see also @Cullen2014] found an offset of 0.34–0.38dex in the mass-metallicity relation and only half of this difference can be explained by the increase of SFR at fixed stellar mass using the FMR. Finally, a break in the metallicity relation is also observed in low mass (log(M$_\star$/M$_\odot$)$\sim$8.5) damped Lyman $\alpha$ absorbers around z=2.6 [@Moller2013]. 
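A minimal sketch of how such a redshift-dependent offset can be folded into the FMR-based metallicity, and propagated to $\langle U \rangle$ through the proportionalities above, is given below. The FMR itself is left as a toy placeholder, the offset anticipates the "broken FMR" defined in the next paragraph, and all function names and example numbers are purely illustrative.

```python
# Sketch of the metallicity bookkeeping used below: a placeholder FMR, the
# redshift-dependent offset defining the "broken" FMR, and the scaling
# <U> ~ SFR^(1-s) / Z with s = 0.83 (Sargent et al. 2014).
# The constant returned by z_fmr() is a toy stand-in, not a calibration.

def z_fmr(m_star, sfr):
    """Placeholder for 12+log(O/H) from the fundamental metallicity
    relation; plug in a real calibration (e.g. Mannucci et al. 2010)."""
    return 8.7  # toy constant, for illustration only

def z_broken_fmr(m_star, sfr, z):
    """'Broken' FMR: metallicity lowered by 0.30 dex per unit redshift
    above z = 1.7, as adopted in the text."""
    met = z_fmr(m_star, sfr)
    return met + 0.30 * (1.7 - z) if z > 1.7 else met

def mean_u_ratio(sfr, met, sfr_ref, met_ref, s=0.83):
    """<U> relative to a reference epoch, from <U> ~ SFR^(1-s) / Z,
    with Z taken linear, i.e. 10**(12+log(O/H))."""
    return (sfr / sfr_ref) ** (1.0 - s) * 10.0 ** (met_ref - met)

# Illustrative numbers only: mean SFRs of the stacked main-sequence sample
# at z~0.4 and z~3.25 (Table [tab:physpar]), with the toy FMR above.
m, sfr0, sfr3, z3 = 6e10, 8.3, 178.5, 3.25
met0, met3 = z_fmr(m, sfr0), z_broken_fmr(m, sfr3, z3)
print(mean_u_ratio(sfr3, met3, sfr0, met0))  # predicted <U>(z~3)/<U>(z~0.4)
```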
In our computations, we consider two different relations: a universal FMR where metallicity depends only on M$_\star$ and SFR, and an FMR relation with a correction of $0.30 \times (1.7-z)$dex at z$>$1.7 (hereafter broken FMR), which agrees with the measurements cited previously. Combining these expressions, we can obtain the following evolution: $$\langle U \rangle \propto \frac{L_{\rm IR}}{M_{\rm dust}} \propto \frac{M_{\rm mol}^{\frac{1}{s}-1}}{Z(M_\star, \textrm{SFR})} \propto \frac{\textrm{SFR}^{1-s}}{Z(M_\star, \textrm{SFR})}.$$ We computed the expected evolution of $\langle U \rangle$ using the fit to the evolution of sSFR presented in Sect.\[fig:sSFR\] and assuming the mean stellar mass of our sample is $6\times10^{10}$M$_\odot$, the average mass of the main-sequence sample[^6]. We used the value of @Magdis2012b at z=0 to normalize our model. The results are presented in Fig.\[fig:U\] for a universal and a broken FMR. The broken FMR is compatible with all of our data points at 1$\sigma$. The universal FMR implies a significant underestimation of $\langle U \rangle$ at high redshifts (3 and 2$\sigma$ in the two highest redshift bins).\
We checked that the dust heating by the cosmic microwave background (CMB) is not responsible for the quick rise of $\langle U \rangle$ in main-sequence galaxies. The CMB temperature at z=4 is 13.5K. The dust temperature that our high-redshift galaxies would have for a virtually z=0 CMB temperature, $T_{\rm dust}^{z=0}$, is estimated following @Da_Cunha2013 $$T_{\rm dust}^{z=0} = \left ( (T_{\rm dust}^{\rm meas})^{4+\beta} - (T_{\rm CMB}^{z=0})^{4+\beta} \left [ (1+z)^{4+\beta} -1 \right ] \right )^{\frac{1}{4+\beta}},$$ where $T_{\rm CMB}^{z=0}$ is the temperature of the CMB at z=0 and $T_{\rm dust}^{\rm meas}$ is the measured dust temperature at high redshift. This temperature is estimated by fitting a gray body with an emissivity of $\beta$=1.8 to our photometric measurements at $\lambda_{\rm rest}>$50$\mu$m. The CMB has a relative impact which is lower than 2$\times 10^{-4}$ at all redshifts and thus this effect is negligible. These values are small compared to @Da_Cunha2013, who assumed a dust temperature of 18K. The warmer dust temperatures we measured suggest that the CMB should be less problematic than anticipated.\
Concerning the evolution of $\langle U \rangle$ in strong starbursts, we found no evidence of evolution ($\propto (1+z)^{-0.1\pm1.0}$) and our results can be fitted by a constant $\langle U \rangle$ of 31$\pm$3. Our value of $\langle U \rangle$ at 0.5$<$z$<$3 is similar to the measurements on a sample of local ULIRGs [@Da_Cunha2008b]. This suggests that high-redshift strong starbursts are a more extended version of the nuclei of local ULIRGs, as also suggested by the semi-analytical model of @Lagos2012. At z$\sim$2.5, the main-sequence galaxies and the strong starbursts have similar $\langle U \rangle$ values. However, we do not interpret the origins of these high values of $\langle U \rangle$ in the same way (see Sect.\[sect:mdms\], \[sect:fgas\], and \[discussion\]). At z$>$2.5, we cannot constrain with our analysis whether $\langle U \rangle$ in strong starbursts rises as in main-sequence galaxies or stays constant.\

Evolution of the ratio between dust and stellar mass {#sect:mdms}
----------------------------------------------------

We also studied the evolution of the mean ratio between the dust and the stellar mass in the main-sequence galaxies and the strong starbursts. The results are presented in Fig.\[fig:MdMs\].
In main-sequence galaxies, this dust-to-stellar-mass ratio rises up to z$\sim$1 and flattens above this redshift. Strong starbursts typically have 5 times higher ratio. Our measurements are compatible within 2$\sigma$ with the slowly rising trend of $(1+z)^{0.05}$ found by @Tan2014 for a compilation of individual starbursts. However, our data favors a steeper slope.\ ![\[fig:MdMs\] Mean ratio between dust and stellar mass as a function of redshift in main-sequence galaxies (blue triangles) and strong starbursts (red squares). The orange asterisk is the mean value found for the local ULIRG sample of @Da_Cunha2008b (see @Magdis2012b). The black circle is the average value in HRS galaxies [@Ciesla2014]. The solid and dashed lines represent the evolutionary trends expected for a broken and universal FMR, respectively (see Sect.\[sect:U\]). The red dot-dashed line is the best-fit of the evolution found for a sample of individually-detected starbursts of @Tan2014. The predictions of the models of @Lagos2012 and @Lacey2014 after applying the same mass cut and sSFR selection are overplotted with a three-dot-dash line and a long-dash line, respectively, with the same color code as the symbols.](Md_Mstar_z.eps) We modeled the evolution of this ratio in main-sequence galaxies using the same simple considerations as in Sect.\[sect:U\]. The evolution of the mean dust-to-stellar-mass ratio can be written as $$\frac{M_{\rm dust}}{M_\star} \propto \frac{Z(M_\star, \textrm{SFR}) \times M_{\rm mol}}{M_\star} \propto \frac{Z(M_\star, \textrm{SFR}) \times \textrm{SFR}^\beta}{M_\star}.$$ One can see that $M_{\rm dust}/M_\star$ is the result of a competition between the rising SFR with increasing redshift and the decreasing gas metallicity. The results are compatible with the broken FMR at 1$\sigma$. The relation obtained with the universal FMR rises too rapidly at high redshift.\ We also compared our results with predictions of two semi-analytical models. The @Lagos2012 and @Lacey2014 models are based on GALFORM. The main difference between these two models is that [@Lagos2012] adopt a universal IMF (a Galactic-like IMF; @Kennicutt1983), while @Lacey2014 adopt a non-universal IMF. In the latter star formation taking place in galaxy disks has a Galactic-like IMF, while starbursts have a more top-heavy IMF. This is done to reproduce the number counts of submillimeter galaxies found by surveys.\ We select galaxies in the models in the same way we do in the observations based on their stellar mass and distance from the main sequence. An important consideration is that to derive stellar masses in the observations we fix the IMF to a Chabrier IMF, which is different to the IMFs adopted in both models. In order to correct for this we multiply stellar masses in the @Lagos2012 model by 1.1 to go from a Kennicutt IMF to a Chabrier IMF. However, this is non-trivial for the @Lacey2014 model, since it adopts two different IMFs. In order to account for this we correct the fraction of the stellar mass that was formed in the disk by the same factor of 1.1, and divide the fraction of stellar mass that was formed during starbursts by 2. The latter factor is taken as an approximation to go from their adopted top-heavy IMF to a Chabrier IMF, but this conversion is not necessarily unique, and it depends on the dust extinction and stellar age (see @Mitchell2013 for details). 
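In practice, these IMF corrections reduce to a simple rescaling of the predicted stellar masses before comparing them with the Chabrier-IMF observations; a hedged sketch (with hypothetical function names, using only the factors quoted above) is:

```python
# Sketch of the IMF corrections applied to the model stellar masses before
# comparison with the Chabrier-IMF observations (factors quoted in the text;
# function and variable names are hypothetical).

def m_star_chabrier_lagos(m_star_kennicutt):
    """Lagos et al. (2012): single Kennicutt (1983) IMF -> multiply by 1.1."""
    return 1.1 * m_star_kennicutt

def m_star_chabrier_lacey(m_star_disk, m_star_burst):
    """Lacey et al. (2014): Galactic-like IMF in disks (x1.1) and a
    top-heavy IMF in bursts (/2), as the approximate conversion adopted
    in the text."""
    return 1.1 * m_star_disk + 0.5 * m_star_burst

print(m_star_chabrier_lagos(1e10))      # 1.1e10
print(m_star_chabrier_lacey(8e9, 2e9))  # 9.8e9
```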
In this paper we make a unique correction, but warn the reader that a more accurate approach would be to perform SED fitting to the predicted SEDs of galaxies and calculating the stellar mass in the same way we would do for observations.\ Compared to the observations of main-sequence galaxies, the @Lagos2012 model reproduces observations well in the redshift range 1$<$z$<$3, while at $z<1$ and $z>3$ it overpredicts the dust-to-stellar mass ratio. There are different ways to explain the high dust-to-stellar mass ratios: high gas metallicities, high gas masses or stellar masses being too low for the dust masses. In the case of the @Lagos2012 model the high dust-to-stellar mass ratios are most likely coming from massive galaxies being too gas rich since their metallicities are close to solar, which is what we observe in local galaxies of the same stellar mass range. The @Lacey2014 model predicts dust-to-stellar mass ratios that are twice too high compared to the observations in the whole redshift range. In this case this is because the gas metallicities of MS galaxies in the Lacey model are predicted to be supersolar on average (close to twice the solar metallicity, 12+log(O/H)$\sim$9.0), resulting in dust masses that are higher than observed.\ In the case of starbursts, the high values inferred for the dust-to-stellar mass ratio in the observations are difficult to interpret. The @Lagos2012 model underpredicts this quantity by a factor of $\sim$5 and the @Lacey2014 model by a factor of $\sim$2. At first the ratio of 1.5-2% inferred in the observations seems unphysical. However, since the gas fraction (defined here as $\rm M_{\rm mol}/(M_{\rm mol}+M_\star)$) in these high-redshift starbursts is around 50% (see Sect.\[sect:fgas\], but also, e.g., @Riechers2013 and @Fu2013), the high values observed for the dust-to-stellar mass ratio can be reached if the gas-to-dust ratio is 50-67. Values similar to the latter are observed in metal-rich galaxies (12+log(O/H)$\sim$9, e.g., @Remy2014). This high metal enrichment in strong starbursts compared to main-sequence galaxies could be explained by several mechanisms: - the transformation of gas into stars is quicker and the metals are not diluted by the accretion of pristine gas; - a fraction of the external layers of low-metallicity gas far from the regions of star formation could be ejected by the strong outflows caused by these extreme starbursts; - a top-heavy IMF could produce quickly lots of metals through massive stars without increasing too rapidly the total stellar mass because of mass losses. This high ratio in strong starbursts is discussed in details in @Tan2014.\ When it comes to the comparison with the models, one can understand the lower dust-to-stellar mass ratios predicted by the model as resulting from the predicted gas metallicities. @Lagos2012 predict that the average gas metallicity in strong starbursts is close to 0.4 solar metallicities (12+log(O/H)$\sim$8.3), which is about 4 times lower than we can infer from a gas-to-dust mass ratio of $\approx 50$ (see previous paragraph). While the @Lacey2014 model predicts gas metallicities for starbursts that are on average close to solar metallicity (12+log(O/H)$\sim$8.7), 2 times too low for the inferred metallicity of the strong starbursts we observe. We note that both models predict main sequence galaxies having higher metallicities than bright starbursts of the same stellar masses. 
This seems to contradict the observations and may be at the heart of why the models struggle to get the dust-to-stellar mass ratios of both the main sequence and starburst populations at the same time.

![\[fig:gasfrac\] Evolution of the mean molecular gas fraction in massive galaxies ($>3\times10^{10}$M$_\odot$). The starbursts are represented by red squares and the main-sequence galaxies by blue triangles or light blue diamonds, depending on whether the gas fraction is estimated using a broken or a universal FMR, respectively. These results are compared with previous estimates using dust masses of @Magdis2012b [black plus] and @Santini2014 [gray area], using CO for two z$>$3 galaxies [@Magdis2012a black crosses], and the compilation of CO measurements of @Saintonge2013 [black asterisks]. The predictions of the models of @Lagos2012 and @Lacey2014 for the same mass cut are overplotted with a three-dot-dash line and a long-dash line, respectively.](fgas_z.eps)

Evolution of the fraction of molecular gas {#sect:fgas}
------------------------------------------

Finally, we deduced the mean mass of molecular gas from the dust mass using the same method as @Magdis2011 and @Magdis2012b. They assumed that the gas-to-dust ratio depends only on gas metallicity and used the local relation of @Leroy2011[^7]: $$\textrm{log} \left ( \frac{M_{\rm mol}}{M_{\rm dust}} \right ) = (10.54 \pm 1.0) - (0.99\pm0.12) \times \left( 12 + \textrm{log(O/H)} \right). \label{eq:gdr}$$ Given the relatively high stellar mass of our samples, and the rising gas masses and ISM pressures to high redshifts [@Obreschkow2009], we expect the contribution of atomic hydrogen to the total gas mass to be negligible and we neglect it in the rest of the paper, considering total gas mass and molecular gas mass to be equivalent. For main-sequence galaxies, the gas metallicity is estimated using the FMR as explained in Sect.\[sect:U\]. We converted the values provided by the FMR from the KD02 to the PP04 metallicity scale using the prescriptions of @Kewley2008 before using them in Eq.\[eq:gdr\].\
The gas metallicity in strong starbursts cannot be estimated using the FMR. Indeed, this relation predicts that, at fixed stellar mass, objects forming more stars have lower metallicities. This effect is expected in gas-regulated systems, because a higher accretion of pristine gas implies a stronger SFR, but also a dilution of metals [e.g., @Lilly2013]. This phenomenon is not expected to happen in starbursts, since their high SFRs are not caused by an excess of accretion, but more likely by a major merger. These high-redshift starbursts are probably progenitors of current, massive, elliptical galaxies [e.g., @Toft2014]. We thus assumed that their gas metallicity is similar and used a value of 12+log(O/H) = 9.1$\pm$0.2 (see a detailed discussion in @Magdis2011 and @Magdis2012b).\
We then derived the molecular gas fraction in main-sequence galaxies, defined in this paper as $\rm M_{mol} / (M_\star + M_{mol})$. The results are presented in Fig.\[fig:gasfrac\]. We found a quick rise up to z$\sim$2. At higher redshifts, the recovered trend depends on the assumptions about the gas metallicity. The rise of the gas fraction in main-sequence galaxies continues at higher redshift if we assume the broken FMR favored by the recent studies, but flattens with a universal FMR. If the broken FMR scenario is confirmed, there could thus be no flattening or reversal of the molecular gas fraction at z$>$2, contrary to what is claimed in @Magdis2012a, @Saintonge2013, and @Tan2013.
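To make the dust-to-gas conversion explicit, the sketch below applies Eq.\[eq:gdr\] to a stacked dust mass and derives the corresponding gas fraction and depletion time (the quantities discussed in this and the following section). The dust mass, stellar mass, and SFR are taken from one row of Table\[tab:physpar\] for illustration, the metallicity is an illustrative value rather than the one actually used, and the helper names are not from this paper.

```python
# Sketch: metallicity-dependent gas-to-dust ratio (Eq. [eq:gdr]) applied to
# a stacked dust mass, then gas fraction and depletion time.  Input values
# are illustrative (2.0<z<2.5 main-sequence bin of Table [tab:physpar]).

def gas_to_dust_ratio(metallicity_oh):
    """delta_GDR = M_mol / M_dust from 12+log(O/H) (Leroy et al. 2011 fit)."""
    return 10.0 ** (10.54 - 0.99 * metallicity_oh)

def molecular_gas_mass(m_dust, metallicity_oh):
    return gas_to_dust_ratio(metallicity_oh) * m_dust

def gas_fraction(m_mol, m_star):
    """f_mol = M_mol / (M_star + M_mol), as defined in the text."""
    return m_mol / (m_star + m_mol)

def depletion_time_gyr(m_mol, sfr):
    """t_dep = M_mol / SFR in Gyr (SFR in M_sun/yr)."""
    return m_mol / sfr / 1e9

m_dust, m_star, sfr = 10**8.53, 10**10.79, 97.4  # 2.0<z<2.5 stacked values
metallicity = 8.45                               # illustrative 12+log(O/H)
m_mol = molecular_gas_mass(m_dust, metallicity)
print(f"M_mol ~ {m_mol:.2e} M_sun, f_mol ~ {gas_fraction(m_mol, m_star):.2f}, "
      f"t_dep ~ {depletion_time_gyr(m_mol, sfr):.2f} Gyr")
```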
Our estimates agree with the previous estimates of @Magdis2012b at z=1, but are 1$\sigma$ lower at z=2, because the bias introduced by clustering was corrected in our study. Our results also agree at 1$\sigma$ with the analysis of @Santini2014 at the same stellar mass up to z=2.5 after converting the stellar mass from a Salpeter to a Chabrier IMF convention. However, our estimates are systematically higher than theirs and agree better with the CO data. Our measurements also agree with the compilation of CO measurements of @Saintonge2013 and the two galaxies studied at z$\sim$3 by @Magdis2012a. These measurements are dependent on the assumed $\alpha_{\rm CO}$ conversion factor and on the completeness corrections. The good agreement with this independent method is thus an interesting clue to the reliability of these two techniques.\
Strong starbursts have molecular gas fractions 1$\sigma$ higher than main-sequence galaxies, but follow the same trend. @Sargent2014 predicted that starbursts on average should have a deficit of gas compared to the main sequence (but that gas fractions are expected to rise continuously as the sSFR-excess with respect to the MS increases). Here we selected only the most extreme starbursts with an excess of sSFR of a factor of 10 instead of the average value of $\sim$4. These extreme starbursts may only be possible through the merger of two gas-rich galaxies that were already above the main sequence before the merger. This could explain the small positive offset compared to the main-sequence sample.\
We also compared our results with the models of @Lagos2012 and @Lacey2014 presented in Sect.\[sect:mdms\]. Both models agree well with our measurements of the gas fraction for starburst galaxies at all redshifts and main-sequence galaxies at 1.5$<$z$<$3. Both the @Lagos2012 and @Lacey2014 models overpredict the molecular gas fraction at z$<$0.5 at a 1-2$\sigma$ level. At redshifts z$>$3, the @Lacey2014 model agrees better with the universal FMR scenario, while the @Lagos2012 model is more compatible with the broken FMR. The fact that both models predict molecular gas fractions that overall agree with the observations supports our interpretation in Sect. \[sect:mdms\], which points to the model of metal enrichment as the source of the discrepancy in the dust-to-stellar mass ratios.\

Evolution of the depletion time
-------------------------------

We estimated the mean depletion time of the molecular gas, defined in our analysis as the ratio between the mass of molecular gas and the SFR. Figure\[fig:tdep\] shows our results. The depletion time in strong starbursts does not evolve with redshift and is compatible with 100Myr, the typical timescale of the strong boost of star formation induced by major mergers [e.g., @Di_Matteo2008]. This timescale is longer in main-sequence galaxies and evolves slightly (at the 1$\sigma$ level) with redshift at z$<$1. It decreases from 1.3$_{-0.5}^{+0.7}$Gyr at z$\sim$0.375 to $\sim$500Myr around z$\sim$1.5 and is stable at higher redshift in the case of a broken FMR (but continues to decrease with redshift for a universal FMR). This timescale is similar to the maximum duration high-redshift massive galaxies can stay on the main sequence before reaching the quenching mass around 10$^{11}$M$_\odot$ [@Heinis2014]. The mass of molecular gas and stars contained in these high-redshift objects is already sufficient to reach this quenching mass without any additional accretion of gas.\
![\[fig:tdep\] Evolution of the mean molecular gas depletion time.
![\[fig:tdep\] Evolution of the mean molecular gas depletion time. The symbols are the same as in Fig.\[fig:gasfrac\].](tdep_z.eps)

Discussion
==========

What is the main driver of the strong evolution of the specific star formation rate?
------------------------------------------------------------------------------------

![\[fig:iKS\] Relation between the mean SFR and the mean molecular gas mass in our galaxy samples, i.e., the integrated Kennicutt-Schmidt law. The solid line and the dashed line are the center of the relation fitted by @Sargent2014 on a compilation of data for main-sequence galaxies and starbursts, respectively. The dotted lines represent the 1$\sigma$ uncertainties on these relations. The triangles and diamonds represent the average position of massive, main-sequence galaxies in this diagram assuming a broken FMR and a universal FMR, respectively. The squares indicate the average position of strong starbursts.](iKS.eps)

We checked the average position of our selection of massive galaxies in the integrated Kennicutt-Schmidt diagram (SFR versus mass of molecular gas) to gain insight into their mode of star formation. In this diagram, normal star-forming galaxies and starbursts follow two distinct sequences. For comparison, we used the fit of a recent data compilation performed by @Sargent2014. The results are presented in Fig.\[fig:iKS\].

The average position of our sample of strong starbursts is in the 1$\sigma$ confidence region of @Sargent2014 for starbursts. They are systematically below the central relation, but the uncertainty is dominated by the systematic uncertainties on their gas metallicity. In addition, @Sargent2014 suggested that the SFEs of starbursts follow a continuum of values depending on their boost of sSFR. Our objects are thus not expected to lie exactly on the central relation. The interpretation of the results for main-sequence galaxies depends on the hypothesis on the gas metallicity. In the scenario of a broken FMR favored by recent observations, the average position of main-sequence galaxies at all redshifts falls on the relation of normal star-forming galaxies. This suggests that the star formation is dominated by galaxies forming their stars through a normal mode at all redshifts below z=4. In the case of a non-evolving FMR, the massive high-redshift galaxies do not stay on the normal star-forming sequence and have higher SFEs.

If the scenario of a broken FMR favored by the most recent observations is consolidated, the strong star formation observed in massive high-redshift galaxies would thus be caused by huge gas reservoirs, probably fed by an intense cosmological accretion. This strong accretion of primordial gas dilutes the metals produced by the massive stars [e.g., @Bouche2010; @Lilly2013]. Consequently, the dust-to-gas ratio is much lower at high redshift than at low redshift. Since the star-formation efficiency evolves only slowly (SFR$\propto$M$_{\rm mol}^{1.2}$), the number of UV photons absorbed per unit mass of dust is thus higher and the dust temperature is warmer, as observed in our analysis (see Sect.\[sect:U\]). This scenario thus provides a consistent interpretation of the evolution of both the sSFR and the dust temperature of massive galaxies with redshift.

Limitations of our approach
---------------------------

Our analysis provided suggestive results. However, it relies on several hypotheses, which cannot be extensively tested yet.
In this section, we discuss the potential limitations of our analysis.

The evolution of the metallicity relations at z$>$2.5 was measured only by a few pioneering works, which found that the normalization of the FMR evolves at z$>$2.5. We used a simple renormalization depending on redshift to take this evolution into account. The redshift sampling of these studies is relatively coarse, and we used a simple linear evolution with redshift. Future studies based on larger samples will allow a finer sampling of the evolution of the gas metallicity in massive galaxies at high redshift. However, the current results are very encouraging. The current assumption of a broken FMR allows us to naturally recover both the evolution of the $\langle U \rangle$ parameter and the integrated Schmidt-Kennicutt relation at high redshift.

The gas metallicity of strong starbursts was more problematic to set. We can reasonably guess it assuming they are progenitors of the most massive galaxies. However, direct measurements of their gas metallicity are difficult to perform using optical/near-IR spectroscopy because of their strong dust attenuation. The millimeter spectroscopy of fine-structure lines with ALMA will certainly be an interesting way to determine the distribution of gas metallicity of strong starbursts in the future [e.g., @Nagao2011].

The validity of the calibration of the gas-to-dust ratio versus gas metallicity relation in the most extreme environments is also uncertain and difficult to test with the current data sets. @Saintonge2013 found an offset of a factor of 1.7 for a population of lensed galaxies and discussed the possible origins of the tension between the gas content estimated from CO and from dust. However, we found no offset with respect to the integrated Kennicutt-Schmidt relation in our analysis and a good agreement with the compilation of CO measurements of gas fractions. The lensed galaxies of @Saintonge2013 could be a peculiar population, because they are UV-selected and thus biased toward dust-poor systems. They could also be affected by differential magnification effects or *Herschel*-selection biases. The assumptions made to estimate the gas metallicity also differ between their analysis and ours (standard mass-metallicity relation versus broken FMR).

Finally, the stacking analysis only provides an average measurement for a full population. It is thus difficult to estimate the heterogeneity of the stacked populations. Bootstrap techniques can be applied to estimate the scatter on the flux density at a given wavelength [@Bethermin2012b]. However, because of the correlation between $\langle U \rangle$ and M$_d$, this technique cannot be applied to measure the scatter on each of these parameters.

Conclusion
==========

We used a stacking analysis to measure the evolution of the average mid-infrared to millimeter emission of massive star-forming galaxies up to z=4. We then derived the evolution of the mean physical parameters of these objects. Our main findings are the following.

- The mean intensity of the radiation field $\langle U \rangle$ in main-sequence galaxies, which is strongly correlated with their dust temperature, rises rapidly with redshift: $\langle U \rangle = (3.0\pm1.1) \times (1+z)^{1.8 \pm 0.4}$. This evolution can be interpreted considering the decrease in the gas metallicity of galaxies at constant stellar mass with increasing redshift.
We found no evidence for an evolution of $\langle U \rangle$ in strong starbursts up to z=3.

- The mean ratio between the dust mass and the stellar mass in main-sequence galaxies rises between z=0 and z=1 and exhibits a plateau at higher redshift. The strong starbursts have a ratio higher by a factor of 5.

- The average fraction of molecular gas ($\rm M_{mol} / (M_\star + M_{mol})$) rises rapidly with redshift and reaches $\sim$60% at z=4. A similar evolution is found in strong starbursts, but with slightly higher values. These results agree with the pilot CO surveys performed at high redshift.

- We compared our results with two state-of-the-art semi-analytic models that adopt either a universal IMF or a top-heavy IMF in starbursts, and found that the models predict molecular gas fractions that agree well with the observations, but the predicted dust-to-stellar mass ratios are either too high or too low. We interpret this as being due to the way metal enrichment is dealt with in the simulations. We suggest different mechanisms that can help overcome this issue, for instance outflows preferentially affecting the more metal-depleted gas in the outer parts of galaxies.

- The average position of the massive main-sequence galaxies in the integrated Kennicutt-Schmidt diagram corresponds to the sequence of normal star-forming galaxies. This suggests that the bulk of the star formation up to z$\sim$4 is dominated by the normal mode of star formation and that the extreme SFRs observed are caused by huge gas reservoirs, probably induced by the very intense cosmological accretion. The strong starbursts follow another sequence with a 5–10 times higher star-formation efficiency.

We thank the anonymous referee for providing constructive comments. We acknowledge Morgane Cousin, Nick Lee, Nick Scoville, and Christian Maier for interesting discussions and suggestions, Laure Ciesla for providing an electronic table of the physical properties of the HRS sample, and Amélie Saintonge for providing her compilation of data. We gratefully acknowledge the contributions of the entire COSMOS collaboration, consisting of more than 100 scientists. The HST COSMOS program was supported through NASA grant HST-GO-09822. More information on the COSMOS survey is available at <http://www.astro.caltech.edu/cosmos>. Based on data obtained from the ESO Science Archive Facility. Based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory under ESO programme ID 179.A-2005 and on data products produced by TERAPIX and the Cambridge Astronomy Survey Unit on behalf of the UltraVISTA consortium. MB, ED, and MS acknowledge the support of the ERC-StG UPGAL 240039 and ANR-08-JCJC-0008 grants. AK acknowledges support by the Collaborative Research Council 956, sub-project A1, funded by the Deutsche Forschungsgemeinschaft (DFG).

Estimation and correction of the bias caused by galaxy clustering on the stacking results {#Annexestacking}
==========================================================================================

As explained in Sect.\[sect:stacking\], the standard stacking technique can be strongly affected by the bias caused by the clustering of the galaxies. We use two independent methods to estimate and correct it.
Estimation of the bias using a simulation based on the real catalog {#sect:simu}
-------------------------------------------------------------------

We performed an estimate of the bias induced by the clustering using a realistic simulation of the COSMOS field based on the positions and stellar masses of the real sources. The flux of each source in this simulation is estimated using the ratio between the mean far-IR/(sub-)mm fluxes and the stellar mass found by a first stacking analysis. The galaxies classified as passive are not taken into account in this simulation. This technique implicitly assumes a flat sSFR-M$_\star$ relation, since we use a constant SFR/M$_\star$ ratio versus stellar mass at fixed redshift. However, we checked that using a more standard sSFR$\propto$M$_\star^{-0.2}$ relation [e.g., @Rodighiero2011] has a negligible impact on the results. We applied no scatter around this relation in our simulation for simplicity. As mean stacking is a linear operation, the presence or absence of a scatter has no impact on the results [@Bethermin2012b].

A simulated map is thus produced using all the star-forming galaxies of the @Ilbert2013 catalog. In order to avoid edge effects (an absence of sources, and thus a lower background from faint unresolved sources, outside the region covered by the optical/near-IR data), we fill the uncovered regions by drawing sources with replacement from the UltraVISTA field and placing them at random positions. The number of drawn sources is chosen to have exactly the same number density inside and outside the UltraVISTA field.

Finally, we measured the mean fluxes of the M$_\star>$3$\times$10$^{10}$M$_\odot$ sources by stacking in the simulated maps, using exactly the same photometric method as for the real data. We then computed the relative bias between the recovered flux and the input flux ($S_{\rm out}/S_{\rm in}-1$). The results are shown in Fig.\[fig:clusbias\] (blue triangles). The uncertainties are computed with a bootstrap method. As expected, the bias increases with the size of the beam. We can see a rise of the bias with redshift up to z$\sim$2. This trend can be understood considering the rise of the clustering of the galaxies responsible for the cosmic infrared background [@Planck_CIB_2013] and a rather stable number density of emitters, especially below z=1 [@Bethermin2011; @Magnelli2013; @Gruppioni2013]. At higher redshift, we found a slow decrease. This trend is probably driven by the decrease in the infrared luminosity density at high redshift [@Planck_CIB_2013; @Burgarella2013] combined with the decrease in the number density of infrared emitters [@Gruppioni2013].

![\[fig:clusbias\] Relative bias induced by the clustering as a function of redshift at the various wavelengths we used in our analysis. The FWHM of the beam is provided in brackets. The blue triangles are the estimations from the simulation (Sect.\[sect:simu\]) and the red diamonds are provided by the fit of the clustering component in map space (Sect.\[sect:fitclus\]). These numbers are only valid for a complete sample of M$_\star > 3 \times 10^{10}$M$_\odot$ galaxies.](Stacking_bias_Mcut310.eps)
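A stripped-down version of this simulation-based estimate is sketched below: sources with fluxes proportional to their stellar masses are injected at catalog positions, the map is convolved with a Gaussian beam, and the same peak photometry is repeated to obtain $S_{\rm out}/S_{\rm in}-1$. The map size, beam width, photometric scheme, and catalog are placeholders; the real analysis uses the actual COSMOS positions, the instrumental beams, and the same photometric pipeline as for the data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Toy stand-in for the input catalog: pixel positions and stellar masses.
# (The real catalog is clustered; random positions keep the sketch short.)
npix, nsrc = 512, 4000
x = rng.integers(0, npix, nsrc)
y = rng.integers(0, npix, nsrc)
mstar = rng.lognormal(mean=0.0, sigma=1.0, size=nsrc)

flux_per_mass = 1.0              # mean flux / stellar mass from a first stack
flux_in = flux_per_mass * mstar  # input ("true") fluxes of the simulation

# Simulated map: delta functions convolved with a Gaussian beam whose peak is
# renormalized to 1, so the map value at an isolated source equals its flux.
beam_sigma = 3.0                 # placeholder beam width in pixels
sim_map = np.zeros((npix, npix))
np.add.at(sim_map, (y, x), flux_in)
sim_map = gaussian_filter(sim_map, beam_sigma) * (2 * np.pi * beam_sigma**2)

# "Stack" on the massive subsample: mean map value at the source positions,
# background-subtracted with the map mean, compared to the mean input flux.
massive = mstar > np.quantile(mstar, 0.8)
s_out = sim_map[y[massive], x[massive]].mean() - sim_map.mean()
s_in = flux_in[massive].mean()
print(f"relative bias S_out/S_in - 1 = {s_out / s_in - 1:+.3f}")
# With clustered input positions, the neighbor contamination exceeds the mean
# background and the recovered bias becomes positive, as in Fig. [fig:clusbias].
```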
Estimation of the bias by fitting the clustering contribution in the stacked images {#sect:fitclus}
------------------------------------------------------------------------------------

The method presented in the previous section only takes into account the contamination of the stacks by known sources. However, faint galaxy populations could have a non-negligible contribution, even though their total contribution to the infrared luminosity and their clustering are expected to be small. We thus used a second method to estimate the bias caused by the clustering, which takes into account a potential contamination by these low-mass galaxies. This method is based on a simultaneous fit of three components in the stacked images: a point source at the center of the image, a clustering contamination, and a background. This technique was already successfully used in several previous works based on *Herschel* and *Planck* data [@Bethermin2012b; @Heinis2013; @Heinis2014; @Welikala2014].

In the presence of clustering, the outcome of a stacking is not only a PSF with the mean flux of the population plus a constant background (corresponding to the surface brightness of all galaxy populations, i.e., the cosmic infrared background). There is, in addition, a signal coming from the greater probability of finding another neighboring infrared galaxy compared to the field because of galaxy clustering. The signal in the stacked image can thus be modeled by [@Bavouzet2008; @Bethermin2010b]
$$m(x,y) = \alpha \times \textrm{PSF}(x,y) + \beta \times (\textrm{PSF} \ast w)(x,y) +\gamma,$$
where $m$ is the stacked image, PSF the point spread function, and $w$ the auto-correlation function. The symbol $\ast$ represents the convolution. $\alpha$, $\beta$, and $\gamma$ are free parameters corresponding to the intensity of the mean flux of the population, the clustering signal, and the background, respectively. This method works only if the PSF is well known, the extension of the sources is negligible compared to the PSF, and the effects of the filtering are small at the scale of the stacked image. Consequently, we applied this method only to the SPIRE data, for which these hypotheses are the most solid. The uncertainties on the clustering bias ($\beta / \alpha$ for the photometry we chose to use for the SPIRE data) are estimated by fitting the model described above to a set of stacked images produced from 1000 bootstrap samples. The results are shown in Fig.\[fig:clusbias\] (red diamonds).

Corrections of the measurements
-------------------------------

In Fig.\[fig:clusbias\], we can see that the two methods provide globally consistent estimates. This confirms that the low-mass galaxies not included in the UltraVISTA catalog have a minor impact. We found a few outliers for which the two methods disagree. In particular, in the 1.5$<$z$<$1.75 bin, the estimate from the simulation is higher than the trend of the redshift evolution at all wavelengths, and the results from the profile fitting are lower. This could be caused, for instance, by structures in the field or by a systematic effect in the photometric redshifts. Because of these few catastrophic outliers, we chose to use a correction computed from a fit of the redshift evolution of the bias instead of an individual estimate in each redshift slice.

The evolution of the bias with redshift is fitted independently at each wavelength. We chose to use a simple, second-order polynomial model ($a z^2 + bz + c$). We used only the results from the simulation to have a consistent treatment of the various wavelengths. The scatter of the residuals is larger than the bootstrap uncertainties, probably because the bootstrap does not take into account the variance coming from the large-scale structures. We thus used the scatter of the residuals to obtain a conservative estimate of the uncertainties on the bias.
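The smoothing just described amounts to a quadratic fit of the bias with redshift, with the residual scatter adopted as a conservative error; a minimal version, with invented bias values, is shown below.

```python
import numpy as np

# Illustrative clustering-bias estimates for one wavelength (invented values).
z_bins = np.array([0.4, 0.7, 1.0, 1.4, 1.8, 2.3, 2.8, 3.5])
bias = np.array([0.05, 0.08, 0.12, 0.16, 0.18, 0.17, 0.15, 0.12])

coeffs = np.polyfit(z_bins, bias, deg=2)        # a*z**2 + b*z + c
residuals = bias - np.polyval(coeffs, z_bins)
sigma_bias = residuals.std()                    # conservative uncertainty

def corrected_flux(z, stacked_flux):
    """Remove the clustering contribution: S_true = S_stacked / (1 + bias)."""
    return stacked_flux / (1.0 + np.polyval(coeffs, z))

print(f"fit coefficients = {coeffs}, adopted sigma(bias) = {sigma_bias:.3f}")
print(f"example: 10.0 mJy stacked at z=2 -> {corrected_flux(2.0, 10.0):.2f} mJy")
```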
In Fig.\[fig:clusbias\], the best fit is represented by a solid line and the 1$\sigma$ confidence region by a dashed line.

In a few cases, the bias at z$>$3 can converge to unphysical negative values. We then apply no correction, but add the typical uncertainty on the bias to the error bars. A special treatment is also applied to the samples of strong starbursts. Their infrared flux is typically 10 times brighter by construction (their sSFR is 10 times larger than that of the main sequence). In contrast, the clustering signal is not expected to be significantly stronger, because the clustering of massive starbursts and main-sequence galaxies is relatively similar [@Bethermin2014]. For simplicity, we thus divide the bias found for the full population of galaxies by a factor of 10 to estimate that of the starbursts.

Testing another method
----------------------

We also tried to apply the <span style="font-variant:small-caps;">simstack</span> algorithm [@Viero2013b] to our data. This algorithm is adapted from @Kurczynski2010 and uses the positions of the known sources to deblend their contamination. Contrary to @Kurczynski2010, <span style="font-variant:small-caps;">simstack</span> can consider a large set of distinct galaxy populations. The mean flux of each population is used to estimate how sources contaminate their neighbors. All populations are treated simultaneously. This is the equivalent of PSF-fitting codes, but applied to a full population instead of each source individually. Unfortunately, this method is not totally unbiased in our case. We found biases of up to 15% when running <span style="font-variant:small-caps;">simstack</span> on the simulation presented in Sect.\[sect:simu\], probably because the catalog of mass-selected sources is not available around bright sources. At the edge of the optical/near-IR-covered region, the flux coming from the sources outside the covered area is not corrected, whereas the flux from all neighbors is taken into account in the middle of the zone where the mass catalog is extracted. Indeed, the algorithm works correctly if we include in the simulation only sources present in the input catalog.

Fit residuals {#sect:residuals}
=============

Figures\[fig:res\] and \[fig:res\_sb10\] show the residuals of the fits of our mean SEDs derived by stacking. We did not find any systematic trend, except a 2$\sigma$ underestimation of the millimeter data in main-sequence galaxies at z$>$3.

![image](SED_res.eps) ![image](SED_SB10_res.eps)

[^1]: [*Herschel*]{} is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.

[^2]: The minimum expected flux for our mass-selected sample of strong starbursts is computed using the three-dot-dash curve in Fig.\[SBcomp\] and the @Magdis2012b starburst template.

[^3]: APEX project IDs: 080.A–3056(A), 082.A–0815(A) and 086.A–0749(A).

[^4]: More details on the CRUSH settings can be found at: <http://www.submm.caltech.edu/~sharc/crush/v2/README>

[^5]: The bias $b$ is defined by $w_{\rm gal} = b^2 w_{\rm DM}$, where $w_{\rm gal}$ and $w_{\rm DM}$ are the projected two-point correlation functions of galaxies and dark matter, respectively. The higher the bias, the stronger the clustering of galaxies compared to that of dark matter.

[^6]: We could have used the mean stellar masses in each redshift bin provided in Table\[tab:physpar\]. However, assuming a single value of the stellar mass at all redshifts has a negligible impact on the results and the tracks are smoother.
[^7]: Converted to the PP04 convention.
{ "pile_set_name": "ArXiv" }
---
abstract: 'How to efficiently utilize temporal information to recover videos in a consistent way is the main issue for video inpainting problems. Conventional 2D CNNs have achieved good performance on image inpainting but often lead to temporally inconsistent results where frames will flicker when applied to videos (see [video:Edge-Connect](https://www.youtube.com/watch?v=87Vh1HDBjD0&list=PLPoVtv-xp_dL5uckIzz1PKwNjg1yI0I94&index=1)); 3D CNNs can capture temporal information but are computationally intensive and hard to train. In this paper, we present a novel component termed Learnable Gated Temporal Shift Module (LGTSM) for video inpainting models that can effectively tackle arbitrary video masks without the additional parameters of 3D convolutions. LGTSM is designed to let 2D convolutions make use of neighboring frames more efficiently, which is crucial for video inpainting. Specifically, in each layer, LGTSM learns to shift some channels to its temporal neighbors so that 2D convolutions can be enhanced to handle temporal information. Meanwhile, a gated convolution is applied in the layer to identify the masked areas that are poisonous to conventional convolutions. On the FaceForensics and Free-form Video Inpainting (FVI) datasets, our model achieves state-of-the-art results with only 33% of the parameters and inference time. The source code is available on <https://github.com/amjltc295/Free-Form-Video-Inpainting>.'
bibliography:
- 'egbib.bib'
title: |
  Learnable Gated Temporal Shift Module for\
  Deep Video Inpainting
---

\[htb\] ![Our model takes videos with free-form masks (first row) and fills in the missing areas with the proposed LGTSM to generate realistic completed results (second row), compared to the original videos (third row). It can be applied to video editing tasks such as video object removal, as shown in the first two columns. Best viewed in color and zoom-in. See the corresponding videos in the following links: [object removal](https://www.youtube.com/watch?v=585xZjcmUlA&list=PLPoVtv-xp_dIc_qhYe5lrOyWCLEu7hWoL&index=4&t=0s), [free-form masks](https://www.youtube.com/watch?v=08MNqZla29g&list=PLPoVtv-xp_dL5uckIzz1PKwNjg1yI0I94&index=37), and [faces](https://www.youtube.com/watch?v=3uiOOyimBHw&list=PLPoVtv-xp_dKLIZkMBvhhCu97AsXQ9e_D&index=2&t=0s). []{data-label="fig:teaser"}](teaser.pdf "fig:"){width="\linewidth"}

Introduction
============

Free-form video inpainting, the task of filling in arbitrary missing regions in a video, is very challenging. It could be widely used for movie post-processing, damaged video recovery and video editing. For humans, it takes tremendous effort to recover missing areas like those in Fig. \[fig:teaser\], while an automatic method may complete the task easily.

The key to free-form video inpainting is to model spatial-temporal features. That is, a model needs to capture the content of masked areas according to their surroundings and fill in these areas with related pixels. Traditional patch-based methods [@Huang-SigAsia-2016; @newson2014video] fill in these areas by finding similar patches from other parts of the video. However, the search algorithms usually have high computational complexity, and suitable patches may not be found for complex objects or masks (see Fig. \[fig:visual\_comparison\]). On the other hand, deep learning methods [@yu2018free; @nazeri2019edgeconnect; @liu2018image; @chang2019free; @chang2019vornet] could fill in unseen masked areas through the encoding and decoding process, based on the structures learned from the training data.
Still, compared to the success in image inpainting, deep learning methods struggle to model these video features due to the additional temporal dimension. Using 3D convolutions to model spatial-temporal features is the most intuitive way, but it requires plenty of parameters and is hard to train.

In this paper, we propose a novel component termed Learnable Gated Temporal Shift Module (LGTSM) to handle free-form video masks with 2D convolutions, motivated by the TSM originally designed for action recognition. Though inspired by TSM, we found that TSM is not directly suitable for free-form video inpainting, as it cannot fully make use of neighboring frames from the beginning layers nor handle irregular masks, which motivates us to propose LGTSM. Specifically, in each layer, LGTSM learns to shift a part of the feature channels in a frame to its neighboring frames, and then attends to masked/inpainted/unmasked areas with a gating convolutional filter. LGTSM enables 2D convolutions to process masked videos and generate state-of-the-art results comparable to 3D convolutions with only 33% of the parameters and inference time.

Our paper makes the following contributions:

- We propose the Gated Temporal Shift Module that can recover videos with free-form masks using 2D convolutions by temporally shifting and gating features in each layer, which reduces the model size and computational time to 33% compared to 3D convolutions.

- Given that video inpainting requires more information from neighboring frames, we propose a novel Learnable Gated Temporal Shift Module that learns the temporal shifting kernels and achieves state-of-the-art performance.

- We propose the TSMGAN loss, which significantly improves the model performance for free-form video inpainting.

Related Work
============

#### Image Inpainting.

Recently, deep learning based methods have taken over the image inpainting task. Xie [@xie2012image] first applied convolutional neural networks (CNNs) to small-region image inpainting and denoising. Pathak [@pathak2016context] then extended [@xie2012image] to larger regions with an encoder-decoder structure. Moreover, [@pathak2016context] adopted the generative adversarial network (GAN) [@goodfellow2014generative], where a generator learning to create realistic images and a discriminator striving to tell fake ones apart are trained together to improve the image quality. Subsequently, [@yu2018generative; @yu2018free; @nazeri2019edgeconnect] also developed new GAN architectures with different components for image inpainting. Among these deep methods, Yu [@yu2018free] proposed gated convolutions for image inpainting, which use an additional gating convolution to learn the difference between masked, inpainted and unmasked areas in each layer. We integrate such a gating mechanism into our model. Also, Nazeri [@nazeri2019edgeconnect] developed a two-stage model that generates image edges first before recovering the whole images conditioned on the edges. Their model achieved state-of-the-art results, and we set it as one of our baselines.

#### Video Inpainting.

Generally, video inpainting could be viewed as an extension of image inpainting with temporal constraints (i.e., content in different frames needs to be consistent). However, different from image inpainting, patch-based methods [@granados2012not; @Huang-SigAsia-2016; @newson2014video; @wexler2007space] still play a role in video inpainting as more patches are available in videos.
Among them, Huang [@Huang-SigAsia-2016] jointly estimated optical flow and colors in the masked region to handle the moving camera problem and reached state-of-the-art results, so we also set it as one of our baselines. Although patch-based methods have achieved great success in video inpainting, they are highly limited by the computational time of the search algorithms. In addition, the masked areas still need to be patchable; these methods do not work on complex objects such as faces. To address these problems, Wang [@wang2018videoinp] proposed the first deep learning based method for video inpainting, with a two-stage CombCN that uses 3D convolutions to generate coarse but temporally consistent videos and then refines them with 2D convolutions. Their model could learn to recover face videos, so we also set it as a baseline.

![Explanation of the learnable shifting kernels in the proposed LGTSM. (a) Input features for the layer. We perform the shifting operation on the channel $\times$ time dimensions. (b) Original TSM from [@lin2018temporal]. (c) Equivalent TSM expressed by temporal shifting kernels. (d) In the proposed LGTSM, the temporal shifting kernels are also learnable and the kernel size could be different.[]{data-label="fig:LTSM"}](learnable_temporal_shift.pdf){width="\linewidth"}

#### Temporal Modeling.

As most state-of-the-art deep video inpainting methods adopt the encoder-decoder structure, the key is to model the spatial-temporal structures in videos. Over the past few years, a variety of deep learning architectures have been proposed to model video structures, especially for action recognition. These architectures include applying temporal pooling [@karpathy2014large] or recurrent networks [@yue2015beyond] on top of 2D convolutions to model temporal structures, combining optical flows and RGB frames to make two-stream networks [@simonyan2014two], directly using 3D convolutions [@tran2015learning], and variants of these models [@feichtenhofer2016convolutional; @carreira2017quo]. For more details, we refer readers to [@carreira2017quo]. Despite the great performance, many architectures for action recognition cannot be applied to video inpainting since 1) the input video is corrupted, so it is hard to derive optical flows or apply naive convolutions, and 2) these architectures only need an encoder, while video inpainting requires a decoder to recover the missing areas. Aside from architecture-level temporal modeling, there are also works that focus on the module level [@lin2018temporal; @li2018temporal], which is more applicable to video inpainting. Lin [@lin2018temporal] proposed the Temporal Shift Module that shifts part of the feature channels in each frame to its neighboring frames so that 2D convolutions could handle temporal information. Similarly, Li [@li2018temporal] developed the Temporal Bilinear (TB) operation that applies a factorized bilinear operation to features to model interactions between frames. Note that these models are designed for action recognition, where all input frames are valid, while for video inpainting many pixels are masked. Based on these ideas of integrating temporal information into 2D convolutions, we propose the Learnable Gated Temporal Shift Module for video inpainting.

Proposed Method
===============

Learnable Gated Temporal Shift Module {#GatedTSM}
-------------------------------------

Models with 3D convolutions could capture temporal information in an intuitive way but are hard to train due to the large number of parameters.
To this end, we extend the residual Temporal Shift Module (TSM) [@lin2018temporal], originally designed for action recognition, to video inpainting. TSM tackles temporal information with 2D convolutions. The input activation $F_{t, x, y}$ for each convolutional layer in video inpainting has shape (B, C, L, H, W), where B is the batch size, C is the number of channels, L is the temporal length, H is the height and W is the width of the input activation. For each frame in L, TSM shifts a portion of the channels to its previous and next frame before the convolutional operations, as shown in Fig. \[fig:LTSM\](b). These shifted channels contain features from other frames, so together with the unshifted features, the original 2D convolutions could learn the temporal structures accordingly.

However, for free-form video inpainting, not every feature point is valid, as many areas are masked. These masked areas are harmful to naive TSM, as convolutions cannot tell the difference between valid and invalid feature points. To address this issue, we design the Gated Temporal Shift Module (GTSM) for free-form video inpainting (see Fig. \[fig:blocks\]). Specifically, in addition to the TSM module, a gating convolutional filter $W_g$ is applied to the input features $F_{t, x, y}$ to obtain a gating $Gating_{t, x, y}$. This gating serves as a soft validity map to identify the masked/unmasked/inpainted areas for the output features $Features_{t, x, y}$ obtained from the TSM module with the original convolutional filter $W_f$. Mathematically, GTSM could be expressed as:
$$Gating_{t, x, y} = \sum \sum W_g \cdot F_{t, x, y}$$
$$Features_{t, x, y} = \sum \sum W_f \cdot TSM(F_{t-1, x, y}, F_{t, x, y}, F_{t+1, x, y})$$
$$Output_{t, x, y} = \sigma(Gating_{t, x, y}) \phi(Features_{t, x, y})$$
where $\sigma$ is the sigmoid function that transforms the gating to values between 0 (invalid) and 1 (valid), and $\phi$ is the activation function of the convolution. Note that the TSM module could easily be modified for online settings (without peeking at future frames) for real-time applications.

Note that the temporal shifting operation in TSM is similar to applying fixed forward/backward shifting kernels on the channel-temporal map, as shown in Fig. \[fig:LTSM\](c). In TSM, these kernels are fixed; a frame could only get features from its one-frame neighbors in each layer. Such fixed shifting kernels in TSM are insufficient to make use of more distant neighboring frames, as temporal information could only be aggregated through deeper layers. Unlike action recognition, video inpainting models sorely need information from the beginning layers to capture the spatial-temporal structures so that deeper layers could recover the missing areas accordingly. Therefore, we also propose the Learnable Gated Temporal Shift Module (LGTSM), where the temporal shifting kernels are also learnable and the kernel size could be larger (see Fig. \[fig:LTSM\](d)). With LGTSM, the model could learn to shift and scale features from specific temporal neighbors in each layer (or not). For example, the model could get features from temporally more distant neighbors in the first few layers and remain unshifted in the deeper layers. This greatly enhances the model capability at very little cost. In practice, the shifting operation only uses an additional buffer of 1/4 of the channels, so it has little cost in terms of computational time and run-time memory compared to traditional 2D convolutions.
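A compact PyTorch-style sketch of such a layer is given below. It is a minimal illustration and not the released implementation: the activation function, kernel sizes, temporal kernel length, and the 1/4-channel split used to initialize the learnable shift are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LGTSMBlock(nn.Module):
    """Sketch of a Learnable Gated Temporal Shift block.

    The temporal shift is a depthwise 1D convolution along the time axis
    (one small kernel per channel), initialized so that 1/4 of the channels
    take features from the next frame, 1/4 from the previous frame, and the
    rest stay in place, i.e. it starts as the fixed TSM and can then learn
    other shifts. A gating convolution produces the soft validity map."""

    def __init__(self, in_ch, out_ch, kernel_size=3, t_kernel=3, shift_div=4):
        super().__init__()
        pad = kernel_size // 2
        self.t_kernel = t_kernel
        self.feature_conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=pad)
        self.gating_conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=pad)

        # Learnable temporal shifting kernels, one per channel.
        w = torch.zeros(in_ch, 1, t_kernel)
        fold = in_ch // shift_div
        w[:fold, 0, -1] = 1.0                  # take the next frame
        w[fold:2 * fold, 0, 0] = 1.0           # take the previous frame
        w[2 * fold:, 0, t_kernel // 2] = 1.0   # identity for the rest
        self.temporal_weight = nn.Parameter(w)

    def temporal_shift(self, x):
        # x: (B, C, L, H, W) -> depthwise conv1d over the temporal axis.
        b, c, l, h, w = x.shape
        t = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, c, l)
        t = F.conv1d(t, self.temporal_weight,
                     padding=self.t_kernel // 2, groups=c)
        return t.reshape(b, h, w, c, l).permute(0, 3, 4, 1, 2)

    def forward(self, x):
        b, c, l, h, w = x.shape
        shifted = self.temporal_shift(x).permute(0, 2, 1, 3, 4).reshape(b * l, c, h, w)
        frames = x.permute(0, 2, 1, 3, 4).reshape(b * l, c, h, w)
        gate = torch.sigmoid(self.gating_conv(frames))         # soft validity map
        feat = F.leaky_relu(self.feature_conv(shifted), 0.2)   # shifted features
        out = gate * feat
        return out.reshape(b, l, -1, h, w).permute(0, 2, 1, 3, 4)
```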
Note that the number of kernels could also be flexible (there are only two in TSM: forward and backward). Moreover, LGTSM could learn temporal information with very few extra parameters, and we found that it could achieve state-of-the-art results comparable to 3D convolutions with only 33% of the parameters and inference time.

![Module design. We integrate (a) the residual TSM [@lin2018temporal] and (b) the gated convolution [@yu2018free] into (c) the Gated Temporal Shift Module (GTSM), and design learnable temporal shifting kernels (Fig. \[fig:LTSM\]c) to form (d) the proposed Learnable Gated Temporal Shift Module (LGTSM).[]{data-label="fig:blocks"}](block_stacking.pdf){width="\linewidth"}

Loss Functions
--------------

Designing the combination of loss functions to train a video inpainting model is non-trivial due to the uncertainty of the free-form masks and the high complexity of video features. We use the $l_1$ loss for low-level features, the perceptual loss and style loss for image content, and propose the TSMGAN loss to handle high-level features and enhance realness. In this section, we introduce the loss functions used to train our model.

#### **Masked $l_1$ loss.**

The $l_1$ loss focuses on pixel-level features and is widely used for generative tasks on images [@yu2018generative; @yu2018free; @liu2018image; @nazeri2019edgeconnect] and videos [@wang2018videoinp; @chang2019free]:
$$L_{l_1} = \mathds{E}_{t,x,y}[ |O_{t,x,y} - V_{t, x, y}|]$$

#### **Perceptual loss and style loss.**

The $l_1$ loss often leads to blurry results as it only focuses on low-level features. To address this problem, we adopt the perceptual and style losses [@gatys2015neural] to preserve image content. Similar loss functions could be found in many generative tasks such as image inpainting [@liu2018image; @nazeri2019edgeconnect] and super-resolution [@johnson2016perceptual; @ledig2017photo]. The perceptual loss could be viewed as the $l_1$ loss at the feature level:
$$\label{equ6} L_{perc} = \sum_{t=1}^{n} \sum_{p=0}^{P-1} \frac{|\Psi^{O_{t}}_p - \Psi^{V_{t}}_p|}{N_{\Psi^{V_{t}}_p}}$$
where $V_t$ is the input video, $\Psi^{V_{t}}_p$ is the activation from the $p$th selected layer of the pretrained network, and $N_{\Psi^{V_{t}}_p}$ is the number of elements in the $p$th layer. We choose layers $relu_{2\_2}$, $relu_{3\_3}$ and $relu_{4\_3}$ from the VGG [@simonyan2014very] network pre-trained on ImageNet [@russakovsky2015imagenet]. The style loss is a variant of the perceptual loss, with an auto-correlation (Gram matrix) applied to the features first:
$$L_{style} = \sum_{t=1}^{n} \sum_{p=0}^{P-1} \frac{1}{C_p C_p} \frac{|(\Psi^{O_{t}}_p)^T(\Psi^{O_{t}}_p) - (\Psi^{V_{t}}_p)^T(\Psi^{V_{t}}_p)|}{C_p H_p W_p}$$
where $\Psi^{O_{t}}_p$ and $\Psi^{V_{t}}_p$ are both features from the pre-trained VGG network, as in the perceptual loss of Eq. (\[equ6\]).
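For concreteness, a minimal PyTorch-style sketch of these two feature-space losses is given below; it assumes that the VGG activations of the selected layers have already been extracted (the feature extractor itself is omitted), and the normalization simply follows the equations as written rather than any particular released implementation.

```python
import torch

def perceptual_loss(feats_out, feats_gt):
    """L1 distance between VGG activations of output and ground-truth frames.
    feats_out / feats_gt: lists of tensors, one per selected VGG layer."""
    loss = 0.0
    for fo, fg in zip(feats_out, feats_gt):
        loss = loss + torch.mean(torch.abs(fo - fg))
    return loss

def gram_matrix(feat):
    # feat: (N, C, H, W) -> (N, C, C), normalized by C * H * W.
    n, c, h, w = feat.shape
    f = feat.reshape(n, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def style_loss(feats_out, feats_gt):
    """L1 distance between the Gram matrices of the same activations."""
    loss = 0.0
    for fo, fg in zip(feats_out, feats_gt):
        loss = loss + torch.mean(torch.abs(gram_matrix(fo) - gram_matrix(fg)))
    return loss
```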
#### **TSMGAN loss.**

All the aforementioned loss functions are defined on single images and do not take temporal consistency into consideration. Therefore, we develop TSMGAN to learn temporal consistency. We set up a generative adversarial network (GAN) with the Gated Temporal Shift Module integrated into both the generator and the discriminator, as stated in Sect. \[GatedTSM\]. The TSMGAN discriminator is composed of six 2D convolutional layers with TSM. Also, we apply the recently proposed spectral normalization [@miyato2018spectral] to both the generator and discriminator, as in [@nazeri2019edgeconnect], to enhance the training stability. The TSMGAN loss $L_D$ for the discriminator to tell whether the input video is real or fake and $L_G$ for the generator to fool the discriminator are defined as:
$$\begin{aligned} L_{D} &= \mathds{E}_{x\sim P_{data}(x)}[ReLU(1-D(x))] \\ &+ \mathds{E}_{z\sim P_{z}(z)}[ReLU(1+D(G(z)))] \end{aligned}$$
$$\begin{aligned} L_{G} = -\mathds{E}_{z\sim P_{z}(z)}[D(G(z))] \end{aligned}$$
As in [@yu2018free], the kernel size is $5 \times 5$ with stride $2 \times 2$, and the shifting operation is applied to all 11 convolutional layers for the TSMGAN discriminator, so the receptive field of each output feature point includes the whole video. It could be viewed as several GANs on different feature points, and a local-global discriminator structure [@yu2018generative] is thus not needed. Besides, the TSMGAN learns to classify real or fake for each spatial-temporal feature point from the last convolutional layer of the discriminator, which mostly consists of high-level features. Since the $l_1$ loss already focuses on low-level features, using TSMGAN could improve the model in an efficient way.

#### **Overall loss.**

The overall loss function to train the model is defined as:
$$\begin{aligned} L_{total} &= \lambda_{l_1} L_{l_1} + \lambda_{perc} L_{perc} + \lambda_{style} L_{style} + \lambda_{G} L_{G} \end{aligned}$$
where $\lambda_{l_1}$, $\lambda_{perc}$, $\lambda_{style}$ and $\lambda_{G}$ are the weights for the $l_1$ loss, perceptual loss, style loss and TSMGAN loss, respectively.
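The adversarial objectives and the weighted total loss can be written compactly as below; this is only a sketch, and the loss weights shown are placeholders, as the exact values are not quoted here.

```python
import torch.nn.functional as F

def d_loss(d_real, d_fake):
    """Hinge loss for the TSMGAN discriminator, averaged over the
    spatio-temporal score map of its last convolutional layer."""
    return F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()

def g_adv_loss(d_fake):
    """Adversarial term for the generator: push the scores of fakes up."""
    return -d_fake.mean()

def total_loss(l1, perc, style, adv,
               w_l1=1.0, w_perc=0.05, w_style=25.0, w_adv=0.01):
    """Weighted sum of the four terms; the weights here are placeholders."""
    return w_l1 * l1 + w_perc * perc + w_style * style + w_adv * adv
```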
Network Design
--------------

The model has a U-net-like generator and a TSMGAN discriminator. The generator is composed of 11 convolutional layers with the proposed Gated Temporal Shift Module, including down-sampling, dilated and up-sampling ones. Similar structures are also adopted in state-of-the-art image inpainting models [@yu2018free; @liu2018image]. Unlike U-net, there is no skip connection, as there are many masked areas in the down-sampling layers. For the down-sampling and up-sampling layers, we apply bilinear interpolation before the convolutions.

| Dataset | Metric | Mask | TCCDS | EC | CombCN | 3DGated | LGTSM |
|---------|--------|------|-------|----|--------|---------|-------|
| FaceForensics | MSE | Curve | 0.0031\* | 0.0022 | 0.0012 | **0.0008** | 0.0012 |
| FaceForensics | MSE | Object | 0.0096\* | 0.0074 | **0.0047** | 0.0048 | 0.0053 |
| FaceForensics | MSE | BBox | 0.0055 | 0.0019 | **0.0016** | 0.0018 | 0.0020 |
| FaceForensics | LPIPS | Curve | 0.0566\* | 0.0562 | 0.0483 | **0.0276** | 0.0352 |
| FaceForensics | LPIPS | Object | 0.1240\* | 0.0761 | 0.1353 | **0.0743** | 0.0770 |
| FaceForensics | LPIPS | BBox | 0.1260 | **0.0335** | 0.0708 | 0.0395 | 0.0432 |
| FaceForensics | FID | Curve | 1.281\* | 0.848 | 0.704 | **0.472** | 0.601 |
| FaceForensics | FID | Object | 1.107\* | 0.946 | 0.913 | **0.766** | 0.782 |
| FaceForensics | FID | BBox | 1.013 | **0.663** | 0.742 | **0.663** | 0.681 |
| FVI | MSE | Curve | 0.0220\* | 0.0048 | **0.0021** | 0.0025 | 0.0028 |
| FVI | MSE | Object | 0.0110\* | 0.0075 | **0.0048** | 0.0056 | 0.0065 |
| FVI | LPIPS | Curve | 0.2838\* | 0.1204 | 0.0795 | **0.0522** | 0.0569 |
| FVI | LPIPS | Object | 0.2595\* | 0.1398 | 0.2054 | **0.1078** | 0.1086 |
| FVI | FID | Curve | 2.1052\* | 1.0334 | 0.7669 | **0.6096** | 0.6436 |
| FVI | FID | Object | 1.2979\* | 1.0754 | 1.0908 | **0.9050** | 0.9323 |
| | Param. (G/D) | | - | 20M/6M | 16M/- | 36M/18M | **12M**/6M |
| | Speed (fps) | | 0.05$^\#$ | 55 | **120** | 23 | 80 |

: Quantitative comparison with baseline models on the FaceForensics and FVI datasets based on [@chang2019free]. The results on the FVI dataset are averaged over seven mask-to-frame ratios; detailed results for each ratio could be found in the supplementary materials. \*TCCDS failed on some cases and the results are averaged over the successful ones. $^\#$Runs on CPU; others are on GPU.[]{data-label="tab:quantitative_FVI"}

Experimental Results
====================

Setups
------

#### Datasets.

To compare with the baselines [@Huang-SigAsia-2016; @wang2018videoinp; @nazeri2019edgeconnect; @chang2019free], we train and test our model on the FaceForensics [@rossler2018faceforensics] and Free-form Video Inpainting (FVI) [@chang2019free] datasets. Both datasets are based on videos from YouTube, so they are close to real-world scenarios. FaceForensics is composed of 1,004 videos with face, news-caster or newsprogram tags. There are only frontal faces cropped to $128 \times 128$ in the FaceForensics dataset, so it is rather simple for learning based models. Among them, 150 videos are used for evaluation while the rest are for training. On the other hand, the FVI dataset contains 15,000 high-resolution videos with human activities, animals, natural scenes, etc. It also provides algorithms to generate free-form video masks for training. We re-size videos to $320 \times 180$ and split 100 videos for evaluation following the setup in [@chang2019free]. Note that the FVI dataset is considered more challenging as the videos are very diverse.

#### Training and Testing.

Empirically, we found that the model converges more slowly when directly trained as a whole. Thus, during training, we first pre-train the generator without the TSMGAN loss until convergence, and then fine-tune it with the TSMGAN. We initialize the temporal shifting kernels in the LGTSM with values that are equivalent to the original TSM. The pre-train stage takes about 1 day and the fine-tune stage about 3 days on the FVI dataset, roughly 3 times faster than the model with 3D convolutions [@chang2019free] (reducing the training time from 10 days to about 3 days), demonstrating the merits of the proposed module. For other implementation details, please see the supplementary materials.

![Visual comparison with the baselines on the FVI testing set with object-like masks. Best viewed in color and zoom-in. See [video](https://www.youtube.com/watch?v=87Vh1HDBjD0&list=PLPoVtv-xp_dL5uckIzz1PKwNjg1yI0I94&index=32&t=0s).[]{data-label="fig:visual_comparison"}](fig_compare.pdf){width="\linewidth"}

Quantitative Results
--------------------

As in [@chang2019free], the mean square error (MSE) and the Learned Perceptual Image Patch Similarity (LPIPS) [@zhang2018unreasonable] are used to evaluate the image quality; the Fréchet Inception Distance (FID) [@heusel2017gans] with a pre-trained I3D [@carreira2017quo] is used to evaluate video quality and temporal consistency. We compare with state-of-the-art baselines that follow different strategies: the patch-based video inpainting method TCCDS by Huang [@Huang-SigAsia-2016], the two-stage deep image inpainting method Edge-Connect (EC) by Nazeri [@nazeri2019edgeconnect], the two-stage deep video inpainting method CombCN by Wang [@wang2018videoinp], and the one-stage deep video inpainting method with 3D gated convolutions (3DGated) by Chang [@chang2019free]. We train all learning based models on the FaceForensics and FVI datasets with free-form masks from [@chang2019free] for a fair comparison. The averaged results over 7 ranges of mask-to-frame ratio from 0-10% to 60-70% on the FaceForensics and FVI testing sets are shown in Table \[tab:quantitative\_FVI\]. We could see that our model is on par with the state-of-the-art method 3DGated [@chang2019free] in terms of perceptual distance (LPIPS and FID) and video quality (FID) with only 33% of the parameters and inference time (note that the results are averaged; our model performs better for some mask-to-frame ratios).
TCCDS failed in many cases since the masks are irregular and it cannot properly recover partially masked objects. Edge-Connect (EC) performs better on the FaceForensics dataset with bounding box masks because the faces are all aligned and the generated edges could be stable. Still, it has a serious temporal inconsistency problem under other circumstances (see Fig. \[fig:visual\_comparison\]). Although CombCN has the lowest MSE scores, it only generates blurry results for the FVI dataset (see Fig. \[fig:visual\_comparison\]) and requires more parameters for the generator than our model.

Qualitative Results
-------------------

From Fig. \[fig:visual\_comparison\] we could observe that the proposed model outperforms TCCDS (wrong patches), Edge-Connect (temporally inconsistent) and CombCN (blurry), while being almost the same as 3DGated. More visual comparisons could be found in the supplementary materials.

Ablation Study
--------------

In order to validate the contribution of each component, we also conduct an ablation study on the FVI dataset (Table \[tab:ablation\_study\]). We could observe that both the gated convolution and the TSMGAN play important roles in our model; without them, the model suffers a significant drop in performance. The proposed learnable shifting kernel further improves the performance with almost no additional parameters and achieves state-of-the-art results. Visual comparisons could be found in the supplementary materials and the corresponding [videos](https://www.youtube.com/watch?v=B8aCnWQw-9Y&list=PLPoVtv-xp_dLRHUthPXMhZX-O6IBq1RYV&index=14&t=0s).

| 3D conv. | TSM | Learnable shifting | Gated conv. | GAN | LPIPS$\downarrow$ | FID$\downarrow$ | Param.$\downarrow$ |
|----------|-----|--------------------|-------------|-----|-------------------|-----------------|--------------------|
| $\checkmark$ | | | $\checkmark$ | $\checkmark$ | 0.1209 | 1.034 | 36M+18M |
| | $\checkmark$ | | | $\checkmark$ | 0.2048 | 1.303 | 12M\*+6M |
| | $\checkmark$ | | $\checkmark$ | | 0.1660 | 1.198 | 12M |
| | $\checkmark$ | | $\checkmark$ | $\checkmark$ | 0.1256 | 1.091 | 12M+6M |
| | $\checkmark$ | $\checkmark$ | $\checkmark$ | $\checkmark$ | **0.1213** | **1.039** | 12M+6M |

: Ablation study on the FVI dataset with object-like masks. The number of parameters is shown as generator/discriminator. \*We increase the channels of the vanilla convolution to fairly compare with the gated convolutions.[]{data-label="tab:ablation_study"}
Discussion and Future Work
==========================

Our model achieves state-of-the-art performance with LGTSM and 2D convolutions. However, the performance is still slightly lower than that of the 3D convolutions in [@chang2019free]. How to design a module that could handle temporal information in a more efficient way is still a challenging future work. Another future direction is to extend the input videos to higher or arbitrary resolution. For now, deep learning models are limited to a fixed resolution, which is not sufficient for exquisite videos. Simply applying these models to different parts of a high-resolution video may cause spatial inconsistency.

Conclusion
==========

This paper presented a novel Learnable Gated Temporal Shift Module (LGTSM) for free-form video inpainting. LGTSM learns to shift some channels to their temporal neighbors in each frame and applies a gating filter to attend to masked/inpainted/unmasked areas, which enables 2D convolutions to process temporal information and tackle the poisoning masked areas at the same time. In addition, LGTSM is highly efficient, using only 33% of the parameters and inference time compared to the state-of-the-art model with 3D convolutions. Experiments on the FaceForensics and FVI datasets suggest that the proposed model reaches state-of-the-art performance in terms of evaluation metrics and visual results.

Acknowledgement
===============

This work was supported in part by the Ministry of Science and Technology, Taiwan, under Grant MOST 108-2634-F-002-004. We also benefit from the NVIDIA grants and the DGX-1 AI Supercomputer. We are grateful to the National Center for High-performance Computing.
{ "pile_set_name": "ArXiv" }
---
abstract: 'The neutron and proton drip lines represent the limits of the nuclear landscape. While the proton drip line is measured experimentally up to rather high $Z$-values, the location of the neutron drip line for the absolute majority of elements is based on theoretical predictions which involve extreme extrapolations. The first ever systematic investigation of the location of the proton and neutron drip lines in covariant density functional theory has been performed by employing a set of state-of-the-art parametrizations. Calculated theoretical uncertainties in the position of the two-neutron drip line are compared with those obtained in non-relativistic DFT calculations. Shell effects drastically affect the shape of the two-neutron drip line. In particular, model uncertainties in the definition of the two-neutron drip line at $Z\sim 54, N=126$ and $Z\sim 82, N=184$ are very small due to the impact of the spherical shell closures at $N=126$ and 184.'
address:
- 'Department of Physics and Astronomy, Mississippi State University, MS 39762'
- 'Fakultät für Physik, Technische Universität München, D-85748 Garching, Germany'
author:
- 'A. V. Afanasjev'
- 'S. E. Abgemava'
- 'D. Ray'
- 'P. Ring'
title: 'Nuclear landscape in covariant density functional theory.'
---

Proton and neutron drip lines, covariant density functional theory, two-particle separation energies

At present, the nuclear masses of approximately 3000 out of roughly 7000 nuclei expected between the nuclear drip lines are known [@AME2012]. Nuclear existence ends at the drip lines. While the proton drip line has been delineated in experiment up to protactinium ($Z=91$), the position of the neutron drip line beyond $Z=8$ is determined only in model calculations. Different models and different parametrizations show rather large variations in the predictions of the neutron drip line. Moreover, because of experimental limitations, even in the foreseeable future it will be possible to define the location of the neutron drip line for the majority of elements only in model calculations. In such a situation it is important to estimate the errors in the location of the predicted neutron drip line introduced by the use of the various calculations. In this context we have to distinguish the results and related theoretical uncertainties obtained within the same model, but with different parametrizations, and the results and uncertainties obtained with different models.

Theoretical uncertainties (errors) in the prediction of physical observables have several sources of origin. Within one class of models they are the consequences of specific assumptions and of the optimization protocols. The differences in the basic assumptions of different model classes are another source. They lead to theoretical uncertainties which can be revealed only by a systematic comparison of a variety of models. The first attempt to estimate theoretical uncertainties in the definition of the two-neutron drip line within one class of models has been performed within the Skyrme density functional theory (SDFT) in Ref. [@Eet.12] employing a set of six parametrizations. These results were compared with those obtained in other classes of non-relativistic models such as the microscopic-macroscopic finite range droplet model (FRDM) [@MNMS.95] and the Skyrme Hartree-Fock-Bogoliubov (HFB) calculations of Ref. [@GCP.10] with the HFB-21 parametrization. It turns out that the two-neutron drip lines of the FRDM and Skyrme-HFB calculations are located either within the SDFT error band or very close to it.
Similar calculations exist also for non-relativistic DFT models based on the finite range Gogny forces D1S [@DGLGHPPB.10] and D1M [@GHGP.09]. The question of theoretical errors in the definition of the neutron drip line is still not resolved, since the important class of nuclear structure models known under the name of covariant density functional theory (CDFT) [@Serot1986_ANP16-1; @Reinhard1989_RPP52-439; @Ring1996_PPNP37-193; @VALR.05; @Meng2006_PPNP57-470] has not been applied so far in a reliable way to the study of this quantity. Typically, non-relativistic and relativistic DFT differ significantly in the prediction of separation energies close to the drip lines and, in general, of isovector properties far from stability [@V.05]. This may lead to neutron drip lines which differ substantially from those of non-relativistic models.

The goals of the present manuscript are (i) the systematic study of the two-proton and two-neutron drip lines within the relativistic Hartree-Bogoliubov (RHB) framework [@Kucharek1991_ZPA339-23; @GonzalesLlarena1996_PLB379-13] using several state-of-the-art CDFT parametrizations, (ii) the estimate of theoretical errors in the location of the drip lines within the CDFT framework, and (iii) the comparison of the drip lines obtained in relativistic and non-relativistic DFT and thus the estimate of global theoretical errors.

To our knowledge, there were only two previous attempts to study the neutron drip line in the CDFT framework [@HSet.97; @GTM.05]. However, both of them employ quite crude approximations to the physics of drip line nuclei with a rather limited validity. For example, the pairing correlations have been completely ignored in the studies of Ref. [@HSet.97], and the treatment of pairing via the BCS approximation in Ref. [@GTM.05] is questionable in the region of the drip line, since this approximation does not take into account the continuum properly and leads to the formation of a neutron gas [@DFT.84] in nuclei near the neutron drip line. In addition, these calculations use at most 14 fermionic shells for the harmonic oscillator basis, which, according to our study and the one of Ref. [@RA.11], is not sufficient for a correct description of the binding energies of actinides and superheavy nuclei and of the nuclei in the vicinity of the neutron drip line.

The RHB framework with a finite range pairing force is a proper tool for that purpose. It has been applied very successfully with the parameter set NL3 [@LVR.01; @LVR.04] and the parameter set DD-PC1 [@Ferreira2011_PLB701-508] at the proton drip line, and it has the proper coupling to the continuum at the neutron drip line. In the present manuscript, the RHB framework is used for a systematic study of the ground state properties of all even-even nuclei from the proton to the neutron drip line. The separable version [@TMR.09; @Tian2009_PRC80-024313] of the finite range Brink-Booker part of the Gogny D1S force is used in the particle-particle channel; its strength variation across the nuclear chart is defined by means of a fit of rotational moments of inertia, calculated in the cranked RHB framework, to experimental data via the procedure of Ref. [@AA.13]. The need for such an $A$-dependent variation of the strength of the Brink-Booker part of the Gogny D1S force in CDFT applications has recently been discussed in Refs. [@AA.13; @WSDL.13].
As the absolute majority of nuclei are known to be axially and reflection symmetric in their ground states, we consider only axial and parity-conserving intrinsic states and solve the RHB-equations in an axially deformed oscillator basis [@Gambhir1990_APNY198-132; @Ring1997_CPC105-77]. The truncation of the basis is performed in such a way that all states belonging to the shells up to $N_F = 20$ fermionic shells and $N_B = 20$ bosonic shells are taken into account. This provides sufficient numerical accuracy. For each nucleus the potential energy curve over a large deformation range, from $\beta_2=-0.4$ up to $\beta_2=1.0$, is obtained by means of a constraint on the quadrupole moment $Q_{20}$. Then, the correct ground state configuration and its energy are defined; this procedure is especially important for the cases of shape coexistence. In axial reflection-symmetric calculations for superheavy nuclei with $Z\geq 100$, the superdeformed minimum is frequently lower in energy than the normal deformed one [@AAR.12]. As long as triaxial and octupole deformations are not included, this minimum is stabilized by the presence of an outer fission barrier. Including such deformations, however, it often turns out that this minimum either disappears or becomes a saddle point, unstable against fission [@AAR.12]. Since these deformations are not included in the present calculations, we restrict our consideration to spherical or normal-deformed ground states in the $Z\geq 100$ nuclei. This also facilitates the comparison with non-relativistic results which favor such ground states for these nuclei. Three existing classes of covariant density functional models are used throughout this paper: the nonlinear meson-nucleon coupling model (NL), the density-dependent meson-exchange model (DD-ME), and a density-dependent point coupling model (DD-PC); see their comparison in Ref. [@AAR.12]. The main differences among them lie in the treatment of the range of the interaction, the mesons, and the density dependence. The interaction in the first two classes has a finite range, while the third class uses a zero-range interaction with one additional gradient term in the scalar-isoscalar channel. The mesons are absent in the density-dependent point coupling model. The density dependence is explicit in the last two models, while it shows up via the nonlinear meson-couplings in the first case. Each of these model classes is represented here by the energy density functional (EDF) that is considered to be the state of the art. The NL model is represented here by the NL3\* [@NL3*] EDF, which has the smallest number of parameters among the considered EDFs fitted to data. The DD-ME model is represented by the DD-ME2 [@DD-ME2] and the DD-ME$\delta$ [@DD-MEdelta] EDFs. The DD-ME$\delta$ EDF differs from the others by the inclusion of the $\delta$-meson, which leads to different proton and neutron effective masses. In addition, the parameters of the DD-ME$\delta$ EDF are largely based on microscopic [*ab initio*]{} calculations in nuclear matter; only four of its parameters are fitted to finite nuclei. In contrast, all parameters of the other EDFs were adjusted to experimental data based on the properties of finite nuclei. The DD-PC model is represented by the DD-PC1 [@DD-PC1] EDF.
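As an illustration of the ground-state selection just described (a constrained $\beta_2$ scan followed by the choice of the appropriate minimum), the short Python sketch below locates the minima of a potential energy curve on a grid and picks the lowest one, optionally discarding superdeformed minima as done here for the $Z\geq 100$ nuclei. The curve is synthetic, the $|\beta_2|<0.5$ cutoff is only an illustrative stand-in for the normal-deformed restriction, and the function names are ours; this is not part of the actual RHB code.

```python
import numpy as np

def select_ground_state(beta2, energy, normal_deformed_only=False):
    """Pick the ground state from a constrained potential-energy curve E(beta2):
    find all local minima on the grid and return the lowest one.  If
    normal_deformed_only is True, minima with |beta2| >= 0.5 (a stand-in for
    'superdeformed') are discarded, mimicking the Z >= 100 prescription."""
    minima = [i for i in range(1, len(beta2) - 1)
              if energy[i] < energy[i - 1] and energy[i] < energy[i + 1]]
    if normal_deformed_only:
        minima = [i for i in minima if abs(beta2[i]) < 0.5]
    if not minima:                       # fall back to the global grid minimum
        minima = [int(np.argmin(energy))]
    i_gs = min(minima, key=lambda i: energy[i])
    return beta2[i_gs], energy[i_gs]

# Synthetic double-minimum curve on the beta2 grid from -0.4 to 1.0 (in MeV),
# standing in for a calculated RHB potential-energy curve.
beta2 = np.linspace(-0.4, 1.0, 141)
energy = 500.0 * (beta2 - 0.25) ** 2 * (beta2 - 0.8) ** 2 - 2.0 * beta2

print(select_ground_state(beta2, energy))                             # lower (deformed) minimum
print(select_ground_state(beta2, energy, normal_deformed_only=True))  # minimum near beta2 ~ 0.26
```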
In contrast to the other functionals, which are fitted to spherical nuclei, this EDF is fitted to a large set of deformed nuclei. ![image](prot-drip-all.eps){width="18cm"} Fig. \[chart\] shows the nuclear landscape as obtained with these CDFT parametrizations. The particle stability (and, as a consequence, a drip line) of a nuclide is specified by its separation energy, namely, the amount of energy needed to remove particle(s). Since our investigation is restricted to even-even nuclei, we consider two-neutron $S_{2n}=B(Z,N-2)-B(Z,N)$ and two-proton $S_{2p}=B(Z-2,N)-B(Z,N)$ separation energies. Here $B(Z,N)$ stands for the binding energy of a nucleus with $Z$ protons and $N$ neutrons. If the separation energy is positive, the nucleus is stable against two-nucleon emission; conversely, if the separation energy is negative, the nucleus is unstable. Thus, the two-neutron and two-proton drip lines are reached when $S_{2n}\leq 0$ and $S_{2p}\leq 0$, respectively.

  EDF             $\Delta E_{rms}$ (measured)   $\Delta E_{rms}$ (measured+estimated)   $\Delta (S_{2n})_{rms}$   $\Delta (S_{2p})_{rms}$
  --------------- ----------------------------- --------------------------------------- ------------------------- -------------------------
  NL3\*           2.97                          3.01                                    1.21                      1.28
  DD-ME2          2.42                          2.48                                    1.09                      0.99
  DD-ME$\delta$   2.31                          2.42                                    1.11                      1.11
  DD-PC1          2.02                          2.17                                    1.25                      1.13

  : The rms-deviations $\Delta E_{rms}$, $\Delta (S_{2n})_{rms}$, and $\Delta (S_{2p})_{rms}$ between calculated and experimental binding energies $E$ and two-neutron (two-proton) separation energies $S_{2n}$ ($S_{2p}$), respectively. They are given in MeV for the indicated CDFT parametrizations with respect to the “measured” and “measured+estimated” sets of experimental masses. \[deviat\]

The accuracy of the description of separation energies depends on the accuracy of the description of mass differences. The global RHB calculations of masses with the employed parametrizations lead to the rms-deviations $\Delta E_{rms}$ between calculated and experimental binding energies which are listed in Table \[deviat\]. The detailed results of these calculations will be presented in a forthcoming manuscript [@AARR.13]. The masses given in the AME2012 mass evaluation [@AME2012] can be separated into two groups; one represents nuclei with masses defined only from experimental data, the other contains nuclei with masses depending in addition on either interpolation or extrapolation procedures. For simplicity, we refer to the masses of the nuclei in the first and second groups as measured and estimated, respectively. There are 640 measured and 195 estimated masses of even-even nuclei in the AME2012 mass evaluation. One can see in Table \[deviat\] that the addition of estimated masses leads only to a slight decrease of the accuracy of the description of experimental data. Two-neutron $S_{2n}$ and two-proton $S_{2p}$ separation energies are described with a typical accuracy of 1 MeV (Table \[deviat\]). One can see that the parametrization which provides the best description of masses does not always give the best description of two-particle separation energies. This is because the separation energies are related to the derivatives of the binding energies with respect to particle number. Fig. \[chart-odd\] shows that theoretical uncertainties (i.e., the spread of the predictions due to different EDFs) are rather small for the two-proton drip line. In addition, the results of the calculations are very close to experimental data. This is because the proton drip line lies close to the valley of stability, so that extrapolation errors towards it are small.
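As a minimal illustration of the separation-energy criterion defined above, the sketch below evaluates $S_{2n}$ and $S_{2p}$ from a binding-energy table and walks outward in $N$ until $S_{2n}\leq 0$. The binding energies come from a simple liquid-drop formula used only as a stand-in for the calculated RHB masses; following the formulas in the text, we take $B(Z,N)$ as a negative quantity (more bound corresponds to more negative), which is our reading of the sign convention. The function names and coefficients are illustrative.

```python
import numpy as np

def B_ld(Z, N):
    """Liquid-drop stand-in for the calculated binding energy B(Z, N), taken
    here as a negative quantity (more bound = more negative) so that the
    separation energies defined in the text are positive for bound nuclei."""
    A = Z + N
    return -(15.75 * A - 17.8 * A ** (2.0 / 3.0)
             - 0.711 * Z * (Z - 1) / A ** (1.0 / 3.0)
             - 23.7 * (N - Z) ** 2 / A)

def s2n(Z, N):
    return B_ld(Z, N - 2) - B_ld(Z, N)     # S_2n = B(Z, N-2) - B(Z, N)

def s2p(Z, N):
    return B_ld(Z - 2, N) - B_ld(Z, N)     # S_2p = B(Z-2, N) - B(Z, N)

def two_neutron_drip(Z, n_max=400):
    """First even N (walking towards neutron excess) at which S_2n <= 0."""
    for N in range(Z + 2, n_max, 2):
        if s2n(Z, N) <= 0.0:
            return N
    return None

for Z in (50, 70, 90):
    N_drip = two_neutron_drip(Z)
    print(Z, N_drip, round(s2n(Z, N_drip), 3))
```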
Another reason is the fact that the Coulomb barrier provides a rather steep potential which considerably reduces the coupling to the proton continuum. This leads to a relatively low density of the single-particle states in the vicinity of the Fermi level. The situation is different for the two-neutron drip line. In the majority of the cases, the theoretical uncertainties in the location of this line are much larger than for the two-proton drip line, and they generally increase with increasing mass number. This is commonly attributed to poorly known isovector properties of the EDFs [@Eet.12]. Although this factor contributes, such an explanation is somewhat oversimplified from our point of view. That is because for some combinations of $Z$ and $N$ there is basically no (or very little) dependence of the predictions for the location of the two-neutron drip line on the CDFT parametrization. Such a weak (or vanishing) dependence is especially pronounced at spherical neutron shell closures with $N=126,184$ and 258 around proton numbers $Z=54,80$ and 110. It is interesting that the impact of shell structure at these particle numbers on the shape of the two-neutron drip line is more pronounced than that for the two-proton drip line due to the $Z=50$ and 82 proton shell gaps. However, moving away from these spherical shell closures the spread of theoretical predictions for the two-neutron drip line increases. Moving away from the shell closures also induces deformation in the nuclei. Thus, there is a close correlation between the nuclear deformation at the neutron drip line and the uncertainties in the prediction of the neutron drip line; the regions of large uncertainties correspond to transitional and deformed nuclei. This is caused by the underlying densities of the single-particle states. The spherical nuclei under discussion are characterized by large shell gaps and a clustering of highly degenerate single-particle states around them. Deformation removes this high degeneracy of single-particle states and leads to a more equal distribution of the single-particle states with energy. Moreover, the density of bound neutron single-particle states close to the neutron continuum is substantially larger than that at the proton drip line. As a consequence, inevitable inaccuracies in the DFT description of the deformed single-particle state energies, which are present even in the valley of beta stability [@AS.11], will lead to larger uncertainties in the predictions of the neutron drip line. ![Two-neutron separation energies $S_{2n}$, neutron chemical potentials $\lambda_n$, and quadrupole deformations $\beta_2$ of the Th$(Z=90)$ isotopes obtained in the RHB(DD-ME2) calculations.[]{data-label="reemer"}](reemergence.eps){width="8cm"} For some isotope chains, there are regions of two-neutron stability (not shown in Fig. \[chart\]) at neutron numbers beyond the primary two-neutron drip line. The physical mechanism behind the appearance of these regions is illustrated in Fig. \[reemer\] using the example of the Th isotope chain. The two-neutron separation energy $S_{2n}$ and the neutron chemical potential $\lambda_{n}$ are positive and negative, respectively, in two-neutron bound nuclei ($N\leq 184$). The $S_{2n}$ and $\lambda_{n}$ values become negative and positive, respectively, for two-neutron unbound nuclei ($186\leq N\leq 192$). A further increase of the neutron number triggers an increase of quadrupole deformation $\beta_2$ leading to a lowering of the neutron chemical potential $\lambda_n$, which again becomes negative.
As a consequence, two-neutron binding reappears ($S_{2n}>0$) at $N=194-206$. A further increase of $N$ beyond 206 leads to two-neutron unbound nuclei. The appearance of these regions, however, strongly depends on the CDFT parametrization. For example, such regions exist at $(Z=62,N=132-146)$, $(Z=88,N=194-206)$ for DD-PC1, at $(Z=74,N=176-184)$, $(Z=90,N=194-206)$ for DD-ME2 and at $(Z=62,N=132-142)$, $(Z=74,N=178-184)$ and $(Z=90,N=204-206)$ for DD-ME$\delta$. However, the regions of stability beyond the primary drip line are absent in the RHB(NL3\*) calculations. A similar reappearance of two-neutron binding with increasing neutron number beyond the primary two-neutron drip line exists also in many SDFT parametrizations [@Eet.12]. Both in CDFT and SDFT, the regions of two-neutron binding reappearance represent peninsulas emerging from the nuclear mainland. Ref. [@Eet.12] suggested that such behavior is due to the presence of shell effects at neutron closures that tend to lower the binding energy along the localized bands of stability. This is certainly true in some cases. However, our analysis presented above suggests that local changes of the shell structure induced by deformation changes also play an important role. Similar to the CDFT(NL3\*) results, there are also some Skyrme EDFs which do not show the reappearance of two-neutron binding [@Eet.13]. It is interesting to compare the theoretical CDFT uncertainties in the definition of the two-proton and two-neutron drip lines with the ones obtained in non-relativistic calculations. Fig. \[chart-shade\] presents such a comparison. We use the so-called “2012 Benchmark uncertainties” [@Eet.13] obtained in Ref. [@Eet.12] for Skyrme DFT employing six parametrizations; these uncertainties are shown by the combination of yellow and blue shaded areas in Fig. \[chart-shade\]. The CDFT uncertainties are represented by the combination of the plum and blue shaded areas. One can see that the CDFT and SDFT uncertainties in the definition of the two-proton drip line are small; they tightly overlap at $Z\leq 70$, while for higher $Z$ the CDFT uncertainties are shifted slightly towards neutron-deficient nuclei as compared with the SDFT ones. The uncertainties for the two-neutron drip line are larger, but they are still similar in the two models in many regions. In particular, the two-neutron drip line at $Z\sim 54, N=126$ and $Z\sim 82, N=184$ is well defined not only in the CDFT and SDFT calculations, but also in the mic+mac (FRDM) and Gogny D1S calculations. This uniqueness is due to the corresponding well-pronounced spherical shell closures in the model calculations. The predictions of the DD-ME2, DD-ME$\delta$ and DD-PC1 parametrizations are close to each other (Fig. \[chart\]) and are within the “2012 Benchmark uncertainties”. The NL3\* parametrization typically shifts the two-neutron drip line to a higher $N$-value, exceeding the “2012 Benchmark uncertainties” in some regions. However, the same is true for the recently fitted Skyrme TOV-min parametrization [@Eet.13], the two-neutron drip line of which is very similar to the one obtained in the RHB(NL3\*) calculations. The biggest difference between the CDFT and Skyrme DFT calculations appears at $N=258,Z\sim 110$ (see Fig. \[chart-shade\]) where the two-neutron drip line is uniquely defined in the CDFT calculations due to the large spherical gap at $N=258$. This gap is also present in many Skyrme EDFs, but it does not prevent a significant spread of the Skyrme DFT predictions for the two-neutron drip line in this region.
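The peninsula structure just discussed can be read off directly from the sign pattern of $S_{2n}$ along an isotope chain. The short sketch below groups consecutive even-$N$ isotopes with $S_{2n}>0$ into segments: the first segment ends at the primary two-neutron drip line, and any later segment is a region of re-emergent two-neutron binding. The $S_{2n}$ values are invented to mimic the Th pattern described above; they are not the calculated DD-ME2 numbers.

```python
def binding_segments(N_values, S2n_values):
    """Group consecutive even-N isotopes with S_2n > 0 into contiguous segments.
    The first segment ends at the primary two-neutron drip line; any further
    segment is a 'peninsula' of re-emergent two-neutron binding."""
    segments, current = [], []
    for N, s in zip(N_values, S2n_values):
        if s > 0.0:
            current.append(N)
        elif current:
            segments.append((current[0], current[-1]))
            current = []
    if current:
        segments.append((current[0], current[-1]))
    return segments

# Illustrative S_2n sequence (MeV) mimicking the Th chain discussed above:
# bound up to N=184, unbound for N=186-192, re-bound for N=194-206, then unbound.
N_values = list(range(180, 212, 2))
S2n_values = [1.2, 0.8, 0.3, -0.2, -0.4, -0.3, -0.1,
              0.2, 0.4, 0.3, 0.25, 0.2, 0.1, 0.05, -0.2, -0.4]

print(binding_segments(N_values, S2n_values))   # [(180, 184), (194, 206)]
```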
This again underlines the importance of shell structure in the predictions of the details of the two-neutron drip line. A similar difference between CDFT and SDFT exists also in superheavy nuclei with $Z\approx 120-126, N\approx 172-184$, where different centers of the islands of stability are predicted by these models [@BRRMG.98; @AKF.03]. These results are in contrast to the fact that both models generally agree for lighter $Z\leq 100$ nuclei. The DD-\* CEDFs predict the two-neutron drip line at lower $N$ as compared with the NL3\* one (see Fig. \[chart\]). It is tempting to associate this feature with the different symmetry energies $J$ ($J\sim 32$ MeV for DD\* and $J\sim 39$ MeV for NL3\*). However, a detailed analysis of 14 two-neutron drip lines obtained in relativistic and non-relativistic calculations does not reveal clear correlations between the location of the two-neutron drip line and the nuclear matter properties of the employed force. In conclusion, a detailed analysis of two-neutron drip lines in covariant and non-relativistic DFT has been performed. These results clearly indicate that the shell structure is not washed out near or at the two-neutron drip line. In particular, model uncertainties in the definition of the two-neutron drip line at $Z\sim 54, N=126$ and $Z\sim 82, N=184$ are very small due to the impact of spherical shell closures at $N=126$ and 184. The largest difference between covariant and Skyrme DFT exists in superheavy nuclei, where the former (contrary to the latter) predicts a significant impact of the $N=258$ spherical shell closure. The spread of theoretical predictions grows when moving away from these spherical closures; this is caused by the development of deformation. Both the poorly known isovector properties of the forces and the inevitable inaccuracies in the description of deformed single-particle states in the DFT framework contribute to that. The number of particle-bound even-even $Z\leq 120$ nuclei is 2040, 2050, 2057 and 2216 in the DD-PC1, DD-ME2, DD-ME$\delta$ and NL3\* parametrizations, respectively. This is close to the numbers obtained in SDFT. Thus, our calculations support the estimate of Ref. [@Eet.12] that around 7000 different (including odd- and odd-odd ones) nuclides have to exist. The authors would like to thank J. Erler for valuable discussions. This work has been supported by the U.S. Department of Energy under grant DE-FG02-07ER41459 and by the DFG cluster of excellence Origin and Structure of the Universe (www.universe-cluster.de). This research was also supported by an allocation of advanced computing resources provided by the National Science Foundation. The computations were partially performed on Kraken at the National Institute for Computational Sciences (http://www.nics.tennessee.edu/). [99]{} M. Wang, G. Audi, A. H. Wapstra, F. G. Kondev, M. MacCormick, X. Xu and B. Pfeiffer, Chinese Physics C [**36**]{}, 1603 (2012). J. Erler, N. Birge, M. Kortelainen, W. Nazarewicz, E. Olsen, A. M. Perhac, M. Stoitsov, Nature [**486**]{}, 509 (2012). P. M[ö]{}ller, J. R. Nix, W. D. Myers, W. J. Swiatecki, At. Data Nucl. Data Tabl. [**59**]{}, 185 (1995). S. Goriely, N. Chamel and J. M. Pearson, Phys. Rev. C [**82**]{}, 035804 (2010). J.-P. Delaroche, M. Girod, J. Libert, H. Goutte, S. Hilaire, S. Peru, N. Pillet, and G. F. Bertsch, Phys. Rev. C [**81**]{}, 014303 (2010). S. Goriely, S. Hilaire, M. Girod, and S. P[é]{}ru, Phys. Rev. Lett. [**102**]{}, 242501 (2009). B. D. Serot and J. D. Walecka, Adv. Nucl. Phys. [**16**]{}, 1 (1986). P.-G. Reinhard, Rep. Prog. Phys.
[**52**]{}, 439 (1989). P. Ring, Prog. Part. Nucl. Phys. [**37**]{}, 193 (1996). D. Vretenar, A. V. Afanasjev, G. A. Lalazissis, and P. Ring, Phys. Rep. [**409**]{}, 101 (2005). J. Meng, H. Toki, S. G. Zhou, S. Q. Zhang, W. H. Long, and L. S. Geng, Prog. Part. Nucl. Phys. [**57**]{}, 470 (2006). D. Vretenar, Nucl. Phys. [**A 751**]{}, 264c (2005). H. Kucharek and P. Ring, Z. Phys. A [**339**]{}, 23 (1991). T. Gonzalez-Llarena, J. L. Egido, G. A. Lalazissis, and P. Ring, Phys. Lett. B [**339**]{}, 23 (1991). D. Hirata, K. Sumiyoshi, I. Tanihata, Y. Sugahara, T. Tachibana and H. Toki, Nucl. Phys. [**A616**]{}, 438c (1997). L. Geng, H. Toki, and J. Meng, Prog. Theor. Phys., [**113**]{}, 785 (2005). J. Dobaczewski, H. Flocard, and J. Treiner, Nucl. Phys. [**A422**]{}, 103 (1984). P.-G. Reinhard, B. K. Agrawal, Int. Jour. Mod. Phys. [**E20**]{}, 1379 (2011). G. A. Lalazissis, D. Vretenar, P. Ring, Nucl. Phys. [**A679**]{}, 481 (2001). G. A. Lalazissis, D. Vretenar, P. Ring, Phys. Rev. [**C69**]{}, 173011 (2004). L. S. Ferreira and E. Maglione and P. Ring, Phys. Lett. B [**701**]{}, 508 (2011). Y. Tian, Z. Y. Ma, and P. Ring, Phys. Lett. [**B676**]{}, 44 (2009). Y. Tian, Z. Y. Ma, and P. Ring, Phys. Rev. C [**80**]{}, 024313 (2009). A. V. Afanasjev and O. Abdurazakov, Phys. Rev. C [**88**]{}, 014320 (2013). L. J. Wang, B. Y. Sun, J. M. Dong, and W. H. Long, Phys. Rev. C [**87**]{}, 054331 (2013). Y. K. Gambhir, P. Ring, and A. Thimet, Ann. Phys. (N. Y.) [**198**]{}, 132 (1990). P. Ring, Y. K. Gambhir, and G. A. Lalazissis, Comp. Phys. Comm. [**105**]{}, 77 (1997). H. Abusara, A. V. Afanasjev, and P. Ring, Phys. Rev. [**C85**]{}, 024314 (2012). G. A. Lalazissis, S. Karatzikos, R. Fossion, D. Pe[ñ]{}a Arteaga, A. V. Afanasjev, and P. Ring, Phys. Lett. [**B671**]{}, 36 (2009). G. A. Lalazissis, T. Nik[š]{}i[ć]{}, D. Vretenar, and P. Ring, Phys. Rev. [**C71**]{}, 024312 (2005). X. Roca-Maza, X. Viñas, M. Centelles, P. Ring, and P. Schuck, Phys. Rev. C 84, 054309 (2011). T. Nikšić, D. Vretenar, and P. Ring, Phys. Rev. [**C78**]{}, 034318 (2008). S. E. Abgemava, A. V. Afanasjev, D. Ray, and P. Ring, in preparation. A. V. Afanasjev and S. Shawaqfeh, Phys. Lett. [**B706**]{}, 177 (2011). J. Erler, C. J. Horowitz, W. Nazarewicz, M. Rafalski, P.-G. Reinhard, Phys. Rev. c [**87**]{}, 044320 (2013). M. Bender, K. Rutz, P.-G. Reinhard, J. A. Maruhn, and W. Greiner, Phys. Rev. [**C58**]{}, 2126 (1998). A. V. Afanasjev, T. L. Khoo, S. Frauendorf, G. A. Lalazissis, and I. Ahmad, Phys. Rev. [**C67**]{}, 024309 (2003).
{ "pile_set_name": "ArXiv" }
--- abstract: 'The short pulse equation (SPE) is considered as an initial-boundary value problem. It is found that the solutions of the SPE must satisfy an integral relation otherwise the temporal derivative exhibits discontinuities. This integral relation is not necessary for a solution to exist. An infinite number of such constraints can be dynamically generated by the evolution equation.' address: 'Department of Computer Science and Technology, University of Peloponnese, Tripolis 22100, Greece' author: - 'Theodoros P. Horikis' --- The standard model for describing propagation of a pulse-shaped complex field envelope in nonlinear dispersive media is the nonlinear Schrödinger (NLS) equation. In the context of nonlinear optics, the main assumption made when deriving the NLS equation from Maxwell’s equations is that the pulse-width is large as compared to the period of the carrier frequency. When this assumption is no longer valid, i.e., for pulse duration of the order of a few cycles of the carrier, the evolution of such “short pulses” is better described by the so-called short-pulse equation (SPE) [@schafer]. The SPE can be expressed in the following dimensionless form, $$u_{xt}=u+\frac{1}{6}(u^3)_{xx} \label{spe}$$ where subscripts denote partial derivatives. The SPE forms an initial-boundary problem when accompanied by the initial data $$u(0,x)=u_0,$$ and sufficiently fast decaying boundary conditions $u(t,\pm\infty)=0$. Much like the NLS equation, the SPE is integrable [@sakovich3] and exhibits soliton solutions in the form of loop-solitons [@sakovich1]. However, when it is formed as an evolution equation certain conditions must apply otherwise, as shown below, the temporal derivative exhibits discontinuities. Despite the fact that the equation is integrable via the inverse scattering transform [@victor], there are certain subtleties that need to be clarified. Integration of Eq. (\[spe\]) introduces the operation $$\partial_x^{-1}u(t,x)=\int_{-\infty}^x u(t,x')\;{\mathrm{d}}x'$$ Clearly as $x$ approaches $-\infty$, $\partial_x^{-1}u=0$, consistent with rapidly decaying data. However, as $x$ approaches $+\infty$, for $u$ and its time and space derivatives to decay, a constraint seems to be necessary (see the discussion below), namely $${\int_{-\infty}^{+\infty}}u(t,x)\;{\mathrm{d}}x =0 \label{spe.const}$$ Indeed, writing the SPE in evolution type form we have $$u_t=\partial_x^{-1}u+\frac{1}{6}(u^3)_x=\int_{-\infty}^x u\;{\mathrm{d}}x' + \frac{1}{6}(u^3)_x$$ and imposing the boundary condition as $x\rightarrow +\infty$, one results to Eq. (\[spe.const\]). In fact, this constraint induces further constraints obtained by successively taking the time derivative of the integral and using Eq. (\[spe\]). For example, the next constraint is given by $${\int_{-\infty}^{+\infty}}\partial_x^{-1}u \; {\mathrm{d}}x =0\label{spe.const2}$$ However, Eqs. (\[spe.const\]), (\[spe.const2\]), along with the rest of the family of infinite constraints generated as above, are not generically true. One might surmise that constraints are required at all times for a solution to exist. However, as discussed below, this is not the case. Extra constraints on the initial data are not necessary, but the solution suffers from a temporal discontinuity. For smooth initial data not satisfying Eq. (\[spe.const\]), $u_t(t,x)$ has at $t=0$ different left and right limits and the rest of the family of constraints cannot be generated dynamically at that point. 
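A quick numerical way to see the role of the constraint is to evaluate the evolution-form right-hand side on a grid: $\partial_x^{-1}u$, realized as a cumulative integral from the left boundary, tends as $x\rightarrow+\infty$ to $\int u\,{\mathrm{d}}x$, so $u_t$ can only decay at $+\infty$ if that integral vanishes. The sketch below (in Python, with Gaussian-type test profiles chosen purely for illustration) compares a profile that violates the zero-mean condition with one that satisfies it; the discretization choices are ours.

```python
import numpy as np

# Grid wide enough that the test profiles are effectively compactly supported.
x = np.linspace(-40.0, 40.0, 4001)
dx = x[1] - x[0]

def rhs_evolution_form(u):
    """Right-hand side of u_t = dx^{-1} u + (u^3)_x / 6, with dx^{-1} taken as
    the cumulative integral from the left end of the grid (x -> -infinity)."""
    antideriv = np.cumsum(u) * dx            # int_{-inf}^{x} u dx'
    cubic_x = np.gradient(u ** 3, dx)        # (u^3)_x by central differences
    return antideriv + cubic_x / 6.0

u_bad = np.exp(-x ** 2)                      # violates the zero-mean condition
u_good = -2.0 * x * np.exp(-x ** 2)          # odd profile, integral is zero

for u in (u_bad, u_good):
    total = u.sum() * dx                     # int u dx, i.e. u-hat at k = 0
    ut = rhs_evolution_form(u)
    # u_t at the right boundary approaches int u dx: it decays there only if
    # the integral vanishes, in line with the constraint derived above.
    print(round(total, 6), round(ut[-1], 6))
```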
The same issues arise in the context of the Kadomtsev-Petviashvili (KP) equations and were studied in Refs. [@mja1; @mja2]. Our analysis starts by taking the Fourier transform (FT) of Eq. (\[spe\]), $$i k \hat{u}_t=\hat{u}-\frac{ k ^2}{6}\widehat{u^3} \label{spe.ft}$$ where the FT pair is defined as $$\begin{aligned} \hat{u}(t, k )=\mathcal{F}\left\{ u(t,x) \right\}&=&{\int_{-\infty}^{+\infty}}u(t,x)\,{\mathrm{e}}^{i k x}\;{\mathrm{d}}x \\ u(t,x)=\mathcal{F}^{-1}\left\{ \hat{u}(t, k ) \right\}&=&\frac{1}{2\pi} {\int_{-\infty}^{+\infty}}\hat{u}(t, k )\,{\mathrm{e}}^{-i k x}\;{\mathrm{d}}k\end{aligned}$$ Define $\hat{U}=\widehat{u^3}$ and write Eq. (\[spe.ft\]) in the form of a first order differential equation in $t$, $$\begin{aligned} \hat{u}_t-\frac{1}{i k }\hat{u}=\frac{i k }{6}\hat{U} \label{fourier}\end{aligned}$$ which can be readily integrated with the use of integrating factors to give $$\hat{u}(t, k )=\hat{u}_0\,{\mathrm{e}}^{t/i k }+\frac{i k }{6}\,{\mathrm{e}}^{t/i k } \int_0^t \hat{U}\,{\mathrm{e}}^{-\tau/i k }\;{\mathrm{d}}\tau \label{sol.ft}$$ where $\hat{u}_0=\hat{u}_0( k )=\mathcal{F}\{ u_0(x) \}$, is the FT of the initial data. Using Eq. (\[sol.ft\]) we calculate the temporal derivative to be $$\hat{u}_t(t, k )=\frac{1}{i k }\hat{u}_0\,{\mathrm{e}}^{t/i k }+\frac{i k }{6}\hat{U} +\frac{1}{6} \int_0^t \hat{U}\,{\mathrm{e}}^{(t-\tau)/i k }\;{\mathrm{d}}\tau \label{der.ft}$$ Clearly, from Eq. (\[fourier\]), as $ k $ tends to zero, we should demand that so will $\hat{u}$. This translates to [@pelinovsky] $$\hat{u}(t,0)=0 \Leftrightarrow {\int_{-\infty}^{+\infty}}u(t,x)\;{\mathrm{d}}x =0$$ However, as $t\rightarrow \pm 0$, we have that $\hat{u}(0,x)=\hat{u}_0$ from Eq. (\[sol.ft\]), and $$\hat{u}_t=\frac{1}{ ik -\,{\mathrm{sign}}(t)0}\,\hat{u}_0+\frac{i k }{6}\,\hat{U}$$ This is because the function $\exp(t/ ik )$ defines a distribution, depending continuously on $t$, in the Schwartz space of the variable $ k $ [@boiti] with $$\frac{\partial}{\partial t}{\mathrm{e}}^{t/ ik }=\frac{1}{ ik -\,{\mathrm{sign}}(t)0}\,{\mathrm{e}}^{it/ k }, \quad t=0$$ and $$\frac{\partial}{\partial t}{\mathrm{e}}^{t/ ik }=\frac{1}{ ik }\,{\mathrm{e}}^{t/ ik }, \quad t\ne 0$$ This suggests that although there is no discontinuity in the solution, there is one in the derivative. Indeed, taking the inverse FT of Eq. 
(\[der.ft\]), at $t\rightarrow \pm 0$, we have $$u_t(t\rightarrow \pm 0,x)=\frac{1}{2\pi}\lim_{t\rightarrow \pm 0} {\int_{-\infty}^{+\infty}}\frac{1}{i k }\hat{u}_0\, {\mathrm{e}}^{t/i k }\,{\mathrm{e}}^{-i k x}\;{\mathrm{d}}k +\frac{1}{6}(u^3)_{x} (t\rightarrow \pm 0,x)$$ The nonlinear term is straightforward to handle so we focus on the linear part, $$\begin{aligned} I(x)&=& \frac{1}{2\pi}\lim_{t\rightarrow \pm 0} {\int_{-\infty}^{+\infty}}\frac{1}{i k }\hat{u}_0\, {\mathrm{e}}^{t/i k }\,{\mathrm{e}}^{-i k x}\;{\mathrm{d}}k \nonumber\\ &=&\frac{1}{2\pi}\lim_{t\rightarrow \pm 0} {\int_{-\infty}^{+\infty}}\frac{1}{i k }\left[\hat{u}_0{\mathrm{e}}^{-i k x} +\hat{u}_0(0) - \hat{u}_0(0) \right] {\mathrm{e}}^{t/i k }\;{\mathrm{d}}k \nonumber\\ &=& \frac{1}{2\pi}\lim_{t\rightarrow \pm 0} {\int_{-\infty}^{+\infty}}\frac{1}{i k }\left[\hat{u}_0{\mathrm{e}}^{-i k x} - \hat{u}_0(0) \right] {\mathrm{e}}^{t/i k }\;{\mathrm{d}}k \nonumber\\ &+& \frac{1}{2\pi}\lim_{t\rightarrow \pm 0} {\int_{-\infty}^{+\infty}}\frac{1}{i k }\,\hat{u}_0(0) \,{\mathrm{e}}^{t/i k }\;{\mathrm{d}}k \label{ix}\end{aligned}$$ Using the property $${\int_{-\infty}^{+\infty}}\frac{1}{i k }\,{\mathrm{e}}^{t/i k }\;{\mathrm{d}}k =-\pi\,{\mathrm{sign}}(t)$$ the second integral of Eq. (\[ix\]) is reduced to $- \hat{u}_0(0)\pi\,{\mathrm{sign}}(t)/2$. Furthermore, we write $$\hat{u}_0(0)={\int_{-\infty}^{+\infty}}\delta( k )\,\hat{u}_0( k )\,{\mathrm{e}}^{-i k x}\;{\mathrm{d}}k$$ so that finally $$\begin{aligned} I(x)&=&\frac{1}{2\pi}{\int_{-\infty}^{+\infty}}\left[ \mathrm{P}\left(\frac{1}{i k }\right) -\pi\,{\mathrm{sign}}(t)\delta( k )\right]\hat{u}_0( k )\,{\mathrm{e}}^{-i k x}\;{\mathrm{d}}k \\ &=& \frac{1}{2\pi}{\int_{-\infty}^{+\infty}}\frac{\hat{u}_0( k )}{i k -0\,{\mathrm{sign}}(t)}\, {\mathrm{e}}^{-i k x}\;{\mathrm{d}}k \\ &=& \int_{{\mathrm{sign}}(t)\infty}^{x}u_0(x')\;{\mathrm{d}}x'\end{aligned}$$ where P denotes principal value. Thus, at $t=0$, Eq. (\[der.ft\]) translates into physical space as $$u_t =\int_{{\mathrm{sign}}(t)\infty}^{x}u_0(x')\;{\mathrm{d}}x' +\frac{1}{6}(u_0)_{x}$$ As also mentioned in Refs. [@pelinovsky; @mja2], the operator $$\partial_x^{-1}=\int_{{\mathrm{sign}}(t)\infty}^{x} \;{\mathrm{d}}x'$$ and its relative average $$\partial_x^{-1}=\frac{1}{2}\left( \int_{-\infty}^{x} \;{\mathrm{d}}x' + \int_{x}^{\infty} \;{\mathrm{d}}x'\right)$$ are equivalent, meaning that one can choose either one of them. If $t\ne 0$ we have that ${\int_{-\infty}^{+\infty}}u\;{\mathrm{d}}x=0$, hence both choices are valid. At $t=0$ there is a discontinuity in the temporal derivative. For the evolution of the SPE, Eq. (\[spe.const\]) is not preserved in time and as such leads to the infinite number of further constraints. Indeed, if Eq. (\[spe.const\]) holds then an infinite number of constraints, dynamically generated using the SPE, hold during the evolution. Within the physical framework of the SPE these constraints are neither “natural" nor necessary. Solutions of the SPE can exist without satisfying this condition, the most prominent example being the loop-soliton [@sakovich1]. This solution, however, in addition to the possible temporal discontinuities, suffers from discontinuities in its spatial derivatives, $u_x(t,x)$, and extra care may be needed when the above formalism is applied. We conclude with a note on the so-called regularized SPE (RSPE) model, recently derived in Ref. [@costanzino]. 
The latter has been derived by including a regularization term, based on the next term in the expansion of the dielectric’s susceptibility. In that case, the pulses (of the real component of the electric field) are described by: $$u_{xt}=u+\frac{1}{6}(u^3)_{xx}+\beta u_{xxxx}$$ where $\beta$ is a small parameter. Without the regularization term, $\beta u_{xxxx}$, i.e., in the case of the SPE –cf. Eq. (\[spe\])–, traveling pulses in the class of piecewise smooth functions with one discontinuity do not exist. However, when the regularization term is added, and for a particular parameter regime, the RSPE supports smooth traveling waves which have structure similar to solitary waves of the modified KdV equation [@costanzino]. The regularization term does not alter the analysis for the SPE. Indeed, in the Fourier domain the term is written as $\mathcal{F}\left\{ \beta u_{xxxx}\right\}=\beta k ^4\hat{u}$ and thus when dividing with $ik$ from the left-hand-side the resulting power of $ k $ is continuous at $k=0$. The linear part of the RSPE, much like the linear part of the KP-I equation [@mja1; @boiti], deserves more study and the analysis will be presented in a future communication. Acknowledgments {#acknowledgments .unnumbered} =============== I wish to thank Mark J. Ablowitz for bringing the KP analysis to my attention and Barbara Prinari, Dimitri J. Frantzeskakis and Panayotis G. Kevrekidis for many useful discussions. References {#references .unnumbered} ========== [1]{} T. Schäfer and [C.E.]{} Wayne. Propagation of ultra-short optical pulses in cubic nonlinear media. , 196:90–105, 2004. A. Sakovich and S. Sakovich. The short pulse equation is integrable. , 74:239––241, 2005. A. Sakovich and S. Sakovich. Solitary wave solutions of the short pulse equation. , 39:L361–L367, 2006. Victor, [B.B.]{} Thomas, and [T.C.]{} Kofane. On exact solutions of the [S]{}chäfer–-[W]{}ayne short pulse equation: [WKI]{} eigenvalue problem. , 40:5585–5596, 2007. Ablowitz and [X-P]{} Wang. Initial time layers and [K]{}adomtsev–-[P]{}etviashvili–type equations. , 98:121–137, 1997. Ablowitz and J. Villarroel. On the [K]{}adomtsev–-[P]{}etviashvili equation and associated constraints. , 85:195–213, 1991. D. Pelinovsky and A. Sakovich. Global well-posedness of the short-pulse and sine-[G]{}ordon equations in energy space. , 2008. M. Boiti, F. Pempinelli, and A. Pogrebkov. Solutions of the [KPI]{} equation with smooth initial data. , 10:505–519, 1994. N. Costanzino, V. Manukian, and [C.K.R.T.]{} Jones. Solitary waves of the regularized short pulse and [O]{}strovsky equations. , 2008.
{ "pile_set_name": "ArXiv" }
--- abstract: 'We consider the problem of an auctioneer who faces the task of selling a good (drawn from a known distribution) to a set of buyers, when the auctioneer does not have the capacity to describe to the buyers the exact identity of the good that he is selling. Instead, he must come up with a constrained signalling scheme: a (non injective) mapping from goods to signals, that satisfies the constraints of his setting. For example, the auctioneer may be able to communicate only a bounded length message for each good, or he might be legally constrained in how he can advertise the item being sold. Each candidate signaling scheme induces an incomplete-information game among the buyers, and the goal of the auctioneer is to choose the signaling scheme and accompanying auction format that optimizes welfare. In this paper, we use techniques from submodular function maximization and no-regret learning to give algorithms for computing constrained signaling schemes for a variety of constrained signaling problems.' author: - 'Shaddin Dughmi [^1]' - 'Nicole Immorlica [^2]' - 'Aaron Roth [^3]' bibliography: - 'signaling.bib' title: Constrained Signaling in Auction Design --- [^1]: University of Southern California [^2]: Microsoft Research. This work was partially supported by NSF CAREER Grant CCF-1055020, the Alfred P. Sloan Research Fellowship, and the Microsoft New Faculty Fellowship. [^3]: University of Pennsylvania. This work was partially supported by an NSF CAREER Grant and NSF Grant CCF-1101389
{ "pile_set_name": "ArXiv" }
--- abstract: 'We present a discrete-time formulation for the autonomous learning conjecture. The main feature of this formulation is the possibility to apply the autonomous learning scheme to systems in which the errors with respect to target functions are not well-defined for all times. This restriction for the evaluation of functionality is a typical feature in systems that need a finite time interval to process a unit piece of information. We illustrate its application on an artificial neural network with feed-forward architecture for classification and a phase oscillator system with synchronization properties. The main characteristics of the discrete-time formulation are shown by constructing these systems with predefined functions.' author: - 'Agustín M. Bilen' - Pablo Kaluza title: 'Autonomous learning by simple dynamical systems with a discrete-time formulation' --- Introduction ============ The autonomous learning conjecture for the design of dynamical systems with predefined functionalities has been previously proposed by the authors [@kaluza_pre2014]. It extends the dynamics of a given system to a new one where the parameters are transformed to dynamical variables. The extended dynamical system then operates decreasing some cost function or error by varying the parameters through dynamics that include a delayed feedback and noise. The central feature of this idea is that the original variables and the parameters evolve simultaneously on different time scales. Because of an intimate connection with the previous publication, we do not provide here a detailed introduction and literature review, all of which can be found in Ref. [@kaluza_pre2014]. The original formulation of the autonomous learning scheme works properly for systems where the cost functions are defined at all times during their evolution. However, many dynamical systems cannot satisfy this restriction since they need a finite time interval in order to process a piece of information and produce its response. As a result, the errors of such systems with respect to target functions are not defined during these processing intervals. Examples of these systems are feed-forward neural networks [@book_rojas], spiking neural networks [@kaluza_EPJB_2014], gene regulatory systems with adaptive responses [@inoue_kaneko; @kaluza_inoue], signal transduction networks [@kaluza_EPJB_2012], etc. In order to treat these kinds of systems we propose to define an iterative map for the evolution of their parameters. Between successive iterations, a dynamical system is allowed to evolve during a long enough transient time for its error function to be evaluated. With the error thus computed we update the parameters in the next iteration according to our proposed autonomous learning scheme. This approach can be seen as an adiabatic realization of the original learning scheme for continuous-time evolution. To illustrate the application of this new formulation we consider two systems: a feed-forward neural network and a Kuramoto system of phase oscillators. With the former, we have a classic example of a system that needs a time interval to process some input signal and produce its corresponding output. Since the system may classify several input patterns, the network does not only need a time to process each signal, it also needs to restart the process for each one of them. 
We show that this system can not only learn to classify a set of patterns but also to be robust against structural damages, namely, deletion of one of its nodes in the processing layer. In the case of the Kuramoto system, we repeat the problem of synchronization treated in our previous work [@kaluza_pre2014], now using the new formalism. We then proceed to compare some aspects of the discrete and continuous-time approaches. The work is organized as follows. In section two we write down the original continuous-time autonomous learning scheme, followed by the proposed discrete-time approach. In section three we describe the two systems where we apply the new scheme. In section four we present our numerical investigations of these systems. Finally, in the last section we discuss our results and present our conclusions. Autonomous learning theory ========================== The continuous-time version of the autonomous learning scheme considers a simple dynamical system with $N$ variables $\bm x = (x_1, ..., x_N)$, and $Q$ parameters $\bm w = (w_1,...,w_Q$) with dynamics given by $$\frac{d \boldsymbol{x}}{dt} = \boldsymbol{f}(\boldsymbol{x},\boldsymbol{w}), \label{equ_dynamical_system}$$ and an output function $\bm{F}(\bm{x})$ which defines the task the system is responsible of executing. We further define a cost function or error $\epsilon$ between the output function and some target performance $\bm{R_0}$ as $$\epsilon = |\bm{F}(\bm{x}) - \bm{R_0}|. \label{equ_error}$$ Finally, to allow for a self-directed (or autonomous) minimization of the deviation $\epsilon$, we extend the original system (\[equ\_dynamical\_system\]) by defining a dynamics for the parameters $\bm{w}$ as follows: $$\frac{d\bm{w}}{dt} = -\frac{1}{\tau}\bm{\delta w}(t) \delta \epsilon(t) + \epsilon(t) S \bm{\xi}(t). \label{equ_dynamical_system_parameters}$$ In this expression $\bm{\delta w}(t) = \bm{w}(t) - \bm{w}(t - \Delta)$ and $\delta \epsilon(t) = \epsilon(t) - \epsilon(t - \Delta)$ are temporal differences at time $t$ and $t-\Delta$, where $\Delta$ is a time delay. The constant $\tau$ fixes the time scale of the evolution of $\bm{w}$. In the last term, $S$ plays the role of a noise intensity, and $\bm{\xi}(t) = (\xi_1(t), \xi_2(t),..., \xi_Q(t))$ are independent random white noises with $\langle \xi(t) \rangle = 0$ and $\langle \xi_{\alpha}(t) \xi_{\beta}(t')\rangle = 2 \delta_{\alpha \beta} \delta(t-t')$. Taken together, equations (\[equ\_dynamical\_system\]) and (\[equ\_dynamical\_system\_parameters\]) define an extended dynamical system with combined variables $\bm x$ and $\bm w$, evolving according to two different characteristic times. In effect, if the time scale of subsystem (\[equ\_dynamical\_system\]) is taken as unity and $\tau \gg 1$, the dynamics of subsystem (\[equ\_dynamical\_system\_parameters\]) will be slower. Additionally, the time delay $\Delta$ must satisfy $\tau \gg \Delta \gg 1$. Our conjecture is that, under an appropriate choice of $\tau$, $\Delta$ and $S$, this autonomous system will evolve along an orbit in the space of variables $\bm w$ which minimizes the system’s deviation $\epsilon$ from the target performance. In [@kaluza_pre2014] we show that this learning method works properly for systems of oscillators where different levels of synchronization must be reached. Equation (\[equ\_dynamical\_system\_parameters\]) has been designed in order to perform the weight evolution aiming to reduce the error $\epsilon$. 
The interpretation of the first term on the right side of this equation is the following. If, as a consequence of the delayed feedback (memory), the system performance improves, i.e. $\delta \epsilon < 0$, with the weight correction satisfying $\delta w_i < 0$, then the weight $w_i$ should next decrease. If instead $\delta w_i > 0$, $w_i$ should increase. The opposite behavior is obtained in the case that the system performance decays with $\delta \epsilon > 0$. The second term on the right side of eq. (\[equ\_dynamical\_system\_parameters\]) is a noise proportional to the error $\epsilon$. Its function is to keep the dynamics from being trapped in local minima, vanishing when the error is zero. Another interpretation of eq. (\[equ\_dynamical\_system\_parameters\]) is that the weight evolution is controlled by a drift term (first term on the right) whose corrections are given by the memory (feedback) of the system. The second term is a stochastic exploration in parameter space proportional to the error $\epsilon$. As a result, this dynamics can be seen as a competition between a drift term of intensity $\frac{1}{\tau}$ and a stochastic term of intensity $\epsilon S$. As we will later show, a proper balance between these two terms is needed in order to obtain successful evolutions. Discrete-time formulation ------------------------- For the application of (\[equ\_dynamical\_system\]) and (\[equ\_dynamical\_system\_parameters\]) it is necessary that the error $\epsilon(t)$ be defined for all $t$. As we mentioned in the introduction, this restriction cannot be satisfied by systems which need a finite amount of time $T$ to execute the task given by $\bm F (\bm x)$ and, consequently, to yield a corresponding value for $\epsilon$. Our discrete-time autonomous learning approach for the optimization of such systems considers that during each time interval $T$ the system’s parameters $\bm{w}$ are fixed. After this transient, the system function $\bm F( \bm x)$ assumes a value, allowing the error function $\epsilon$ to be evaluated and the parameters $\bm{w}$ to be accordingly updated. Taking each iteration step as comprising one complete processing interval $T$, we define an iterative map analogous to eq. (\[equ\_dynamical\_system\_parameters\]) as: $$\bm{w}(n+1) = \bm{w}(n) - K_{\tau} \bm{\delta w}(n) \delta \epsilon(n) + \epsilon(n) S \bm{\xi}(n). \label{equ_discreta_parametros}$$ Here, $n$ is the iteration index, $\bm{\delta w} = \bm{w}(n) - \bm{w}(n- \Delta_{\eta})$ and $\delta \epsilon = \epsilon(n) - \epsilon(n- \Delta_{\eta})$, with $\Delta_{\eta} \in \mathbb{N}$ a delay given by a certain number of iterations. The other quantities are analogous to those of eq. (\[equ\_dynamical\_system\_parameters\]). The constant $K_{\tau}$ determines the characteristic time for the evolution of the parameters and is equivalent to $1/\tau$. In this way, the new formulation replaces (\[equ\_dynamical\_system\_parameters\]) with an iterative map as the new dynamics for $\bm{w}$, and allows the system to evolve according to (\[equ\_dynamical\_system\]) during an interval of time $T$ between successive iterations. Dynamical models ================ In this section we present the two models we set out to design through optimization with the discrete-time autonomous learning scheme. We note that the goal of these models in this work is only to serve as examples of application and we do not pretend to analyze their properties in detail nor to compare our procedure with other methods of optimization. 
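A minimal implementation of the map (\[equ\_discreta\_parametros\]) is sketched below in Python. The history-buffer bookkeeping, the Gaussian choice for $\bm{\xi}(n)$ and the toy error function are our own illustrative choices; the sketch only shows how the delayed differences $\bm{\delta w}$ and $\delta \epsilon$ enter the update.

```python
import numpy as np

def learning_step(w_hist, eps_hist, K_tau, S, delay, rng):
    """One iteration of the discrete-time map: w(n+1) = w(n)
    - K_tau * [w(n) - w(n - delay)] * [eps(n) - eps(n - delay)]
    + eps(n) * S * xi(n), with Gaussian xi as an illustrative noise choice."""
    w_n = w_hist[-1]
    delta_w = w_n - w_hist[-1 - delay]
    delta_eps = eps_hist[-1] - eps_hist[-1 - delay]
    xi = rng.normal(0.0, 1.0, size=w_n.shape)
    return w_n - K_tau * delta_w * delta_eps + eps_hist[-1] * S * xi

# Toy usage: drive two parameters towards a target by minimizing a simple error.
rng = np.random.default_rng(0)
target = np.array([0.3, -0.7])

def error(w):
    return float(np.abs(w - target).sum())

w_hist = [np.zeros(2), np.zeros(2)]            # w(-delay) and w(0)
eps_hist = [error(w_hist[0]), error(w_hist[1])]
for n in range(2000):
    w_new = learning_step(w_hist, eps_hist, K_tau=5.0, S=0.05, delay=1, rng=rng)
    w_hist.append(w_new)
    eps_hist.append(error(w_new))
print(eps_hist[0], eps_hist[-1])               # the error typically decreases
```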
We focus only on the autonomous learning procedure. Neural network model -------------------- The first example we consider is a classical feed-forward artificial neural network [@book_rojas] able to classify bitmaps. This kind of system is a prototypical example of the class we are interested in. The neural network must process different signals with a fixed set of weights, requiring a finite time interval to compute the input information and retrieve its response. During this processing interval the error is not defined, making it difficult to implement the original scheme of autonomous learning. Our model is defined as follows: a network $G$ has $N_{in}$ nodes in the input layer, $M$ nodes in the hidden layer and $N_{out}$ nodes in the output layer. A weight $w^{c,c-1}_{ij}$ is associated with a directed connection from node $j$ in layer $c-1$ to node $i$ in layer $c$. The dynamics $x_i^c$ of node $i$ in layer $c$ is $$x_i^c = f \Bigg( \sum_{j=1}^{n} w^{c,c-1}_{ij} x_j^{c-1} + w^c_{i\theta}\theta\Bigg), \label{equ_nn_dynamics}$$ where the activation function $f$ is given by $f(x)=\tanh(x)$ and the sum runs over the $n$ nodes of layer $c-1$. For each node $i$ in layer $c$, we include a threshold value in the activation function by adding a weight $w^c_{i\theta}$ from a threshold node $\theta$ to the node in question. The threshold node is always activated with $\theta =1$. The neural network operates with a discrete-time dynamics. In the first iteration the input neurons read an input pattern $\zeta$ and get their values. In the second iteration the neurons in the hidden layer compute their state as a function of the states of the input neurons. Finally, in the third iteration the output nodes compute their states using the states of the neurons in the hidden layer. As a result, the output layer yields the response of the network after it processes the input pattern $\zeta$. A network may repeat this $K$ times in order to compute the set of patterns $\bm \zeta = \zeta_1, ..., \zeta_K$. We thus define the error of the network as $$\epsilon= \frac{1}{K N_{out}} \sum_{k=1}^{K} \sum_{i=1}^{N_{out}} (y^k_i - x^k_{i} )^2. \label{equ_nn_error}$$ In this expression $y^k_i$ and $x^k_i$ are the ideal and actual responses (respectively) of the output node $i$ when the pattern $\zeta_k$ is processed by the network. ### Functionality In order to construct functional networks, i.e., networks with small $\epsilon$, we use our new discrete-time formalism of autonomous learning. We set the iterative map (\[equ\_discreta\_parametros\]) as the evolution law for the weights of the neural network. In order to use this equation we sort all the weights $w^{c,c-1}_{ij}$ and $w^{c}_{i\theta}$ in one linear array $\bm w$. ### Robustness We are also interested in retaining the functionality of a network when destructive mutations or damages alter the network structure. This property of structural robustness is a key feature of neural networks [@book_rojas; @robustness_nn] and several biological systems [@robustness_biologicos]. Our aim is to also employ the autonomous learning scheme to improve the robustness of a given system. The following ideas concerning robustness have already been applied to flow processing networks [@kaluza_pre_2007]. Consider a network $G$ with error $\epsilon$. If we delete one of its $M$ nodes in the hidden layer, we get a new network $G_i$ with error $\epsilon_i$. In general, the damage introduced will worsen the functionality of the original network $G$, that is, we will have $\epsilon < \epsilon_i$.
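The forward pass of Eq. (\[equ\_nn\_dynamics\]) and the error of Eq. (\[equ\_nn\_error\]) can be written compactly as in the sketch below. The matrix layout (threshold weights stored as an extra column), the random stand-in bitmaps and the $\pm 1$ one-hot targets are our own choices for illustration; the paper does not prescribe them.

```python
import numpy as np

def forward(pattern, W_hid, W_out):
    """Three-iteration forward pass: input -> hidden -> output, with tanh
    activations.  The last column of each weight matrix holds the threshold
    weights w_{i,theta}; the threshold node is emulated by appending a 1."""
    x_in = np.append(pattern, 1.0)
    x_hid = np.tanh(W_hid @ x_in)
    x_out = np.tanh(W_out @ np.append(x_hid, 1.0))
    return x_out

def classification_error(patterns, targets, W_hid, W_out):
    """Mean-squared error over the K patterns and N_out output nodes."""
    K, N_out = targets.shape
    err = 0.0
    for k in range(K):
        err += np.sum((targets[k] - forward(patterns[k], W_hid, W_out)) ** 2)
    return err / (K * N_out)

# Dimensions used in the paper: 35 input pixels, 15 hidden nodes, 5 outputs.
rng = np.random.default_rng(1)
N_in, M, N_out, K = 35, 15, 5, 5
patterns = rng.integers(0, 2, size=(K, N_in)).astype(float)   # stand-in bitmaps
targets = 2.0 * np.eye(K, N_out) - 1.0                         # one-hot in {-1, +1}
W_hid = rng.normal(0.0, 0.1, size=(M, N_in + 1))
W_out = rng.normal(0.0, 0.1, size=(N_out, M + 1))
print(classification_error(patterns, targets, W_hid, W_out))
```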
We say that a damaged network is no longer functional if $\epsilon_i > h$, where $h$ is thus defined as the maximum error below which a network is still considered functional. We can repeat this process of deletion for all nodes in the hidden layer to define the robustness $\rho(G)$ of network $G$ as the ratio between the number of damaged functional networks and the total number of possible damages $M$: $$\rho(G) = \frac{1}{M} \sum_{i=1}^{M} \Theta[h - \epsilon_i(G)]. \label{equ_nn_robustnez}$$ Here, $\Theta$ is the Heaviside function with $\Theta(x) = 0$ if $x<0$ and $\Theta(x) = 1$ if $x \geq 0$. Since our goal is the construction of functional and robust networks, we must aim for $\epsilon \rightarrow 0$ and $\rho \rightarrow 1$. We propose the simultaneous optimization of these two quantities through the use of a biparametric autonomous learning scheme for the weights as follows: $$\begin{aligned} \bm w(n+1) &=& \bm w(n) - \bm{\delta w}(n) \Big[K_{\epsilon}\delta \epsilon(n) - K_{\rho}\delta \rho(n) \Big] \nonumber \\ & & + \Big[ \epsilon(n) S_{\epsilon} + \big(1-\rho(n)\big) S_{\rho} \Big]\bm{\xi}(n). \label{equ_nn_pesos_robustnez} \end{aligned}$$ In this expression, we have two constants $K_{\epsilon}$ and $K_{\rho}$ related to the drift term, and two constants $S_{\epsilon}$ and $S_{\rho}$ related to the noise intensity. It is directly seen that this biparametric prescription for the evolution of $\bm{w}$ is essentially a superposition of (\[equ\_discreta\_parametros\]) as applied individually to the optimization of $\epsilon$ and $\rho$. Note that the robustness must be calculated in each iteration, and therefore the $M$ nodes are removed one by one in each iteration. Kuramoto system --------------- We take as a second example of a dynamical system the Kuramoto model studied in our previous article [@kaluza_pre2014], now applying the discrete-time autonomous learning prescription for the evolution of the coupling weights. The Kuramoto model [@Kuramoto] of coupled phase oscillators is described by equations $$\frac{d \phi_i}{dt} = \omega_i + \frac{1}{N} \sum_{j=1}^{N} w_{ij} \sin(\phi_j - \phi_i) \label{equ_kuramoto}$$ where $\phi_i$ is the phase and $\omega_i$ is the natural frequency of oscillator $i$. The interactions are characterized by weights $w_{ij}$. They are symmetric, i.e., $w_{ij} = w_{ji}$, and can be positive or negative. The synchronization of the system is quantified by the Kuramoto order parameter $$r(t) = \frac{1}{N} \Bigg| \sum_{j=1}^{N} \exp(i\phi_j) \Bigg|. \label{equ_kuramoto_order_parameter}$$ Due to time fluctuations, we work with the mean order parameter $R(t)$ defined as $$R(t) = \frac{1}{T} \int_{t-T}^{t}r(t')dt', \label{equ_kuramoto_mean_order_parameter}$$ where $T$ is the time interval we consider for its calculation. $R(t)$ can vary between zero and one. In the case of full phase synchronization, $R(t) \to 1$. The aim of this example is to construct a system of oscillators able to autonomously learn to reach a target order parameter $P$. The error associated to this function is defined as $$\epsilon(t) = |P - R(t)|. \label{equ_kuramoto_error}$$ The discrete-time autonomous learning scheme dictates that the system (\[equ\_kuramoto\]) evolve for a time $T$ after each learning iteration. The iterative map for the weights is a variation of eq. 
(\[equ\_discreta\_parametros\]) for our system of oscillators: $$\begin{aligned} w_{ij}(n+&1&) = w_{ij}(n) - K_{\tau} \delta w_{ij}(n) \delta \epsilon(n) \nonumber \\ &+& \lambda\frac{w_{ij}(n)}{v(n)} \Big( W - v(n) \Big) + \epsilon(n) S \xi_{ij}(n). \label{equ_kuramoto_pesos} \end{aligned}$$ Note that we compute the weights $w_{ij}$ (with $i,j = 1,...,N$) only for $j>i$, because of the symmetry of the interactions. As in our previous work, we add a term to control the total interaction in the system. The mean absolute weight $v(t)$ is defined as $$v(n) = \frac{1}{N(N-1)} \sum_{i,j=1}^{N} | w_{ij}(n) |. \label{equ_kuramoto_medio_absoluto}$$ The control parameter $\lambda$ determines how strong the redistribution of the weights is, $W$ being the ideal absolute value of $v$. As a result of this dynamics the system of oscillators evolves to a mean order parameter $R(t) = P$ by redistributing the weights and maintaining the total absolute value $v(t)=W$. Numerical results ================= We present several numerical studies of the proposed systems. We show examples of evolutions where the autonomous learning scheme is able to lead the system to the target states and we analyze particular aspects of it which arise in connection with each particular model. Feed-forward neural network --------------------------- The neural network we work with must learn to classify the vowels, i.e., there are $K=5$ input patterns $\bm \zeta$. The letters are given as matrices of binary pixels, as shown in fig. \[fig\_figure1\].a. Each pixel acts on an input neuron, with black squares indicating activation and white ones inactivation of the associated neuron. The network has $N_{in}=35$ input neurons, one hidden layer with $M=15$ nodes and $N_{out}=5$ output neurons. A schematic representation of the network is shown in fig. \[fig\_figure1\].b. ### Functional networks Our first experiment consists in the realization of a full evolution through the iterative map (\[equ\_discreta\_parametros\]) to construct a functional network able to classify the letters. Figure \[fig\_figure2\].a presents a typical evolution of the error as function of the number of iterations (blue curve). We observe that the error decreases to a relatively small value at the end of the simulation, indicating a proper average classification of the patterns. The learning parameters used in the simulation were $K_{\tau}=79.43$, $S=0.13$, and $\Delta_{\eta}=1$. As the initial conditions for the weights we set $\bm w(0) = 0$ and $\bm w(-\Delta_{\eta}) = 0$. As a matter of comparison, we add a standard back-propagation realization for a supervised learning for this system in fig. \[fig\_figure2\]a. We observe that the error as a function of the number of iterations (black dashed curve) converges to small values much faster than for the discrete-time autonomous learning scheme (blue curve). This important difference in performance is due to the deterministic character of the back-propagation algorithm in contrast with the stochastic searching of the autonomous learning. The learning factor used in the back-propagation realization was $\alpha = 0.01$. ### Convergence As we have seen from the interpretations of eqs. \[equ\_discreta\_parametros\] and \[equ\_dynamical\_system\_parameters\], there is a competition between the drift term controlled by $K_{\tau}$ and the stochastic term of exploration controlled by $S$. As a result, there exist certain combinations of these two values $K_\tau$ and $S$ where the learning is optimum. 
Generally, combinations which differ from such optimum ones result in failed evolutions where the system cannot learn. The second numerical experiment is related to finding these optimum values for $K_{\tau}$ and $S$, that is, those values for which the evolutions converge to the smallest values of $\epsilon$ in a fixed number of iterations. In order to find them, we run several simulations with ensembles of $100$ networks, fixing for each ensemble the value of $K_{\tau}$ and $S$, and evaluating the mean error $\langle \epsilon \rangle$ over the ensemble after $1 \times 10^4$ iterations. The results are shown in fig. 3a, where it can be seen that there is a clear minimum of $\langle \epsilon \rangle$ at $(\log K_{\tau} = 1.75, \log S=-0.75)$. The previous study considered only five input patterns for classification. Now, in order to get more robust results against the number of patterns to be classified, we consider $15$ input patterns with the same network characteristics. This set of $15$ patterns consists of the five vowels shown in fig. \[fig\_figure1\]a and the ten numerical digits from $0$ to $9$. The number of output nodes is now $15$. The mean error $\langle \epsilon \rangle$ as a function of $K_{\tau}$ and $S$ is shown in fig. \[fig\_figure3\]b. We observe that almost the same error surface is found as in the previous study, with the same optimum values for $K_{\tau}$ and $S$. ### Robust networks We now consider the biparametric weight evolution given by eq. \[equ\_nn\_pesos\_robustnez\] that aims to minimize the error $\epsilon$ and maximize the robustness $\rho$. We set the values of $K_{\epsilon}$ and $S_{\epsilon}$ as the ones that guarantee the best convergence in the optimization of functionality, i.e., those found in the previous study. Performing a similar study to find the optimum parameters associated with the optimization of robustness, we found the minimum of $\langle 1 - \rho \rangle$ at $(\log K_{\rho}=0.75, \log S_{\rho}=-1.75)$. For the functionality threshold we take $h=0.0509$. An evolution for this case is shown in fig. \[fig\_figure2\].b. We observe that at the beginning of the learning process the error $\epsilon$ is high and, as a result, the network has zero robustness. When the error is reduced and close to the threshold $h$ the learning of robustness is automatically turned on, owing to the fact that a damaged functional network has more chances to possess an error lower than the threshold as compared with a nonfunctional network with high error. As the evolution progresses, the error is kept below $h$ and the robustness increases until it reaches its optimum value. Thus, the resulting network is functional and robust. ![Histograms $D(\epsilon)$ of the errors for four ensembles of networks, all normalized to unity. (a) Histogram $D(\epsilon)$ for $100$ networks optimized only with respect to functionality. (b) Histogram $D(\epsilon)$ for $100$ networks optimized with respect to both functionality and robustness. (c) Histogram $D(\epsilon')$ for the ensemble of networks obtained by removal of hidden nodes from functional networks. (d) Histogram $D(\epsilon')$ for the ensemble of networks obtained by removal of hidden nodes from robust networks. The vertical dashed lines indicate the functionality threshold value $h=0.0509$.[]{data-label="fig_figure4"}](figure4.eps){width="0.7\columnwidth"} In order to fix the threshold value $h$ we proceed as in article [@kaluza_EPJB_2012]. 
We optimize an ensemble of $100$ networks only by functionality (testing ensemble), computing the resulting final errors and those of the associated damaged networks. The corresponding histograms of the errors $\epsilon$ are shown in fig. \[fig\_figure4\].a (original ensemble) and \[fig\_figure4\].c (associated damaged ensemble), respectively. We choose $h$ as the value for which the functional ensemble has a mean robustness $\langle \rho \rangle = 0.50$. Hence, increasing the mean robustness from $\langle \rho \rangle = 0.50$ to $\langle \rho \rangle = 1$ would represent a $100\%$ increase in robustness with respect to the testing ensemble. Figures \[fig\_figure4\].b and \[fig\_figure4\].d show the histograms for an ensemble of $100$ networks (original and damaged, respectively) optimized to be functional *and* robust through the application of the biparametric autonomous learning scheme during $2 \times 10^5$ iterations. The mean robustness of this ensemble is $\langle \rho \rangle = 0.83$. This notable increase in robustness can be clearly seen by comparing the histograms corresponding to the damaged set of networks in each case (Fig. \[fig\_figure4\].c and \[fig\_figure4\].d). In the case of the ensemble of networks optimized solely with respect to functionality, the error window below the threshold $h$ is much less populated than for the ensemble optimized to be both functional and robust. The relative increment in robustness of the latter with respect to the testing ensemble is approximately $66\%$. Kuramoto system --------------- This system presents an interesting characteristic concerning the error function. This function is evaluated by using the mean order parameter $R(t)$ from eq. (\[equ\_kuramoto\_mean\_order\_parameter\]), that is, a mean value over a time interval $T$. In the previous formulation for continuous-time parameter dynamics, the error can be evaluated at any time. However, we are then forced to make a prescription concerning the set of weights which are most responsible for this mean value. In effect, during the interval $T$ the weights are continuously changing, and it is therefore not clear which of the values assumed during $T$ are the effective ones in determining the error $\epsilon(t)$. The prescription we used then was that the error $\epsilon(t)$ be related to the weights at time $t-T$, that is, we considered that the weights at the beginning of the time interval $T$ are the ones responsible for the behavior of the system at the end of the interval. Of course, we can always use a different prescription such as, for example, using the mean value of the weights over $T$. This problem has been previously considered in a different implementation of reinforcement learning for spiking neural networks [@Urbanczik]. A second problem related to this situation is that, for large enough $T$, the correlation between the weights and the error is lost. At the same time, we need a long enough period $T$ in order to minimize the variations of $R(t)$. This problem can be avoided by using our new formulation with discrete-time evolution for the parameters. We take a fully connected system with $N=10$ phase oscillators with natural frequencies $\omega_i = (i-1)/15 - 0.3$ ($i = 1,2,..., 10$). The integer delay is set to $\Delta_{\eta}=1$, and the target values to $P=0.6$ and $W=0.3$.
As the initial conditions, we set $w_{ij}(0) = w_{ij}(-\Delta_{\eta}) = W$ and the initial phases $\phi_i(0)$ uniformly distributed between 0 and $2\pi$, which altogether results in the system’s order parameter $R$ having an approximate initial value of $0.3$. Our aim is to increase the synchronization level by $100\%$, that is, to bring the mean order parameter from $R \approx 0.3$ to the target value $P=0.6$. The system is numerically integrated using an Euler algorithm with time step $dt=0.01$. Note that between iterations of the learning algorithm the phases $\phi$ preserve their values, and therefore the initial conditions are not restarted as in our previous example. We use this protocol to accelerate the simulations and to avoid transients as well as cases of multistability. ### An evolution Figure \[fig\_figure5\] illustrates a typical successful evolution for this system. The weights of the system evolve according to eq. (\[equ\_kuramoto\_pesos\]). In this simulation we use $K_{\tau}=10$, $S=0.1$ and $\lambda=0.01$. The time interval to compute the mean value $R(t)$ is $T=300$. In fig. \[fig\_figure5\].a we plot the error as a function of the number of iterations. We observe that it decreases from $\epsilon \approx 0.3$ to $\epsilon \approx 0$ in $3000$ iterations. Not all the realizations converge to small errors in this given number of iterations. A study of the learning efficiency is presented later on in this work. Figure \[fig\_figure5\].b displays the mean absolute weight $v(t)$ as a function of the number of iterations. We see that at the beginning of the learning process there are relatively strong fluctuations around the target weight $W$. As the system reaches low error values, these fluctuations vanish almost entirely as $v(t) \rightarrow W$. ![Example of the discrete autonomous learning for the Kuramoto model. (a) Error $\epsilon$ as a function of time. (b) Mean absolute weight $v(t)$ as a function of time. The target weight value $W=0.3$ is shown with a black dashed line. (c) and (d) Order parameters $r(t)$ as a function of time for the initial and final systems respectively. Black dashed lines show the target order parameter value $P=0.6$.[]{data-label="fig_figure5"}](figure5.eps){width="0.7\columnwidth"} Figures \[fig\_figure5\].c and \[fig\_figure5\].d show the order parameter $r(t)$ as a function of time for the initial and the final systems, respectively. The simulation is performed for a time interval longer than $T$. We observe the relative improvement in the behavior of the order parameter $r(t)$ between the initial and final systems. The dashed black line shows the target value $P$ prescribed for the learning evolution. We see that, for the final system, $r(t)$ varies consistently around $P$. ### Dependence on $\lambda$ Controlling the total absolute weight $v(t)$ imposes a strong restriction on the learning process. Effectively, the corresponding correcting term implies that the difference $W - v(n)$ is distributed in proportion to the strength of the connections. This way of redistributing the weights opposes their differentiation and, in general, resists the heterogeneities needed in order to find a solution. The stronger the correction by $\lambda$, the more difficult it is to reach small errors. ![Mean order parameter $\langle R \rangle$ (a) and mean absolute weight $\langle v \rangle$ (b) as a function of $\lambda$ for ensembles of networks after the learning process.
Error bars indicate the dispersion of the distributions.[]{data-label="fig_figure6"}](figure6.eps){width="0.7\columnwidth"} Figure \[fig\_figure6\].a shows the mean order parameter $\langle R \rangle$ as a function of $\lambda$ for ensembles of $100$ networks after the learning process. Each evolution is done with $5000$ iterations, $T=200$, $K_{\tau}=10.0$ and $S=0.1$. In Figure \[fig\_figure6\].b we show the mean absolute weight $\langle v \rangle$ of these ensembles as a function of $\lambda$. We observe that, for large values of $\lambda$, the learning scheme cannot find good solutions and the mean value $\langle R \rangle$ is far from the target value $P$. However, the mean absolute weight $\langle v \rangle$ is near its target $W$ with very small dispersion. The opposite situation is found for small values of $\lambda$. There, the mean order parameter $\langle R \rangle$ is close to the target value $P$ with small dispersion, but the mean absolute weight $\langle v \rangle$ is much larger than $W$. As a compromise between these two tendencies, we may settle on $\lambda = 0.01$. In such a case, we find that $25\%$ of the optimized networks have $R>0.5$. For this sub-ensemble of networks, we have $\langle R \rangle = 0.57$ and $\langle v \rangle = 0.306$. Thus, the efficiency of the learning process in finding acceptable solutions is about $0.25$. ### Dependence on time interval duration We now study the dependence of the learning process on the time interval $T$. As done in the previous analysis, we optimize several ensembles of networks, this time fixing $\lambda=0.01$ and varying $T$. We keep the previous values for the rest of the learning parameters. Figures \[fig\_figure7\].a and \[fig\_figure7\].b show, respectively, the mean order parameter $\langle R \rangle$ and the mean absolute weight $\langle v \rangle$ as a function of $T$. The average values are computed only over successful learning cases, i.e., systems with $R>0.5$. Thus, the number of averaged systems can differ from point to point, but it remains close to $25$. ![Mean order parameter $\langle R \rangle$ (a) and mean absolute weight $\langle v \rangle$ (b) as a function of $T$ for ensembles of networks after the learning process. Error bars indicate the dispersion in each ensemble.[]{data-label="fig_figure7"}](figure7.eps){width="0.7\columnwidth"} We observe that, for any value of $T$, we can find networks with order parameters close to the target $P$. However, for short periods of time $T$ the fluctuations are strong and the weight restriction does not work properly. As a result, the total absolute weight $\langle v \rangle$ grows to values much larger than $W$. When we increase $T$, we observe that the weight control operates correctly and all the solutions approach $\langle v \rangle =W$. Additionally, it is interesting to note that the learning scheme works well for relatively short time windows of $T=50$, considering that in our previous work [@kaluza_pre2014] with the continuous-time version of autonomous learning we worked with a much longer time interval $T=200$. This result indicates that the discrete-time formulation is more efficient than the continuous version in terms of convergence speed. This fact can be understood by considering that the weight values are fixed during the interval $T$ in the new formulation and the error can be better estimated that way.
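For reference, here is a minimal Python sketch of one full learning iteration for the Kuramoto system as used in this section: the phases are Euler-integrated over a window $T$, the time-averaged order parameter $R$ is accumulated, and the symmetric weights are then updated with eq. (\[equ\_kuramoto\_pesos\]). The definition of the error as $\epsilon = |P - R|$, the Gaussian choice for the noise $\xi_{ij}$, and the $1/N$ normalization of the coupling are our assumptions, since the corresponding definitions are given earlier in the paper; the sketch is an illustration, not our production code.

```python
import numpy as np

rng = np.random.default_rng(1)

N, dt, T = 10, 0.01, 300
P, W = 0.6, 0.3                       # target order parameter and target mean |weight|
K_tau, S, lam = 10.0, 0.1, 0.01       # learning parameters of fig. [fig_figure5]

omega = np.array([(i - 1) / 15 - 0.3 for i in range(1, N + 1)])

def integrate(phi, w, T):
    """Euler-integrate the weighted Kuramoto model over a window T and return
    the final phases and the time-averaged order parameter R."""
    steps = int(T / dt)
    r_sum = 0.0
    for _ in range(steps):
        r_sum += abs(np.exp(1j * phi).mean())
        coupling = (w * np.sin(phi[None, :] - phi[:, None])).sum(axis=1)
        phi = phi + dt * (omega + coupling / N)   # 1/N normalisation is our choice
    return phi, r_sum / steps

def learning_step(w_now, w_prev, eps_now, eps_prev, phi):
    """One application of eq. (equ_kuramoto_pesos), kept symmetric in i, j."""
    v = np.abs(w_now).sum() / (N * (N - 1))       # eq. (equ_kuramoto_medio_absoluto)
    xi = np.triu(rng.standard_normal(w_now.shape), 1)
    xi = xi + xi.T                                 # symmetric noise, zero diagonal
    w_new = (w_now
             - K_tau * (w_now - w_prev) * (eps_now - eps_prev)
             + lam * w_now / v * (W - v)
             + eps_now * S * xi)
    np.fill_diagonal(w_new, 0.0)
    phi, R = integrate(phi, w_new, T)
    return w_new, abs(P - R), phi

# initial conditions as in the text: w_ij(0) = w_ij(-Delta_eta) = W, random phases
w_prev = w_now = np.full((N, N), W)
np.fill_diagonal(w_now, 0.0)
phi = rng.uniform(0, 2 * np.pi, N)
phi, R = integrate(phi, w_now, T)
eps_prev = eps_now = abs(P - R)

for n in range(3000):
    w_new, eps_new, phi = learning_step(w_now, w_prev, eps_now, eps_prev, phi)
    w_prev, w_now, eps_prev, eps_now = w_now, w_new, eps_now, eps_new
```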
Conclusions and discussions =========================== In this work we presented a new formulation for the autonomous learning scheme by defining an iterative map for the evolution of the parameters of a dynamical system. The utility of this discrete-time formulation is that it brings within the scope of application of the autonomous learning scheme those systems which need an intrinsic time interval of finite duration to process a unit amount of information and which therefore cannot measure a cost function at all times during their dynamics. The first system treated, a feed-forward neural network responsible for classifying several input patterns, is a typical example of a system subject to this restriction. We showed that our learning scheme works properly for this system, with the resulting networks able to classify the assigned patterns. Furthermore, we showed that we can implement the discrete-time scheme in biparametric form by setting a double feedback signal in the evolution prescription for the weights, allowing us to optimize the networks with respect to the error $\epsilon$ and the robustness $\rho$ simultaneously. Our results show that the final systems can in this manner improve their robustness by $66\%$ with respect to a testing ensemble. It is important to mention that, from the point of view of machine learning [@machine_learning], the autonomous learning scheme can be classified as a type of reinforcement learning. These methods are characterized by their slow convergence, mainly due to the stochastic exploration of parameter space. In our case, this exploration is carried out by the multiplicative noise. As a result, our method is found to converge more slowly than the classical back-propagation algorithm, which constitutes a form of supervised learning by gradient descent. In the second example, a system of phase oscillators, we showed that the discrete-time formulation of autonomous learning can help to avoid the inherent fluctuations of the error function generated by the dynamics of a continuous-time dynamical system. The authors acknowledge financial support from SeCTyP-UNCuyo (project M009 2013-2015) and from CONICET (PIP 11220150100013), Argentina. [99]{} P. Kaluza and A. S. Mikhailov, Phys. Rev. E **90**(3), 030901 (2014). R. Rojas, *Neural networks: a systematic introduction* (Springer-Verlag, Berlin, New York, 1996). P. Kaluza and E. Urdapilleta, Eur. Phys. J. B **87**:236 (2014). M. Inoue and K. Kaneko, PLoS Comput. Biol. **9**(4), e1003001 (2013). P. Kaluza and M. Inoue, Eur. Phys. J. B (2016) *in press*. P. Kaluza and A. S. Mikhailov, Eur. Phys. J. B **85**, 129 (2012). Y. Kuramoto, *Chemical Oscillations, Waves and Turbulence* (Springer, New York, 1984). A. Schuster, International Journal of Computational Intelligence **4**(2) (2008). S. Bornholdt and K. Sneppen, Proc. R. Soc. Lond. B **267**, 1459 (2000). P. Kaluza, M. Ipsen, M. Vingron, and A. S. Mikhailov, Phys. Rev. E **75**, 015101 (2007). R. Urbanczik and W. Senn, Nat. Neurosci. **12**(3):250-2 (2009). M. Mohri, A. Rostamizadeh, and A. Talwalkar, *Foundations of Machine Learning* (MIT Press, 2012).
{ "pile_set_name": "ArXiv" }
--- abstract: 'We apply the crystal basis theory for Fock spaces over quantum affine algebras to the modular representations of the cyclotomic Hecke algebras of type $G(p,p,n)$. This yields a classification of simple modules over these cyclotomic Hecke algebras in the non-separated case, generalizing our previous work \[J. Hu, J. Algebra 267 (2003) 7-20\]. The separated case was completed in \[J. Hu, J. Algebra 274 (2004) 446–490\]. Furthermore, we use Naito–Sagaki’s work \[S. Naito & D. Sagaki, J. Algebra 251 (2002) 461–474\] on Lakshmibai–Seshadri paths fixed by diagram automorphisms to derive explicit formulas for the number of simple modules over these Hecke algebras. These formulas generalize earlier results of \[M. Geck, Represent. Theory 4 (2000) 370-397\] on the Hecke algebras of type $D_n$ (i.e., of type $G(2,2,n)$).' author: - | Jun Hu\ Department of Applied Mathematics\ Beijing Institute of Technology\ Beijing, 100081, P.R. China\ E-mail: junhu303@yahoo.com.cn title: 'Crystal bases and simple modules for Hecke algebras of type $G(p,p,n)$ [^1] ' --- Introduction ============ The theory of crystal (or canonical) bases is one of the most significant advances in Lie theory over the past two decades. It was discovered and developed by M. Kashiwara ([@Kas]) and G. Lusztig ([@Lu]) around 1990. Since then remarkable applications to some classical problems in representation theory have been found. One typical example is the well-known Lascoux–Leclerc–Thibon’s Conjecture ([@LLT]), which asserts that, the decomposition numbers of the Iwahori–Hecke algebra associated to symmetric group at a primitive $e$-th root of unity in $\mathbb{C}$ (the complex number field) can be obtained from the evaluation at $1$ of the coefficient polynomials of natural bases appeared in the expansion of global crystal bases (i.e., canonical bases) of some level one Fock spaces over the quantum affine algebra $U_q({\widehat{\mathfrak{sl}}}_e)$. This conjecture has been proved by S. Ariki ([@A1]), who generalized it to the case of the cyclotomic Hecke algebras of type $G(r,1,n)$. A similar conjecture ([@LT]), which relates the decomposition numbers of the $q$-Schur algebra with $q$ specialized to a primitive $e$-th root of unity in $\mathbb{C}$ to global crystal bases of Fock space as $U_q({\widehat{\mathfrak{gl}}}_e)$-module, has been proved by Varagnolo–Vasserot ([@VV]). For further example, see the work of Brundan–Kleshchev ([@BK]), where the theory of crystal bases of type $A_{2l}^{(2)}$ was applied to the modular representations of Hecke–Clifford superalgebras as well as of double covers of symmetric groups. This paper provides a new application of the theory of crystal (or canonical) bases to modular representation theory. Precisely, we apply the crystal basis theory for Fock spaces of higher level over the quantum affine algebra of type $A_{l}^{(1)}$ to the modular representations of the cyclotomic Hecke algebra ${\mathcal{H}}(p,p,n)$ of type $G(p,p,n)$ in the non-separated case (see Definition \[sep\]). The separated case has been completely solved in our previous work [@Hu3]. We explicitly describe (in terms of combinatorics over certain Kleshchev’s good lattices) which irreducible representation of the Ariki–Koike algebra ${\mathcal{H}}(p,n)$ remains irreducible when restricted to ${\mathcal{H}}(p,p,n)$. 
This yields a classification of simple modules over the cyclotomic Hecke algebra ${\mathcal{H}}(p,p,n)$ in the non-separated case, generalizing our previous work [@Hu2] on the Hecke algebras of type $D_n$. Then we go further in the remaining part of this paper. We use Naito–Sagaki’s work ([@NS1],[@NS2]) on Lakshmibai–Seshadri paths fixed by diagram automorphisms to derive explicit formula for the number of simple modules over the cyclotomic Hecke algebra ${\mathcal{H}}(p,p,n)$. Our formula generalizes earlier result of Geck [@Ge] on the Hecke algebra of type $D_n$ (i.e., of type $G(2,2,n)$). Note that our approach even in that special case is quite different, because it is based on Ariki’s celebrated theorem ([@A1]) on a generalization of Lascoux–Leclerc–Thibon’s Conjecture as well as Naito–Sagaki’s work ([@NS1],[@NS2]) on Lakshmibai–Seshadri paths, while Geck’s method in [@Ge] depends on explicit information on character tables and Kazhdan–Lusztig theory for Iwahori–Hecke algebras associated to finite Weyl groups—not presently available in our general $G(p,p,n)$ cases. As a byproduct, we get a remarkable bijection between two sets of Kleshchev multipartitions. Our explicit formulas strongly indicate that there are some new intimate connections between the representation of ${\mathcal{H}}(p,p,n)$ at roots of unity and the representations of various Ariki–Koike algebras of smaller sizes at various roots of unity. Although we will not discuss these matters in the present paper, we remark that it seems very likely the decomposition matrix of the latter can be naturally embedded as a submatrix of the decomposition matrix of the former. The paper is organized as follows. Section 2 collects some basic known results about Ariki–Koike algebras (i.e., the cyclotomic Hecke algebras of type $G(r,1,n)$). These include Dipper–James–Mathas’s work on the structure and representation theory of Ariki–Koike algebras as well as Dipper–Mathas’s Morita equivalence results. The notion of Kleshchev multipartition as well as Ariki’s remarkable result (Theorem \[Ariki\]) are also introduced there. In Section 3, we first recall our previous work on modular representations of Hecke algebras of type $D_n$ and of type $G(p,p,n)$. Then we give the first two main results (Theorem \[main1\] and Theorem \[main2\]) in this paper. Theorem \[main3\] shows that these two main results are valid over any field $K$ which contains primitive $p$-th root of unity and over which ${\mathcal{H}}(p,p,n)$ is split. Our Theorem \[main1\] is a direct generalization of [@Hu2 (1.5)]. A sketch of the proof (following the streamline of the proof of [@Hu2 (1.5)]) is presented in Section 4. The proof of Theorem \[main2\] is given in Section 5. Our main tools used there are Dipper–Mathas’s Morita equivalence results for Ariki–Koike algebras and their connections with type $A$ affine Hecke algebras. In Section 6 we give the second two main results (Theorem \[mainthm3\] and Theorem \[mainthm4\]) in this paper, which yield explicit formula for the number of simple modules over the cyclotomic Hecke algebra ${\mathcal{H}}(p,p,n)$ in the non-separated case. Note that in the separated case such a formula can be easily written down by using the results [@Hu3 (5.7)]. The proof uses our first two main results (Theorem \[main1\] and Theorem \[main2\]) as well as Naito–Sagaki’s work ([@NS1],[@NS2]) on Lakshmibai–Seshadri paths fixed by diagram automorphisms. 
As a byproduct, we get (Corollary \[cor609\]) a remarkable bijection between two sets of Kleshchev multipartitions, which seems of independent interest. The present paper is an expanded version of an earlier preprint (cited in Ariki’s book [@A3 [\[]{}cyclohecke12[\]]{}]) completed in the February of 2002. That preprint already contains Theorem \[main1\] and Theorem \[main2\], which are generalizations of [@Hu2 (1.5)]. Part of the remaining work was done during the author’s visit of RIMS in 2004. After this expanded version was completed and the main results were announced, N. Jacon informed us that a result similar to Theorem \[main1\] in the context of FLOTW partitions (see [@Ja Definition 2.2]) was also obtained in his Ph.D. Thesis [@Ja0] in 2004, and we are informed the existence of a preprint [@GJ] of Genet and Jacon on the modular representation of the Hecke algebra of type $G(r,p,n)$ (where $p|r$). Our main results Theorem \[mainthm3\] and Theorem \[mainthm4\] are not related to any results in [@Ja0] and [@GJ]. Both the paper [@GJ] and the present paper use Ariki’s results on Fock spaces, crystal graphs as well as Clifford theory. But [@GJ] uses a different version of Fock space and hence a different parameterization of simple modules over Ariki–Koike algebras. The relationships between the parameterization results given in [@GJ] and our parameterization results given in Theorem \[main1\], Theorem \[main2\] and in [@Hu3 Theorem 4.9] are explained in Remark 3.12. The author thanks Professor Susumu Ariki for some stimulating discussion, especially for informing him about the work of Naito–Sagaki. The author also thanks Professor Masaki Kashiwara for explaining a result about crystal bases. The main results of this paper were announced at the “International Conference on Representation Theory, III" (Chengdu, August 2004). The author would like to thank Professor Nanhua Xi and Professor Jie Du for their helpful comments. The author also would like to thank the referee for many helpful suggestions. Preliminaries ============= Let $r$, $p$, $d$ and $n$ be positive integers such that $pd=r$. The complex reflection group $G(r,p,n)$ is the group consisting of $n$ by $n$ permutation matrices with the properties that the entries are either $0$ or $r$-th roots of unity in $\mathbb{C}$, and the $d$-th power of the product of the non-zero entries of each matrix is $1$. The order of $G(r,p,n)$ is $dr^{n-1}n!$, and $G(r,p,n)$ is a normal subgroup of $G(r,1,n)$ of index $p$. Cyclotomic Hecke algebras associated to complex reflection groups were introduced in the work of Broué–Malle ([@BM]) and in the work of Ariki–Koike ([@AK]). These algebras are deformations of the group rings of the complex reflection groups. We recall their definitions. Let $K$ be a field and let $q, Q_1,\cdots,Q_r$ be elements of $K$ with $q$ invertible. Let ${\mathcal{H}}_K(r,n)={\mathcal{H}}_{q,Q_1,\cdots,Q_r}(r,n)$ be the unital $K$-algebra with generators $T_0,T_1,\cdots,T_{n-1}$ and relations $$\begin{aligned} &(T_0-Q_1)\cdots (T_0-Q_r)=0,\\ &T_0T_1T_0T_1=T_1T_0T_1T_0,\\ &(T_i+1)(T_i-q)=0,\quad\text{for $1\leq i\leq n-1$,}\\ &T_iT_{i+1}T_i=T_{i+1}T_{i}T_{i+1},\quad\text{for $1\leq i\leq n-2$,}\\ &T_iT_j=T_jT_i,\quad\text{for $0\leq i<j-1\leq n-2$.}\end{aligned}$$ This algebra is called [*Ariki–Koike algebra*]{} or the cyclotomic Hecke algebra of type $G(r,1,n)$. 
Whenever the parameter $q$ is clear from the context, we shall say (for simplicity) that the algebra ${\mathcal{H}}_K(r,n)$ is with parameters set $\{Q_1,\cdots,Q_r\}$. This algebra contains the Hecke algebras of type $A$ and type $B$ as special cases. It can be defined over ${\mathbb Z}[v,v^{-1},v_1,\cdots,v_r]$, where $v, v_1,\cdots, v_r$ are all indeterminates. Upon setting $v=1$ and $v_i=(\sqrt[r]{1})^{i-1}$ for each $i$, where $\sqrt[r]{1}$ denotes a primitive $r$-th root of unity in $\mathbb{C}$, one obtains the group algebra for the complex reflection group $G(r,1,n)\cong{\mathbb Z}_{r}\wr{\mathfrak S}_{n}$. Suppose that $K$ contains a primitive $p$-th root of unity ${\varepsilon}$. Let $x_1,\cdots,x_d$ be invertible elements in $K$ with $x_{i}^{1/p}\in K$ for each $i$. We consider the Hecke algebra ${\mathcal{H}}_{K}(r,n)$ with parameters $$q,\,\, x_{i}^{1/p}{\varepsilon}^{j},\quad i=1,2,\cdots,d,\,\,j=0,1,\cdots,p-1.$$ Then the first defining relation for ${\mathcal{H}}_{K}(r,n)$ becomes $$(T_{0}^{p}-x_1)(T_{0}^{p}-x_2)\cdots (T_{0}^{p}-x_d)=0.$$ Let ${\mathcal{H}}_{K}(r,p,n)={\mathcal{H}}_{q,x_{1},\cdots,x_{d}}(r,p,n)$ be the subalgebra of ${\mathcal{H}}_{K}(r,n)$ generated by the elements $$T_0^{p},\quad T_{u}:{=}T_{0}^{-1}T_{1}T_{0}, \,\,\, T_{1},\,T_2,\,\cdots,\,T_{n-1}.$$ Then it is a $q$-analogue of the group algebra for the complex reflection group $G(r,p,n)$. This algebra is called the cyclotomic Hecke algebra of type $G(r,p,n)$. It is known that in this case (by [@MaM]) ${\mathcal{H}}_{K}(r,p,n)$ is a symmetric algebra over $K$. For simplicity, we shall often write ${\mathcal{H}}(r,p,n), {\mathcal{H}}(r,n)$ instead of ${\mathcal{H}}_{K}(r,p,n), {\mathcal{H}}_{K}(r,n)$. Our main interest in this paper will be the algebra ${\mathcal{H}}(r,p,n)$ in the special case when $p=r$, that is, the cyclotomic Hecke algebra of type $G(p,p,n)$. The special case ${\mathcal{H}}(2,2,n)$ is just the Iwahori–Hecke algebra of type $D_n$. For convenience, we shall use a normalized version of ${\mathcal{H}}(p,p,n)$ which is defined as follows. Let $p, n\in\mathbb{N}$. Let $K$ be a field and $q$ be an invertible element in $K$. Throughout this paper, we assume that $K$ contains primitive $p$-th roots of unity. In particular, $\operatorname{char}K$ is coprime to $p$. We define the algebra ${\mathcal{H}}_q(p,n)$ to be the associative unital $K$-algebra with generators $T_0, T_1,\cdots, T_{n-1}$ subject to the following relations $$\begin{aligned} &T_{0}^{p}-1=0,\\ &T_0T_1T_0T_1=T_1T_0T_1T_0,\\ &(T_i+1)(T_i-q)=0,\quad\text{for $1\leq i\leq n-1$,}\\ &T_iT_{i+1}T_i=T_{i+1}T_{i}T_{i+1},\quad\text{for $1\leq i\leq n-2$,}\\ &T_iT_j=T_jT_i,\quad\text{for $0\leq i<j-1\leq n-2$.}\end{aligned}$$ Note that if we fix a choice of a primitive $p$-th root of unity ${\varepsilon}$ in $K$, then the first relation becomes $$(T_0-1)(T_0-{\varepsilon})\cdots(T_{0}-{\varepsilon}^{p-1})=0,$$ and the algebra ${\mathcal{H}}_q(p,n)$ is just the Ariki–Koike algebra or the cyclotomic Hecke algebra of type $G(p,1,n)$ with parameters $q,1,{\varepsilon},\cdots,{\varepsilon}^{p-1}$. However, the algebra ${\mathcal{H}}_q(p,n)$ itself does not depend on the choice of primitive $p$-th root of unity. Now, let ${\mathcal{H}}_{q}(p,p,n)$ be the subalgebra of ${\mathcal{H}}_q(p,n)$ generated by the elements $$T_{u}:=T_{0}^{-1}T_{1}T_{0},\,\,\, T_{1},\,T_2,\,\cdots,\,T_{n-1}.$$ This algebra, called cyclotomic Hecke algebra of type $G(p,p,n)$, will be the main subject of this paper. 
It is well-known that ${\mathcal{H}}_{q}(p,n)$ is a free ${\mathcal{H}}_{q}(p,p,n)$-module with basis $\{1, T_{0},\cdots,T_{0}^{p-1}\}$. As a ${\mathcal{H}}_{q}(p,p,n)$-module, ${\mathcal{H}}_{q}(p,n)$ is in fact isomorphic to a direct sum of $p$ copies of regular ${\mathcal{H}}_{q}(p,p,n)$-modules. Let $\tau$ be the $K$-algebra automorphism of ${\mathcal{H}}_{q}(p,n)$ which is defined on generators by $\tau(T_1)=T_0^{-1}T_1T_0, \tau(T_i)=T_i$, for any $i\neq 1$. Let ${\sigma}$ be the nontrivial $K$-algebra automorphism of ${\mathcal{H}}_{q}(p,n)$ which is defined on generators by ${\sigma}(T_0)={\varepsilon}T_0, {\sigma}(T_i)=T_i$, for any $1\leq i\leq n-1$. By [@Hu3 (1.4)], $\tau({\mathcal{H}}_{q}(p,p,n))={\mathcal{H}}_{q}(p,p,n)$ and clearly ${\sigma}\downarrow_{{\mathcal{H}}_{q}(p,p,n)}=\operatorname{id}$. Moreover, the set of $K$-subspaces $\bigl\{T_0^{i}{\mathcal{H}}_{q}(p,p,n)\bigr\}_{i=0}^{p-1}$ of ${\mathcal{H}}_{q}(p,n)$ forms a ${\mathbb Z}/p{\mathbb Z}$-graded Clifford system in ${\mathcal{H}}_{q}(p,n)$ in the sense of [@CR (11.12)]. Our approach to the modular representations of the algebra ${\mathcal{H}}_q(p,p,n)$ is to consider the restriction of the representations of ${\mathcal{H}}_q(p,n)$. To this end, we have to first recall some known results about ${\mathcal{H}}_q(p,n)$. The structure and representation theory for Ariki–Koike algebras with arbitrary parameters $q,Q_1,$ $\cdots,Q_p$ have been well studied in [@DJM], where it was shown that these algebras are cellular in the sense of [@GL] if $q$ is invertible. To state their results, we need some combinatorics. Recall that a partition of $n$ is a non-increasing sequence of positive integers ${\lambda}=({\lambda}_1,\cdots,{\lambda}_r)$ such that $|{\lambda}|:=\sum_{i=1}^{r}{\lambda}_i=n$, while a $p$-multipartition of $n$ is a $p$-tuple of partitions ${{{\lambda}}}=({\lambda}^{(1)},\cdots,{\lambda}^{(p)})$ such that $|{\lambda}|:=\sum_{i=1}^p|{\lambda}^{(i)}|=n$. For any two $p$-multipartitions ${{{\lambda}}}, {{\mu}}$ of $n$, we define ${{{\lambda}}}\trianglerighteq{{\mu}}$ if $$\sum_{j=1}^{i-1}|{\lambda}^{(j)}|+\sum_{k=1}^{m}{\lambda}^{(i)}_k \geq \sum_{j=1}^{i-1}|\mu^{(j)}|+\sum_{k=1}^{m}\mu^{(i)}_k,\,\,\,\forall\,1\leq i\leq p,\,m\geq 1.$$ We shall state the results of [@DJM] in our special case of the Ariki–Koike algebra ${\mathcal{H}}_{q}(p,n)$. First, we fix a primitive $p$-th root of unity ${\varepsilon}$ in $K$. Let $ \operatorname{Q}:=\{1,{\varepsilon},{\varepsilon}^2,\cdots,{\varepsilon}^{p-1}\}$. Let ${\overrightarrow\operatorname{Q}}=(Q_1,\cdots,Q_p)$ be an ordered $p$-tuple which is obtained by fixing an order on $\operatorname{Q}$. We regard ${\mathcal{H}}_{q}(p,n)$ as the Ariki–Koike algebra with parameters $q,1,{\varepsilon},\cdots,$ ${\varepsilon}^{p-1}$. Then for each $p$-multipartitions ${{{\lambda}}}$ of $n$, there is a Specht module, denoted by ${\widetilde{S}}^{{{{\lambda}}}}_{{\overrightarrow\operatorname{Q}}}$, and there is a naturally defined bilinear form $\langle,\rangle$ on ${\widetilde{S}}^{{{{\lambda}}}}_{{\overrightarrow\operatorname{Q}}}$. Let ${\widetilde{D}}^{{\lambda}}_{{\overrightarrow\operatorname{Q}}}={\widetilde{S}}^{{{{\lambda}}}}_{{\overrightarrow\operatorname{Q}}}/\operatorname{rad}\langle,\rangle$. 
Note here the subscript ${\overrightarrow\operatorname{Q}}$ (instead of $\operatorname{Q}$) is used to emphasis that the structure of the module ${\widetilde{S}}^{{{{\lambda}}}}_{{\overrightarrow\operatorname{Q}}}$ (resp., ${\widetilde{D}}^{{\lambda}}_{{\overrightarrow\operatorname{Q}}}$) does depend on the fixed order on the set $\{1,{\varepsilon},\cdots,{\varepsilon}^{p-1}\}$ of parameters. \[def21\] Let $v$ be an indeterminate over ${\mathbb Z}$. Let $\epsilon$ be a primitive $p$-th root of unity in $\mathbb{C}$. We define $\mathcal{Z}={\mathbb Z}[\epsilon][v,v^{-1}]$ and $$f_{p,n}(v,\epsilon)=\prod_{1\leq i<j\leq p}\prod_{-n<k<n}\bigl(\epsilon^{i-1}v^k-\epsilon^{j-1}\bigr)\in \mathcal{Z}.$$ Note that $v^{p}-1=\prod_{k|p}\Phi_k(v)$, where $\Phi_k(v)$ is the $k$-th cyclotomic polynomial over ${\mathbb Z}$. It follows easily that for any ${\mathbb Z}[v,v^{-1}]$-algebra $K$ which contains a primitive $p$-th root of unity ${\varepsilon}$, the natural homomorphism ${\mathbb Z}[v,v^{-1}]\rightarrow K$ can be uniquely extended to a homomorphism from $\mathcal{Z}$ to $K$ by mapping $\epsilon$ to ${\varepsilon}$. In other words, we can always specialize $\epsilon$ to any primitive $p$-th root of unity. [([@DJM], [@A4])]{}\[thm22\] With the above notations, we have that 1\) the set $\bigl\{{\widetilde{D}}^{{{{\lambda}}}}_{{\overrightarrow\operatorname{Q}}}\bigm|\text{${{{\lambda}}}$ is a $p$-multipartition of $n$ and ${\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{{{\lambda}}}}\neq 0$}\bigr\}$ forms a complete set of pairwise non-isomorphic simple ${\mathcal{H}}_{q}(p,n)$-modules; 2\) if ${\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{{\mu}}}\neq 0$ is a composition factor of ${\widetilde{S}}_{{\overrightarrow\operatorname{Q}}}^{{{{\lambda}}}}$ then ${{{\lambda}}}\trianglerighteq{{\mu}}$, and every composition factor of ${\widetilde{S}}_{{\overrightarrow\operatorname{Q}}}^{{{{\lambda}}}}$ is isomorphic to some ${\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{{\mu}}}$ with ${{{\lambda}}}\trianglerighteq{{\mu}}$; if ${\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{{{\lambda}}}}\neq 0$ then the composition multiplicity of ${\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{{{\lambda}}}}$ in ${\widetilde{S}}_{{\overrightarrow\operatorname{Q}}}^{{{{\lambda}}}}$ is $1$; 3\) ${\mathcal{H}}_q(p,n)$ is semisimple if and only if $$\biggl(\prod_{i=1}^{n}\bigl(1+q+q^2+\cdots+q^{i-1}\bigr)\biggr)f_{p,n}(q,{\varepsilon})\neq 0,$$ in $K$. In that case, ${\widetilde{S}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}={\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{{{\lambda}}}}$ for each $p$-multipartition ${{{\lambda}}}$ of $n$. Note that whether $f_{p,n}(q,{\varepsilon})$ is nonzero in $K$ or not is independent of the choice of the primitive $p$-th root of unity ${\varepsilon}$ in $K$. It remains to determine when ${\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{{{\lambda}}}}\neq 0$. This was solved in [@DJ] in the case of type $A$. In general case it was solved by the work of Dipper–Mathas [@DM] and the work of Ariki [@A2]. We first recall Dipper–Mathas’s result in [@DM]. Two parameters $Q_i, Q_j$ are said to be in the same $q$-orbit, if $Q_i=q^kQ_j$ for some $k\in{\mathbb Z}$. Now we suppose that $\operatorname{Q}=\operatorname{Q}_1\sqcup\operatorname{Q}_2\sqcup\cdots\sqcup\operatorname{Q}_{\kappa}$ (disjoint union) such that $Q_i, Q_j$ are in the same $q$-orbit only if $Q_i, Q_j\in\operatorname{Q}_c$ for some integer $c$ with $1\leq c\leq\kappa$. 
Let $p_i=|\operatorname{Q}_i|$ for each integer $i$ with $1\leq i\leq\kappa$. [([@DM Theorem 1.1])]{}\[thm23\] With the above notations, the algebra ${\mathcal{H}}_q(p,n)$ is Morita equivalent to the algebra $$\bigoplus_{\substack{n_1,\cdots,n_{\kappa}\geq 0\\ n_1+\cdots+n_{\kappa}=n}}{\mathcal{H}}_{q,\operatorname{Q}_1}(p_1,n_1)\otimes\cdots\otimes{\mathcal{H}}_{q,\operatorname{Q}_{\kappa}}(p_{\kappa},n_{\kappa}),$$ where each ${\mathcal{H}}_{q,\operatorname{Q}_i}(p_i,n_i)$ denotes the Ariki–Koike algebra of size $n_i$ and with parameters set $\operatorname{Q}_i$. Moreover, if we fix an order on each $\operatorname{Q}_i$ to get an ordered tuple ${\overrightarrow\operatorname{Q}}_i$ and suppose that ${\overrightarrow\operatorname{Q}}=\bigl({\overrightarrow\operatorname{Q}}_1,{\overrightarrow\operatorname{Q}}_2,\cdots,{\overrightarrow\operatorname{Q}}_{\kappa}\bigr)$ (concatenation of ordered tuples), then the above Morita equivalence sends ${\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}$ to ${\widetilde{D}}_{{\overrightarrow\operatorname{Q}}_1}^{{\lambda}^{[1]}}\otimes\cdots\otimes{\widetilde{D}}_{{\overrightarrow\operatorname{Q}}_{\kappa}}^{{\lambda}^{[\kappa]}}$, where $${\lambda}^{[i]}:=\bigl({\lambda}^{(\sum_{j=1}^{i-1}p_j+1)},{\lambda}^{(\sum_{j=1}^{i-1}p_j+2)},\cdots,{\lambda}^{(\sum_{j=1}^{i}p_j)}\bigr), \,\,|{\lambda}^{[i]}|=n_i,\,\,\forall\,1\leq i\leq\kappa,$$ ${\widetilde{D}}_{{\overrightarrow\operatorname{Q}}_i}^{{\lambda}^{[i]}}$ denotes the quotient module (see definition above (\[def21\])) of the Specht module ${\widetilde{S}}_{{\overrightarrow\operatorname{Q}}_i}^{{\lambda}^{[i]}}$ over ${\mathcal{H}}_{q,\operatorname{Q}_i}(p_i,n_i)$. In particular, ${\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}\neq 0$ if and only if ${\widetilde{D}}_{{\overrightarrow\operatorname{Q}}_i}^{{\lambda}^{[i]}}\neq 0$ for any integer $i$ with $1\leq i\leq\kappa$. \[lm24\] Let ${\mathcal{H}}_{q,\operatorname{Q}}(p,n)$ be the Ariki–Koike algebra with parameters $q,Q_1,$ $\cdots,Q_p$ and defined over $K$. Let $\operatorname{Q}=\{Q_1,\cdots,Q_p\}$. Let $0\neq a\in K$. Let $a\!\operatorname{Q}=\{aQ_1,\cdots,aQ_p\}$. Let $\sigma_a$ be the isomorphism from ${\mathcal{H}}_{q,a\!\operatorname{Q}}(p,n)$ onto ${\mathcal{H}}_{q,\operatorname{Q}}(p,n)$ which is defined on generators by $\sigma_a(T_0)=aT_0$ and $\sigma_a(T_i)=T_i$ for $ i=1,2,\cdots,n-1$. Let ${\overrightarrow\operatorname{Q}}$ be an ordered $p$-tuple which is obtained by fixing an order on $\operatorname{Q}$. Then for each $p$-multipartition ${\lambda}$ of $n$, there are ${\mathcal{H}}_{q,a\!\operatorname{Q}}(p,n)$-module isomorphisms $$\Bigl({\widetilde{S}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}\Bigr)^{\sigma_a}\cong{\widetilde{S}}_{\overrightarrow{a\!\operatorname{Q}}}^{{\lambda}},\,\,\,\, \Bigl({\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}\Bigr)^{\sigma_a}\cong{\widetilde{D}}_{\overrightarrow{a\!\operatorname{Q}}}^{{\lambda}},$$ where $\overrightarrow{a\!\operatorname{Q}}$ denotes the ordered $p$-tuple which is obtained from ${\overrightarrow\operatorname{Q}}$ by multiplying $a$ on each component. In particular, ${\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}\neq 0$ if and only if ${\widetilde{D}}_{\overrightarrow{a\!\operatorname{Q}}}^{{\lambda}}\neq 0$. 
[Proof:]{} This follows directly from the definitions and constructions of the modules ${\widetilde{S}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}, {\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}$. Theorem \[thm23\] and Lemma \[lm24\] reduce the problem on determining when ${\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{{{\lambda}}}}\neq 0$ to the case where all the parameters in $\operatorname{Q}$ are some powers of $q$. In that case, the problem was solved by Ariki [@A2]. From now on until the end of this section, we assume that all the parameters in $\operatorname{Q}$ are some powers of $q$. In particular, in our ${\mathcal{H}}_q(p,n)$ case, $q$ must be a root of unity. To state the result of Ariki, we have to recall the definition of Kleshchev multipartition (see [@AM]). For any $p$-multipartition ${\lambda}$, the Young diagram of ${\lambda}$ is the set $$[{\lambda}]=\Bigl\{(a,b,c)\Bigm|\text{$1\leq c\leq p$ and $1\leq b\leq{\lambda}_{a}^{(c)}$}\Bigr\}.$$ The elements of $[{\lambda}]$ are nodes of ${\lambda}$. Given any two nodes $\gamma=(a,b,c), \gamma'=(a',b',c')$ of ${\lambda}$, say that $\gamma$ is [*below*]{} $\gamma'$, or $\gamma'$ is [*above*]{} $\gamma$, if either $c>c'$ or $c=c'$ and $a>a'$. The [*residue*]{} of $\gamma=(a,b,c)$ is defined to be $$\label{res}\text{$\operatorname{res}(\gamma):= m+e{\mathbb Z}\in{\mathbb Z}/e{\mathbb Z}$, \quad if $q=\sqrt[e]{1}$ and $q^m= q^{b-a}Q_c$,}$$ and we say that $\gamma$ is a $\operatorname{res}(\gamma)$-node. A [*removable*]{} node is a node of the boundary of $[{\lambda}]$ which can be removed, while an [*addable*]{} node is a concave corner on the rim of $[{\lambda}]$ where a node can be added. If $\mu=(\mu^{(1)},\cdots,\mu^{(p)})$ is a $p$-multipartition of $n+1$ with $[\mu]=[{\lambda}]\cup\bigl\{\gamma\bigr\}$ for some removable node $\gamma$ of $\mu$, we write ${\lambda}\rightarrow\mu$. If in addition $\operatorname{res}(\gamma)=x$, we also write that ${\lambda}\overset{x}{\rightarrow}\mu$. For example, suppose $n=10, p=4, q=\sqrt[8]{1}$ and ${\varepsilon}=q^2=\sqrt[4]{1}$. The nodes of ${\lambda}=((2,1),(1^2),(1^3),(2))$ have the following residues $${\lambda}=\biggl(\left(\begin{matrix} 0& 1\\ 7 \end{matrix} \right),\left(\begin{matrix} 2 \\ 1 \end{matrix} \right),\left(\begin{matrix} 4 \\ 3\\ 2 \end{matrix} \right),\left(\begin{matrix} 6& 7\end{matrix} \right)\biggr).$$ It has five removable nodes. Fix a residue $x$ and consider the sequence of removable and addable $x$-nodes obtained by reading the boundary of ${\lambda}$ from the bottom up. In the above example, we consider residue $x=1$, then we get a sequence “ARR", where each “A” corresponds to an addable $x$-node and each “R” corresponds to a removable $x$-node. Given such a sequence of letters “A,R" , we remove all occurrences of the string “AR” and keep on doing this until no such string “AR” is left. The “R”s that still remain are the [*normal*]{} $x$-nodes of ${\lambda}$ and the highest of these is the [*good*]{} $x$-node. In the above example, there is only one normal $1$-node, which is a good $1$-node. If $\gamma$ is a good $x$-node of $\mu$ and ${\lambda}$ is the multipartition such that $[\mu]=[{\lambda}]\cup\gamma$, we write ${\lambda}\overset{x}{\twoheadrightarrow}\mu$. For each integer $n\geq 0$, let $\mathcal{P}_{n}$ be the set of all $p$-multipartitions of $n$. [([@AM])]{} Suppose $n\geq 0$. 
The set $\mathcal{K}_n$ of Kleshchev $p$ multipartitions with respect to $(q,Q_1,\cdots,Q_p)$ is defined inductively as follows: \(1) $\mathcal{K}_0:=\Bigl\{\underline{\emptyset}:=\bigl(\underbrace{\emptyset,\cdots, \emptyset}_{p}\bigl)\Bigr\}$; \(2) $\mathcal{K}_{n+1}:=\Bigl\{\mu\in\mathcal{P}_{n+1}\Bigm|\text{${\lambda}\overset{x} {\twoheadrightarrow}\mu$ for some ${\lambda}\in\mathcal{K}_n$ and some $x$}\Bigr\}$. The [*Kleshchev’s good lattice*]{} with respect to $(q,Q_1,\cdots,Q_p)$ is, by definition, the infinite graph whose vertices are the Kleshchev $p$-multipartitions with respect to $(q,Q_1,\cdots,Q_p)$ and whose arrows are given by $$\text{${\lambda}\overset{x}{\twoheadrightarrow}\mu$\quad$\Longleftrightarrow$\quad ${\lambda}$ is obtained from $\mu$ by removing a good $x$-node}.$$ Now we can state Ariki’s remarkable result ([@A2]), that is, [([@A2])]{} \[Ariki\] With the above notations, we have that ${\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}\neq 0$ if and only if ${\lambda}$ is a Kleshchev $p$-multipartition with respect to $(q,Q_1,\cdots,Q_p)$. Classification of simple ${\mathcal{H}}_q(p,p,n)$-modules ========================================================= In this section, we shall first review some known results on the classification of simple modules over ${\mathcal{H}}_q(p,p,n)$. Then we shall state the first two main results in this paper, which give a classification of simple modules over ${\mathcal{H}}_q(p,p,n)$ in the non-separated cases. The proof will be given in Section 4 and Section 5. Recall that $\operatorname{Q}=\bigl\{1,{\varepsilon},\cdots,{\varepsilon}^{p-1}\bigr\}$, where ${\varepsilon}$ is a fixed primitive $p$-th root of unity in $K$. Let ${\overrightarrow\operatorname{Q}}$ be an ordered $p$-tuple which is obtained by fixing an order on $\operatorname{Q}$. Let ${K}_n=\bigl\{{\lambda}\in\mathcal{P}_{n}\bigm| {\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}\neq 0\bigr\}$. The automorphism ${\sigma}$ determines uniquely an automorphism $\operatorname{h}$ of ${K}_n$ such that $\bigl({\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}\bigr)^{{\sigma}}\cong{\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{\operatorname{h}({\lambda})}$. Clearly, $\operatorname{h}^p=\operatorname{id}$. In particular, we get an action of the cyclic group $C_p$ on ${K}_n$ given as follows: ${\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{\sigma}^{k}\cdot{\lambda}}\cong ({\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}})^{{\sigma}^k}$. Let $\sim_{{\sigma}}$ be the corresponding equivalence relation on ${K}_n$. That is, ${\lambda}\sim_{{\sigma}}\mu$ if and only if ${\lambda}=g\cdot\mu$ for some $g\in C_p$. For each ${\lambda}\in{K}_n/{\sim_{{\sigma}}}$, let ${C}_{{\lambda}}$ be the stabilizer of ${\lambda}$ in $C_p$. Then ${C}_{{\lambda}}$ is a cyclic subgroup of $C_p$ with order $|{C}_{{\lambda}}|$. Clearly $|{C}_{{\lambda}}|\mid p$. We define $${K}_n(0):=\bigl\{{\lambda}\in{K}_n/{\sim_{{\sigma}}}\bigm| {C}_{{\lambda}}=1\bigr\},\quad {K}_n(1):=\bigl\{{\lambda}\in{K}_n/{\sim_{{\sigma}}}\bigm| {C}_{{\lambda}}\neq 1 \bigr\}.$$ \[thm31\][([@Hu3 (5.4),(5.5),(5.6)])]{} Suppose ${\mathcal{H}}_{q}(p,p,n)$ is split over $K$. 1\) Let ${\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}$ be any given irreducible ${\mathcal{H}}_{q}(p,n)$-module and $D$ be an irreducible ${\mathcal{H}}_{q}(p,p,n)$-submodule of ${\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}$. 
Let $d$ be the smallest positive integer such that $D\cong DT_{0}^{d}$. Suppose $1\leq d<p$. Then $k:=p/d$ is the smallest positive integer such that ${\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}\cong({\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}})^{{\sigma}^k}$, and $${\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}\downarrow_{{\mathcal{H}}_{q}(p,p,n)}\cong D\oplus DT_0\oplus\cdots\oplus DT_0^{d-1}.$$ 2\) The set $ \Bigl\{{\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}\downarrow_{{\mathcal{H}}_{q}(p,p,n)}\Bigm|{\lambda}\in{K}_n(0)\Bigr\} \bigcup \Bigl\{D^{{\lambda},0},D^{{\lambda},1},\cdots,D^{{\lambda},|{C}_{{\lambda}}|-1}\Bigm| {\lambda}\in{K}_n(1)\Bigr\} $ forms a complete set of pairwise non-isomorphic simple ${\mathcal{H}}_{q}(p,p,n)$-modules, where for each ${\lambda}\in{K}_n(1)$, $D^{{\lambda},0}$ is an irreducible ${\mathcal{H}}_q(p,p,n)$ submodule of ${\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}$, and $D^{{\lambda},i}=\bigl(D^{{\lambda},0}\bigr)^{\tau^i}$ for $i=0,1,\cdots,|{C}_{{\lambda}}|-1$. Therefore, the problem on classifying simple ${\mathcal{H}}_{q}(p,p,n)$-modules reduces to the problem on determining the automorphism $\operatorname{h}$. [([@A5])]{}\[sep\] We refer to the condition $f_{p,n}(q,{\varepsilon})\in K^{\times}$ as the separation condition. Say that we are in the separated case if the separation condition is satisfied, otherwise we are in the non-separated case. In the separated case, the classification of all the simple ${\mathcal{H}}_q(p,p,n)$-modules is known by the results in [@Hu3]. In particular, we have that [([@Hu3])]{} Suppose $f_{p,n}(q,{\varepsilon})\neq 0$ in $K$. Then ${\mathcal{H}}_{q}(p,p,n)$ is split over $K$, and for any ${\lambda}=({\lambda}^{(1)},\cdots,{\lambda}^{(p)})\in{K}_n$, $$\operatorname{h}({\lambda})={\lambda}[1]:=({\lambda}^{(2)},{\lambda}^{(3)},\cdots,{\lambda}^{(p)},{\lambda}^{(1)}).$$ The above result generalizes the corresponding results in [@P (3.6),(3.7)] and [@Hu1] on the Iwahori–Hecke algebra of type $D_n$. So it remains to deal with the case when $f_{p,n}(q,{\varepsilon})=0$, i.e., the separated case. In this case, by Theorem \[Ariki\], $K_n=\mathcal{K}_n$. Henceforth we identify $K_n$ with $\mathcal{K}_n$ without further comments. Note that in the special case where $p=2$, $q$ must be a primitive $(2{\ell})$-th root of unity for some positive integer ${\ell}$ and the classification of all the simple modules is also known by the results in [@Hu2]. In that case, $\operatorname{h}$ is an involution. We have that [([@Hu2 (1.4)], [@Hu4 Appendix])]{} \[H2\] Suppose $q$ is a primitive $2{\ell}$-th root of unity for some positive integer ${\ell}$. Suppose $\operatorname{char}K\neq 2$ and ${\mathcal{H}}_q(D_n)$ is split over $K$. Let ${\lambda}\in\mathcal{K}_n$ be a Kleshchev bipartition (i.e., $2$-multipartition) of $n$ with respect to $(\sqrt[2{\ell}]{1},1,-1)$, and let $$\underline{\emptyset}\overset{r_1}{\twoheadrightarrow}\cdot \overset{r_2}{\twoheadrightarrow}\cdot \cdots\cdots \overset{r_n}{\twoheadrightarrow}{\lambda}$$ be a path from $\underline{\emptyset}$ to ${\lambda}$ in Kleshchev’s good lattice with respect to $(\sqrt[2{\ell}]{1},1,-1)$. 
Then, the sequence $$\underline{\emptyset}\overset{{\ell}+r_1}{\twoheadrightarrow}\cdot \overset{{\ell}+r_2}{\twoheadrightarrow}\cdot \cdots\cdots \overset{{\ell}+r_n}{\twoheadrightarrow}\cdot$$ also defines a path in Kleshchev’s good lattice with respect to $(\sqrt[2{\ell}]{1},1,-1)$, and it connects $\underline{\emptyset}$ to $\operatorname{h}({\lambda})$. Note that the above description of the involution $\operatorname{h}$ bears much resemblance with Kleshchev’s description of the well-known Mullineux involution (see [@Kl], [@B]). The first one of the two main results in this section is a direct generalization of Lemma \[H2\] to the case of the cyclotomic Hecke algebra ${\mathcal{H}}_q(p,p,n)$. By assumption, $K$ is a field which contains a primitive $p$-th root of unity ${\varepsilon}$. In particular, $\operatorname{char}K$ is coprime to $p$. Now $f_{p,n}(q,{\varepsilon})=0$ implies that $\langle{\varepsilon}\rangle\cap\langle q\rangle\neq\{1\}$, where $\langle{\varepsilon}\rangle$ (resp., $\langle q\rangle$) denotes the multiplicative subgroup generated by ${\varepsilon}$ (resp., by $q$). Let $1\leq k< p$ be the smallest positive integer such that ${\varepsilon}^{k}\in\langle q\rangle$. In particular, $k|p$. Suppose ${\varepsilon}^{k}=q^{\ell}$, where $\ell>0$. Let $d=p/k$. then $q^{\ell}={\varepsilon}^{k}$ is a primitive $d$-th root of unity. Hence $q$ is a primitive $(d\ell_1)$-th root of unity for some positive integer $\ell_1$ with $\ell_1|\ell$. We need the following result from number theory. \[number\] Let $K$ be a field which contains a primitive $p$-th root of unity ${\varepsilon}$. Suppose $p=dk$, where $p,d,k\in\mathbb{N}$. $\xi\in K$ is a primitive $d$-th root of unity. Then there exists a primitive $p$-th root of unity $\zeta\in K$ such that $\zeta^k=\xi$. [Proof:]{} Clearly ${\varepsilon}^k$ is a primitive $d$-th root of unity in $K$. Indeed, the set $\bigl\{1,{\varepsilon},{\varepsilon}^2,\cdots,{\varepsilon}^{p-1}\bigr\}$ is the set of all $p$-th root of unity in $K$, and the set $\bigl\{1,{\varepsilon}^k,{\varepsilon}^{2k},\cdots,{\varepsilon}^{(d-1)k}\bigr\}$ is the set of all $d$-th root of unity in $K$. It follows that there exists some integer $1\leq a<d$ with $(a,d)=1$ and such that $\xi={\varepsilon}^{ak}$. We write $k=k'k''$, where $(k',d)=1$ and any prime factor of $k''$ is a factor of $d$. By the Chinese Remainder Theorem, ${\mathbb Z}_{k'd}\cong {\mathbb Z}_{k'}\times{\mathbb Z}_{d}$. Then we can find $j$ such that $a+jd\equiv 1\pmod{k'}$. In particular, $(a+jd,k')=1$. It follows that $(a+jd,k)=1$, and hence $(a+jd,p)=1$ (because $(a,d)=1$). Now $\zeta:={\varepsilon}^{a+jd}$ is a primitive $p$-th root of unity, and $\zeta^k={\varepsilon}^{ak+jdk}={\varepsilon}^{ak+jp}=\xi$. This completes the proof of the lemma. We return to our discussion above Lemma \[number\]. Note that $q$ is a primitive $(d\ell_1)$-th root of unity implies that $q^{\ell_1}$ is a primitive $d$-th root of unity. By Lemma \[number\], we can always find a primitive $p$-th root of unity $\widetilde{{\varepsilon}}$ such that $(\widetilde{{\varepsilon}})^{k}=q^{\ell_1}$. Since both ${\varepsilon}$ and $\widetilde{{\varepsilon}}$ are primitive $p$-th roots of unity, it follows that there exist integers $i, j$, such that ${\varepsilon}=(\widetilde{{\varepsilon}})^i$ and $\widetilde{{\varepsilon}}={\varepsilon}^j$. In particular, ${\varepsilon}^m\in\langle q\rangle$ if and only if $(\widetilde{{\varepsilon}})^m\in\langle q\rangle$ for any integer $m\geq 0$. 
Therefore, $1\leq k<p$ is also the smallest positive integer such that $(\widetilde{{\varepsilon}})^{k}\in\langle q\rangle$. Replacing ${\varepsilon}$ by $\widetilde{{\varepsilon}}$ (which makes the Hecke algebra ${\mathcal{H}}_{q}(p,n)$ itself unchanged) if necessary, we can assume without loss of generality that $\ell=\ell_1$. [*Henceforth, we fix such ${\varepsilon}$. Therefore, $q$ is a primitive $d\ell$-th root of unity, $q^{\ell}={\varepsilon}^{k}$ is a primitive $d$-th root of unity, and $1\leq k<p$ is still the smallest positive integer such that ${\varepsilon}^{k}\in\langle q\rangle$.*]{} For integer $i=1,2,\cdots, k$, we set ${\overrightarrow\operatorname{Q}}_i=({\varepsilon}^{i-1},{\varepsilon}^{k+i-1},\cdots, {\varepsilon}^{(d-1)k+i-1})$. Then $\operatorname{Q}=\operatorname{Q}_1\sqcup\cdots\sqcup\operatorname{Q}_k$ is a partition of the parameter set $\operatorname{Q}$ into different $q$-orbits. Let ${\overrightarrow\operatorname{Q}}=\bigl({\overrightarrow\operatorname{Q}}_1,{\overrightarrow\operatorname{Q}}_2,\cdots,{\overrightarrow\operatorname{Q}}_{k}\bigr)$ (concatenation of ordered tuples). For each $p$-multipartition ${\lambda}=({\lambda}^{(1)},\cdots,{\lambda}^{(p)})$ of $n$, we write $${\lambda}^{[i]}=({\lambda}^{((i-1)d+1)},{\lambda}^{((i-1)d+2)},\cdots,{\lambda}^{(id)}),\,\,\,\text{for}\,\,i=1,2,\cdots,k.$$ and we use $\theta$ to denote the map ${\lambda}\mapsto ({\lambda}^{[1]},\cdots,{\lambda}^{[k]})$. Now we can state the first main result, which deals with the case where $k=1$ and $K=\mathbb{C}$. \[main1\] Suppose that $K=\mathbb{C}$, and $q,{\varepsilon}\in K$ such that ${\varepsilon}=q^{\ell}$ is a primitive $p$-th root of unity and $q$ is a primitive $(p\ell)$-th root of unity. Recall our definition of $\operatorname{h}$ in the second paragraph of this section. Let ${\lambda}\in\mathcal{K}_n$ be a Kleshchev $p$-multipartition of $n$ with respect to $(q,1,{\varepsilon},\cdots,{\varepsilon}^{p-1})$, and let $$\underline{\emptyset}\overset{r_1}{\twoheadrightarrow}\cdot \overset{r_2}{\twoheadrightarrow}\cdot \cdots\cdots \overset{r_n}{\twoheadrightarrow}{\lambda}$$ be a path from $\underline{\emptyset}$ to ${\lambda}$ in Kleshchev’s good lattice with respect to $(q,1,{\varepsilon},\cdots,$ ${\varepsilon}^{p-1})$. Then, the sequence $$\underline{\emptyset}\overset{{\ell}+r_1}{\twoheadrightarrow}\cdot \overset{{\ell}+r_2}{\twoheadrightarrow}\cdot \cdots\cdots \overset{{\ell}+r_n}{\twoheadrightarrow}\cdot$$ also defines a path in Kleshchev’s good lattice with respect to $(q,1,{\varepsilon},\cdots,$ ${\varepsilon}^{p-1})$, and it connects $\underline{\emptyset}$ to $\operatorname{h}({\lambda})$. The proof of Theorem \[main1\] will be given in Section 4. Here we give an example. Suppose that $n=p=3, q=\sqrt[6]{1}$ and ${\varepsilon}=q^2$. 
Then the following are all Kleshchev $3$-multipartitions with respect to $(q,1,{\varepsilon},{\varepsilon}^2)$ of $3$ $$\begin{aligned} &\bigl(\emptyset, \emptyset, (1^3)\bigr),\,\,\bigl(\emptyset, \emptyset, (2,1)\bigr),\,\, \bigl(\emptyset, \emptyset, (3)\bigr),\,\,\bigl(\emptyset, (1), (1^2)\bigr),\,\, \bigl(\emptyset, (1), (2)\bigr),\\ &\bigl(\emptyset, (1^2), (1)\bigr),\,\,\bigl(\emptyset, (1^3), \emptyset\bigr),\,\,\bigl(\emptyset, (2),(1)\bigr),\,\,\bigl(\emptyset, (2,1), \emptyset\bigr), \,\,\bigl((1), \emptyset, (1^2)\bigr),\\ &\bigl((1), \emptyset, (2)\bigr),\,\,\bigl((1), (1), (1)\bigr), \,\,\bigl((1), (1^2), \emptyset),\,\,\bigl((1), (2), \emptyset\bigr),\,\, \bigl((1^2), \emptyset, (1)\bigr),\\ &\bigl((1^2), (1),\emptyset\bigr),\,\,\bigl((2), \emptyset, (1)\bigr), \,\,\bigl((2), (1), \emptyset\bigr),\,\,\bigl((2,1), \emptyset, \emptyset),\end{aligned}$$ and the automorphism $\operatorname{h}$ is given by $$\begin{matrix} \bigl(\emptyset, \emptyset, (1^3)\bigr)&\longmapsto & \bigl((1^2), \emptyset, (1)\bigr)&\longmapsto &\bigl(\emptyset, (1^3), \emptyset\bigr),\\ \bigl(\emptyset, \emptyset, (2,1)\bigr)&\longmapsto &\bigl((2,1), \emptyset, \emptyset\bigr)&\longmapsto &\bigl(\emptyset, (2,1), \emptyset\bigr),\\ \bigl(\emptyset, \emptyset, (3)\bigr)&\longmapsto &\bigl((2), (1), \emptyset\bigr)&\longmapsto &\bigl(\emptyset, (2), (1)\bigr),\\ \bigl(\emptyset, (1), (1^2)\bigr)&\longmapsto & \bigl((1), \emptyset, (2)\bigr)&\longmapsto &\bigl((1), (1^2), \emptyset\bigr),\\ \bigl(\emptyset, (1), (2)\bigr)&\longmapsto &\bigl((2), \emptyset, (1)\bigr)&\longmapsto &\bigl((1), (2), \emptyset\bigr),\\ \bigl(\emptyset, (1^2), (1)\bigr)&\longmapsto &\bigl((1), \emptyset, (1^2)\bigr)&\longmapsto &\bigl((1^2), (1), \emptyset\bigr),\\ \bigl((1), (1), (1)\bigr)&\longmapsto &\bigl((1), (1), (1)\bigr)& \longmapsto &\bigl((1), (1), (1)\bigr).\\ \end{matrix}$$ In view of Theorem \[main1\] and the discussion above it, it remains to consider the case where $k>1$. Now we suppose $k>1$. Recall our assumption, that is, $q$ is a primitive $(d\ell)$-th root of unity, $q^{\ell}={\varepsilon}^{k}$ is a primitive $d$-th root of unity, and $1\leq k<p$ is the smallest positive integer such that ${\varepsilon}^{k}\in\langle q\rangle$. Let ${\varepsilon}'={\varepsilon}^k$, which is a primitive $d$-th root of unity in $\mathbb{C}$. For any integer $n'\geq 1$, let ${\mathcal{H}}_{q}(d,n')$ be the Ariki–Koike algebra (over $\mathbb{C}$) with parameters $\bigl\{q,1,{\varepsilon}',({\varepsilon}')^2,\cdots,({\varepsilon}')^{d-1}\bigr\}$ and size $n'$. Let ${\sigma}'$ be the nontrivial $\mathbb{C}$-algebra automorphism of ${\mathcal{H}}_{q}(d,n')$ which is defined on generators by ${\sigma}(T_0)={\varepsilon}' T_0, {\sigma}(T_i)=T_i$, for $i=1,2,\cdots, n'-1$. Let $\mathcal{K}'_{n'}$ be the set of Kleshchev $d$-multipartitions of $n'$ with respect to $(q,1,{\varepsilon}',({\varepsilon}')^2,\cdots,({\varepsilon}')^{d-1})$. Let ${\overrightarrow\operatorname{Q}}':=(1,{\varepsilon}',({\varepsilon}')^2,\cdots,({\varepsilon}')^{d-1})$. Note that all the parameters in ${\overrightarrow\operatorname{Q}}'$ are in a single $q$-orbit. For any $d$-multipartition ${\lambda}'$ of $n'$, by Theorem \[Ariki\], ${\widetilde{D}}_{{\overrightarrow\operatorname{Q}}'}^{{\lambda}'}\neq 0$ if and only if ${\lambda}'\in\mathcal{K}'_{n'}$. 
The automorphism ${\sigma}'$ determines uniquely an automorphism $\operatorname{h}'$ of $\mathcal{K}'_{n'}$ such that $\bigl({\widetilde{D}}_{{\overrightarrow\operatorname{Q}}'}^{{\lambda}'}\bigr)^{{\sigma}'}\cong{\widetilde{D}}_{{\overrightarrow\operatorname{Q}}'}^{\operatorname{h}'({\lambda}')}$. Clearly, $(\operatorname{h}')^d=\operatorname{id}$. Note that ${\varepsilon}'={\varepsilon}^k=q^{\ell}$. Hence we are in a position to apply Theorem \[main1\] with $p$ replaced by $d$, $n$ replaced by $n'$ and ${\varepsilon}$ replaced by ${\varepsilon}'$. We get that \[maincor\] Let ${\lambda}'\in\mathcal{K}'_{n'}$ be a Kleshchev $d$-multipartition of $n'$ with respect to $(q,1,{\varepsilon}',\cdots,{({\varepsilon}')}^{d-1})$, and let $$\bigl(\underbrace{\emptyset,\cdots, \emptyset}_{d}\bigl)=\underline{\emptyset}\overset{r_1}{\twoheadrightarrow}\cdot \overset{r_2}{\twoheadrightarrow}\cdot \cdots\cdots \overset{r_{n'}}{\twoheadrightarrow}{\lambda}'$$ be a path from $\underline{\emptyset}$ to ${\lambda}'$ in Kleshchev’s good lattice with respect to $(q,1,{\varepsilon}',\cdots,$ ${({\varepsilon}')}^{d-1})$. Then, the sequence $$\bigl(\underbrace{\emptyset,\cdots, \emptyset}_{d}\bigl)=\underline{\emptyset}\overset{{\ell}+r_1}{\twoheadrightarrow}\cdot \overset{{\ell}+r_2}{\twoheadrightarrow}\cdot \cdots\cdots \overset{{\ell}+r_{n'}}{\twoheadrightarrow}\cdot$$ also defines a path in Kleshchev’s good lattice with respect to $(q,1,{\varepsilon}',\cdots,$ ${({\varepsilon}')}^{d-1})$, and it connects $\underline{\emptyset}$ to $\operatorname{h}'({\lambda}')$. Now we can state the second main result, which deals with the case where $k>1$ and $K=\mathbb{C}$. \[main2\] Suppose that $K=\mathbb{C}$, $q,{\varepsilon}\in K$ such that ${\varepsilon}$ is a primitive $p$-th root of unity, ${\varepsilon}^k=q^{\ell}$ is a primitive $d$-th root of unity and $q$ is a primitive $(d\ell)$-th root of unity, and $1\leq k<p$ is the smallest positive integer such that ${\varepsilon}^{k}\in\langle q\rangle$. Let ${\lambda}\in\mathcal{K}_n$ be a Kleshchev $p$-multipartition of $n$ with respect to $(q,{\overrightarrow\operatorname{Q}})$, where ${\overrightarrow\operatorname{Q}}=\bigl({\overrightarrow\operatorname{Q}}_1,{\overrightarrow\operatorname{Q}}_2,\cdots,{\overrightarrow\operatorname{Q}}_{k}\bigr)$ (concatenation of ordered tuples). Then $$\theta\Bigl(\operatorname{h}({\lambda})\Bigr)=\Bigl(\operatorname{h}'({\lambda}^{[k]}),{\lambda}^{[1]},\cdots,{\lambda}^{[k-1]}\Bigr),$$ where $|{\lambda}^{[k]}|=n'$, $\operatorname{h}'$ is as defined in Corollary \[maincor\] and the righthand side of the above equality is understood as concatenation of ordered tuples. The proof of Theorem \[main2\] will be given in Section 5. \[main3\] Both Theorem \[main1\] and Theorem \[main2\] remain true if we replace $\mathbb{C}$ by any field $K$ such that ${\mathcal{H}}_q(p,p,n)$ is split over $K$ and $K$ contains primitive $p$-th root of unity. [Proof:]{} In the case where $p=2$, this is proved in the appendix of [@Hu4] (which is essentially an argument due to S. Ariki). In general, this can still be proved by using the same argument as in the appendix of [@Hu4]. Note that we have proved in [@Hu3 Theorem 5.7] that in the separated case ${\mathcal{H}}_q(p,p,n)$ is always split over $K$ whenever $K$ contains primitive $p$-th root of unity. It would be interesting to know if this is still true in the non-separated case. 
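At the combinatorial level, Theorem \[main1\] and Theorem \[main2\] only involve two elementary operations: shifting every residue label of a path by ${\ell}$ modulo $p{\ell}$, and rotating the blocks ${\lambda}^{[1]},\cdots,{\lambda}^{[k]}$ while applying $\operatorname{h}'$ to the last block. The following Python sketch (ours, purely illustrative) records this bookkeeping; the function names are not from the paper, $\operatorname{h}'$ is treated as a black box supplied from outside, and nothing here computes Kleshchev's good lattice itself.

```python
# A minimal sketch of the combinatorics behind Theorems [main1] and [main2].
# Partitions are tuples of ints, multipartitions are tuples of partitions.
# All function names are ours (illustrative only), and h_prime below is
# assumed to be supplied from outside (e.g. the automorphism h' of
# Corollary [maincor]); nothing in this sketch computes good nodes itself.

def shift_path(residues, ell, p):
    """Theorem [main1]: replace a residue path (r_1,...,r_n) in Z/(p*ell)Z
    by (ell + r_1, ..., ell + r_n); the new path ends at h(lambda)."""
    modulus = p * ell
    return [(r + ell) % modulus for r in residues]

def theta(lam, d, k):
    """Cut a p-multipartition (p = d*k) into k blocks of d components each."""
    assert len(lam) == d * k
    return [tuple(lam[i * d:(i + 1) * d]) for i in range(k)]

def h_via_blocks(lam, d, k, h_prime):
    """Theorem [main2]: theta(h(lambda)) = (h'(lambda^[k]), lambda^[1], ..., lambda^[k-1])."""
    blocks = theta(lam, d, k)
    rotated = [h_prime(blocks[-1])] + blocks[:-1]
    return tuple(part for block in rotated for part in block)

if __name__ == "__main__":
    print(shift_path([0, 3, 1], ell=2, p=3))        # [2, 5, 3]
    # toy data: p = 4, d = 2, k = 2; h' is taken to be the identity here,
    # just to exhibit the cyclic rotation of the blocks.
    lam = ((2,), (1, 1), (), (3,))
    print(h_via_blocks(lam, d=2, k=2, h_prime=lambda block: block))
```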
[**Remark 3.10**]{}Let ${\overrightarrow\operatorname{Q}}:=(Q_1,\cdots,Q_p)$ be an arbitrary permutation of $(1,{\varepsilon},$ $\cdots,{\varepsilon}^{p-1})$. We redefine the residue of a node $\gamma=(a,b,c)$ to be $\operatorname{res}(\gamma):=q^{b-a}Q_c$. The notions of good nodes and Kleshchev multipartitions are defined in a similar way as before. For any $r\in K$, we use ${\lambda}\overset{r}{\twoheadrightarrow}\mu$ to indicate ${\lambda}$ is obtained from $\mu$ by removing a good $r$-node. One can easily deduce from Theorem \[main1\] and Theorem \[main2\] the following description of $\operatorname{h}$: let ${\lambda}\in\mathcal{K}_n$ be a Kleshchev $p$-multipartition of $n$ with respect to $(q,Q_1,\cdots,Q_p)$, and let $\underline{\emptyset}\overset{r_1}{\twoheadrightarrow}\cdot \overset{r_2}{\twoheadrightarrow}\cdot \cdots\cdots \overset{r_n}{\twoheadrightarrow}{\lambda}$ be a path from $\underline{\emptyset}$ to ${\lambda}$ in Kleshchev’s good lattice with respect to $(q,Q_1,\cdots,$ $Q_p)$. Then, the sequence $$\underline{\emptyset}\overset{{\varepsilon}r_1 }{\twoheadrightarrow}\cdot \overset{{\varepsilon}r_2 }{\twoheadrightarrow}\cdot \cdots\cdots \overset{{\varepsilon}r_n }{\twoheadrightarrow}\cdot\label{311}$$ also defines a path in Kleshchev’s good lattice with respect to $(q,Q_1,\cdots,Q_p)$, and it connects $\underline{\emptyset}$ to $\operatorname{h}({\lambda})$. [**Remark 3.12**]{}In the case of $G(r,p,n)$, one can still define the automorphisms ${\sigma}, \tau$ and $\operatorname{h}$, and the problem of classifying simple ${\mathcal{H}}_q(r,p,n)$-modules can still be reduced to the determination of $\operatorname{h}$. Note that although we deal with the $G(p,p,n)$ case in this paper, it is not difficult to generalize Theorem \[main1\] and Theorem \[main2\] in this paper to the $G(r,p,n)$ case by using the same argument as well as the Morita equivalence theorem ([@DM Theorem 1.1]) of Dipper–Mathas for Ariki–Koike algebras[^2]. In the preprint [@GJ], Genet and Jacon give a parameterization of simple ${\mathcal{H}}_q(r,p,n)$-modules when $q$ is a root of unity by using combinatorics of FLOTW partitions. Our ${\sigma},\tau, \operatorname{h}$ are denoted by $f,g,\tau$ in their paper. They give a characterization of $\tau$ in terms of $\omega$ and a bijection $\kappa$ between the set of Kleshchev multipartitions and the set of FLOTW partitions (see [@GJ Proposition 2.9, Line3–4 in Page 14]). Note that our description of $\operatorname{h}$ in (\[311\]) is actually a statement about the crystal graph, and although [@GJ] uses JMMO’s Fock space ([@JMM], [@FLO]) and we use Hayashi’s Fock space ([@Ha], [@AM]), the two crystals provided by lattice of Kleshchev multipartitions and by lattice of FLOTW partitions respectively are isomorphic to each other. It follows that the description of $\operatorname{h}$ in (\[311\]) in the context of Kleshchev’s good lattices should also be valid in the context of FLOTW’s good lattices. Therefore, there is a second approach which can be used to generalize Theorem \[main1\] and Theorem \[main2\] in this paper to the $G(r,p,n)$ case. The details are given in [@Hu5]. The main idea is to derive from the setting of FLOTW partitions and [@GJ Proposition 2.9, Line3–4 in Page 14]) a description of $\operatorname{h}$ like (\[311\]) (see [@Ja0 (4.3.1)] for the special case when $p=r$ and ${\varepsilon}=q^{{\ell}}$). In the case where $p=r$, the parameterization of simple modules obtained in [@GJ] is the same as [@Hu3 Theorem 5.7] in the separated case. 
It is worthwhile and interesting to establish a direct connection between the two parameterizations of irreducible representations. Proof of Theorem \[main1\] ========================== In this section, we shall give the proof of Theorem \[main1\]. It turns out that the proof of Theorem \[main1\] is a direct generalization of the proof of [@Hu2 (1.5)]. Throughout this section, we keep the same assumptions and notations as in Theorem \[main1\]. That is, $K=\mathbb{C}$, $q,{\varepsilon}\in \mathbb{C}$ be such that ${\varepsilon}=q^{\ell}$ is a primitive $p$-th root of unity and $q$ is a primitive $(p\ell)$-th root of unity. Let $v$ be an indeterminate over $\mathbb{Q}$. Let $\mathfrak{h}$ be a $(p\ell+1)$-dimensional vector space over $\mathbb{Q}$ with basis $\bigl\{h_0,h_1,\cdots,h_{p\ell-1},d\bigr\}$[^3]. Denote by $\bigl\{ \Lambda_0,\Lambda_1,\cdots,$ $\Lambda_{p\ell-1},\delta\bigr\}$ the corresponding dual basis of $\mathfrak{h}^{\ast}$, and we set $\alpha_i=2\Lambda_{i}-\Lambda_{i-1}-\Lambda_{i+1}+\delta_{i,0}\delta$ for $i\in{\mathbb Z}/p\ell{\mathbb Z}$. The weight lattice is $P={\mathbb Z}\Lambda_0\oplus\cdots\oplus{\mathbb Z}\Lambda_{p\ell-1}\oplus {\mathbb Z}\delta$, its dual is $P^{\vee}={\mathbb Z}h_0\oplus\cdots\oplus{\mathbb Z}h_{p\ell-1}\oplus{\mathbb Z}d$. Assume that the $p\ell\times p\ell$ matrix $\bigl(\langle\alpha_{i}, h_{j}\rangle\bigr)$ is just the generalized Cartan matrix associated to ${\widehat{\mathfrak{sl}}}_{p\ell}$. The quantum affine algebra $U_v({{\widehat{\mathfrak{sl}}}}_{p\ell})$ is by definition the $\mathbb{Q}(v)$-algebra with $1$ generated by elements $E_i, F_i, \,\,i\in{\mathbb Z}/p\ell{\mathbb Z}$ and $K_{h},\,\,h\in P^{\vee}$, subject to the relations $$\begin{matrix} K_{h}K_{h'}=K_{h+h'}=K_{h'}K_{h},\,\,\,K_0=1,\\[4pt] K_{h}E_{j}=v^{\langle\alpha_{j},h\rangle}E_{j}K_{h},\,\,\, K_{h}F_{j}=v^{-\langle\alpha_{j},h\rangle}F_{j}K_{h},\\[4pt] E_{i}F_{j}-F_{j}E_{i}=\delta_{ij}\frac{K_{h_i}-K_{-h_i}}{v-v^{-1}},\\[4pt] \sum\limits_{k=0}^{1-\langle\alpha_{i},h_j\rangle}(-1)^{k}\begin{bmatrix} 1-\langle\alpha_{i},h_j\rangle\\[2pt] k\end{bmatrix} E_{i}^{1-\langle\alpha_{i},h_j\rangle-k} E_{j}E_{i}^{k}=0,\,\,\,(i\neq j),\\[4pt] \sum\limits_{k=0}^{1-\langle\alpha_{i},h_j\rangle}(-1)^{k} \begin{bmatrix} 1-\langle\alpha_{i},h_j\rangle\\[2pt] k\end{bmatrix} F_{i}^{1-\langle\alpha_{i},h_j\rangle-k} F_{j}F_{i}^{k}=0,\,\,\,(i\neq j), \end{matrix}$$ where $$[k]:=\frac{v^{k}-v^{-k}}{v-v^{-1}},\quad [k]!:=[k][k-1]\cdots[1],\quad \begin{bmatrix} m\\[2pt] k\end{bmatrix}:=\frac{[m]!}{[m-k]![k]!}.$$ It is a Hopf algebra with comultiplication given by $$\Delta(K_h)=K_{h}\otimes K_{h},\,\,\, \Delta(E_i)=E_{i}\otimes 1+K_{-h_i}\otimes E_{i},\,\,\, \Delta(F_{i})=F_{i}\otimes K_{h_i}+1\otimes F_{i}.$$ Let $\mathcal{P}^{(1)}$ be the set of all partitions. Let $\mathcal{P}:=\sqcup_{n\geq 0}\mathcal{P}_n$, the set of all $p$-multipartitions. For each integer $j$ with $0\leq j<p\ell$, there is a [*level $1$ Fock space*]{} $\mathcal{F}^{(1)}(\Lambda_j) $, which is defined as follows. 
As a vector space, $$\mathcal{F}^{(1)}(\Lambda_j):=\bigoplus_{{\lambda}\in\mathcal{P}^{(1)}}\mathbb{Q}(v){\lambda},$$ and the algebra $U_v({{\widehat{\mathfrak{sl}}}}_{p\ell})$ acts on $\mathcal{F}^{(1)}(\Lambda_j)$ by $$\begin{matrix} K_{h_i}{\lambda}=v^{N_i({\lambda})}{\lambda},&K_{d}{\lambda}=v^{-N_d({\lambda})}{\lambda},\\ E_{i}{\lambda}=\sum_{\nu\overset{i}{\rightarrow}{\lambda}}v^{-N_i^{r}(\nu,{\lambda})}\nu,& F_{i}{\lambda}=\sum_{{\lambda}\overset{i}{\rightarrow}\mu}v^{N_i^{l}({\lambda},\mu)}\mu, \end{matrix}$$ where $i\in{\mathbb Z}/p\ell{\mathbb Z}$, ${\lambda}\in\mathcal{P}^{(1)}$, and $$\begin{aligned} N_{i}({\lambda})&=\#\Bigl\{\mu\bigm|{\lambda}\overset{i}{\rightarrow}\mu\Bigr\}- \#\Bigl\{\nu\bigm|\nu\overset{i}{\rightarrow}{\lambda}\Bigr\},\\ N_{i}^{r}(\nu,{\lambda})&=\sum_{\gamma\in [\lambda]\setminus[\nu]}\biggl(\#\Bigl\{\gamma'\Bigm| \begin{matrix} \text{$\gamma'$ an addable}\\ \text{$i$-node of ${\lambda}$ above $\gamma$}\end{matrix}\Bigr\}-\\ &\qquad\qquad\quad\#\Bigl\{\gamma'\Bigm|\begin{matrix} \text{$\gamma'$ a removable}\\ \text{$i$-node of $\nu$ above $\gamma$}\end{matrix}\Bigr\}\biggr),\\ N_{i}^{l}({\lambda},\mu)&=\sum_{\gamma\in [\mu]\setminus[{\lambda}]}\biggl(\#\Bigl\{\gamma'\Bigm| \begin{matrix} \text{$\gamma'$ an addable}\\ \text{$i$-node of $\mu$ below $\gamma$}\end{matrix}\Bigr\}-\\ &\qquad\qquad\quad\#\Bigl\{\gamma'\Bigm|\begin{matrix} \text{$\gamma'$ a removable}\\ \text{$i$-node of ${\lambda}$ below $\gamma$}\end{matrix}\Bigr\}\biggr),\end{aligned}$$ and $N_{d}({\lambda}):=\#\bigl\{\gamma\in[{\lambda}]\bigm|\operatorname{res}(\gamma)=0\bigr\}$, and here we should use the following definition of residue, namely, the node in the $a$th row and the $b$th column of ${\lambda}$ is filled out with the residue $b-a+j\in{{\mathbb Z}/p\ell{\mathbb Z}}$. Let $\Lambda:=\Lambda_0+\Lambda_{{\ell}}+\Lambda_{2{\ell}}+\cdots+\Lambda_{(p-1){\ell}}$. Replacing $\mathcal{P}^{(1)}$ by $\mathcal{P}$, and using the definition of residue given in (\[res\]) for multipartition, we get a [*level $p$ Fock space*]{} $$\mathcal{F}(\Lambda):=\bigoplus_{{\lambda}\in\mathcal{P}}\mathbb{Q}(v){\lambda}.$$ As a vector space, we have $\mathcal{F}(\Lambda)\cong\mathcal{F}^{(1)}(\Lambda_0)\otimes\cdots\otimes \mathcal{F}^{(1)}(\Lambda_{(p-1){\ell}})$, ${\lambda}\mapsto{\lambda}^{(1)}\otimes\cdots\otimes{\lambda}^{(p)}$. By [@AM (2.5)], it is indeed an $U_v({{\widehat{\mathfrak{sl}}}}_{p{\ell}})$-module isomorphism, where the action on the right-hand side is defined via $\Delta^{(p-1)}:=\underbrace{(\Delta\otimes 1\otimes\cdots\otimes 1)}_{p-1} \cdots\underbrace{(\Delta\otimes 1)}_{2}\underbrace{\Delta}_{1}$. For each ${\lambda}=({\lambda}^{(1)},\cdots,{\lambda}^{(p)})\in\mathcal{P}$, we define $\widehat{{\lambda}}=({\lambda}^{(p)},{\lambda}^{(1)},\cdots,{\lambda}^{(p-1)})$. Let ${\Theta}$ be the automorphism of $\Bigl(U_v({{\widehat{\mathfrak{sl}}}}_{p{\ell}})\Bigr)^{\otimes p}$ which is defined by $\Theta(x_1\otimes\cdots\otimes x_p)=x_2\otimes \cdots\otimes x_{p}\otimes x_1$ for any $x_1,\cdots,x_p\in U_v({{\widehat{\mathfrak{sl}}}}_{p{\ell}})$. We now introduce a different version of level $p$ Fock space $\widehat{\mathcal{F}}(\Lambda)$. 
As a vector space, $\widehat{\mathcal{F}}(\Lambda)=\mathcal{F}(\Lambda)\cong\mathcal{F}^{(1)}(\Lambda_0)\otimes\cdots\otimes \mathcal{F}^{(1)}(\Lambda_{(p-1){\ell}})$, while the action (denoted by “$\circ$”) of $U_v({{\widehat{\mathfrak{sl}}}}_{p{\ell}})$ is defined by $$x\circ({\lambda}^{(1)}\otimes\cdots\otimes{\lambda}^{(p)}):=\Bigl\{\Theta\Bigl( \Delta^{(p-1)}(x)\Bigr)\Bigr\}({\lambda}^{(1)}\otimes\cdots\otimes{\lambda}^{(p)}).$$ \[lm41\] The above action defines an integrable representation of the algebra $U_v({{\widehat{\mathfrak{sl}}}}_{p{\ell}})$ on $\widehat{\mathcal{F}}(\Lambda)$, such that $$\begin{matrix} K_{h_i}\circ{\lambda}=v^{N_i({\lambda})}{\lambda},&K_{d}\circ{\lambda}=v^{-N_d({\lambda})}{\lambda},\\ E_{i}\circ{\lambda}=\sum_{\nu\overset{i}{\rightarrow}{\lambda}} v^{-N_{i+\ell}^{r}(\widehat{\nu},\widehat{{\lambda}})}\nu,& F_{i}\circ{\lambda}=\sum_{{\lambda}\overset{i}{\rightarrow}\mu}v^{N_{i+\ell}^{l} (\widehat{{\lambda}},\widehat{\mu})}\mu.\\ \end{matrix}$$ Moreover, the empty $p$-multipartition $\underline{\emptyset}:=(\underbrace{\emptyset,\cdots,\emptyset}_{p})$ is a highest weight vector of weight $\Lambda=\Lambda_{0}+\Lambda_{{\ell}}+\cdots+\Lambda_{(p-1){\ell}}$, and $\widehat{L}(\Lambda):=U_v({{\widehat{\mathfrak{sl}}}}_{p{\ell}})\circ\underline{\emptyset}$ is isomorphic to the irreducible highest weight module with highest weight $\Lambda$. [Proof:]{} For any $x,y\in U_v({{\widehat{\mathfrak{sl}}}}_{p{\ell}})$ and ${\lambda}\in\mathcal{P}$, we have that $$\begin{aligned} x\circ (y\circ {\lambda})&=\Bigl(\Theta\bigl(\Delta^{p-1}(x)\bigr)\Bigr) \biggl(\Bigl(\Theta\bigl(\Delta^{p-1}(y)\bigr)\Bigr){\lambda}\biggr)\\ &=\Bigl(\Theta\bigl(\Delta^{p-1}(x)\bigr)\Theta\bigl(\Delta^{p-1}(y)\bigr)\Bigr){\lambda}=\Bigl(\Theta\bigl(\Delta^{p-1}(x)\Delta^{p-1}(y)\bigr)\Bigr){\lambda}\\ &=\Bigl(\Theta\bigl(\Delta^{p-1}(xy)\bigr)\Bigr){\lambda}=(xy)\circ{\lambda},\end{aligned}$$ and $1\circ{\lambda}=K_0\circ{\lambda}={\lambda}$. It follows that the action “$\circ$" does define a representation of the algebra $U_v({{\widehat{\mathfrak{sl}}}}_{p{\ell}})$ on $\widehat{\mathcal{F}}(\Lambda)$. The remaining part of the lemma can be verified easily. Therefore, both $\mathcal{F}(\Lambda)$ and $\widehat{\mathcal{F}}(\Lambda)$) are integrable $U_v({{\widehat{\mathfrak{sl}}}}_{p{\ell}})$-modules. They are not irreducible. In both cases the empty multipartition $\underline{\emptyset}$ is a highest weight vector of weight $\Lambda=\Lambda_{0}+\Lambda_{{\ell}}+\cdots+\Lambda_{(p-1){\ell}}$, and both ${L}(\Lambda):=U_v({{\widehat{\mathfrak{sl}}}}_{p{\ell}})\underline{\emptyset}$ and $\widehat{L}(\Lambda)$ are isomorphic to the irreducible highest weight module with highest weight $\Lambda$. Let $U'_v({{\widehat{\mathfrak{sl}}}}_{p{\ell}})$ be the subalgebra of $U_v({{\widehat{\mathfrak{sl}}}}_{p{\ell}})$ generated by $E_{i},F_{i},K_{h_i}^{\pm 1}, i\in{\mathbb Z}/p{\ell}{\mathbb Z}$. Let $\#$ be the automorphism of $U'_v({{\widehat{\mathfrak{sl}}}}_{p{\ell}})$ which is defined on generators by $$E_{i}^{\#}:=E_{{\ell}+i},\,\,F_{i}^{\#}:=F_{{\ell}+i},\,\,K_{h_{i}}^{\#}:=K_{h_{i+{\ell}}}, \quad \forall\,i\in{\mathbb Z}/p{\ell}{\mathbb Z}.$$ Now we begin to follow the streamline of the proof of [@Hu2 (1.5)]. We first recall some basic facts about crystal bases. Let ${\mathcal A}:=\mathbb{Q}[v,v^{-1}]$, let $A$ be the ring of rational functions in $\mathbb{Q}(v)$ which do not have a pole at $0$. 
Write $$\mathcal{F}(\Lambda)_{{\mathcal A}}:=\bigoplus_{{\lambda}\in\mathcal{P}}{\mathcal A}{\lambda},\quad \mathcal{F}(\Lambda)_{A}:=\bigoplus_{{\lambda}\in\mathcal{P}}A{\lambda},$$ and we use similar notations for $\widehat{\mathcal{F}}(\Lambda)$. Let $U_{{\mathcal A}}$ be the Lusztig–Kostant ${\mathcal A}$-form of $U_v({{\widehat{\mathfrak{sl}}}}_{p\ell})$. Then by [@AM (2.7)] we know that both $\mathcal{F}(\Lambda)_{{\mathcal A}}$ and $\widehat{\mathcal{F}}(\Lambda)_{{\mathcal A}}$ are $U_{{\mathcal A}}$-modules. Let $u_{\Lambda}:=\underline{\emptyset}$, the highest weight vector of weight $\Lambda$ in $\mathcal{F}(\Lambda)$. For each $i\in\{0,1,\cdots,p{\ell}-1\}$, let $\widetilde{E}_{i}, \widetilde{F}_{i}$ be the Kashiwara operators introduced in [@Kas]. Let $L(\Lambda)_{A}$ be the $A$-submodule of $\mathcal{F}(\Lambda)_{A}$ generated by the elements $\widetilde{F}_{i_1}\cdots\widetilde{F}_{i_k}u_{\Lambda}$ for all $i_1,\cdots,i_k\in{\mathbb Z}/p{\ell}{\mathbb Z}$. It is a free $A$-module, and is stable under the action of $\widetilde{E}_{i}$ and $\widetilde{F}_{i}$. The set $$\mathbb{B}(\Lambda):=\bigl\{\widetilde{F}_{i_1}\cdots\widetilde{F}_{i_k}u_{\Lambda}+ vL(\Lambda)_{A}\bigm|i_{1},\cdots,i_{k}\in{\mathbb Z}/p{\ell}{\mathbb Z}\bigr\}\setminus\bigl\{0\bigr\}$$ is a basis of $L(\Lambda)_{A}/vL(\Lambda)_{A}$. In [@Kas], the pair $\bigl(L(\Lambda)_{A},\mathbb{B}(\Lambda)\bigr)$ is called the [*lower crystal basis*]{} at $v=0$ of $L(\Lambda)$. Following [@Kas], the [*crystal graph*]{} of $L(\Lambda)$ is the edge-labelled directed graph whose set of vertices is $\mathbb{B}(\Lambda)$ and whose arrows are given by $$\text{$b\overset{i}{\twoheadrightarrow}b'$\quad$ \Longleftrightarrow$\quad $\widetilde{F}_{i}b=b'$\,\,\, for some $i\in{\mathbb Z}/p{\ell}{\mathbb Z}.$}$$ It is a remarkable fact ([@MM], [@AM (2.11)]) that the crystal graph of $L(\Lambda)$ is exactly the same as Kleshchev’s good lattice if we use the embedding $L(\Lambda)\subset \mathcal{F}(\Lambda)$. In particular, $\mathbb{B}(\Lambda)$ can be identified with $\mathcal{K}:=\sqcup_{n\geq 0}\mathcal{K}_n$. Henceforth, we fix such an identification. Let “$-$” be the involutive ring automorphism of $U_v({{\widehat{\mathfrak{sl}}}}_{p{\ell}})$ which is defined by $$\begin{matrix} \overline{v}:=v^{-1},\quad \overline{K_{h}}:=K_{-h},\,\,(h\in P^{\vee}),\\ \overline{E_i}:=E_{i},\quad \overline{F_{i}}:=F_{i},\,\,i=0,1,\cdots,p{\ell}-1. \\ \end{matrix}$$ This gives rise to an involution (still denoted by “$-$”) of $L(\Lambda)$. That is, for $x=P\underline{\emptyset}\in L(\Lambda)$, we set $\overline{x}:=\overline{P} \underline{\emptyset}$. By [@Kas], there exists a unique ${\mathcal A}$-basis $\bigl\{G(\mu)\bigm|\mu\in\mathcal{K}\bigr\}$ of $L(\Lambda)_{{\mathcal A}}$ such that 1. $G(\mu)\equiv\mu\pmod{vL(\Lambda)_{A}}$, 2. $\overline{G(\mu)}=G(\mu)$. The basis $\bigl\{G(\mu)\bigr\}_{\mu\in\mathcal{K}}$ is called the [**lower global crystal basis**]{} of $L(\Lambda)$. Let ${\lambda}\in\mathcal{P}_n, \mu\in\mathcal{K}_n$. Let $d_{{\lambda},\mu}:=[{\widetilde{S}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}:{\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{\mu}]$, and let $d_{{\lambda},\mu}(v)\in{\mathcal A}$ be such that $G(\mu)=\sum_{{\lambda}}d_{{\lambda},\mu}(v){\lambda}$. By a well-known result of Ariki [@A1], $d_{{\lambda},\mu}(1)=d_{{\lambda},\mu}$ for any ${\lambda}\in\mathcal{P}_n, \mu\in\mathcal{K}_n$.
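The identification of the crystal graph of $L(\Lambda)$ with Kleshchev’s good lattice is, in the end, a statement about residues of addable and removable nodes. As a concrete aid, here is a short Python sketch (ours, not from [@MM] or [@AM]) of this bookkeeping for the residue convention used above, $\operatorname{res}(a,b,c)\equiv b-a+(c-1){\ell}\pmod{p{\ell}}$; it computes the quantities $N_i({\lambda})$ entering the Fock space action, but makes no attempt to determine good nodes.

```python
# A small sketch (ours, not from the paper) of the node combinatorics that
# underlies the identification of the crystal graph of L(Lambda) with
# Kleshchev's good lattice.  Residue convention: res(a,b,c) = b - a + (c-1)*ell
# (mod p*ell), matching Lambda = Lambda_0 + Lambda_ell + ... + Lambda_{(p-1)ell}
# above; rows, columns and components are 1-based as in the text.

def addable_nodes(lam):
    """Nodes (a, b, c) that can be added to the p-multipartition lam."""
    nodes = []
    for c, part in enumerate(lam, start=1):
        for a in range(1, len(part) + 2):          # one extra row allowed
            row = part[a - 1] if a <= len(part) else 0
            prev = part[a - 2] if a >= 2 else None  # row above, if any
            if prev is None or row < prev:          # rows stay weakly decreasing
                nodes.append((a, row + 1, c))
    return nodes

def removable_nodes(lam):
    """Nodes (a, b, c) that can be removed from lam."""
    nodes = []
    for c, part in enumerate(lam, start=1):
        for a, row in enumerate(part, start=1):
            nxt = part[a] if a < len(part) else 0
            if row > nxt and row > 0:
                nodes.append((a, row, c))
    return nodes

def residue(node, ell, p):
    a, b, c = node
    return (b - a + (c - 1) * ell) % (p * ell)

def N_i(lam, i, ell, p):
    """#(addable i-nodes) - #(removable i-nodes), as in the Fock-space action."""
    add = sum(1 for x in addable_nodes(lam) if residue(x, ell, p) == i)
    rem = sum(1 for x in removable_nodes(lam) if residue(x, ell, p) == i)
    return add - rem

if __name__ == "__main__":
    lam = ((2, 1), (), (1,))        # a 3-multipartition of 4, p = 3
    ell = 2                          # so residues live in Z/6Z
    print([N_i(lam, i, ell, p=3) for i in range(3 * ell)])
```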
The following six results can be proved by using exactly the same arguments as in the proof of [@Hu2 (2.2),(3.1),(3.2),(3.3),(3.4),(3.5)]. \[thm42\] Let $\mathcal{F}(\Lambda)^{\#}$ be the $U'_v({{\widehat{\mathfrak{sl}}}}_{p{\ell}})$-module which is obtained from $\mathcal{F}(\Lambda)$ by twisting the action by the automorphism $\#$. Let $\phi$ be the linear map $\mathcal{F}(\Lambda)^{\#}\rightarrow\widehat{\mathcal{F}}(\Lambda)$ which is defined by $\sum_{\lambda}f_{\lambda}(v)\lambda\mapsto\sum_{\lambda}f_{\lambda}(v)\widehat{\lambda}$. Then $\phi$ is a $U'_v({{\widehat{\mathfrak{sl}}}}_{p{\ell}})$-module isomorphism. Moreover, $\phi\bigl(L(\Lambda)\bigr)=\widehat{L}(\Lambda)$. Let $U_v({{\widehat{\mathfrak{sl}}}}_{p{\ell}})^{-}$ be the subalgebra of $U_v({{\widehat{\mathfrak{sl}}}}_{p{\ell}})$ generated by $F_{i}, i\in{\mathbb Z}/p{\ell}{\mathbb Z}$. Then there is a $\mathbb{Q}(v)$-linear automorphism (denoted by $\#$) on the irreducible $U_v({{\widehat{\mathfrak{sl}}}}_{p{\ell}})$-module $L(\Lambda)$ such that $$\bigl(\underline{\emptyset}\bigr)^{\#}:=\underline{\emptyset},\,\, \bigl(Px\bigr)^{\#}:=P^{\#}x^{\#},\quad \forall\,\,P\in U_v({{\widehat{\mathfrak{sl}}}}_{p{\ell}})^{-}, x\in L(\Lambda).$$ \[lm45\] For any $x\in L(\Lambda)$, we have $\bigl(\widetilde{F}_{i}x\bigr)^{\#} =\widetilde{F}_{{\ell}+i}x^{\#}$. Let ${\lambda}\in\mathcal{K}_n$ be a Kleshchev $p$-multipartition of $n$ with respect to $(q,1,{\varepsilon},{\varepsilon}^2,\cdots,{\varepsilon}^{p-1})$, and let $ \underline{\emptyset}\overset{r_1}{\twoheadrightarrow}\cdot \overset{r_2}{\twoheadrightarrow}\cdot \cdots\cdots \overset{r_n}{\twoheadrightarrow}{\lambda}$ be a path from $\underline{\emptyset}$ to ${\lambda}$ in Kleshchev’s good lattice with respect to $(q,1,{\varepsilon},{\varepsilon}^2,\cdots,{\varepsilon}^{p-1})$. Then, the sequence $$\underline{\emptyset}\overset{{\ell}+r_1}{\twoheadrightarrow}\cdot \overset{{\ell}+r_2}{\twoheadrightarrow}\cdot \cdots\cdots \overset{{\ell}+r_n}{\twoheadrightarrow}\cdot$$ also defines a path in Kleshchev’s good lattice with respect to $(q,1,{\varepsilon},{\varepsilon}^2,\cdots,$ ${\varepsilon}^{p-1})$. We denote the endpoint by ${\lambda}^{\#}$. For each $\mu\in\mathcal{K}$, $G(\mu)^{\#}=G\bigl(\mu^{\#}\bigr)$. \[lm48\] For any $x\in L(\Lambda)$, we have $\phi\bigl(\widetilde{F}_{i}x\bigr) =\widetilde{F}_{{\ell}+i}\circ\phi(x)$. Recall that $\widehat{L}(\Lambda)$ is also an irreducible highest weight $U_v({{\widehat{\mathfrak{sl}}}}_{p{\ell}})$-module of highest weight $\Lambda$. For any $i_{1},\cdots,i_{k}\in{\mathbb Z}/p{\ell}{\mathbb Z}$, it is easy to see $\widetilde{F}_{i_1}\cdots\widetilde{F}_{i_k}u_{\Lambda}\in vL(\Lambda)_{A}$ if and only if $\widetilde{F}_{i_1+{\ell}}\circ\cdots\circ\widetilde{F}_{i_k+{\ell}}\circ u_{\Lambda}\in v\widehat{L}(\Lambda)_{A}$. By Lemma \[thm42\] and Lemma \[lm48\] we know that the lower global crystal basis of $\widehat{L}(\Lambda)$ is parameterized by $\widehat{\mathcal{K}}:=\bigl\{\widehat{\mu}\bigm|\mu\in\mathcal{K}\bigr\}$. We denote them by $\bigl\{\widehat{G}(\widehat{\mu})\bigm|\mu\in\mathcal{K}\bigr\}$. For any ${\lambda}\in\mathcal{P}, \mu\in\mathcal{K}$, let $\widehat{d}_{{\lambda},\widehat{\mu}}(v)\in{\mathcal A}$ be such that $\widehat{G}(\widehat{\mu})=\sum_{{\lambda}}\widehat{d}_{{\lambda},\widehat{\mu}}(v){\lambda}$. The following three results can be proved by using exactly the same arguments as in the proof of [@Hu2 (3.6),(3.7),(3.8)]. 
\[cor49\] For any ${\lambda}\in\mathcal{P}, \mu\in\mathcal{K}$, we have $\phi\bigl(G(\mu)\bigr)=\widehat{G}(\widehat{\mu})$, and $d_{{\lambda},\mu}(v)= \widehat{d}_{\widehat{{\lambda}},\widehat{\mu}}(v)$. \[thm410\] Let $\varphi\colon L(\Lambda)\rightarrow\widehat{\mathcal{F}}(\Lambda)$ be the map which is defined by $\varphi(x):=\phi\bigl(x^{\#}\bigr)$. Then, if specialized at $v=1$, the $\varphi$ is the restriction of the ${{\widehat{\mathfrak{sl}}}}_{p{\ell}}$-module isomorphism $\mathcal{F}(\Lambda)_{\mathbb{Q}}\rightarrow\widehat{\mathcal{F}}(\Lambda)_{\mathbb{Q}}$ given by ${\lambda}\mapsto{\lambda}$. \[cor411\] For any ${\lambda}\in\mathcal{P}, \mu\in\mathcal{K}$, we have $\varphi\bigl(G(\mu^{\#})\bigr)=\widehat{G}\bigl(\widehat{\mu}\bigr)$ and $d_{{\lambda},\mu^{\#}}(1)=\widehat{d}_{{{\lambda}},\widehat{\mu}}(1)$. [**Proof of Theorem \[main1\]:**]{}It suffices to show that $\operatorname{h}(\mu)=\mu^{\#}$ for any $\mu\in\mathcal{K}_{n}$. It is well-known that $\bigl({\widetilde{S}}_{{\overrightarrow\operatorname{Q}}}^{\widehat{{\lambda}}}\bigr)_{\mathbb{C}(v)}\cong \bigl({\widetilde{S}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}\bigr)_{\mathbb{C}(v)}^{\sigma}$ (see e.g., [@Hu3 (3.7),(5.8)]). Hence in the Grothendieck group of the category of finite-dimensional ${\mathcal{H}}_{q}(p,n)$-modules, $[{\widetilde{S}}_{{\overrightarrow\operatorname{Q}}}^{\widehat{{\lambda}}}]=[\bigl({\widetilde{S}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}\bigr)^{\sigma}]$. By Corollary \[cor49\], Corollary \[cor411\] and Ariki’s result [@A1], we deduce that $$\begin{aligned} {[}{\widetilde{S}}_{{\overrightarrow\operatorname{Q}}}^{\widehat{{\lambda}}}:{{\widetilde{D}}}_{{\overrightarrow\operatorname{Q}}}^{\operatorname{h}(\mu)}{]} &=[\bigl({\widetilde{S}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}\bigr)^{\sigma}:\bigl({\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{\mu} \bigr)^{\sigma}]=[{\widetilde{S}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}:{\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{\mu}]=d_{{\lambda},\mu}\\ &=d_{{\lambda},\mu}(1)=\widehat{d}_{\widehat{{\lambda}},\widehat{\mu}}(1) =d_{\widehat{{\lambda}},\mu^{\#}}(1)=d_{\widehat{{\lambda}},\mu^{\#}} =[{\widetilde{S}}_{{\overrightarrow\operatorname{Q}}}^{\widehat{{\lambda}}}:{\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{\mu^\#}],\end{aligned}$$ for any ${\lambda}\in\mathcal{P}_n$. Taking $\widehat{\lambda}$ to be $\operatorname{h}(\mu)$ or $\mu^{\#}$, we get that $\mu^{\#}\trianglelefteq \operatorname{h}(\mu)$ and $h(\mu)\trianglelefteq\mu^{\#}$, hence $\operatorname{h}(\mu)=\mu^{\#}$, as required. This completes the proof of Theorem \[main1\]. For any ${\lambda}\in\mathcal{P}_n$ and $\mu\in\mathcal{K}_n$, ${[}{\widetilde{S}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}:{{\widetilde{D}}}_{{\overrightarrow\operatorname{Q}}}^{\mu}{]}={[}{\widetilde{S}}_{{\overrightarrow\operatorname{Q}}}^{\widehat{{\lambda}}}:{{\widetilde{D}}}_{{\overrightarrow\operatorname{Q}}}^{\operatorname{h}(\mu)}{]}$. In particular, ${[}{\widetilde{S}}_{{\overrightarrow\operatorname{Q}}}^{\widehat{\mu}}:{{\widetilde{D}}}_{{\overrightarrow\operatorname{Q}}}^{\operatorname{h}(\mu)}{]}=1$ for any $\mu\in\mathcal{K}_n$. Proof of Theorem \[main2\] ========================== In this section, we shall give the proof of Theorem \[main2\]. Our main tools are Dipper–Mathas’s Morita equivalence results ([@DM]) for Ariki–Koike algebras and their connections with type $A$ affine Hecke algebras. 
Throughout this section, we keep the same assumptions and notations as in Theorem \[main2\]. That is, $K=\mathbb{C}$, $q,{\varepsilon}\in \mathbb{C}$ be such that ${\varepsilon}$ is a primitive $p$-th root of unity, ${\varepsilon}^k=q^{\ell}$ is a primitive $d$-th root of unity and $q$ is a primitive $(d\ell)$-th root of unity, and $1<k<p$ is the smallest positive integer such that ${\varepsilon}^{k}\in\langle q\rangle$. Let ${\mathcal{H}}_q({\mathfrak S}_n)$ be the Iwahori–Hecke algebra associated to the symmetric group ${\mathfrak S}_n$. Let $K[X_1^{\pm 1},\cdots,X_n^{\pm 1}]$ be the ring of Laurent polynomials on $n$ indeterminates $X_1,\cdots,X_n$. The type $A$ affine Hecke algebra ${\mathcal{H}}_n^{\operatorname{aff}}$ is the $K$-algebra, which as a $K$-linear space is isomorphic to $${\mathcal{H}}_q({\mathfrak S}_n)\otimes_{K}K[X_1^{\pm 1},\cdots,X_n^{\pm 1}].$$ The algebra structure is given by requiring that ${\mathcal{H}}_q({\mathfrak S}_n)$ and $K[X_1^{\pm 1},\cdots,X_n^{\pm 1}]$ are subalgebras and that $$\label{equ51} T_if-{\null}^{s_i}\negthickspace f T_i=(q-1)\frac{f- {\null}^{s_i}\negthickspace f}{1-X_iX_{i+1}^{-1}}, \quad \forall\,\,f\in K[X_1^{\pm 1},\cdots,X_n^{\pm 1}],$$ Here $s_i\in{\mathfrak S}_n$ act on $K[X_1^{\pm 1},\cdots,X_n^{\pm 1}]$ by permuting $X_i$ and $X_{i+1}$. Note that the relation (\[equ51\]) is equivalent to $$\begin{aligned} & T_iX_{i}T_i=qX_{i+1},\quad \text{$\forall\,\, i$ with $1\leq i<n$,}\\ & T_iX_j=X_j T_i, \quad \forall\,\,j\not\in\{i, i+1\},\end{aligned}$$ Let $Q_1,\cdots,Q_p$ be elements of $K$. Let ${\mathcal{H}}(p,n):={\mathcal{H}}_{q,Q_1,\cdots,Q_p}(p,n)$ be the Ariki–Koike algebras with parameters $\{q,Q_1,\cdots,Q_p\}$. It is well-known that there is a surjective $K$-algebra homomorphism $\varphi:\,{\mathcal{H}}_n^{\operatorname{aff}}\twoheadrightarrow{\mathcal{H}}(p,n)$ which is defined on generators by $$T_i\mapsto T_i,\quad X_j\mapsto L_j,\,\,\forall\,1\leq i<n,\,\,\forall\,1\leq j\leq n,$$ where $L_j:=q^{1-j}T_{j-1}\cdots T_1T_0T_1\cdots T_{j-1}$ (the $j$-th Murphy operator). As a consequence, every simple ${\mathcal{H}}(p,n)$-module is a simple ${\mathcal{H}}_n^{\operatorname{aff}}$-module. We shall use Dipper–Mathas’s explicit construction of Morita equivalence for Ariki–Koike algebras. To this end, we need some notations and definitions. Let $\{s_1,s_2,\cdots,s_{n-1}\}$ be the set of basic transpositions in ${\mathfrak S}_n$. A word $w=s_{i_1}\cdots s_{i_k}$ for $w\in{\mathfrak S}_{n}$ is a reduced expression if $k$ is minimal; in this case we say that $w$ has length $k$ and we write $\ell(w)=k$. Given a reduced expression $s_{i_1}\cdots s_{i_k}$ for $w\in{\mathfrak S}_n$, we write $T_w=T_{i_1}\cdots T_{i_k}$, then $T_w$ depends only on $w$ and not on the choice of reduced expression. It is well-known that ${\mathcal{H}}_{{q}}({\mathfrak S}_n)$ is a free module with basis $\{T_w|w\in{\mathfrak S}_n\}$. For each integer $0\leq a\leq n$, we define $$w_{n-a,a}=\underbrace{(s_{n-a}\cdots s_{n-1})}_{\text{$a$ times}} \underbrace{(s_{n-a-1}\cdots s_{n-2})}_{\text{$a$ times}}\cdots \underbrace{(s_{1}\cdots s_{a})}_{\text{$a$ times}}$$ if $a\not\in\{0,n\}$; or $w_{n-a,a}=1$ if $a\in\{0,n\}$. Let $s$ be an integer with $1\leq s\leq p$ and such that $$\prod_{1\leq i\leq s<j\leq p}\prod_{-n<a<n}(q^aQ_i-Q_j)\neq 0,$$ in $K$. 
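Before recalling the Dipper–Mathas construction, here is a quick numerical sanity check (ours) of the above non-vanishing condition for one concrete parameter choice, namely $p=4$, ${\ell}=1$, $k=2$, $d=2$, $q=-1$, ${\varepsilon}=\sqrt{-1}$, with $s=2$ separating the two $q$-orbits; the construction recalled next requires exactly this non-vanishing.

```python
# A quick numerical check (ours) of the separation condition just displayed,
# for one concrete choice of parameters: p = 4, ell = 1, k = 2, d = 2, so
# q = -1 is a primitive 2nd root of unity, eps = 1j is a primitive 4th root
# of unity, and eps**2 = q.  Ordering the parameters as Q = (1, eps**2, eps,
# eps**3) and taking s = 2 separates the first q-orbit {1, -1} from the
# second one {i, -i}.

def separation_product(q, Q, s, n):
    prod = 1.0 + 0.0j
    for i in range(s):                 # Q_1, ..., Q_s   (0-based here)
        for j in range(s, len(Q)):     # Q_{s+1}, ..., Q_p
            for a in range(-(n - 1), n):
                prod *= q**a * Q[i] - Q[j]
    return prod

if __name__ == "__main__":
    eps = 1j                            # primitive 4th root of unity
    q = complex(-1)                     # primitive 2nd root of unity, q = eps**2
    Q = (1, eps**2, eps, eps**3)        # (Q_1, Q_2 | Q_3, Q_4), s = 2
    value = separation_product(q, Q, s=2, n=3)
    print(abs(value) > 1e-8)            # True: the condition holds
```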
[([@DM])]{} For each integer $0\leq b\leq n$, let $$\begin{aligned} u_{n-b}^{-}:&=\prod_{t=1}^{s}(L_1-Q_t)(L_2-Q_t)\cdots (L_{n-b}-Q_t),\\ u_{b}^{+}:&=\prod_{t=s+1}^{p}(L_1-Q_t)(L_2-Q_t)\cdots (L_{b}-Q_t),\\ v_{b}:&=u_{n-b}^{-}T_{w_{n-b,b}}u_{b}^{+},\,\,V^b:=v_b{\mathcal{H}}(p,n). \end{aligned}$$ Let ${\mathcal{H}}(s,b)={\mathcal{H}}_{q,Q_1,\cdots,Q_s}(s,b)$ (resp., ${\mathcal{H}}(p-s,n-b)={\mathcal{H}}_{q,Q_{s+1},\cdots,Q_p}(p-s,n-b)$) be the Ariki–Koike algebra with parameters $\{q,Q_1,\cdots,Q_s\}$ and of size $b$ (resp., with parameters $\{q,Q_{s+1},\cdots,Q_p\}$ and of size $n-b$). Following [@DM], let $\hat{\Pi}_{s,b}$ be the functor from $\operatorname{mod}_{{\mathcal{H}}(s,b)\otimes{\mathcal{H}}(p-s,n-b)}$ to $\operatorname{mod}_{{\mathcal{H}}(p,n)}$ given by $$\hat{\Pi}_{s,b}(X):=X\otimes_{{\mathcal{H}}(s,b)\otimes{\mathcal{H}}(p-s,n-b)}V^{b},\,\,\, \forall\,X\in \operatorname{mod}_{{\mathcal{H}}(s,b)\otimes{\mathcal{H}}(p-s,n-b)}.$$ Note that here the left $\bigl({\mathcal{H}}(s,b)\otimes{\mathcal{H}}(p-s,n-b)\bigr)$-action on $V^{b}$ is well-defined because of the following very useful result from [@DM (3.4)]. \[lm54\] [([@DM (3.4)])]{} Suppose that $0\leq b\leq n$. \(i) $$T_i v_b=\begin{cases} v_b T_{i+b}, &\text{if $1\leq i<n-b$,}\\ v_bT_{i-n+b}, &\text{if $n-b<i\leq n$.}\end{cases}$$ \(ii) $$L_k v_b=\begin{cases} v_b L_{k+b}, &\text{if $1\leq k\leq n-b$,}\\ v_bL_{k-n+b}, &\text{if $n-b+1\leq k\leq n$.}\end{cases}$$ Let ${\overrightarrow\operatorname{Q}}_1:=(Q_1,\cdots,Q_s)$, ${\overrightarrow\operatorname{Q}}_2:=(Q_{s+1},\cdots,Q_p)$. Let ${\overrightarrow\operatorname{Q}}:=({\overrightarrow\operatorname{Q}}_1,{\overrightarrow\operatorname{Q}}_2)=(Q_1,\cdots,Q_p)$. Suppose that $D^{{\lambda}^{[1]}}_{{\overrightarrow\operatorname{Q}}_1}\neq 0$ (resp., $D^{{\lambda}^{[2]}}_{{\overrightarrow\operatorname{Q}}_2}\neq 0$) is an irreducible ${\mathcal{H}}(s,b)$-module (resp., ${\mathcal{H}}(p-s,n-b)$-module), where ${\lambda}^{[1]}$ (resp., ${\lambda}^{[2]}$) is an $s$-multipartition of $b$ (resp., $(p-s)$-multipartition of $n-b$). Let ${\lambda}:=({\lambda}^{[1]},{\lambda}^{[2]})$ (concatenation of ordered tuples), which is a $p$-multipartition of $n$. By [@DM], $D^{{\lambda}}_{{\overrightarrow\operatorname{Q}}}\neq 0$ is an irreducible ${\mathcal{H}}(p,n)$-module, and $$\hat{\Pi}_{s,b}\Bigl(D^{{\lambda}^{[1]}}_{{\overrightarrow\operatorname{Q}}_1}\otimes D^{{\lambda}^{[2]}}_{{\overrightarrow\operatorname{Q}}_2}\Bigr)=\Bigl(D^{{\lambda}^{[1]}}_{{\overrightarrow\operatorname{Q}}_1}\otimes D^{{\lambda}^{[2]}}_{{\overrightarrow\operatorname{Q}}_2}\Bigr)\otimes_{{\mathcal{H}}(s,b)\otimes{\mathcal{H}}(p-s,n-b)}V^b\cong D^{{\lambda}}_{{\overrightarrow\operatorname{Q}}}.$$ Let ${\mathcal{H}}_b^{\operatorname{aff}}$ (resp., ${\mathcal{H}}_{n-b}^{\operatorname{aff}}$) be the standard parabolic subalgebra of ${\mathcal{H}}_n^{\operatorname{aff}}$ generated by $T_1,\cdots,T_{b-1},X_1,\cdots,X_b$ (resp., by $T_{b+1},\cdots,T_{n-1},X_{b+1},\cdots,X_n$). Then $D^{{\lambda}^{[1]}}_{{\overrightarrow\operatorname{Q}}_1}$ (resp., $D^{{\lambda}^{[2]}}_{{\overrightarrow\operatorname{Q}}_2}$) naturally becomes an irreducible ${\mathcal{H}}_{b}^{\operatorname{aff}}$-module (resp., ${\mathcal{H}}_{n-b}^{\operatorname{aff}}$-module). 
We have that \[thm55\] There is an ${\mathcal{H}}_n^{\operatorname{aff}}$-module isomorphism $$D^{{\lambda}}_{{\overrightarrow\operatorname{Q}}}\cong\operatorname{Ind}_{{\mathcal{H}}_{b}^{\operatorname{aff}}\otimes{\mathcal{H}}_{n-b}^{\operatorname{aff}}}^{{\mathcal{H}}_n^{\operatorname{aff}}}\Bigl( D^{{\lambda}^{[1]}}_{{\overrightarrow\operatorname{Q}}_1}\otimes D^{{\lambda}^{[2]}}_{{\overrightarrow\operatorname{Q}}_2}\Bigr).$$ [Proof:]{} By our previous discussion, it suffices to show that $$\begin{aligned}&\Bigl(D^{{\lambda}^{[1]}}_{{\overrightarrow\operatorname{Q}}_1}\otimes D^{{\lambda}^{[2]}}_{{\overrightarrow\operatorname{Q}}_2}\Bigr)\otimes_{{\mathcal{H}}_{b}^{\operatorname{aff}}\otimes{\mathcal{H}}_{n-b}^{\operatorname{aff}}}{\mathcal{H}}_{n}^{\operatorname{aff}} \cong\\ &\qquad\qquad\Bigl(D^{{\lambda}^{[1]}}_{{\overrightarrow\operatorname{Q}}_1}\otimes D^{{\lambda}^{[2]}}_{{\overrightarrow\operatorname{Q}}_2}\Bigr)\otimes_{{\mathcal{H}}(s,b)\otimes{\mathcal{H}}(p-s,n-b)}v_b{\mathcal{H}}(p,n), \end{aligned}$$ where $v_b{\mathcal{H}}(p,n)$ is regarded as a right ${\mathcal{H}}_{n}^{\operatorname{aff}}$-module via the natural surjective homomorphism $\varphi:\,{\mathcal{H}}_{n}^{\operatorname{aff}}\twoheadrightarrow{\mathcal{H}}(p,n)$. In fact, by Lemma \[lm54\], it is easy to see that the following map $$\bigl(x\otimes y\bigr)\otimes_{{\mathcal{H}}_{b}^{\operatorname{aff}}\otimes{\mathcal{H}}_{n-b}^{\operatorname{aff}}} h\mapsto \bigl(x\otimes y\bigr)\otimes_{{\mathcal{H}}(s,b)\otimes{\mathcal{H}}(p-s,n-b)} v_b h$$ extends naturally to a well-defined surjective right ${\mathcal{H}}_{n}^{\operatorname{aff}}$-module homomorphism. Comparing their dimensions (see [@DM (4.8)]) now proves the theorem. Now we suppose that $\operatorname{Q}=\operatorname{Q}_1\sqcup\operatorname{Q}_2\sqcup\cdots\sqcup\operatorname{Q}_{\kappa}$ (disjoint union) such that $Q_i, Q_j$ are in the same $q$-orbit only if $Q_i, Q_j\in\operatorname{Q}_c$ for some $1\leq c\leq\kappa$. For each integer $i$ with $1\leq i\leq\kappa$, let $D^{{\lambda}^{[i]}}_{{\overrightarrow\operatorname{Q}}_i}\neq 0$ be an irreducible ${\mathcal{H}}(p_i,b_i)$-module, where $p_i=|{\overrightarrow\operatorname{Q}}_i|$, ${\lambda}^{[i]}$ is a $p_i$-multipartition of $b_i$, and $\sum_{i=1}^{\kappa}b_i=n$. Let ${\lambda}:=({\lambda}^{[1]},\cdots,{\lambda}^{[\kappa]})$ (concatenation of ordered tuples), which is a $p$-multipartition of $n$. \[cor56\] With the above assumptions and notations, we have an ${\mathcal{H}}_n^{\operatorname{aff}}$-module isomorphism $$D^{{\lambda}}_{{\overrightarrow\operatorname{Q}}}\cong\operatorname{Ind}_{{\mathcal{H}}_{b_1}^{\operatorname{aff}}\otimes\cdots\otimes{\mathcal{H}}_{b_{\kappa}}^{\operatorname{aff}}}^{{\mathcal{H}}_n^{\operatorname{aff}}}\Bigl( D^{{\lambda}^{[1]}}_{{\overrightarrow\operatorname{Q}}_1}\otimes\cdots\otimes D^{{\lambda}^{[\kappa]}}_{{\overrightarrow\operatorname{Q}}_{\kappa}}\Bigr).$$ [Proof:]{} This follows from Proposition \[thm55\] and the associativity of the tensor product induction functor. Now we return to our setup in Theorem \[main2\]. For each $1\leq i\leq k$, we set ${\overrightarrow\operatorname{Q}}_i=({\varepsilon}^{i-1},{\varepsilon}^{k+i-1},\cdots, {\varepsilon}^{(d-1)k+i-1})$. Then $\operatorname{Q}=\operatorname{Q}_1\sqcup\cdots\sqcup\operatorname{Q}_k$ is a partition of the parameter set $\operatorname{Q}$ into different $q$-orbits.
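Since $k$ is the smallest positive integer with ${\varepsilon}^{k}\in\langle q\rangle$, two powers of ${\varepsilon}$ lie in the same $q$-orbit precisely when their exponents are congruent modulo $k$. The following tiny Python sketch (ours) computes this orbit decomposition on exponents and compares it with the blocks $\operatorname{Q}_i$, for the same toy instance $p=4$, $k=2$, $d=2$ used above.

```python
# A tiny illustration (ours) of the q-orbit decomposition just recalled.
# Since eps^j and eps^j' lie in the same q-orbit exactly when k divides
# j - j', the orbits can be handled exactly via exponents of eps modulo p:
# passing to the next orbit element adds k to the exponent.

def q_orbits_of_parameters(p, k):
    """Orbits of 'add k mod p' on the exponents {0, 1, ..., p-1}."""
    seen, orbits = set(), []
    for start in range(p):
        if start in seen:
            continue
        orbit, e = [], start
        while e not in orbit:
            orbit.append(e)
            e = (e + k) % p
        seen.update(orbit)
        orbits.append(sorted(orbit))
    return orbits

if __name__ == "__main__":
    p, k, d = 4, 2, 2
    print(q_orbits_of_parameters(p, k))
    # expected blocks Q_i = {i-1, k+i-1, ..., (d-1)k+i-1} as exponent sets:
    print([sorted((i - 1 + t * k) % p for t in range(d)) for i in range(1, k + 1)])
```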
Let ${\overrightarrow\operatorname{Q}}=\bigl({\overrightarrow\operatorname{Q}}_1,{\overrightarrow\operatorname{Q}}_2,\cdots,{\overrightarrow\operatorname{Q}}_{k}\bigr)$ (concatenation of ordered tuples). For each $p$-multipartition ${\lambda}=({\lambda}^{(1)},\cdots,{\lambda}^{(p)})$ of $n$, we write $${\lambda}^{[i]}=({\lambda}^{((i-1)d+1)},{\lambda}^{((i-1)d+2)},\cdots,{\lambda}^{(id)}),\,\,\,\text{for}\,\,i=1,2,\cdots,k.$$ and recall $\theta$ is the map ${\lambda}\mapsto ({\lambda}^{[1]},\cdots,{\lambda}^{[k]})$. Let $n_i=|{\lambda}^{[i]}|$ for each $1\leq i\leq k$. [**Proof of Theorem \[main2\]:**]{}We define $${\overrightarrow\operatorname{Q}}_{1}^{\ast}:=({\varepsilon}^k,{\varepsilon}^{2k},\cdots,{\varepsilon}^{(d-1)k},1).$$ By Lemma \[lm24\], we have that $$\Bigl({\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}\Bigr)^{\sigma}\cong{\widetilde{D}}_{({\overrightarrow\operatorname{Q}}_{2},{\overrightarrow\operatorname{Q}}_{3},\cdots,{\overrightarrow\operatorname{Q}}_{k},{\overrightarrow\operatorname{Q}}_{1}^{\ast})}^{{\lambda}}.$$ Applying Corollary \[cor56\], we get that $$\begin{aligned} \Bigl({\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}\Bigr)^{\sigma}&\cong{\widetilde{D}}_{({\overrightarrow\operatorname{Q}}_{2},{\overrightarrow\operatorname{Q}}_{3},\cdots,{\overrightarrow\operatorname{Q}}_{k},{\overrightarrow\operatorname{Q}}_{1}^{\ast})}^{{\lambda}}\\ &\cong \operatorname{Ind}_{{\mathcal{H}}_{n_1}^{\operatorname{aff}}\otimes\cdots\otimes{\mathcal{H}}_{n_{k-1}}^{\operatorname{aff}}\otimes{\mathcal{H}}_{n_{k}}^{\operatorname{aff}}}^{{\mathcal{H}}_n^{\operatorname{aff}}}\Bigl( D^{{\lambda}^{[1]}}_{{\overrightarrow\operatorname{Q}}_2}\otimes D^{{\lambda}^{[2]}}_{{\overrightarrow\operatorname{Q}}_3}\otimes\cdots\otimes D^{{\lambda}^{[k-1]}}_{{\overrightarrow\operatorname{Q}}_{k}}\otimes D^{{\lambda}^{[k]}}_{{\overrightarrow\operatorname{Q}}_1^{\ast}}\Bigr).\end{aligned}$$ By [@V (5.12)], the righthand side module has the same composition factors as $$\operatorname{Ind}_{{\mathcal{H}}_{n_k}^{\operatorname{aff}}\otimes{\mathcal{H}}_{n_{1}}^{\operatorname{aff}}\otimes\cdots\otimes{\mathcal{H}}_{n_{k-1}}^{\operatorname{aff}}}^{{\mathcal{H}}_n^{\operatorname{aff}}}\Bigl( D^{{\lambda}^{[k]}}_{{\overrightarrow\operatorname{Q}}_1^{\ast}}\otimes D^{{\lambda}^{[1]}}_{{\overrightarrow\operatorname{Q}}_2}\otimes D^{{\lambda}^{[2]}}_{{\overrightarrow\operatorname{Q}}_3}\otimes\cdots\otimes D^{{\lambda}^{[k-1]}}_{{\overrightarrow\operatorname{Q}}_{k}}\Bigr).$$ In particular, as both modules are irreducible, these two modules are in fact isomorphic to each other. 
Therefore, $$\Bigl({\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}\Bigr)^{\sigma}\cong\operatorname{Ind}_{{\mathcal{H}}_{n_k}^{\operatorname{aff}}\otimes{\mathcal{H}}_{n_{1}}^{\operatorname{aff}}\otimes\cdots\otimes{\mathcal{H}}_{n_{k-1}}^{\operatorname{aff}}}^{{\mathcal{H}}_n^{\operatorname{aff}}}\Bigl( D^{{\lambda}^{[k]}}_{{\overrightarrow\operatorname{Q}}_1^{\ast}}\otimes D^{{\lambda}^{[1]}}_{{\overrightarrow\operatorname{Q}}_2}\otimes D^{{\lambda}^{[2]}}_{{\overrightarrow\operatorname{Q}}_3}\otimes\cdots\otimes D^{{\lambda}^{[k-1]}}_{{\overrightarrow\operatorname{Q}}_{k}}\Bigr).$$ Now again by Lemma \[lm24\], $$D^{{\lambda}^{[k]}}_{{\overrightarrow\operatorname{Q}}_1^{\ast}}\cong \Bigl({\widetilde{D}}_{{\overrightarrow\operatorname{Q}}_1}^{{\lambda}^{[k]}}\Bigr)^{\sigma'},$$ where $\sigma'$ denotes the $K$-algebra automorphism of the Ariki–Koike algebra $${\mathcal{H}}_{q}(d,n_k):={\mathcal{H}}_{q,1,{\varepsilon}',\cdots,({\varepsilon}')^{d-1}}(d,n_k)$$ (where ${\varepsilon}'={\varepsilon}^k$) which is defined on generators by ${\sigma}'(T_0)={\varepsilon}' T_0, {\sigma}'(T_i)=T_i$, for $i=1,2,\cdots,n_k-1$. By Theorem \[main1\] and Corollary \[maincor\], we deduce that $$D^{{\lambda}^{[k]}}_{{\overrightarrow\operatorname{Q}}_1^{\ast}}\cong \Bigl({\widetilde{D}}_{{\overrightarrow\operatorname{Q}}_1}^{{\lambda}^{[k]}}\Bigr)^{\sigma'}\cong D^{\operatorname{h}'({\lambda}^{[k]})}_{{\overrightarrow\operatorname{Q}}_1},$$ where $\operatorname{h}'$ is as defined in Corollary \[maincor\]. Therefore $$\begin{aligned} \Bigl({\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{{\lambda}}\Bigr)^{\sigma}&\cong\operatorname{Ind}_{{\mathcal{H}}_{n_k}^{\operatorname{aff}}\otimes{\mathcal{H}}_{n_{1}}^{\operatorname{aff}}\otimes\cdots\otimes{\mathcal{H}}_{n_{k-1}}^{\operatorname{aff}}}^{{\mathcal{H}}_n^{\operatorname{aff}}}\Bigl( D^{{\lambda}^{[k]}}_{{\overrightarrow\operatorname{Q}}_1^{\ast}}\otimes D^{{\lambda}^{[1]}}_{{\overrightarrow\operatorname{Q}}_2}\otimes D^{{\lambda}^{[2]}}_{{\overrightarrow\operatorname{Q}}_3}\otimes\cdots\otimes D^{{\lambda}^{[k-1]}}_{{\overrightarrow\operatorname{Q}}_{k}}\Bigr)\\ &\cong\operatorname{Ind}_{{\mathcal{H}}_{n_k}^{\operatorname{aff}}\otimes{\mathcal{H}}_{n_{1}}^{\operatorname{aff}}\otimes\cdots\otimes{\mathcal{H}}_{n_{k-1}}^{\operatorname{aff}}}^{{\mathcal{H}}_n^{\operatorname{aff}}}\Bigl( D^{\operatorname{h}'({\lambda}^{[k]})}_{{\overrightarrow\operatorname{Q}}_1}\otimes D^{{\lambda}^{[1]}}_{{\overrightarrow\operatorname{Q}}_2}\otimes D^{{\lambda}^{[2]}}_{{\overrightarrow\operatorname{Q}}_3}\otimes\cdots\otimes D^{{\lambda}^{[k-1]}}_{{\overrightarrow\operatorname{Q}}_{k}}\Bigr)\\ &\cong {\widetilde{D}}_{{\overrightarrow\operatorname{Q}}}^{(\operatorname{h}'({\lambda}^{[k]}),{\lambda}^{[1]},\cdots,{\lambda}^{[k-1]})}. \end{aligned}$$ It follows that $$\theta\Bigl(\operatorname{h}({\lambda})\Bigr)=\Bigl(\operatorname{h}'({\lambda}^{[k]}),{\lambda}^{[1]},\cdots,{\lambda}^{[k-1]}\Bigr),$$ as required. This completes the proof of Theorem \[main2\]. Closed formula for the number of simple ${\mathcal{H}}_q(p,p,n)$-modules ======================================================================== The purpose of this section is to give the last two main results (Theorem \[mainthm3\] and Theorem \[mainthm4\]) of this paper, which yield an explicit formula for the number of simple modules over the cyclotomic Hecke algebra of type $G(p,p,n)$ in the non-separated case.
Note that in the separated case one can easily write down an explicit formula by using the result [@Hu3 (5.7)]. In the non-separated case we shall apply Theorem \[main1\] and Theorem \[main2\] as well as Naito–Sagaki’s work ([@NS1],[@NS2]) on Lakshmibai–Seshadri paths fixed by diagram automorphisms. Recall our definitions of ${K}_n$ and $\operatorname{h}$ in the second paragraph of Section 3. For each ${\lambda}\in{K}_n$, let $$o_{\operatorname{h}}({\lambda}):=\min\bigl\{1\leq m\leq p\bigm|\operatorname{h}^m({\lambda})={\lambda}\bigr\}.$$ For each integer $m$ with $1\leq m\leq p$, we define $$\begin{aligned} &\widetilde{\Sigma}(m):=\bigl\{{\lambda}\in{K}_n\bigm|\operatorname{h}^{m}({\lambda})={\lambda}\bigr\},\,\,\, \widetilde{N}(m):=\#\widetilde{\Sigma}(m),\\ & N(m):=\#\bigl\{{\lambda}\in{K}_n\bigm|o_{\operatorname{h}}({\lambda})=m\bigr\}. \end{aligned}$$ We use the notation $\#\operatorname{Irr}\bigl({\mathcal{H}}_q(p,p,n)\bigr)$ (resp., $\#\operatorname{Irr}\bigl({\mathcal{H}}_{q}(p,n)\bigr)$) to denote the number of simple ${\mathcal{H}}_q(p,p,n)$-modules (resp., simple ${\mathcal{H}}_{q}(p,n)$-modules). By Lemma \[thm31\], we know that $$\label{equa62}\begin{split} \#\operatorname{Irr}\bigl({\mathcal{H}}_q(p,p,n)\bigr)&=\frac{1}{p}\Bigl\{\#\operatorname{Irr}\bigl({\mathcal{H}}_{q}(p,n)\bigr) -\sum_{1\leq m<p, m|p}N(m)\Bigr\}\\ &\qquad+\sum_{1\leq m<p,m|p}\frac{N(m)}{m}\frac{p}{m}.\end{split}$$ Note that by [@AM], the number $\#\operatorname{Irr}\bigl({\mathcal{H}}_{q}(p,n)\bigr)$ is explicitly known. Therefore, to get a formula for $\#\operatorname{Irr}\bigl({\mathcal{H}}_q(p,p,n)\bigr)$, it suffices to derive a formula for ${N}(\widetilde{m})$ for each integer $1\leq \widetilde{m}<p$ satisfying $\widetilde{m}|p$. Let $\mu$ be the Möbius function $\mu:\,\mathbb{N}\rightarrow\{0,1,-1\}$ which is given by $$\mu(a)=\begin{cases} 1, &\text{if $a=1$,}\\ (-1)^{s}, &\begin{matrix}\text{if $a=p_1\cdots p_s$, where $\{p_i\}_{1\leq i\leq s}$ are}\\[-4pt] \text{pairwise different prime numbers,}\end{matrix}\\ 0 &\text{otherwise} \end{cases}$$ Since $\widetilde{N}(m)=\sum_{1\leq a\leq m, a|m}N(a)$, it follows from Möbius inversion formula that $$\label{equa63} N(\widetilde{m})=\sum_{1\leq m\leq \widetilde{m}, m|\widetilde{m}}\mu(\widetilde{m}/m)\widetilde{N}(m).$$ Therefore, it suffices to derive a formula for $\widetilde{N}(m)$ for each integer $1\leq m<p$ satisfying $m|p$. To this end, we have to use Naito–Sagaki’s work ([@NS1],[@NS2]) on Lakshmibai–Seshadri paths fixed by diagram automorphisms. For the moment, we assume the following setup. That is, $K=\mathbb{C}$, $q,{\varepsilon}\in \mathbb{C}$ be such that ${\varepsilon}=q^{\ell}$ is a primitive $p$-th root of unity, and $q$ is a primitive $p\ell$-th root of unity. We identify $K_n$ with $\mathcal{K}_n$, the set of Kleshchev $p$-multipartitions with respect to $\{q,1,{\varepsilon},\cdots,{\varepsilon}^{p-1}\}$. Let $\mathfrak{g}$ be the Kac–Moody algebra over $\mathbb{C}$ associated to a symmetrizable generalized Cartan matrix $(a_{i,j})_{i,j\in I}$ of finite size. Let $\mathfrak{h}$ be its Cartan subalgebra, and $W\subset\operatorname{GL}(\mathfrak{h}^{\ast})$ be its Weyl group. Let $\{\alpha_i^{\vee}\}_{0\leq i\leq n-1}$ be the set of simple coroots in $\mathfrak{h}$. Let $\mathcal{X}:=\bigl\{\Lambda\in\mathfrak{h}^{\ast}\bigm| \Lambda(\alpha_i^{\vee})\in{\mathbb Z},\,\forall\,0\leq i<n\bigr\}$ be the weight lattice. 
Let $\mathcal{X}^{+}:=\bigl\{\Lambda\in\mathcal{X}\bigm|\Lambda(\alpha_i^{\vee})\geq 0,\,\forall\,0\leq i<n\bigr\}$ be the set of integral dominant weights. Let $\mathcal{X}_{\mathbb{R}}:=\mathcal{X}\otimes_{{\mathbb Z}}\mathbb{R}$, where $\mathbb{R}$ is the real number field. Assume that $\Lambda\in\mathcal{X}^{+}$. P. Littelmann introduced ([@Li1], [@Li2]) the notion of Lakshmibai–Seshadri paths (L-S paths for short) of class $\Lambda$, which are piecewise linear, continuous maps $\pi:[0,1]\rightarrow\mathcal{X}_{\mathbb{R}}$ parameterized by pairs $(\underline{\nu},\underline{a})$ of a sequence $\underline{\nu}: \nu_1>\nu_2>\cdots>\nu_s$ of elements of $W\Lambda$, where $>$ is the “relative Bruhat order” (see [@Li2 Section 4]) on $W\Lambda$, and a sequence $\underline{a}: 0=a_0<a_1<\cdots<a_s=1$ of rational numbers satisfying a certain condition, called the chain condition. The set $\mathbb{B}(\Lambda)$ of all L-S paths of class $\Lambda$ is called the path model for the irreducible integrable highest weight module $L(\Lambda)$ of highest weight $\Lambda$ over $\mathfrak{g}$. It is a remarkable fact that $\mathbb{B}(\Lambda)$ has a canonical crystal structure which is isomorphic to the crystal associated to the irreducible integrable highest weight module of highest weight $\Lambda$ over the quantum affine algebra $U'_v(\mathfrak{g})$. Now let $\mathfrak{g}={\widehat{\mathfrak{sl}}}_{p{\ell}}$, the affine Kac–Moody algebra of type $A_{p{\ell}-1}^{(1)}$. The generalized Cartan matrix $(a_{i,j})_{i,j\in I}$ of $\mathfrak{g}$ is indexed by the finite set $I:={\mathbb Z}/p{\ell}{\mathbb Z}$. Let $1\leq m<p$ be an integer satisfying $m|p$. Let $\omega:\,I\rightarrow I$ be an automorphism of order $p/m$ defined by $\bar{i}=i+p{\ell}{\mathbb Z}\mapsto \bar{i}-\overline{m{\ell}}=i-m{\ell}+p{\ell}{\mathbb Z}$ for any $\bar{i}\in I$. Clearly $\omega$ is a Dynkin diagram automorphism in the sense of [@NS1 §1.2] (i.e., satisfying $a_{\omega(i),\omega(j)}=a_{i,j}$, $\forall\,i,j\in I$). By [@FSS], $\omega$ induces a Lie algebra automorphism (which is called a diagram outer automorphism) $\omega\in\operatorname{Aut}(\mathfrak{g})$ of order $p/m$ and a linear automorphism $\omega^{\ast}\in\operatorname{GL}(\mathfrak{h}^{\ast})$ of order $p/m$. Following [@FRS] and [@NS1 §1.3], we set $c_{i,j}:=\sum\limits_{t=0}^{N_j-1}a_{i,\omega^t(j)}$, where $N_j:=\#\bigl\{\omega^t(j)\bigm|t\geq 0\bigr\}$, $i,j\in I$. We choose a complete set $\widehat{I}$ of representatives of the $\omega$-orbits in $I$, and set $\check{I}:=\bigl\{i\in\widehat{I}\bigm|c_{i,i}>0\bigr\}$. We put $\hat{a}_{i,j}:=2c_{i,j}/c_{j}$ for $i,j\in\widehat{I}$, where $c_i:=c_{ii}$ if $i\in\check{I}$, and $c_i:=2$ otherwise. Then $(\hat{a}_{i,j})_{i,j\in\widehat{I}}$ is a symmetrizable Borcherds–Cartan matrix in the sense of [@Bo], and (if $\check{I}\neq\emptyset$) its submatrix $(\hat{a}_{i,j})_{i,j\in\check{I}}$ is a generalized Cartan matrix of affine type. Let $\widehat{\mathfrak{g}}$ be the generalized Kac–Moody algebra over $\mathbb C$ associated to $(\hat{a}_{i,j})_{i,j\in\widehat{I}}$, with Cartan subalgebra $\widehat{\mathfrak{h}}$ and Chevalley generators $\{\hat{x}_i,\hat{y}_i\}_{i\in\widehat{I}}$. The orbit Lie algebra $\check{\mathfrak{g}}$ is defined to be the subalgebra of $\widehat{\mathfrak{g}}$ generated by $\widehat{\mathfrak{h}}$ and $\hat{x}_i,\hat{y}_i$ for $i\in\check{I}$, which is a usual Kac–Moody algebra.
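The folding procedure just described is easy to carry out numerically. The following Python sketch (ours) builds the Cartan matrix of type $A_{p{\ell}-1}^{(1)}$, folds it along $\omega$, and checks, for one choice of $(p,{\ell},m)$ with $m{\ell}>1$, that the resulting matrix $(\hat{a}_{i,j})$ is again of type $A_{m{\ell}-1}^{(1)}$, in line with the identification of the orbit Lie algebra given in the lemma below.

```python
# A numerical sanity check (ours) of the folding construction just described:
# start from the Cartan matrix of type A_{p*ell-1}^{(1)} on I = Z/(p*ell)Z and
# fold it along omega(i) = i - m*ell.  For m*ell > 1 every diagonal entry
# c_{i,i} equals 2, so hat a_{i,j} = 2*c_{i,j}/c_{j,j} = c_{i,j}, and the
# folded matrix is the Cartan matrix of type A_{m*ell-1}^{(1)}.

def affine_A_cartan(N):
    """Cartan matrix of type A_{N-1}^{(1)} on Z/NZ (assume N >= 3 here)."""
    return [[2 if i == j else (-1 if (i - j) % N in (1, N - 1) else 0)
             for j in range(N)] for i in range(N)]

def folded_matrix(p, ell, m):
    N = p * ell
    a = affine_A_cartan(N)

    def omega(i):
        return (i - m * ell) % N

    reps = list(range(m * ell))      # orbit representatives
    orbit_size = p // m              # every omega-orbit has this size

    def c(i, j):
        col, total = j, 0
        for _ in range(orbit_size):
            total += a[i][col]
            col = omega(col)
        return total

    return [[2 * c(i, j) // c(j, j) for j in reps] for i in reps]

if __name__ == "__main__":
    p, ell, m = 6, 1, 3              # p*ell = 6, m*ell = 3
    print(folded_matrix(p, ell, m) == affine_A_cartan(m * ell))   # True
```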
With the above assumptions and notations, we have that $$\check{\mathfrak{g}}=\begin{cases} {\widehat{\mathfrak{sl}}}_{m{\ell}}, &\text{if $m{\ell}>1$,}\\ \mathbb{C}, &\text{if $m={\ell}=1$.} \end{cases}$$ [Proof:]{} This follows from direct verification. We define $\bigl(\mathfrak{h}^{\ast}\bigr)^{\circ}:=\bigl\{\Lambda\in\mathfrak{h}^{\ast} \bigm|\omega^{\ast}(\Lambda)=\Lambda\bigr\}$ and $\widetilde{W}:=\bigl\{w\in W\bigm|\omega^{\ast}w=w\omega^{\ast}\bigr\}$. To distinguish them from the objects for $\mathfrak{g}$, the objects for the orbit Lie algebra $\check{\mathfrak{g}}$ will always carry the symbol “$\vee{}$” on top. For example, $\check{\mathfrak{h}}$ denotes the Cartan subalgebra of $\check{\mathfrak{g}}$, $\check{W}$ the Weyl group of $\check{\mathfrak{g}}$, and $\{\check{\Lambda}_i\}_{0\leq i\leq m{\ell}-1}$ the set of fundamental dominant weights in $\check{\mathfrak{h}}^{\ast}$. There exist a linear isomorphism $P_{\omega}^{\ast}:\,\check{\mathfrak{h}}^{\ast}\rightarrow \bigl(\mathfrak{h}^{\ast}\bigr)^{\circ}$ and a group isomorphism $\Theta:\,\check{W}\rightarrow\widetilde{W}$ such that $\Theta(\check{w})=P_{\omega}^{\ast}\check{w}\bigl(P_{\omega}^{\ast}\bigr)^{-1}$ for each $\check{w}\in\check{W}$. By [@FSS §6.5], for each $0\leq i<m{\ell}$, $$P_{\omega}^{\ast}(\check{\Lambda}_i)=\Lambda_{i}+\Lambda_{i+m{\ell}}+\Lambda_{i+2m{\ell}}+\cdots+\Lambda_{i+(p-m){\ell}}+C\delta,$$ where $C\in\mathbb{Q}$ is some constant depending on $\omega$ and $\delta$ denotes the null root of $\mathfrak{g}$. Let $\check{\Lambda}=\check{\Lambda}_0+\check{\Lambda}_{{\ell}}+\check{\Lambda}_{2{\ell}}+\cdots+\check{\Lambda}_{(m-1){\ell}}$. Let $\Lambda:=\Lambda_0+\Lambda_{{\ell}}+\Lambda_{2{\ell}}+\cdots+\Lambda_{(p-1){\ell}}$. Then it follows that $P_{\omega}^{\ast}(\check{\Lambda})=\Lambda+C'\delta$, for some $C'\in\mathbb{Q}$. Let $\mathbb{B}(\Lambda)$ (resp., $\mathbb{B}\bigl(P_{\omega}^{\ast}(\check{\Lambda})\bigr)$) be the set of all L-S paths of class $\Lambda$ (resp., of class $P_{\omega}^{\ast}(\check{\Lambda})$). Let $\pi_{\Lambda}$ (resp., $\pi_{P_{\omega}^{\ast}(\check{\Lambda})}$) be the straight path joining $0$ and $\Lambda$ (resp., $0$ and $P_{\omega}^{\ast}(\check{\Lambda})$). Let $\widetilde{E}_i, \widetilde{F}_i$ denote the raising root operator and the lowering root operator (see [@Li1] and [@Li2]) with respect to the simple root $\alpha_i$. The map which sends $\pi_{P_{\omega}^{\ast}(\check{\Lambda})}$ to $\pi_{\Lambda}$ extends to a bijection $\beta$ from $\mathbb{B}\bigl(P_{\omega}^{\ast}(\check{\Lambda})\bigr)$ onto $\mathbb{B}(\Lambda)$ such that $$\beta\bigl(\widetilde{F}_{i_1}\cdots \widetilde{F}_{i_s}\pi_{P_{\omega}^{\ast}(\check{\Lambda})}\bigr)=\widetilde{F}_{i_1}\cdots \widetilde{F}_{i_s}\pi_{\Lambda},$$ for any $i_1,\cdots,i_s\in{\mathbb Z}/{p{\ell}{\mathbb Z}}$. [Proof:]{} This follows from the fact that $P_{\omega}^{\ast}(\check{\Lambda})-\Lambda\in\mathbb{Q}\delta$ and the definitions of $\mathbb{B}\bigl(P_{\omega}^{\ast}(\check{\Lambda})\bigr)$ and $\mathbb{B}(\Lambda)$. Henceforth we shall identify $\mathbb{B}\bigl(P_{\omega}^{\ast}(\check{\Lambda})\bigr)$ with $\mathbb{B}(\Lambda)$. The action of $\omega^{\ast}$ on $\mathfrak{h}^{\ast}$ naturally extends to the set $\mathbb{B}\bigl(P_{\omega}^{\ast}(\check{\Lambda})\bigr)$ (and hence to the set $\mathbb{B}\bigl(\Lambda\bigr)$).
By [@NS2 (3.1.1)], if $\widetilde{F}_{i_1}\widetilde{F}_{i_2}\cdots \widetilde{F}_{i_s} \pi_{{\Lambda}}\in {\mathbb{B}}({\Lambda})$, then $$\label{equa66} \omega^{\ast}\bigl(\widetilde{F}_{i_1}\widetilde{F}_{i_2}\cdots \widetilde{F}_{i_s} \pi_{{\Lambda}}\bigr)=\widetilde{F}_{i_1+m{\ell}}\widetilde{F}_{i_2+m{\ell}}\cdots \widetilde{F}_{i_s+m{\ell}} \pi_{{\Lambda}}.$$ We denote by $\mathbb{B}^{\circ}\bigl(\Lambda\bigr)$ the set of all L-S paths of class $\Lambda$ that are fixed by $\omega^{\ast}$. For $\check{\mathfrak{g}}$, we denote by $\widetilde{e}_i, \widetilde{f}_i$ the raising root operator and the lowering root operator with respect to the simple root $\alpha_i$. Let $\pi_{\check{\Lambda}}$ be the straight path joining $0$ and $\check{\Lambda}$. By [@NS1 (4.2)], the linear map $P_{\omega}^{\ast}$ naturally extends to a map from $\check{\mathbb{B}}(\check{\Lambda})$ to $\mathbb{B}^{\circ}\bigl(\Lambda\bigr)$ such that if $\widetilde{f}_{i_1}\widetilde{f}_{i_2}\cdots \widetilde{f}_{i_s} \pi_{\check{\Lambda}}\in \check{\mathbb{B}}(\check{\Lambda})$, then $$\begin{aligned} &P_{\omega}^{\ast}\bigl(\widetilde{f}_{i_1}\widetilde{f}_{i_2}\cdots \widetilde{f}_{i_s} \pi_{\check{\Lambda}}\bigr)=\widetilde{F}_{i_1}\widetilde{F}_{i_1+m{\ell}}\cdots\widetilde{F}_{i_1+(p-m){\ell}} \widetilde{F}_{i_2}\widetilde{F}_{i_2+m{\ell}}\cdots\widetilde{F}_{i_2+(p-m){\ell}}\cdots\\ &\qquad\qquad\qquad\qquad\qquad\widetilde{F}_{i_s}\widetilde{F}_{i_s+m{\ell}}\cdots\widetilde{F}_{i_s+(p-m){\ell}} \pi_{\Lambda}.\end{aligned}$$ \[thm67\] [([@NS1 (4.2),(4.3)])]{} $\mathbb{B}^{\circ}\bigl(\Lambda\bigr)=P_{\omega}^{\ast}\bigl( \check{\mathbb{B}}(\check{\Lambda})\bigr)$. Note that both $\check{\mathbb{B}}(\check{\Lambda})$ and $\mathbb{B}\bigl(\Lambda\bigr)$ have a canonical crystal structure with the raising and lowering root operators playing the role of Kashiwara operators. They are isomorphic to the crystals associated to the irreducible integrable highest weight modules $\check{L}(\check\Lambda)$ of highest weight $\check\Lambda$ over $U'_v(\check{\mathfrak{g}})$ and $L\bigl(\Lambda\bigr)$ of highest weight $\Lambda$ over $U'_v(\mathfrak{g})$ respectively. Henceforth, we identify them without further comments. Let $v_{\check{\Lambda}}$ (resp., $v_{\Lambda}$) denotes the unique highest weight vector of highest weight $\check{\Lambda}$ (resp., of highest weight $\Lambda$) in $\check{\mathbb{B}}(\check{\Lambda})$ (resp., in $\mathbb{B}(\Lambda)$). Therefore, by (\[equa66\]) and Lemma \[thm67\], we get that With the above assumptions and notations, there is an injection $\eta$ from the set $\check{\mathbb{B}}(\check{\Lambda})$ of crystal bases to the set $\mathbb{B}(\Lambda)$ of crystal bases such that $$\begin{aligned} &\eta\bigl(\widetilde{f}_{i_1}\widetilde{f}_{i_2}\cdots \widetilde{f}_{i_s} v_{\check{\Lambda}}\bigr)\equiv\widetilde{F}_{i_1}\widetilde{F}_{i_1+m{\ell}}\cdots\widetilde{F}_{i_1+(p-m){\ell}} \widetilde{F}_{i_2}\widetilde{F}_{i_2+m{\ell}}\cdots\widetilde{F}_{i_2+(p-m){\ell}}\cdots\\ &\qquad\qquad\qquad\qquad\qquad\widetilde{F}_{i_s}\widetilde{F}_{i_s+m{\ell}}\cdots\widetilde{F}_{i_s+(p-m){\ell}} v_{\Lambda}\pmod{{v L(\Lambda)_{A}}},\end{aligned}$$ and the image of $\eta$ consists of all crystal basis element $\widetilde{F}_{i_1}\cdots \widetilde{F}_{i_t}v_{\Lambda}+v L(\Lambda)_{A}$ satisfying $\widetilde{F}_{i_1}\cdots \widetilde{F}_{i_t}v_{\Lambda}\equiv \widetilde{F}_{i_1+m{\ell}}\cdots \widetilde{F}_{i_t+m{\ell}}v_{\Lambda}\pmod{{v L(\Lambda)_{A}}}. 
$ We translate the language of crystal bases into the language of Kleshchev multipartitions, we get the following combinatorial result, which seems of independent interest. \[cor609\] Let $\operatorname{h}$ be as in Theorem \[main1\]. Let ${\varepsilon}':={\varepsilon}^{p/m}$. Let $q'=\sqrt[{\ell}]{{\varepsilon}'}$, which is a primitive $m{\ell}$-th root of unity. Then there exists a bijection $\eta:\,\check{{\lambda}}\mapsto{\lambda}$ from the set of Kleshchev $m$-multipartitions $\check{{\lambda}}$ of $nm/p$ with respect to $(q',1,{\varepsilon}',{\varepsilon}'^{2},\cdots,({\varepsilon}')^{m-1})$ onto the set of Kleshchev $p$-multipartitions ${\lambda}$ of $n$ with respect to $(q,1,{\varepsilon},{\varepsilon}^{2},\cdots,{\varepsilon}^{p-1})$ satisfying $\operatorname{h}^m({\lambda})={\lambda}$, such that if $$\underbrace{(\emptyset,\cdots,\emptyset)}_{m}\overset{r_1}{\twoheadrightarrow}\cdot \overset{r_2}{\twoheadrightarrow}\cdot \cdots\cdots \overset{r_s}{\twoheadrightarrow}\check{\lambda}$$ is a path from $\underbrace{(\emptyset,\cdots,\emptyset)}_{m}$ to $\check{\lambda}$ in Kleshchev’s good lattice with respect to $(q',1,{\varepsilon}',({\varepsilon}')^{2},\cdots,({\varepsilon}')^{m-1})$, where $s:=nm/p$, then the sequence $$\begin{aligned}&\underbrace{(\emptyset,\cdots,\emptyset)}_{p} \overset{r_1}{\twoheadrightarrow}\cdot\overset{m{\ell}+r_1}{\twoheadrightarrow}\cdot \cdots\overset{(p-m){\ell}+r_1}{\twoheadrightarrow}\cdot \overset{r_2}{\twoheadrightarrow}\cdot \overset{m{\ell}+r_2}{\twoheadrightarrow}\cdot \cdots\overset{(p-m){\ell}+r_2}{\twoheadrightarrow}\cdot\\ &\qquad \qquad\cdots\,\cdot \overset{r_s}{\twoheadrightarrow}\cdot\overset{m{\ell}+r_s}{\twoheadrightarrow}\cdots \overset{(p-m){\ell}+r_s}{\twoheadrightarrow}{\lambda}\end{aligned}$$ defines a path in Kleshchev’s good lattice (w.r.t., $(q,1,{\varepsilon},{\varepsilon}^2,\cdots,{\varepsilon}^{p-1})$) satisfying $\operatorname{h}^m({\lambda})={\lambda}$. [Proof:]{} This follows from (\[equa63\]), Theorem \[main1\] and the realization of crystal graph as Kleshchev’s good lattice. We remark that one can derive a similar combinatorial result for FLOTW $p$-partitions by using the same arguments. \[mainthm3\] Suppose that $K=\mathbb{C}$, $q,{\varepsilon},q',{\varepsilon}'\in \mathbb{C}$ be such that ${\varepsilon}=q^{\ell}$ (resp., ${\varepsilon}'=(q')^{\ell}$) is a primitive $p$-th (resp., primitive $m$-th) root of unity, and $q$ (resp., $q'$) is a primitive $p\ell$-th (resp., primitive $m{\ell}$-th) root of unity. Let $1\leq m\leq p$ be an integer such that $m|p$. Then $\widetilde{N}(m)$ is equal to the number of simple ${\mathcal{H}}_{q',1,{\varepsilon}',\cdots,({\varepsilon}')^{m-1}}(m,mn/p)$-modules, where ${\mathcal{H}}_{q',1,{\varepsilon}',\cdots,({\varepsilon}')^{m-1}}(m,mn/p)$ is the Ariki–Koike algebra with parameters $(q',1,{\varepsilon}',\cdots,({\varepsilon}')^{m-1})$ and of size $mn/p$. In particular, for each integer $1\leq \widetilde{m}<p$ such that $\widetilde{m}|p$, we get that $$\label{equa611}\begin{split} N(\widetilde{m})&=\sum_{1\leq m\leq\widetilde{m}, m|\widetilde{m}}\mu(\widetilde{m}/m)\widetilde{N}(m)\\ &=\sum_{1\leq m\leq\widetilde{m}, m|\widetilde{m}}\mu(\widetilde{m}/m)\Bigl(\#\operatorname{Irr}\bigl({\mathcal{H}}_{q',1,{\varepsilon}',\cdots,({\varepsilon}')^{m-1}}(m,mn/p)\bigr)\Bigr). \end{split}$$ In the special case where ${\ell}=m=1$, the set $\check{\mathbb{B}}(\check{\Lambda})$ contains only one element—a highest weight vector, which corresponds to the empty partition $\emptyset$. 
Hence we have the following. If ${\ell}=1$, then for any integer $n\geq 1$, $$\bigl\{{\lambda}\in{K}_n\bigm|\operatorname{h}({\lambda})={\lambda}\bigr\}=\emptyset.$$ Equivalently, $\widetilde{N}(1)=0=N(1)$. With the same assumption as in Theorem \[mainthm3\], if $(p,n)=1$, then $\widetilde{N}(m)=0$ for any integer $1\leq m<p$. In this case, for any irreducible ${\mathcal{H}}_q(p,n)$-module $D$, $D\downarrow_{{\mathcal{H}}_q(p,p,n)}$ remains irreducible. [**Remark 6.14**]{} Combining (\[equa62\]) with (\[equa611\]), we get an explicit formula for the number of simple ${\mathcal{H}}_{q}(p,p,n)$-modules in the case when ${\varepsilon}=q^{\ell}$ is a primitive $p$-th root of unity, and $q$ is a primitive $p\ell$-th root of unity. Note that (by Theorem \[main3\]) this formula is valid over any field $K$ as long as $K$ contains a primitive $p$-th root of unity and ${\mathcal{H}}_q(p,p,n)$ is split over $K$. Our formula generalizes earlier results of Geck [@Ge] on the Hecke algebra of type $D_n$ (i.e., of type $G(2,2,n)$). Note that Geck’s method depends on explicit information on character tables and Kazhdan–Lusztig theory for Iwahori–Hecke algebras associated to finite Weyl groups, which are not presently available in our general $G(p,p,n)$ cases. We now deal with the remaining cases. That is, we assume the following setup: $K=\mathbb{C}$ and $q,{\varepsilon}\in K$, where $q$ is a primitive $d\ell$-th root of unity, $q^{\ell}={\varepsilon}^{k}$ is a primitive $d$-th root of unity, and $1\leq k<p$ is the smallest positive integer such that ${\varepsilon}^{k}\in\langle q\rangle$. In this case, we fix the order of the parameters $\{1,{\varepsilon},\cdots,{\varepsilon}^{p-1}\}$ as ${\overrightarrow\operatorname{Q}}:=({\overrightarrow\operatorname{Q}}_1,\cdots,{\overrightarrow\operatorname{Q}}_k)$, where ${\overrightarrow\operatorname{Q}}_i=({\varepsilon}^{i-1},{\varepsilon}^{k+i-1},\cdots, {\varepsilon}^{(d-1)k+i-1})$ for $i=1,2,\cdots,k$. Let ${\lambda}\in{K}_n$. We write ${\lambda}=({\lambda}^{[1]},\cdots,{\lambda}^{[k]})$ (concatenation of ordered tuples), where each ${\lambda}^{[i]}$ is a $d$-multipartition. By Theorem \[main2\], $$\operatorname{h}({\lambda})=\Bigl(\operatorname{h}'({\lambda}^{[k]}),{\lambda}^{[1]},\cdots,{\lambda}^{[k-1]}\Bigr),$$ where $\operatorname{h}'$ is as defined in Corollary \[maincor\]. As before, it suffices to derive a formula for $\widetilde{N}(m)$ for each integer $1\leq m<p$ satisfying $m|p$. Suppose that $1\leq a\leq\min\{m,k\}$ is the greatest common divisor of $m$ and $k$. By Bézout’s identity, there exist integers $r_1, r_2$ such that $a=r_1k+r_2m$. Let ${\lambda}\in{K}_n$ be such that $\operatorname{h}^m({\lambda})={\lambda}$. 
Then $$\begin{aligned} &\quad \Bigl((\operatorname{h}')^{r_1}\bigl({\lambda}^{[1]}\bigr), (\operatorname{h}')^{r_1}\bigl({\lambda}^{[2]}\bigr),\cdots,(\operatorname{h}')^{r_1}\bigl({\lambda}^{[k]}\bigr)\Bigr) =\operatorname{h}^{r_1k}({\lambda})=\operatorname{h}^{a-r_2m}({\lambda})=\operatorname{h}^a({\lambda})\\ &=\Bigl((\operatorname{h}')\bigl({\lambda}^{[k-a+1]}\bigr),(\operatorname{h}')\bigl({\lambda}^{[k-a+2]}\bigr),\cdots, (\operatorname{h}')\bigl({\lambda}^{[k]}\bigr),{\lambda}^{[1]},{\lambda}^{[2]},\cdots,{\lambda}^{[k-a]}\Bigr).\end{aligned}$$ It follows that $$\label{equa614} (\operatorname{h}')^{r_1}\bigl({\lambda}^{[i]}\bigr)=\begin{cases} (\operatorname{h}')\bigl({\lambda}^{[k-a+i]}\bigr), &\text{if $1\leq i\leq a,$}\\ {\lambda}^{[i-a]},&\text{if $a+1\leq i\leq k$.}\end{cases}$$ By an easy induction, we get that $$(\operatorname{h}')^{-r_2m/a}\bigl({\lambda}^{[i]}\bigr)=(\operatorname{h}')^{r_1k/a-1}\bigl({\lambda}^{[i]}\bigr)={\lambda}^{[i]},\,\,\forall\,1\leq i\leq k.$$ We claim that there exist positive integers $r'_1, r'_2$ such that $a=r'_1k+r'_2m$ and $(r_2,r'_2)=1$. In fact, since $a|m$, it is easy to check that $a=\bigl(-(m/a-1)r_1\bigr)k+\bigl(-(m/a-1)r_2+1\bigr)m$. Taking $r'_1=-(m/a-1)r_1$, $r'_2=-(m/a-1)r_2+1$, we prove our claim. Now applying our previous argument again, we get that $$(\operatorname{h}')^{-r'_2m/a}\bigl({\lambda}^{[i]}\bigr)={\lambda}^{[i]},\,\,\forall\,1\leq i\leq k.$$ Let $x,y$ be two integers such that $xr_2+yr'_2=1$. Then $$(\operatorname{h}')^{-m/a}\bigl({\lambda}^{[i]}\bigr)=(\operatorname{h}')^{-xr_2m/a-yr'_2m/a}\bigl({\lambda}^{[i]}\bigr)= {\lambda}^{[i]},\,\,\forall\,1\leq i\leq k.$$ Equivalently, $(\operatorname{h}')^{m/a}\bigl({\lambda}^{[i]}\bigr)={\lambda}^{[i]},\,\,\forall\,1\leq i\leq k$. We write ${\lambda}=({\lambda}^{[1]},\cdots,{\lambda}^{[k]})$ (concatenation of ordered tuples), where each ${\lambda}^{[i]}$ is a $d$-multipartition. Let $n_i:=|{\lambda}^{[i]}|$ for each $1\leq i\leq k$. Then $\sum_{i=1}^{k}n_i=n$. Moreover, in this case we see (from (\[equa614\])) that $${\lambda}^{[k-ja+i]}=(\operatorname{h}')^{jr_1-1}\bigl({\lambda}^{[i]}\bigr),\,\,\text{for}\,\, j=1,2,\cdots,k/a.$$ As a consequence, $n_1+\cdots +n_a=na/k$. By Theorem \[thm23\], Lemma \[lm24\], Theorem \[Ariki\] and our definition of the $p$-tuple ${\overrightarrow\operatorname{Q}}$, we know that ${\lambda}\in{K}_n$ if and only if for each $1\leq i\leq k$, ${\lambda}^{[i]}\in\mathcal{K}_{n_i}$, where $\mathcal{K}_{n_i}$ denotes the set of Kleshchev $d$-multipartitions of $n_i$ with respect to $(q,1,{\varepsilon}',\cdots,{{\varepsilon}'}^{d-1})$, ${\varepsilon}'={\varepsilon}^k$. We define $$\begin{aligned} &\widetilde{\Sigma}(k,m):=\biggl\{\bigl({\lambda}^{[1]},\cdots,{\lambda}^{[a]}\bigr)\vdash \frac{na}{k}\biggm|\begin{matrix} \text{${\lambda}^{[i]}\in\mathcal{K}_{n_i}, (\operatorname{h}')^{m/a}({{\lambda}^{[i]}})={{\lambda}^{[i]}}$,}\\ \text{$\forall\,1\leq i\leq a,\,\,\sum_{i=1}^{a}n_i=\frac{na}{k}$}\end{matrix}\biggr\},\\ &\widetilde{N}(k,m):=\#\widetilde{\Sigma}(k,m).\end{aligned}$$ \[lm616\] With the above notations, the map which sends ${\lambda}=\bigl({\lambda}^{[1]},\cdots,$ ${\lambda}^{[k]}\bigr)$ to $\overline{{\lambda}}:=\bigl({\lambda}^{[1]},\cdots,{\lambda}^{[a]}\bigr)$ defines a bijection from the set $\widetilde{\Sigma}(m)$ onto the set $\widetilde{\Sigma}(k,m)$. 
[Proof:]{} Our previous discussion shows that the $p$-multipartition ${\lambda}=({\lambda}^{[1]},{\lambda}^{[2]},$ $\cdots,{\lambda}^{[k]})\in\widetilde{\Sigma}(m)$ can be recovered from the $da$-multipartition $\overline{{\lambda}}:=({\lambda}^{[1]},{\lambda}^{[2]},$ $\cdots,{\lambda}^{[a]})$ and the automorphism $\operatorname{h}'$. In particular, the above map is injective. It remains to prove that the map is surjective. Let $\alpha:=\bigl({\lambda}^{[1]},\cdots,{\lambda}^{[a]}\bigr)\in \widetilde{\Sigma}(k,m)$. Recall that $r_1k+r_2m=a$. For each integer $1\leq i\leq a$, we define $${\lambda}^{[k-ja+i]}:=(\operatorname{h}')^{jr_1-1}\bigl({\lambda}^{[i]}\bigr),\,\,\text{for}\,\, j=1,2,\cdots,k/a.$$ This is well-defined, since $(\operatorname{h}')^{kr_1/a-1}\bigl({\lambda}^{[i]}\bigr)=(\operatorname{h}')^{-r_2m/a}\bigl({\lambda}^{[i]}\bigr)={\lambda}^{[i]}$ (because $\alpha\in\widetilde{\Sigma}(k,m)$). Note also that the above definition is equivalent to (\[equa614\]). Therefore, if we set ${\lambda}:=\bigl({\lambda}^{[1]},\cdots,$ ${\lambda}^{[k]}\bigr)$, then the discussion preceding (\[equa614\]) implies that $\operatorname{h}^{r_2m}({\lambda})={\lambda}$. Now recall that $$r'_1=-(m/a-1)r_1,\,\, r'_2=-(m/a-1)r_2+1,\,\,a=r'_1k+r'_2m.$$ Therefore, for each integer $1\leq i\leq a$, we have that $$(\operatorname{h}')^{jr'_1-1}\bigl({\lambda}^{[i]}\bigr)=(\operatorname{h}')^{-j(m/a-1)r_1-1}\bigl({\lambda}^{[i]}\bigr)= {\lambda}^{[k-ja+i]},\,\,\text{for}\,\, j=1,2,\cdots,k/a.$$ Therefore, the discussion preceding (\[equa614\]) also implies that $\operatorname{h}^{r'_2m}({\lambda})={\lambda}$. Since $xr_2+yr'_2=1$, it follows that $\operatorname{h}^m({\lambda})=\operatorname{h}^{xr_2m+yr'_2m}({\lambda})={\lambda}$. In other words, ${\lambda}\in\widetilde{\Sigma}(m)$ with $\overline{{\lambda}}=\alpha$, as required. This proves the surjectivity, and hence completes the proof of the lemma. Let $\widetilde{d}:=(d,\frac{m}{a})$. Note that $(\operatorname{h}')^{d}=\operatorname{id}$. Therefore, $(\operatorname{h}')^{m/a}({\lambda}^{[i]})={\lambda}^{[i]}$ if and only if $(\operatorname{h}')^{\widetilde{d}}({\lambda}^{[i]})={\lambda}^{[i]}$. Now applying Lemma \[lm616\], Theorem \[mainthm3\] and (\[equa63\]), we get the following. \[mainthm4\] Suppose that $K=\mathbb{C}$ and $q,{\varepsilon}\in K$, where $q$ is a primitive $d\ell$-th root of unity, $q^{\ell}={\varepsilon}^{k}$ is a primitive $d$-th root of unity, and $1\leq k<p$ is the smallest positive integer such that ${\varepsilon}^{k}\in\langle q\rangle$. Let $1\leq m<p$ be an integer such that $m|p$. Let $a=(m,k)$, $\widetilde{d}:=(d,\frac{m}{a})$. Then $$\begin{aligned} &\quad\,\widetilde{N}(m)=\widetilde{N}(k,m)\\ &=\sum_{n_1+\cdots+n_{a}=\frac{na}{k}}\prod_{i=1}^{a} \biggl(\#\operatorname{Irr}{\mathcal{H}}_{q'',1,{\varepsilon}'',\cdots,({\varepsilon}'')^{\widetilde{d}-1}}\Bigl(\widetilde{d},\frac{\widetilde{d}n_i}{d}\Bigr) \biggr), \end{aligned}$$ where ${\varepsilon}''=(q'')^{{\ell}}$ is a primitive $\widetilde{d}$-th root of unity, and $q''$ is a primitive $(\widetilde{d}{\ell})$-th root of unity. 
In particular, for each integer $1\leq \widetilde{m}<p$ such that $\widetilde{m}|p$, we get that $$\label{equa617}\begin{split} &\quad\,N(\widetilde{m})=\sum_{1\leq m\leq\widetilde{m}, m|\widetilde{m}}\mu(\widetilde{m}/m)\widetilde{N}(m)\\ &=\sum_{1\leq m\leq\widetilde{m}, m|\widetilde{m}}\mu(\widetilde{m}/m) \sum_{n_1+\cdots+n_{a}=\frac{na}{k}}\prod_{i=1}^{a} \biggl(\#\operatorname{Irr}{\mathcal{H}}_{q'',1,{\varepsilon}'',\cdots,({\varepsilon}'')^{\widetilde{d}-1}}\Bigl(\widetilde{d},\frac{\widetilde{d}n_i}{d}\Bigr) \biggr).\end{split}$$ [**Remark 6.19**]{} Combining (\[equa62\]) with (\[equa617\]), we get an explicit formula for the number of simple ${\mathcal{H}}_{q}(p,p,n)$-modules in the case when $q$ is a primitive $d\ell$-th root of unity, $q^{\ell}={\varepsilon}^{k}$ is a primitive $d$-th root of unity, and $1\leq k<p$ is the smallest positive integer such that ${\varepsilon}^{k}\in\langle q\rangle$. Thus the problem of determining an explicit formula for the number of simple ${\mathcal{H}}_{q}(p,p,n)$-modules is solved by our Theorem \[mainthm3\] and Theorem \[mainthm4\] in all cases. As before, this formula is valid over any field $K$ as long as $K$ contains a primitive $p$-th root of unity and ${\mathcal{H}}_q(p,p,n)$ is split over $K$. Finally, we remark that these explicit formulas strongly indicate that there are new intimate connections between the representation theory of ${\mathcal{H}}_{q}(p,p,n)$ at roots of unity and that of various Ariki–Koike algebras of smaller sizes at various roots of unity. It seems very likely that the decomposition matrix of the latter can be naturally embedded as a submatrix of the decomposition matrix of the former. We leave this to a future project. [99]{} S. Ariki, On the semi-simplicity of the Hecke algebra of $(Z/rZ)\wr {{\mathfrak S}}_n$, J. Alg. [**169**]{} (1994) 216–225. S. Ariki, On the decomposition numbers of the Hecke algebra of $G(m,1,n)$, J. Math. Kyoto Univ. [**36**]{} (1996) 789–808. S. Ariki, Cyclotomic $q$-Schur algebras as quotients of quantum algebras, J. reine angew. Math. [**513**]{} (1999) 53–69. S. Ariki, On the classification of simple modules for cyclotomic Hecke algebras of type $G(m,1,n)$ and Kleshchev multi-partitions, Osaka J. Math. (4) [**38**]{} (2001) 827–837. S. Ariki, Representations of quantum algebras and combinatorics of Young tableaux, Translated from the 2000 Japanese edition and revised by the author, University Lecture Series, [**26**]{}, American Mathematical Society, Providence, RI, 2002. S. Ariki and K. Koike, A Hecke algebra of $({\mathbb Z}/r{\mathbb Z})\wr{\mathfrak S}_n$ and construction of its representations, Adv. Math. [**106**]{} (1994) 216–243. S. Ariki and A. Mathas, The number of simple modules of the Hecke algebras of type $G(r,1,n)$, Math. Z. (3) [**233**]{} (2000) 601–623. R.E. Borcherds, Generalized Kac-Moody algebras, J. Alg. (2) [**115**]{} (1988) 501–512. J. Brundan, Modular branching rules and the Mullineux map for Hecke algebras of type $A$, Proc. London Math. Soc. (3) [**77**]{} (1998) 551–581. J. Brundan and A. Kleshchev, Hecke-Clifford superalgebras, crystals of type $A_{2l}^{(2)}$ and modular branching rules for $\widehat{S}_n$, Representation Theory [**5**]{} (2001) 317–403. M. Broué and G. Malle, Zyklotomische Heckealgebren, Astérisque [**212**]{} (1993) 119–189. C. W. Curtis and L. Reiner, Methods of Representation Theory, I, Wiley-Interscience, New York, 1981. R. Dipper and G. D. James, Representations of Hecke algebras of general linear groups, Proc. London 
Math. Soc. (3) [**52**]{} (1986) 20–52. R. Dipper, G.D. James and A. Mathas, Cyclotomic $q$-Schur algebras, Math. Z. [**229**]{} (1998) 385–416. R. Dipper and A. Mathas, Morita equivalence of Ariki-Koike algebras, Math. Z. [**240**]{} (2002) 579–610. O. Foda, B. Leclerc, M. Okado, J.-Y. Thibon and T. Welsh, Branching functions of $A_{(n-1)}^{(1)}$ and Jantzen-Seitz problem for Ariki-Koike algebras, Adv. Math. (2) [**141**]{} (1999) 322–365. J. Fuchs, U. Ray and C. Schweigert, Some automorphisms of generalized Kac-Moody algebras, J. Alg. [**191**]{} (1997) 518–540. J. Fuchs, B. Schellekens and C. Schweigert, From Dynkin diagram symmetries to fixed point structures, Comm. Math. Phys. [**180**]{} (1996) 39–97. M. Geck, On the representation theory of Iwahori-Hecke algebras of extended finite Weyl groups, Represent. Theory [**4**]{} (2000) 370–397. G. Genet and N. Jacon, Modular representations of cyclotomic Hecke algebras of type $G(r,p,n)$, preprint, math.RT/0409297. J.J. Graham and G.I. Lehrer, Cellular algebras Invent. Math. [**123**]{} (1996) 1–34. T. Hayashi, $q$-analogues of Clifford and Weyl algebras—spinor and oscillator representations of quantum enveloping algebras, Comm. Math. Phys. [**127**]{} (1990) 129–144. J. Hu, A Morita equivalence theorem for Hecke algebras of type $D_n$ when $n$ is even, Manuscr. Math. [ **108**]{} (2002) 409–430. J. Hu, Crystal basis and simple modules for Hecke algebra of type $D_n$, J. Alg. (1) [**267**]{} (2003) 7–20. J. Hu, Modular representations of Hecke algebras of type $G(p,p,n)$, J. Alg. (2) [**274**]{} (2004) 446–490. J. Hu, Branching rules for Hecke Algebras of Type $D_{n}$, Math. Nachr. [**280**]{} (2007) 93–104. J. Hu, The number of simple modules for the Hecke algebras of type G(r,p,n), preprint, math.RT/0601572. N. Jacon, Représentations modulaires des algèbres de Hecke et des algèbres de Ariki-Koike, Ph.D. thesis, Université Claude Bernard Lyon I, 2004. N. Jacon, On the parameterization of the simple modules for Ariki-Koike algebras at roots of unity, J. Math. Kyoto Univ. [**44**]{} (2004) 729–767. M. Jimbo, K. Misra, T. Miwa and M. Okado, Combinatorics of representations of $U_q(\hat{sl}(n))$ at $q=0$, Comm. Math. Phys. [**136**]{} (1991) 543–566. M. Kashiwara, On crystal bases of the $q$-analogue of universal enveloping algebras, Duke J. Math. [**63**]{} (1991) 465–516. A. Kleshchev, Branching rules for the modular representations of symmetric groups III; some corollaries and a problem of Mullineux, J. London Math. Soc. [**54**]{} (1995) 25–38. P. Littelmann, A Littlewood-Richardson rule for symmetrizable Kac-Moody algebras, Invent. Math. [**116**]{} (1994) 329–346. P. Littelmann, Paths and root operators in representation theory, Ann. of Math. (2) [**142**]{} (1995) 499–525. A. Lascoux, B. Leclerc and J.-Y. Thibon, Hecke algebras at roots of unity and crystal bases quantum affine algebras, Comm. Math. Phys. [**181**]{} (1996) 205–263. B. Leclerc and J.-Y. Thibon, Canonical bases of $q$-deformed Fock spaces, Int. Math. Res. Notices, [**9**]{} (1996) 447–456. G. Lusztig, Introduction to Quantum Groups, Progress in Math. [**110**]{} Birkhäuser, Boston, 1990. G. Malle and A. Mathas, Symmetric cyclotomic Hecke algebras, J. Alg. [**205**]{} (1998) 275–293. T. Misra and K.C. Miwa, Crystal bases for the basic representations of $U_q({\widehat{\mathfrak{sl}}}_n)$, Comm. Math. Phys. [**134**]{} (1990) 79–88. S. Naito and D. Sagaki, Lakshmibai-Seshadri paths fixed by a diagram automorphism, J. Alg. [**245**]{} (2001) 395–412. S. Naito and D. 
Sagaki, Standard paths and standard monomials fixed by a diagram automorphism, J. Alg. [**251**]{} (2002) 461–474. C. Pallikaros, Representations of Hecke algebras of type $D_n$, J. Alg. [**169**]{} (1994) 20–48. M. Varagnolo and E. Vasserot, On the decomposition matrices of the quantized Schur algebras, Duke Math. J. [**100**]{} (1999) 267–297. M. Vazirani, Irreducible modules over the affine Hecke algebra: a strong multiplicity one result, Ph.D. thesis, University of California at Berkeley, 1998. [^1]: Research supported by National Natural Science Foundation of China (Project 10401005) and by Program for New Century Excellent Talents in University and partly by the URF of Victoria University of Wellington. [^2]: We do not deal with the general $G(r,p,n)$ case here because this paper has already been cited in Ariki’s book [@A3 [\[]{}cyclohecke12[\]]{}] on the one hand; and on the other hand, the proof in the $G(r,p,n)$ case needs nothing more than the proof in the $G(p,p,n)$ case except some sophisticated notations. [^3]: The readers should not confuse the element $d$ here with the integer $d$ we used before.
{ "pile_set_name": "ArXiv" }
--- author: - Alexander Schmidt and Kay Wingberg date: 'September 25, 2008' title: '**Extensions of profinite duality groups**' --- Let $G$ be a profinite group and let $p$ be a prime number. By $\Mod_p(G)$ we denote the category of discrete $p$-primary $G$-modules. For $A\in \Mod_p(G)$ and $i\geq 0$, let $$D_i(G,A)=\varinjlim_U\, H^i(U,A)^*,$$ where $\null^*$ is $\Hom(-,{{\mathbb Q}}_p/{{\mathbb Z}}_p)$, the direct limit is taken over all open subgroups $U$ of $G$ and the transition maps are the duals of the corestriction maps. $D_i(G,A)$ is a discrete $G$-module in a natural way. Assume that $n=\cd_p\,G$ is finite. Then the $G$-module $$I(G)=\varinjlim_{\nu\in {{\mathbb N}}}\,D_n(G,{{\mathbb Z}}/p^\nu{{\mathbb Z}})$$ is called the [**dualizing module**]{} of $G$ at $p$. Its importance lies in the functorial isomorphism $$H^n(G,A)^*\cong\Hom_G(A,I(G))$$ for all $A\in\Mod_p(G)$. This isomorphism is induced by the cup-products ($V\subseteq U$) $$H^n(G,A)^* \times \null_{p^\nu} A^U \longrightarrow H^n(V,{{\mathbb Z}}/p^\nu{{\mathbb Z}})^*,\ (\phi,a)\longmapsto \big(\alpha \mapsto \phi(\cor_G^V(\alpha \cup a))\big)$$ by passing to the limit over $\nu$ and $V$, and then over $U$. The identity-map of $I(G)$ gives rise to the homomorphism $$\tr :H^n(G,I(G))\lpfeil {{\mathbb Q}}_p/{{\mathbb Z}}_p\,,$$ called the [**trace map**]{}. The profinite group $G$ is called a [**duality group at $p$ of dimension $n$**]{} if for all $i\in{{\mathbb Z}}$ and all finite $G$-modules $A\in\Mod_p(G)$, the cup-product and the trace map $$H^i(G,\Hom(A,I(G)) \times H^{n-i}(G,A)\lpfeil^\cup H^n(G,I(G))\lpfeil^{\tr}{{\mathbb Q}}_p/{{\mathbb Z}}_p$$ yield an isomorphism $$H^i(G,\Hom(A,I(G)))\cong H^{n-i}(G,A)^*.$$ [**Remark:**]{} In [@V], J.-L. Verdier used the name [**strict Cohen-Macaulay at $p$**]{} for what we call a profinite duality group at $p$ here. In [@48], A. Pletch defined $D_p^n$-groups (and called them duality groups at $p$ of dimension $n$). The $D_p^n$-groups of Pletch are exactly the duality groups at $p$ (in our sense) which, in addition, satisfy the following finiteness condition: $FC(G,p)$: [*$H^i(G,A)$ is finite for all finite $A\in \Mod_p(G)$ and for all $i\geq 0$*]{}. Since any finite, discrete $G$-module is trivialized by an open subgroup $U$ of $G$, condition $FC(G,p)$ can also be rephrased in the form: $FC(G,p)$: [*$H^i(U,{{\mathbb Z}}/p{{\mathbb Z}})$ is finite for all open subgroups $U$ of $G$ and all $i\geq 0$*]{}. By a duality theorem due to J. Tate, see [@T] Thm.3 or [@V] Prop.4.3 or [@NSW] (3.4.6), a profinite group $G$ of cohomological $p$-dimension $n$ is a duality group at $p$ if and only if $$D_i(G,{{\mathbb Z}}/p{{\mathbb Z}})=0 \quad \hbox{ for $0\leq i<n$}.$$ As a consequence we see that every open subgroup of a duality group at $p$ is a duality group at $p$ as well (of the same cohomological dimension), and if an open subgroup of $G$ is a duality group at $p$ and $\cd_p\,G<\infty$, then $G$ is a duality group at $p$ of the same cohomological dimension (use [@NSW] (3.3.5)(ii)). Furthermore, any profinite group of cohomological $p$-dimension $1$ is a duality group at $p$. We call a profinite group $G$ [**virtually a duality group at $p$ of (virtual) dimension $\vcd_p \,G=n$**]{} if an open subgroup $U$ of $G$ is a duality group at $p$ of dimension $n$. The objective of this paper is to give a proof of Theorem \[1\] below, which states that the class of duality groups is closed under group extensions $1\to H\to G\to G/H \to 1$ if the kernel satisfies $FC(H,p)$. 
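For the reader's convenience, let us illustrate Tate's criterion in the lowest-dimensional case mentioned above; the following short verification is standard, is independent of the rest of the paper, and uses nothing beyond the formula $\cor\circ\res=[U:V]$. Suppose $\cd_p\,G=1$. By the criterion it suffices to show that $D_0(G,{{\mathbb Z}}/p{{\mathbb Z}})=0$. For open subgroups $V\subseteq U$ of $G$, restriction on $H^0$ with coefficients in the trivial module ${{\mathbb Z}}/p{{\mathbb Z}}$ is the identity, so $$\cor^{V}_{U}:H^0(V,{{\mathbb Z}}/p{{\mathbb Z}})\lpfeil H^0(U,{{\mathbb Z}}/p{{\mathbb Z}})$$ is multiplication by $[U:V]$. Since $\cd_p\,G=1$, a $p$-Sylow subgroup of $G$ is infinite, so every open subgroup $U$ of $G$ admits an open subgroup $V$ with $p\mid[U:V]$; for such $V$ the corestriction above vanishes, and hence so does the dual transition map $H^0(U,{{\mathbb Z}}/p{{\mathbb Z}})^*\to H^0(V,{{\mathbb Z}}/p{{\mathbb Z}})^*$. Therefore $D_0(G,{{\mathbb Z}}/p{{\mathbb Z}})=0$, and $G$ is a duality group at $p$ of dimension $1$.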
Weaker forms of Theorem \[1\] were first proved by A. Pletch (for $D_p^n$-groups, see [@48][^1]) and by the second author (for Poincaré groups, see [@71]). \[1\] Let $$1 \lpfeil H\lpfeil G\lpfeil G/H\lpfeil 1$$ be an exact sequence of profinite groups such that condition $FC(H,p)$ is satisfied. Then the following assertions hold: [(i)]{} If $G$ is a duality group at $p$, then $H$ is a duality group at $p$ and $G/H$ is virtually a duality group at $p$. [(ii)]{} If $H$ and $G/H$ are duality groups at $p$, then $G$ is a duality group at $p$. Moreover, in both cases we have: $$\cd_p\,G=\cd_p\,H+\vcd_p\,G/H,$$ and there is a canonical $G$-isomorphism $$I(G)^\vee\cong I(H)^\vee \; {{\mathop{\hat\otimes}}}_{{{\mathbb Z}}_p}\; I(G/H)^\vee,$$ where $\null^\vee$ is the Pontryagin dual and ${{\mathop{\hat\otimes}}}_{{{\mathbb Z}}_p}$ is the tensor-product in the category of compact ${{\mathbb Z}}_p$-modules. [**Remark:**]{} The assumption $FC(H,p)$ is necessary, as the following examples show: - Let $G$ be the free pro-$p$-group on two generators $x,y$ and let $H\subset G$ be the normal subgroup generated by $x$. Then $H$ is free of infinite rank, $G/H$ is free of rank one and $1 \pfeil H\pfeil G\pfeil G/H\pfeil 1$ is an exact sequence in which all three groups are duality groups of dimension one. - Let $D$ be a duality group at $p$ of dimension $2$, $F$ a duality group at $p$ of dimension $1$ and $G=F\ast D$ their free product. The kernel of the projection $G\twoheadrightarrow D$ has cohomological $p$-dimension $1$, hence is a duality group at $p$ of dimension $1$. The group $G$ has cohomological $p$-dimension $2$ but it is not a duality group at $p$. In the proof of Theorem \[1\], we make use of the following \[spectral\] Let $$1 \lpfeil H\lpfeil G\lpfeil G/H\lpfeil 1$$ be an exact sequence of profinite groups. Assume that FC($H,p$) holds. Then there is a spectral sequence of homological type $$E^2_{ij}= D_i(G/H,{{\mathbb Z}}/p{{\mathbb Z}}) \otimes D_j(H,{{\mathbb Z}}/p{{\mathbb Z}}) \Longrightarrow D_{i+j}(G,{{\mathbb Z}}/p{{\mathbb Z}}).$$ Let $g$ run through the open normal subgroups of $G$. Then $gH/H \cong g/g\cap H$ runs through the open normal subgroups of $G/H$. For a $G$-module $A\in\Mod_p(G)$, we consider the Hochschild-Serre spectral sequence $$E(g,g\cap H,A):\ E_2^{ij}(g,g\cap H,A)=H^i(g/g\cap H, H^j(g\cap H,A))\Longrightarrow H^{i+j}(g,A).$$ If $g'\subseteq g$ is another open normal subgroup of $G$, then the corestriction yields a morphism $$\cor:E(g',g'\cap H,A)\lpfeil E(g,g\cap H,A)$$ of spectral sequences. The map $$E_2^{ij}(g',g'\cap H,A) \lpfeil E_2^{ij}(g,g\cap H,A)$$ is the composite of the maps $$H^i\big(g'/g'\cap H, H^j(g'\cap H,A)\big) \lpfeil^{\cor_{g\cap H}^{g'\cap H}} H^i\big(g'/g'\cap H,H^j(g\cap H,A)\big)$$ $$\lpfeil^{\cor_{g/g\cap H}^{g'/g'\cap H}} H^i\big(g/g\cap H,H^j(g\cap H,A)\big)$$ and the map between the limit terms is the corestriction $$\cor^{g'}_g:H^{i+j}(g',A)\lpfeil H^{i+j}(g,A).$$ For $2\leq r\leq\infty$ we set $$E^r_{ij}=D^r_{ij}(G,H,A):=\varinjlim_g\,E^{ij}_r(g,g\cap H,A)^*.$$ As taking duals and direct limits are exact operations, the terms $D^r_{ij}(G,H,A)$, $2\leq r \leq \infty$, establish a homological spectral sequence which converges to $D_{i+j}(G,A)$. If $h$ runs through the open subgroups of $H$ which are normal in $G$, then the cohomology groups $H^j(h,A)$ are $G$-modules in a natural way. If $g$ is open in $G$ with $g\cap H\subseteq h$, then these groups are $g/g\cap H$-modules. 
We see that $$D^2_{ij}(G,H,A)=\varinjlim_{\substack{h\subseteq H\\h\trianglelefteq G}}\ \varinjlim_{\substack{g\subseteq G\\ g\cap H\subseteq h}}\,H^i(g/g\cap H,H^j(h,A))^*,$$ where for both limits the transition maps are (induced by) $\cor^*$. In order to conclude the proof of the proposition, it remains to construct isomorphisms $$D^2_{ij}(G, H,{{\mathbb Z}}/p{{\mathbb Z}})\cong D_{i} (G/H,{{\mathbb Z}}/p{{\mathbb Z}}) \otimes D_{j}(H,{{\mathbb Z}}/p{{\mathbb Z}})$$ for all $i$ and $j$. To this end note that all occurring abelian groups are ${{\mathbb F}}_p$-vector spaces, so that $\null^*$ is $\Hom(-,{{\mathbb F}}_p)$. Further note that for vector spaces $V,W$ over a field $k$ the homomorphism $$V^*\otimes W^* \lpfeil (V\otimes W)^*,\ \phi\otimes \psi \longmapsto \big(v\otimes w\mapsto \phi(v)\psi(w)\big)$$ is an isomorphism provided that $V$ or $W$ is finite-dimensional. Let $h$ be an open subgroup of $H$ which is normal in $G$ and let $g'\subseteq g$ be open subgroups of $G$ such that $g$ acts trivially on the finite group $H^{j}(h,{{\mathbb Z}}/p{{\mathbb Z}})$. Then, by [@NSW] (1.5.3)(iv), the diagram $$\renewcommand{\arraystretch}{1.6} \begin{array}{ccc} H^{i}(g'/g'\cap H,{{\mathbb Z}}/p{{\mathbb Z}})\otimes H^{j}(h,{{\mathbb Z}}/p{{\mathbb Z}}) & \lriso{\cup} & H^{i}\big(g'/g'\cap H, H^{j}(h,{{\mathbb Z}}/p{{\mathbb Z}})\big)\\ \phantom{xxxmN}\mapd{\cor \otimes \mathit{id}}&&\mapd{\cor}\\ H^{i}(g/g\cap H,{{\mathbb Z}}/p{{\mathbb Z}})\otimes H^{j}(h,{{\mathbb Z}}/p{{\mathbb Z}}) & \lriso{\cup} & H^{i}\big(g/g\cap H, H^{j}(h,{{\mathbb Z}}/p{{\mathbb Z}})\big) \end{array} \renewcommand{\arraystretch}{1}$$ commutes. For fixed $h$, we therefore obtain isomorphisms $ \ds D_{i}(G/H,{{\mathbb Z}}/p{{\mathbb Z}}) \otimes H^{j}(h,{{\mathbb Z}}/p{{\mathbb Z}})^* $ $$\begin{array}{lcl} \phantom{uuuuuuuuuuuuuuuuuuuu}&\cong& \ds\big(\varinjlim_g H^{i}(g/g\cap H,{{\mathbb Z}}/p{{\mathbb Z}})^*\big) \otimes H^{j}(h,{{\mathbb Z}}/p{{\mathbb Z}})^* \\ &\cong&\ds\varinjlim_g \; H^{i}(g/g\cap H,{{\mathbb Z}}/p{{\mathbb Z}})^* \otimes H^{j}(h,{{\mathbb Z}}/p{{\mathbb Z}})^* \\ &\cong&\ds\varinjlim_g \big(H^{i}(g/g\cap H,{{\mathbb Z}}/p{{\mathbb Z}}) \otimes H^{j}(h,{{\mathbb Z}}/p{{\mathbb Z}})\big)^* \\ &\cong&\ds\varinjlim_g H^{i}\big(g/g\cap H, H^{j}(h,{{\mathbb Z}}/p{{\mathbb Z}})\big)^*. \end{array}$$ Passing to the limit over $h$, we obtain the required isomorphism $$D_{i} (G/H,{{\mathbb Z}}/p{{\mathbb Z}}) \otimes D_{j}(H,{{\mathbb Z}}/p{{\mathbb Z}}) \cong D^2_{ij}(G,H,{{\mathbb Z}}/p{{\mathbb Z}}).$$ \[lemma\] Under the assumptions of Proposition [\[spectral\]]{}, let $i_0$ and $ j_0$ be the smallest integers such that $D_{i_0}(G/H,{{\mathbb Z}}/p{{\mathbb Z}})\neq 0$ and $D_{j_0}(H,{{\mathbb Z}}/p{{\mathbb Z}})\neq 0$, respectively. Then $D_{i_0+j_0}(G,{{\mathbb Z}}/p{{\mathbb Z}})\neq 0$. The spectral sequence constructed in Proposition \[spectral\] induces an isomorphism $$D_{i_0+j_0}(G,{{\mathbb Z}}/p{{\mathbb Z}}) \cong D_{i_0}(G/H,{{\mathbb Z}}/p{{\mathbb Z}}) \otimes D_{j_0}(H,{{\mathbb Z}}/p{{\mathbb Z}})\neq 0.$$ Assume that $G$ is a duality group at $p$ of dimension $d$. Let $\cd_p\, H=m$ and $n=d-m$. Then there exists an open subgroup $H_1$ of $H$ such that $H^m(H_1,{{\mathbb Z}}/p{{\mathbb Z}})\neq 0$. Let $G_1$ be an open subgroup of $G$ such that $H_1=G_1\cap H$. Then $G_1$ is a duality group at $p$ of dimension $d$, $\cd_p\, H_1=m$ and $G_1/H_1$ is an open subgroup of $G/H$. 
We consider the exact sequence $$1\lpfeil H_1\lpfeil G_1 \lpfeil G_1/H_1 \lpfeil 1.$$ As $H^m(H_1,{{\mathbb Z}}/p{{\mathbb Z}})$ is finite and nonzero, we have $\vcd_p\,G_1/H_1=n$, see [@NSW] (3.3.9). Furthermore, $D_i(G_1,{{\mathbb Z}}/p{{\mathbb Z}})=0$, $i<n+m$. Using Corollary \[lemma\], we see that $D_i(G_1/H_1,{{\mathbb Z}}/p{{\mathbb Z}})=0$ for all $i<n$ and $D_j(H_1,{{\mathbb Z}}/p{{\mathbb Z}})=0$ for all $j<m$. Thus $G_1/H_1$, hence $G/H$, is virtually a duality group at $p$ of dimension $n$, and $H_1$, and so $H$, is a duality group at $p$ of dimension $m$. This shows (i). Assume now that $H$ and $G/H$ are duality groups at $p$ of dimension $m$ and $n$. Then, $\cd_p G=n+m$ by [@NSW] (3.3.8), and in the spectral sequence of Proposition \[spectral\] we have $E^2_{ij}=0$ for $(i,j)\neq (n,m)$. Hence $D_r(G,{{\mathbb Z}}/p{{\mathbb Z}})=0$ for $r\neq n+m$ showing that $G$ is a duality group at $p$ of dimension $n+m$. In order to prove the assertion about the dualizing modules, let $h$ run through all open subgroups of $H$ which are normal in $G$ and $g$ runs through the open subgroups of $G$. Since $m=\cd_p\,H$, the Hochschild-Serre spectral sequence induces isomorphisms $$H^{m+n}(g,{{\mathbb Z}}/p^\nu{{\mathbb Z}})\cong H^n(g/g\cap H,H^m(g\cap H,{{\mathbb Z}}/p^\nu{{\mathbb Z}})),$$ and we obtain $$\begin{array}{rcl} I(G) & \cong& \ds\varinjlim_\nu\,\ds\varinjlim_g\,H^{m+n}(g,{{\mathbb Z}}/p\null^\nu{{\mathbb Z}})^* \\ & \cong &\ds\varinjlim_\nu\,\ds\varinjlim_{h}\, \ds\varinjlim_{g}\,H^n\big(g/g\cap H,H^m(h,{{\mathbb Z}}/p\null^\nu{{\mathbb Z}})\big)^*\\ & \cong &\ds\varinjlim_\nu\,\ds\varinjlim_{h}\, \ds\varinjlim_{g,\res}\,H^0\big(g/g\cap H,\Hom\,(H^m(h,{{\mathbb Z}}/p\null^\nu{{\mathbb Z}}),I(G/H))\big) \\ & \cong & \ds\varinjlim_\nu\,\ds\varinjlim_{h}\,\Hom(H^m(h,{{\mathbb Z}}/p\null^\nu{{\mathbb Z}}), I(G/H))\\ &\cong &\Hom_{cts}(\ds\varprojlim_\nu\,\ds\varprojlim_{h}\,H^m(h,{{\mathbb Z}}/p\null^\nu{{\mathbb Z}}), I(G/H))\\ &\cong &\Hom_{cts}\big((\ds\varinjlim_\nu\,\ds\varinjlim_{h} H^m(h,{{\mathbb Z}}/p^\nu{{\mathbb Z}})^*)^\vee ,I(G/H)\big)\\ & \cong & \Hom_{cts}\,(I(H)^\vee,I(G/H))\cong\big(I(H)^\vee{\mathop{\hat\otimes}}_{{{\mathbb Z}}_p} I(G/H)^\vee\big)^\vee \end{array}$$ (see [@NSW] (5.2.9) for the last isomorphism). This completes the proof of the theorem. [999]{} Neukirch, J., Schmidt, A., Wingberg, K. [*Cohomology of Number Fields.*]{} sec.ed. Springer 2008 Pletch, A. [*Profinite duality groups I.*]{} J. Pure Applied Algebra [**16**]{} (1980) 55–74 and 285–297 Tate, J. [*Letter to Serre.*]{} Annexe 1 to Chap.I in Serre, J.-P. [*Cohomologie Galoisienne.*]{} Lecture Notes in Mathematics [**5**]{}, Springer 1964 (Cinquième édition 1994) Verdier, J.-L. [*Dualité dans la cohomologie des groupes profinis.*]{} Annexe 2 to Chap.I in Serre, J.-P. [*Cohomologie Galoisienne.*]{} Lecture Notes in Mathematics [**5**]{}, Springer 1964 (Cinquième édition 1994) Wingberg, K. [*On Poincaré groups.*]{} J. London Math. Soc. [**33**]{} (1986) 271–278 [^1]: The proof given by Pletch in [@48] is only correct for pro-$p$-groups as the author assumes that finitely generated projective modules over the complete group ring ${{\mathbb Z}}_p{[\![}G{]\!]}$ are free.
{ "pile_set_name": "ArXiv" }
--- abstract: 'Recent studies have shown that distant red galaxies (DRGs), which dominate the high-mass end of the galaxy population at $z \sim 2.5$, are more strongly clustered than the population of blue star-forming galaxies at similar redshifts. However these studies have been severely hampered by the small sizes of fields having deep near-infrared imaging. Here we use the large UKIDSS Ultra Deep Survey to study the clustering of DRGs. The size and depth of this survey allows for an unprecedented measurement of the angular clustering of DRGs at $2 < z_{\rm phot} < 3$ and $K<21$. The correlation function shows the expected power law behavior, but with an apparent upturn at $\theta \lesssim 10\arcsec$. We deproject the angular clustering to infer the spatial correlation length, finding $10.6 \pm 1.6 h^{-1} \textrm{Mpc}$. We use the halo occupation distribution framework to demonstrate that the observed strong clustering of DRGs is not consistent with standard models of galaxy clustering, confirming previous suggestions that were based on smaller samples. Inaccurate photometric redshifts could artificially enhance the observed clustering, however significant systematic redshift errors would be required to bring the measurements into agreement with the models. Another possibility is that the underlying assumption that galaxies interact with their large-scale environment only through halo mass is not valid, and that other factors drive the evolution of the oldest, most massive galaxies at $z \sim 2$.' author: - 'Ryan F. Quadri, Rik J. Williams, Kyoung-Soo Lee, Marijn Franx, Pieter van Dokkum Gabriel B. Brammer' title: A Confirmation of the Strong Clustering of Distant Red Galaxies at $2 < z <3$ --- Introduction {#sec:introduction} ============ Thus far, the precise measurements of galaxy clustering at $z \gtrsim 2$ that are necessary for meaningful physical interpretation have only been possible for the relatively blue, star-forming galaxies that dominate optical surveys [@adelberger05; @lee06; @ouchi05]. However the most massive galaxies at these redshifts tend to be faint in the optical, and are more appropriately selected in the near-infrared [e.g. @vandokkum06]. In particular, galaxies meeting the $J-K > 2.3$ criterion for distant red galaxies [DRGs; @franx03] have been shown to dominate the high-mass end of the galaxy mass function [@vandokkum06; @marchesini07a; @rudnick06]. Measurements of DRG clustering have been severely hampered by small fields and by the near-complete reliance on photometric redshifts. Using the largest DRG sample then available, @quadri07a confirmed the very strong clustering found by previous authors [@daddi03; @grazian06]. They also showed that strong clustering implies that DRGs occupy dark matter halos with $M \gtrsim 10^{13} \rm{M_\odot}$. However, the number density of these dark matter halos is at least an order of magnitude smaller than the number density of DRGs, suggesting that the DRG clustering measurements are incompatible with models of dark matter clustering. Here we use a $\sim$0.65$\rm{deg}^2$ field from the UKIRT Infrared Deep Sky Survey (UKIDSS), which is $\sim$8 times larger than the area used by @quadri07a but with similar NIR depth, to determine whether the strong observed clustering of DRGs found in previous studies was an artifact due to limited field sizes or whether some other explanation must be found. We use $(\Omega_M, \Omega_\Lambda, \sigma_8, h) = (0.3, 0.7, 0.9, 0.7)$. 
Small changes in these parameters do not affect our basic conclusions. Magnitudes are given in the Vega system, except where noted. Data {#sec:data} ==== The UKIDSS project covers different areas to different depths; here we make use of the deepest UKIDSS dataset, known as the Ultra Deep Survey (UDS). We use the UDS Data Release 1 images [@warren07], which reach 5 $\sigma$ point-source depths of $J \sim 23$ and $K \sim 21.6$. We note that @foucaud07 used the UDS Early Data Release to study the clustering of DRGs down to $K \sim 19$; however, such bright DRGs lie primarily at $z < 2$ and may not be directly relevant to the galaxies that are the subject of this work. Most of the UDS field has coverage in the optical bands from the Subaru-XMM Deep Survey (SXDS)[^1]. We use the beta-release of the $BRi'z'$ images, which reach depths of $\sim$25.3–27.5 (AB magnitudes). Finally, we combine these data with $3.6\mu$m and $4.5\mu$m imaging from the *Spitzer* Wide-Area Infrared Extragalactic Survey [SWIRE; @lonsdale03]. The procedures used to create a multicolor $K$-selected catalog from these imaging data are detailed by @williams08. Photometric redshifts {#sec:zphots} ===================== Accurate redshift information is fundamentally important for clustering studies. Photometric redshifts were calculated using the EAZY software [@brammer08][^2]. EAZY fits linear combinations of galaxy templates to the observed photometry using a Bayesian prior for the distribution of redshifts as a function of apparent magnitude. The template set was carefully chosen to provide high-quality photometric redshifts for $K$-selected galaxies in several current deep surveys. To assess the accuracy of our photometric redshifts we compare to a sample of 119 spectroscopic redshifts available in this field [for further description, see @williams08]. The normalized median absolute deviation of $\Delta z/(1+z)$ is $\sigma_{\rm NMAD} = 0.033$. As nearly all spectroscopic redshifts in this field are at $z<1.5$, and none are for DRGs, we use public data on the well-studied Chandra Deep Field-South (CDF-S) to obtain an estimate of the expected photometric redshift accuracy for the galaxies currently under consideration. We use the photometric catalog of @wuyts08, and exclude the $UVH$ filters in order to match the filter set available in the UDS. Restricting the sample to the DRGs in CDF-S with spectroscopic redshifts, we find $\sigma_{\rm NMAD} = 0.065$, with no apparent dependence on DRG redshift and negligible systematic offsets. This suggests that the DRG photometric redshifts in our study also have an accuracy of $\sigma_{\rm NMAD} \sim 0.06-0.07$, although this can only be taken as a rough indication. The overall redshift distribution of the sample is used to deproject the observed angular correlation function in order to infer the spatial correlation function. This distribution is often estimated simply using a histogram of the photometric redshifts, without taking into account the redshift uncertainties. Because we select DRGs in a specific photometric redshift range, it is expected that random errors will cause galaxies to scatter into the sample from both lower and higher redshifts. It is also expected that galaxies within the redshift selection window will scatter out; however, this will not affect the clustering so long as such galaxies are drawn randomly from the sample (systematic errors which cause galaxies to scatter into or out of the selection window are much more problematic). 
To estimate the *intrinsic* redshift distribution of the galaxies in our sample, we use Monte Carlo simulations in which the observed photometry is perturbed according to the photometric uncertainties [see also @quadri07a]. This procedure broadens the distribution relative to what would be inferred from a histogram of photometric redshifts, resulting in a higher correlation length; we return to this point in § \[sec:discussion\]. Measurements of the DRG correlation function {#sec:wtheta} ============================================ For a detailed description of the techniques used to perform correlation function measurements, see @quadri07a. Here we briefly describe the method, and point out differences between the method used here and that of the previous paper. The angular two-point correlation function is calculated using the @landy93 estimator. Uncertainties are estimated using bootstrap resampling, which yields uncertainties that are significantly larger than those expected from Poisson statistics. We follow the method of @adelberger05 to correct for the integral constraint, although the correction is small given the large size of the field. A potentially serious problem for measurements of the auto-correlation function arises from variable sensitivity or calibration errors across the images. We measured the correlation function of stars in order to test whether such variations induce artificial clustering, obtaining the expected result that stars are unclustered. We have also measured the clustering of DRGs selected from catalogs in which the $J$ and $K$ zeropoints are varied within their uncertainties independently for each of the pointings that make up the UDS mosaic. Although it is difficult to rule out significant errors in the correlation function arising from such variations, it appears that they are not a dominant source of error. The redshift distribution of galaxies selected using the DRG color criterion is rather wide, extending from $z \sim 1$ to at least $z \sim 3.5$ [e.g. @grazian06; @quadri07b]. We therefore impose the additional criterion $2 < z_{\rm phot} < 3$. Over this redshift range, the DRG criterion effectively selects galaxies with red rest-frame optical colors [@quadri07b]. We also limit the sample to objects with $K<21$, ensuring that the catalog is complete and that objects have a sufficient signal-to-noise ratio to calculate accurate photometric redshifts. We visually inspected each object in the sample, rejecting those that appeared to be artifacts in the images or were otherwise deemed to have unreliable photometry. Objects were also rejected if the templates used for photometric redshifts gave poor fits. The final sample consists of 1528 DRGs with $K<21$. Figure \[fig:acf\] shows the angular correlation function $w(\theta)$ of DRGs. Our measurements confirm the strong angular clustering previously reported by @grazian06 and @quadri07a, but with reduced uncertainties and extending to much larger angular scales. Immediately apparent is the departure of $w(\theta)$ from a power law on small scales. To illustrate the significance of this excess clustering signal, and to show how the results differ from what would be measured using a smaller field, we fit a power law $A_w\theta^{-\beta}$ over the range $2\arcsec < \theta <40\arcsec$ and over $40\arcsec < \theta <500\arcsec$. The former fit yields $(A_w,\beta) = (12 \pm 8, 1.2 \pm 0.3)$ while the latter fit yields $(A_w,\beta) = (1.1 \pm 0.8, 0.47 \pm 0.14)$. 
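As an illustrative aside (this sketch is ours and not part of the original analysis), a fit of this kind can be carried out with standard tools. The binned separations, $w(\theta)$ values, and errors below are placeholder numbers standing in for the Landy-Szalay estimates and bootstrap uncertainties described above; the covariance matrix returned by `curve_fit` is what the remark on parameter covariance that follows refers to.

```python
import numpy as np
from scipy.optimize import curve_fit

def w_model(theta, A_w, beta):
    # angular correlation function model w(theta) = A_w * theta**(-beta)
    return A_w * theta**(-beta)

# Placeholder inputs standing in for the binned Landy-Szalay estimates of
# w(theta) (theta in arcsec) and their bootstrap uncertainties.
theta = np.array([60.0, 90.0, 140.0, 210.0, 320.0, 480.0])
w_obs = np.array([0.16, 0.13, 0.11, 0.09, 0.07, 0.06])
w_err = np.array([0.05, 0.04, 0.03, 0.03, 0.02, 0.02])

popt, pcov = curve_fit(w_model, theta, w_obs, p0=(1.0, 0.5),
                       sigma=w_err, absolute_sigma=True)
A_w, beta = popt
rho = pcov[0, 1] / np.sqrt(pcov[0, 0] * pcov[1, 1])
print("A_w  = %.2f +/- %.2f" % (A_w, np.sqrt(pcov[0, 0])))
print("beta = %.2f +/- %.2f" % (beta, np.sqrt(pcov[1, 1])))
print("correlation(A_w, beta) = %.2f" % rho)
```

A full treatment would of course propagate the bootstrap covariance between bins rather than treating the bins as independent, but the sketch suffices to show where the quoted parameter uncertainties and their covariance come from.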
Note that there is significant covariance between the two fitting parameters, so the uncertainties should be taken as representative only. As noted by previous authors [@adelberger05; @lee06; @quadri07a], the large-scale fit is preferred when comparing galaxy samples, as it better probes the clustering of dark matter halos that host the galaxies. The bottom panel of Figure \[fig:acf\] shows the residuals $w(\theta)$ from the large-scale power law. A small-scale excess has also been observed in large samples of optically selected galaxies at high redshift [e.g. @ouchi05; @lee06], and for various galaxy samples at $0 \lesssim z \lesssim 2$ [e.g. @zehavi05]. The spatial correlation length $r_0$ can be estimated from $w(\theta)$ using the Limber projection; see e.g. @quadri07a for further details. The result is $r_0 = 10.6 \pm 1.6 h^{-1} \textrm{Mpc}$, where we have used the large-scale fitting parameters and the uncertainty is estimated using the bootstrap simulations described above. This uncertainty accounts for the uncertainty in $\gamma$, but not in the redshift distribution $N(z)$. We use the method of @hamana04 to estimate the linear bias directly from $w(\theta)$, finding $b=5.0 \pm 0.4$. The Halo Model {#sec:hod} ============== @quadri07a found that DRGs significantly outnumber the dark matter halos that are clustered strongly enough to host them. Models of the halo occupation distribution (HOD) naturally account for this type of discrepancy by allowing halos to host multiple galaxies, but @quadri07a suggest that the large numbers of galaxies that would be required to share halos can be ruled out by the observed small-scale clustering. Given that the DRG clustering results presented in this work are largely consistent with those from previous works, it is expected that the observations are still in conflict with the models. In this section we use an HOD model to show this explicitly. In the HOD framework, the galaxy correlation function is understood to be the sum of two components. On large scales the correlation function follows that of the host dark matter halos. On small scales there is an additional contribution from galaxy pairs within individual halos. We follow the modeling procedures described by @lee06, to which we refer the reader for details. Briefly, we calculate the number density and bias of halos using the prescriptions of @sheth99 [@sheth01]. The halo occupation number, which describes the number of galaxies per halo, is parameterized as $N_{occ}(M_h) = (M_h/M_1)^\alpha$ for halo mass $M_h>M_{min}$ and $N_{occ}(M_h) = 0$ otherwise. Thus there are three free parameters, $M_1$, $M_{min}$, and $\alpha$. If the number density $n_g$ of galaxies is known, then one of these parameters can be fixed for assumed values of the other two parameters. We obtain a rough estimate of $n_g \approx 6.5 \times 10^{-4} h^{3} \rm{Mpc}^{-3}$ using the effective volume probed by our sample, which is determined using the redshift selection function described in § \[sec:zphots\]. This density is in reasonable agreement with an independent estimate of $n_g \approx (5.0 \pm 0.9) \times 10^{-4} h^{3} \rm{Mpc}^{-3}$, which is based on the luminosity functions of @marchesini07a. Figure \[fig:hod\] shows two models chosen according to $\chi^2$ fits. The lower solid line shows a model that is fit over $2\arcsec < \theta < 500\arcsec$. While this model provides an adequate fit on smaller angular scales, it systematically under-predicts the clustering on larger scales. 
To better illustrate the nature of the disagreement, it is useful to inspect the upper solid line, which shows a model that is fit only over $50\arcsec < \theta < 500\arcsec$. While this model provides an adequate fit at larger scales, the small-scale fit is unacceptable. As already noted, the fundamental reason that no model can fit the data is that the strong clustering on large scales implies that DRGs must occupy very massive halos. But DRGs outnumber these halos by a factor of $\sim$20, which is only possible if individual halos host a large number of DRGs. This would mean that each DRG has a high probability of having several neighbors in the immediate vicinity, leading to a very prominent small-scale excess in $w(\theta)$.[^3] As can be seen, the observed excess is much smaller than expected. Discussion {#sec:discussion} ========== We have used the UKIDSS-UDS to perform the first precise measurement of the clustering of red, $K$-selected galaxies at $2 < z_{\rm phot} < 3$. These DRGs show strong angular clustering that is well-described by a power law, but with an excess at small scales. We use photometric redshifts to deproject the angular clustering, finding the spatial correlation length $r_0 = 10.6 \pm 1.6 h^{-1} \textrm{Mpc}$. This value is comparable to that measured for luminous red galaxies in the local universe [@zehavi05], however DRGs are significantly more numerous. We show that standard models of halo occupation statistics are unable to simultaneously reproduce the observed clustering and number density, because DRGs outnumber their inferred host dark matter halos by too large a margin. The most obvious explanation is that we have used the incorrect redshift distribution in deprojecting the angular correlation function. A narrower distribution would reduce the correlation length (while a moderate shift in the overall distribution makes a relatively smaller difference). However, a narrower redshift distribution would also decrease the effective volume probed by our sample, thereby increasing $n_g$. We illustrate these effects in Figure \[fig:r0\_n\], which shows the observed $r_0$ and $n_g$ compared to the range of values for a typical HOD model. It also shows how estimating $N(z)$ directly from the unperturbed photometric redshifts — which, as mentioned in § \[sec:zphots\], represents the extreme assumption of no random photometric redshift errors — affects the results. It may still be the case that we are subject to *systematic* redshift errors, but we also note that a significantly narrower $N(z)$ would adversely affect the reasonably good agreement between our estimate of $n_g$ and the luminosity functions derived by @marchesini07a. Finally, we have verified that our basic results hold when using a different photometric redshift code [HYPERZ @bolzonella00] and with a different template set [@bruzual03]. Given the apparently high quality of our photometric redshifts, as well as the consistency with previous results for the clustering of DRGs, it is worth considering alternative explanations. One possibility is that current HOD models are too simplistic, and that massive red galaxies occupy halos in unexpected ways. The fundamental assumption underlying these models is that galaxy observables depend on halo mass, and not on the larger-scale environment or on halo properties such as structure or age. But environment may play a role, for instance via its effect on mass accretion rates [@scannapieco03; @furlanetto06]. 
Additionally, halo clustering varies with several halo properties, even at fixed mass; this phenomenon is generically referred to as “assembly bias” [e.g. @gao07]. To the extent that these properties affect galaxy observables, they will also affect galaxy clustering measurements. However, we note that current estimates of the strength of the assembly bias indicate an effect that is too small to account for the observed discrepancies. As an example, @gao07 have found that, at $z \sim 2-3$ and in the relevant halo mass range, halos in the upper 20$\%$ tail of the distribution of halo spins have a $\sim$20$\%$ larger bias than the mean value. It might then be supposed that DRGs occupy less massive and more numerous halos with higher spin. But the increased number density of these low-mass halos is approximately cancelled by the requirement that DRGs can only occupy 20$\%$ of them, so the discrepancy in number densities is unchanged. The existence of massive red galaxies at $z \gtrsim 2$ was not predicted by models of galaxy formation, although progress has been made on this front [e.g. @croton06; @delucia07 but see @marchesini07b]. The results shown here suggest that the conflict between models and observations extends to the relationship between galaxies and dark matter halos. However, clustering measurements are sensitive to a number of systematic effects, so our conclusions remain tentative. The most obvious source of error comes from our use of photometric redshifts. Ongoing medium-band NIR observations will significantly reduce the photometric redshift uncertainties [@vandokkum08], and in the longer term multi-object NIR spectrographs will also improve the situation. If future work confirms our results, clustering measurements such as those presented here may provide a new way to understand the detailed relationship between galaxy and halo properties. We thank Danilo Marchesini, Qi Guo, Simon White, and Chuck Steidel for useful discussions, as well as the anonymous referee for a constructive report. This work is based on data made public by the UKIDSS, SXDS, and SWIRE teams. R.F.Q. is supported by a NOVA Postdoctoral Fellowship. Support from National Science Foundation grant CAREER AST-0449678 is also gratefully acknowledged. Adelberger, K. L., Steidel, C. C., Pettini, M., Shapley, A. E., Reddy, N. A., & Erb, D. K. 2005, , 619, 697 Bolzonella, M., Miralles, J.-M., & Pell[ó]{}, R. 2000, , 363, 476 Brammer, G., van Dokkum, P., & Coppi, P. 2008, , submitted Bruzual, G., & Charlot, S. 2003, , 344, 1000 Bullock, J. S., Wechsler, R. H., & Somerville, R. S. 2002, , 329, 246 Croton, D. J., et al. 2006, , 365, 11 De Lucia, G., & Blaizot, J. 2007, , 375, 2 Daddi, E., et al. 2003, , 588, 50 Foucaud, S., et al. 2007, , 376, L20 Franx, M., et al. 2003, , 587, L79 Furlanetto, S. R., & Kamionkowski, M. 2006, , 366, 529 Gao, L., & White, S. D. M. 2007, , 377, L5 Grazian, A., et al. 2006b, , 453, 507 Hamana, T., Ouchi, M., Shimasaku, K., Kayo, I., & Suto, Y. 2004, , 347, 813 Kravtsov, A. V., et al. 2004, , 609, 35 Landy, S. D., & Szalay, A. S. 1993, , 412, 64 Lee, K., Giavalisco, M., Gnedin, O. Y., Somerville, R. S., Ferguson, H. C., Dickinson, M., & Ouchi, M. 2006, , 642, 63 Lonsdale, C. J., et al. 2003, , 115, 897 Marchesini, D., et al. 2007a, , 656, 42 Marchesini, D., & van Dokkum, P. G. 2007b, , 663, L89 Ouchi, M., et al. 2005, , 635, L117 Quadri, R., et al. 2007, , 654, 138 Quadri, R., et al. 2007, , 134, 1103 Roche, N., Eales, S. A., Hippelein, H., Willott, C. J. 1999, , 306, 538 Rudnick, G., et al. 
2006, 650, 624 Scannapieco, E., & Thacker, R. J. 2003, , 590, L69 Sheth, R. K., & Tormen, G. 1999, , 308, 119 Sheth, R. K., Mo, H. J., & Tormen, G. 2001, , 323, 1 van Dokkum, P. G., et al. 2006, , 638, L59 van Dokkum, P., et al. 2008, NOAO/NSO Newsl. 94, http://www.noao.edu/noao/noaonews/jun08/pdf/94news.pdf Warren, S. J., et al. 2007, 375, 213 Williams, R. J., Quadri, R. F., Franx, M., van Dokkum, P., Labbé, I. 2008, , submitted Wuyts, S., Labbé, I., Förster Schreiber, N. M., Franx, M., Rudnick, G., Brammer, G. B. 2008, , in press (arXiv:0804.0615) Zehavi, I., et al. 2005, , 621, 22 Zheng, Z., et al. 2005, , 633, 791 [^1]: http://www.naoj.org/Science/SubaruProject/SXDS/ [^2]: http://www.astro.yale.edu/eazy/ [^3]: Specifically, the amplitude of the one-halo term depends on the second factorial moment of $N_{occ}(M_h)$, which we parameterize following @bullock02. Note that this particular choice does not affect the main result of this section because, for the high $\langle N_{occ} \rangle$ value found here, it is generically expected that $N_{occ}(M_h)$ follows a Poisson distribution for a fixed $M_h$ [e.g. @zheng05]; this fully specifies the moments.
{ "pile_set_name": "ArXiv" }
--- abstract: 'The possibility of noncommutative topological gravity arising in the same manner as Yang-Mills theory is explored. We use the Seiberg-Witten map to construct such a theory based on an SL(2,[**C**]{}) complex connection, from which the Euler characteristic and the signature invariant are obtained. This gives us a way towards the description of noncommutative gravitational instantons as well as noncommutative local gravitational anomalies.' author: - 'H. García-Compeán' - 'O. Obregón' - 'C. Ramírez' - 'M. Sabido' title: Noncommutative Topological Theories of Gravity --- Introduction ============ The idea of the noncommutative nature of space-time coordinates is quite old [@ref1]. Many authors have extensively studied it from the mathematical [@ref2] as well as field theoretical points of view (for a review, see for instance [@douglas; @szabo]). Recently, noncommutative gauge theory has attracted a lot of attention, especially in connection with M(atrix) [@connes] and string theory [@ref3]. In particular, Seiberg and Witten [@ref3] have found noncommutativity in the description of the low energy excitations of open strings (possibly attached to D-branes) in the presence of a constant NS background $B$-field. Moreover, they have observed that, depending on the regularization scheme of the two dimensional correlation functions, Pauli-Villars or point splitting, ordinary and noncommutative gauge fields can be induced from the same worldsheet action. Thus, the independence of the regularization scheme tells us that the resulting theory of noncommutative gauge fields, deformed by the Moyal star-product (or the Kontsevich star product for systems with general covariance), is related to a gauge theory in terms of usual commutative fields. This relation is the so-called Seiberg-Witten map. In string theory, gravity and gauge theories are realized in very different ways. The gravitational interaction is associated with a massless mode of closed strings, while Yang-Mills theories are more naturally described in open strings or in heterotic string theory. Furthermore, as mentioned, string theory predicts a noncommutative effective Yang-Mills theory. Thus the question emerges whether a noncommutative description of gravity would arise from it. This is a difficult question and it will not be addressed here. However, in a recent paper [@ardalan], gravitation on noncommutative D-branes has been discussed. In this context, Chamseddine has recently made several proposals for noncommutative formulations of Einstein’s gravity [@cham; @cham1; @cham2], where a Moyal deformation is performed. Moreover, in [@cham1; @cham2], he gives a Seiberg-Witten map for the vierbein and the Lorentz connection, which is obtained starting from the gauge transformations, of $SO(4,1)$ in the first work, and of $U(2,2)$ in the second one. However, in both cases the actions are not invariant under the full noncommutative transformations. Namely, in [@cham1] the action does not have a definite noncommutative symmetry, and in [@cham2] the Seiberg-Witten map is obtained for $U(2,2)$, but the action is invariant under the subgroup $U(1,1) \times U(1,1).$ These actions deformed by the Moyal product, with a constant noncommutativity parameter, are not diffeomorphism invariant. However, as pointed out in these works [@cham1; @cham2], they could be made diffeomorphism invariant by substituting the Moyal $*_M$-product by the Kontsevich $*_K$-product. 
For other recent proposals of noncommutative gravity, see [@moffat; @chandia; @nishino; @nair; @klemm; @dosdim]. Further, as shown in [@wess1; @wess2; @wess3; @wess4; @wess5], starting from the Seiberg-Witten map, noncommutative gauge theories with matter fields based on any gauge group can be constructed. In this way, a proposal for the noncommutative standard model based on the gauge group product $SU(3) \times SU(2) \times U(1)$ has been put forward [@wess6]. In these developments, the key argument is that no additional degrees of freedom have to be introduced in order to formulate noncommutative gauge theories. That is, although the explicit symmetry of the noncommutative action corresponds to the enveloping algebra of the limiting commutative symmetry group, it is also invariant with respect to this commutative group, a fact made manifest by the Seiberg-Witten map. In this paper, following these results, we present a first step towards a noncommutative theory of gravity in four dimensions, fully symmetric under the noncommutative symmetry. We make a proposal for a noncommutative topological quadratic theory of gravity from which the corresponding topological invariants of Riemannian manifolds, the Euler characteristic and the signature, can be obtained. These invariants should classify gravitational instantons. Further, in this context of noncommutative gravity, we explore the notion of gravitational instanton. Other possible global aspects of noncommutative gravity like gravitational anomalies will be briefly addressed as well. The paper is organized as follows. In section 2 we briefly review noncommutative gauge theories. In section 3 the main features of topological quadratic gravity are introduced, for the $SO(3,1)$ gauge group, by means of the complex formulation based on the self-dual topological quadratic gravity. In section 4 we present noncommutative topological gravity, with explicit results up to order $\theta^{3}$. In section 5, based on the study of the global properties of the noncommutative version of the Lorentz and diffeomorphism groups, we explore the possibility of a definition of noncommutative gravitational instantons, as well as local gravitational anomalies for a theory of gravity. Finally, section 6 contains our conclusions. Noncommutative Gauge Symmetry and the Seiberg-Witten Map ======================================================== We start this section with conventions and properties of noncommutative spaces for future reference. For a recent review see e.g. [@zachos]. Noncommutative spaces can be understood as generalizations of the usual quantum mechanical commutation relations, by the introduction of a linear operator algebra ${\cal A}$, with a noncommutative associative product, $$[ \widehat x^{\mu},\widehat x^{\nu} ] =i\theta ^{\mu \nu}, \label{comm}$$ where $\widehat x^{\mu}$ are linear operators acting on the Hilbert space $L^2({\bf R}^{n})$ and $\theta^{\mu \nu}=-\theta ^{\nu \mu}$ are real numbers. The Weyl-Wigner-Moyal correspondence establishes (under certain conditions) an isomorphic relation between ${\cal A}$ and the algebra of functions on ${\bf R}^{n}$, the latter with an associative and noncommutative $\star$-product, the Moyal product, given by $$f(x)\star g(x)\equiv \left[ \exp \bigg(\frac{i}{2}\theta^{\mu \nu}{\frac{\partial }{\partial \varepsilon^{\mu}}}{\frac{\partial }{\partial \eta^{\nu}}} \bigg) f(x+\varepsilon )g(x+\eta )\right] _{\varepsilon =\eta =0}\ \ . 
\label{moyal}$$ In order to avoid causality problems we will take $\theta^{0\nu} = 0.$ Due to the fact that we will be working with nonabelian groups, we must include also matrix multiplication, so a $\ast$-product will be used as the matrix multiplication with $\star$-product. Inside integrals, this product has the property Tr$\int f_{1}\ast f_{2}\ast f_{3}\ast \cdots \ast f_{n} = {\rm Tr}\int f_{n}\ast f_{1}\ast f_{2}\ast f_{3}\ast \cdots \ast f_{n-1}$. In particular, the trace of the integral of the product of two functions has the property that Tr$\int f_{1}\ast f_{2}={\rm Tr}\int f_{1} f_{2}$. Let us consider a gauge theory with a hermitian connection, invariant under a symmetry Lie group G, with gauge fields $A_\mu$, $$\delta_{\lambda }{A}_{\mu} =\partial _{\mu}{\lambda}+i\left[\lambda,{A}_{\mu}\right], \label{tress}$$ where $\lambda=\lambda^iT_i$, and $T_i$ are the generators of the Lie algebra ${\cal G}$ of the group G, in the adjoint representation. These transformations are generalized for the noncommutative theory as, $$\delta_{\lambda }\widehat{A}_{\mu} =\partial _{\mu}\widehat{\Lambda }+i\left[\widehat\Lambda\stackrel{\ast}{,}\widehat{A}_{\mu}\right], \label{trafoanc}$$ where the noncommutative parameters $\widehat\Lambda$ have some dependence on $\lambda$ and the connection $A$. The commutators $\left[A\stackrel{\ast}{,}B\right]\equiv A\ast B-B\ast A$ have the correct derivative properties when acting on products of noncommutative fields. Due to noncommutativity, commutators like $\left[\widehat\Lambda\stackrel{\ast}{,}\widehat{A}_{\mu}\right]$ take values in the enveloping algebra of ${\cal G}$ in the adjoint representation, ${\cal U}({\cal G},{\rm ad})$. Therefore, $\widehat\Lambda$ and the gauge fields $\widehat{A}_\mu$ will also take values in this algebra. In general, for some representation $R$, we will denote ${\cal U} ({\cal G},R)$ the corresponding section of the enveloping algebra ${\cal U}({\cal G})$. Let us write for instance $\widehat\Lambda=\widehat\Lambda^I T_I$ and $\widehat A=\widehat{A}^I T_I$, then, $$\left[\widehat\Lambda\stackrel{\ast}{,}\widehat{A}_{\mu}\right]= \left\{\widehat{\Lambda}^{I}\stackrel{\ast}{,}\widehat{A}_{\mu}^{J}\right\} \left[ T_{I},T_{J}\right] +\left[ \widehat\Lambda^{I}\stackrel{\ast}{,}\widehat{A}_{\mu}^{J}\right] \left\{ T_{I},T_{J}\right\},$$ where $\{A\stackrel{\ast}{,}B\}\equiv A\ast B+B\ast A$ is the noncommutative anticommutator. Thus all the products of the generators $T_I$ will be needed in order to close the algebra ${\cal U}({\cal G},{\rm ad})$. Its structure can be obtained by successive computation of commutators and anticommutators starting from the generators of ${\cal G}$, until it closes, $$\left[ T_{I},T_{J}\right]=i{f_{IJ}}^KT_{K}, \ \ \ \ \left\{ T^{I},T^{J}\right\} = {d_{IJ}}^KT_{K}. \nonumber$$ The field strength is defined as $\widehat{F}_{\mu \nu} =\partial _{\mu}\widehat{A}_{\nu}- \partial _{\nu}\widehat{A}_{\mu}-i [\widehat{A}_{\mu}\stackrel{\ast}{,}\widehat{A}_{\nu}]$, hence it takes also values in ${\cal U}({\cal G},{\rm ad})$. From Eq. (\[trafoanc\]) it turns out that, $$\delta_{\lambda }\widehat{F}_{\mu \nu} =i\left( \widehat{\Lambda }% \ast \widehat{F}_{\mu \nu}-\widehat{F}_{\mu \nu}\ast\widehat{\Lambda }\right). \label{trafoefenc}$$ We see that these transformation rules can be obtained from the commutative ones, just by replacing the ordinary product of smooth functions by the Moyal product, with a suitable product ordering. This allows constructing in simple way invariant quantities. 
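To make the $\star$-product in Eq. (\[moyal\]) concrete, the following fragment is a minimal Python/SymPy sketch (ours, not part of the original computation) of the Moyal product on ${\bf R}^2$, truncated at a finite order in $\theta$, where $\theta^{12}=-\theta^{21}=\theta$ is assumed to be the only independent component. It also checks that the $\star$-commutator of the coordinates reproduces Eq. (\[comm\]).

```python
import sympy as sp

# Assumption: two coordinates x1, x2 and a single noncommutativity
# parameter theta = theta^{12}; this is only an illustration of the product.
x1, x2, theta = sp.symbols('x1 x2 theta', real=True)

def dmulti(expr, k1, k2):
    """Differentiate expr k1 times w.r.t. x1 and k2 times w.r.t. x2."""
    for _ in range(k1):
        expr = sp.diff(expr, x1)
    for _ in range(k2):
        expr = sp.diff(expr, x2)
    return expr

def moyal_star(f, g, order=2):
    """Moyal product f * g truncated at the given order in theta.

    The n-th order term is (i*theta/2)^n / n! times the n-fold
    antisymmetric bidifferential operator acting on f and g.
    """
    result = sp.Integer(0)
    for n in range(order + 1):
        term = sp.Integer(0)
        for k in range(n + 1):
            # k factors of type (1,2) and n-k of type (2,1), with sign (-1)^(n-k)
            term += sp.binomial(n, k) * (-1) ** (n - k) \
                * dmulti(f, k, n - k) * dmulti(g, n - k, k)
        result += (sp.I * theta / 2) ** n / sp.factorial(n) * term
    return sp.expand(result)

# Star commutator of the coordinates: [x1, x2]_* = i*theta, as in Eq. (comm).
print(sp.simplify(moyal_star(x1, x2) - moyal_star(x2, x1)))   # -> I*theta
```

For matrix-valued fields, as used below, the same expansion is applied entrywise and combined with matrix multiplication, which is what the $\ast$-product above denotes.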
If the components of the noncommutativity parameter $\theta$ are constant, then Lorentz invariance is spoiled. In order to recover it [@cham1; @cham2; @wess3] one should change the Moyal star product by the Kontsevich star product $*_K$ [@kontsevich]. However, as a result of the diffeomorphism invariance, for an even dimensional (symplectic) spacetime $X$, there exists a local coordinate system (which coincides with Darboux’s coordinate system) in which $\theta^{\mu\nu}$ is constant. Therefore, without loss of generality, the Kontsevich product can be reduced to the Moyal one, which will be used from now on. The fact that the observed world is commutative, means that there must be possible to obtain it from the noncommutative one by taking the limit $\theta\rightarrow 0$. Thus the noncommutative fields $\widehat A$ are given by a power series expansion on $\theta$, starting from the commutative ones $A$, $$\widehat{A}=A+\theta^{\mu\nu}A^{(1)}_{\mu\nu}+ \theta^{\mu\nu}\theta^{\rho\sigma}A^{(2)}_{\mu\nu\rho\sigma}+\cdots \ \ . \label{camposnc}$$ The coefficients of this expansion are determined by the Seiberg-Witten map, which states that the symmetry transformations of (\[camposnc\]), given by (\[trafoanc\]), are induced by the symmetry transformations of the commutative fields (\[tress\]). In order that these transformations be consistent, the transformation parameter $\widehat\Lambda$ must satisfy [@wess2], $$\delta_\lambda\widehat\Lambda(\eta)-\delta_\eta\widehat\Lambda(\lambda)- i[\widehat\Lambda(\lambda)\stackrel{*}{,}\widehat\Lambda(\eta)]= \widehat\Lambda(-i[\lambda,\eta]).\label{parametros}$$ Similarly, the coefficients in Eq. (\[camposnc\]) are functions of the commutative fields and their derivatives, and are determined by the requirement that $\widehat A$ transforms as (\[trafoanc\]), [@wess5]. The fact that the noncommutative gauge fields take values in the enveloping algebra, has the consequence that they have a bigger number of components than the commutative ones, unless the enveloping algebra coincides with the Lie algebra of the commutative theory, as is the case of $U(N)$. However, the physical degrees of freedom of the noncommutative fields can be related one to one to the physical degrees of freedom of the commutative fields by the Seiberg-Witten map [@ref3], fact used in references [@wess1; @wess2; @wess3; @wess4; @wess5] to construct noncommutative gauge theories, in principle for any Lie group. In order to obtain the Seiberg-Witten map to first order, the noncommutative parameters are first obtained from Eq. (\[parametros\]) [@ref3; @wess1; @wess2; @wess3; @wess4; @wess5], $$\widehat{\Lambda }\left( \lambda ,A\right) =\lambda +\frac{1}{4}\theta ^{\mu \nu}\left\{ \partial _{\mu}\lambda ,A_{\nu}\right\} +{\cal O}\left( \theta ^{2}\right). \label{difeq}$$ Then, from Eqs. (\[trafoanc\]) and (\[camposnc\]), the following solution is given $$\widehat{A}_{\mu}\left( A\right) =A_{\mu}-\frac{1}{4}\theta ^{\rho \sigma}\left\{ A_{\rho},\partial _{\sigma}A_{\mu}+F_{\sigma \mu}\right\} +{\cal O}\left( \theta ^{2}\right) , \label{asw}$$ and then for the field strength it turns out that, $$\widehat{F}_{\mu \nu} =F_{\mu \nu}+\frac{1}{4}\theta ^{\rho \sigma}\bigg( 2\left\{ F_{\mu \rho},F_{\nu \sigma}\right\} -\left\{ A_{\rho},D_{\sigma}F_{\mu \nu}+\partial _{\sigma}F_{\mu \nu}\right\} \bigg) +{\cal O}\left( \theta ^{2}\right). \label{swf}$$ The higher coefficients in Eq. 
(\[camposnc\]) can be obtained from the observation that the Seiberg-Witten map preserves the operations of the commutative function algebra, hence the following differential equation can be written [@ref3], $$\delta\theta^{\mu\nu}\frac{\partial}{\partial\theta^{\mu\nu}}\widehat A(\theta)= \delta\theta^{\mu\nu}\widehat{A^{(1)}_{\mu\nu}}(\theta),\label{e}$$ where $\widehat{A^{(1)}_{\mu\nu}}$ is obtained from ${A^{(1)}_{\mu\nu}}$ in Eq. (\[camposnc\]), by substituting the commutative fields by the noncommutative ones under the $\ast$-product. Let us take the generators $T^i$ of the Lie algebra ${\cal G}$ to be hermitian, then the generators $T^I$ of the corresponding enveloping algebra can be chosen to be also hermitian, for instance if they are given by the symmetrized products $:T^{i_1}T^{i_2}\cdots T^{i_n}:\ $. Further, the noncommutative transformation parameters $\widehat\Lambda(\lambda,A)$ are functions, whose arguments are matrices. Let us now substitute the matrix products inside $\widehat\Lambda(\lambda,A)$, by $MN\rightarrow \frac{1}{2}\{M,N\}-\frac{i}{2}(i[M,N])$, for any two matrices $M$ and $N$. Hence $\widehat\Lambda(\lambda,A)$ can be understood as a function whose nonlinear part of depends polynomially, with complex numerical coefficients, on anticommutators $\{\cdot,\cdot\}$ and commutators $i[\cdot,\cdot]$, of $\lambda$, $A$, and their derivatives. With this understanding, we will continue to write it as $\widehat\Lambda(\lambda,A)$, and we have $$[\widehat\Lambda(\lambda,A)]^\dagger= \widehat\Lambda^\dagger(\lambda^\dagger,A^\dagger), \label{dagger}$$ where $\widehat\Lambda^\dagger$ is obtained by complex conjugating the mentioned numerical coefficients. Let us now consider the hermitian conjugation of the transformation law (\[tress\]), $(\delta_{\lambda }{A}_{\mu})^\dagger =\partial _{\mu}{\lambda}^\dagger+ i\left[\lambda^\dagger,{A}_{\mu}^\dagger\right]$. From it and (\[parametros\]), taking into account (\[dagger\]), we get, $$\delta_{\lambda^\dagger} \widehat\Lambda^\dagger(\lambda^\dagger,A^\dagger)- \delta_{\eta^\dagger}\widehat\Lambda^\dagger(\lambda^\dagger,A^\dagger)- i[\widehat\Lambda^\dagger(\lambda^\dagger,A^\dagger)\stackrel{*}{,} \widehat\Lambda^\dagger(\eta^\dagger,A^\dagger)]= \widehat\Lambda^\dagger(-i[\lambda^\dagger,\eta^\dagger],A^\dagger).\label{parametrosnc}$$ Comparing this equation with (\[parametros\]), with the mentioned convention, it can be seen that the noncommutative parameters satisfy $[\widehat\Lambda(\lambda,A)]^\dagger=\widehat\Lambda(\lambda^\dagger,A^\dagger)$. From the transformation law (\[trafoanc\]), a similar conclusion can be obtained for the noncommutative connection, $[\widehat{A}_{\mu}(A)]^\dagger=\widehat{A}_{\mu}(A^\dagger)$, as well for the field strength, $[\widehat{F}_{\mu\nu}(A)]^\dagger= \widehat{F}_{\mu\nu}(A^\dagger)$. By this means, if we have a group with real parameters and hermitian generators, with a hermitian connection, then the noncommutative connection and the noncommutative field strength will be also hermitian. Topological Gravity =================== In this section we shortly review four-dimensional topological gravity. 
We start from the following $SO(3,1)$ invariant action $$I_{TOP}=\frac{\Theta _{G}^{P}}{2\pi }{\rm Tr}\int_{X}R\wedge R+ i\frac{\Theta _{G}^{E}}{2\pi }{\rm Tr}\int_{X}R\wedge \widetilde{R}, \label{topo}$$ where $R$ is the field strength corresponding to an $SO(3,1)$ connection $\omega$, $$R_{\mu \nu }^{\ \ ab}=\partial _{\mu }\omega _{\nu }^{\ ab}-\partial _{\nu }\omega _{\mu }^{\ ab}+\omega_{\mu }^{\ ac}\omega_{\nu\, c}^{\ \ b}- \omega_{\mu }^{\ bc}\omega_{\nu\, c}^{\ \ a},\label{riemann}$$ $X$ is a four dimensional closed pseudo-Riemannian manifold, and $\widetilde{ R}_{\mu \nu }^{\ \ \ ab}=-\frac{i}{2}{\varepsilon^{ab}}_{cd}{R_{\mu \nu }}^{cd}$ is the dual with respect to the group. Here the coefficients are the gravitational analogs of the $\Theta$-vacuum in QCD [@ref7; @ref8; @ref9]. In this action, the connection satisfies the first Cartan structure equation, which relates it to a given tetrad. This action can be written as the integral of a divergence, and a variation of it with respect to the tetrad vanishes; hence it is metric independent, and therefore topological. The action (\[topo\]) arises naturally from the MacDowell-Mansouri type action [@ref10]. A similar construction can be done for $(2+1)$-dimensional Chern-Simons gravity [@ref11]. Keeping this philosophy in mind, action (\[topo\]) can be rewritten in terms of the self-dual and anti-self-dual parts, $R^\pm=\frac{1}{2}(R\pm\widetilde R)$, of the Riemann tensor as follows: $$I_{TOP}={\rm Tr}\int_{X}\left( \tau R^{+}\wedge R^{+}+\overline\tau R^{-}\wedge R^{-}\right) ={\rm Tr}\int_{X}\left( \tau R^{+}\wedge R^{+}+ \overline\tau\overline{R^{+}}\wedge\overline{R^{+}}\right),\label{topopm}$$ where $\tau=\left( \frac{1}{2\pi }\right) \left( \Theta _{G}^{E}+i \Theta _{G}^{P}\right)$, and the bar denotes complex conjugation. In local coordinates on $X$, this action can be rewritten as $$I_{TOP}=2\, {\rm Re}\,\tau\int_{X}d^{4}x\ \varepsilon ^{\mu \nu \rho \sigma } {R_{\mu \nu }^{+}}^{ab}R_{\rho \sigma ab}^{+}. \label{selftopo}$$ Therefore, it is enough to study the complex action, $$I=\int_{X}d^{4}x\ \varepsilon ^{\mu \nu \rho \sigma } {R_{\mu \nu }^{+}}^{ab}R_{\rho \sigma ab}^{+}. \label{selftopo1}$$ Further, the self-dual Riemann tensor satisfies ${\varepsilon^{ab}}_{cd}{R_{\mu \nu }^{+}}^{cd}= 2i{R_{\mu \nu}^{+}}^{ab} $. This tensor has the useful property that it can be written as a usual Riemann tensor, but in terms of the self-dual components of the spin connection, $\omega _{\mu }^{+ \ ab}=\frac{1}{2}\left( \omega _{\mu }^{ab}- \frac{i}{2}{\varepsilon^{ab}}_{cd}\omega _{\mu }^{cd}\right) $, as $$R_{\mu \nu }^{+ \ ab}=\partial _{\mu }\omega _{\nu }^{+ \ ab}-\partial _{\nu }\omega _{\mu }^{+ \ ab}+\omega_{\mu }^{+\ ac}\omega_{\nu\ c}^{+\ b}- \omega_{\mu }^{+\ bc}\omega_{\nu\ c}^{+\ a}.\label{riemannpm}$$ In this case, the action (\[selftopo\]) can be rewritten as $$I=\int_{X}d^{4}x\ \varepsilon ^{\mu \nu \rho \sigma }\left[ 2{R_{\mu \nu }}^{0i}(\omega^{+})R_{\rho \sigma 0i}(\omega^{+})+ {R_{\mu \nu }}^{ij}(\omega^{+})R_{\rho \sigma ij}(\omega^{+})\right].$$ Now, we define ${\omega_\mu}^i=i\omega_{\mu }^{+ 0i}$, from which we obtain, by means of the self-duality properties, $\omega_\mu^{+ij}=-{\varepsilon^{ij}}_k{\omega_{\mu}}^k$.
Then it turns out that $$\begin{aligned} R_{\mu \nu }^{\ \ 0i}(\omega ^{+}) &=&-i(\partial _{\mu }\omega _{\nu }^{i}-\partial _{\nu }\omega _{\mu }^{i}+2 \varepsilon _{jk}^{i}\omega _{\mu }^{j}\omega _{\nu }^{k})=-i{\cal R}_{\mu \nu }^{\ \ i}(\omega ) \\ R_{\mu \nu }^{\ \ ij}(\omega ^{+}) &=&\partial_\mu\omega_\nu^{+ij}-\partial_\nu\omega_\mu^{+ij}- 2(\omega_\mu^{\ i}\omega_\nu^{\ j}-\omega_\nu^{\ i}\omega_\mu^{\ j})= -\varepsilon^{ij}_{\ \ k}{\cal R}_{\mu \nu}^{\ \ k}(\omega ).\end{aligned}$$ This amounts to the decomposition $SO(3,1)=SL(2,{\bf C}) \times SL(2,{\bf C})$, such that $\omega_\mu^{\ i}$ is a complex $SL(2,{\bf C})$ connection. If we choose the algebra $s\ell(2,{\bf C})$ to satisfy $[T_i,T_j]=2i\varepsilon_{ij}^{\ \,k}T_k$ and Tr$(T_iT_j)=2\delta_{ij}$, then we can write $$I={\rm Tr}\int_{X}d^{4}x\ \varepsilon^{\mu \nu \rho \sigma} {\cal R}_{\mu \nu }(\omega){\cal R}_{\rho \sigma}(\omega), \label{accion2c}$$ where ${{\cal R}_{\mu\nu}}=\partial _{\mu }\omega _{\nu}-\partial _{\nu }\omega _{\mu }-i[\omega _{\mu},\omega _{\nu}]$ is the field strength. This action is invariant under the $SL(2,{\bf C})$ transformations $\delta_\lambda\omega_\mu=\partial_\mu\lambda+i[\lambda,\omega_\mu]$. In the case of a Riemannian manifold $X$, the signature and the Euler topological invariants of $X$ are the real and imaginary parts of (\[accion2c\]), $$\begin{aligned} \sigma(X)&=&-\frac{1}{24\pi^2}{\rm Re}\,{\rm Tr}\int_{X}d^{4}x\ \varepsilon^{\mu \nu \rho \sigma} {\cal R}_{\mu \nu }(\omega){\cal R}_{\rho \sigma}(\omega),\label{sigmac}\\ \chi(X)&=&\frac{1}{32\pi^2}{\rm Im}\,{\rm Tr}\int_{X}d^{4}x\ \varepsilon^{\mu \nu \rho \sigma} {\cal R}_{\mu \nu }(\omega){\cal R}_{\rho \sigma}(\omega).\label{eulerc}\end{aligned}$$ Noncommutative Topological Gravity ================================== We wish to have a noncommutative formulation of the $SO(3,1)$ action (\[topo\]). Its first term can be straightforwardly made noncommutative, in the same way as for usual Yang-Mills theory, $${\rm Tr}\int_{X}\widehat R\wedge \widehat R. \label{topop}$$ If the $SO(3,1)$ generators are chosen to be hermitian, for example in the spin $\frac{1}{2}$ representation given by $\gamma^{\mu\nu}$, then from the discussion at the end of the second section, it turns out that $\widehat R_{\mu\nu}$ is hermitian and consequently (\[topop\]) is real. Instead, for the second term of (\[topo\]) such an action cannot be written, because it involves the Levi-Civita symbol, an invariant Lorentz tensor which is not invariant under the full enveloping algebra. However, as mentioned at the end of the preceding section, this term can be obtained from Eq. (\[accion2c\]). Thus, in general we will consider as the noncommutative topological action of gravity the $SL(2,{\bf C})$ invariant action $$\widehat I={\rm Tr}\int_{X}d^{4}x\ \varepsilon^{\mu \nu \rho \sigma} \widehat{\cal R}_{\mu \nu }\widehat{\cal R}_{\rho \sigma}, \label{accion2cnc}$$ where ${\widehat{\cal R}_{\mu\nu}}=\partial _{\mu }\widehat\omega _{\nu}- \partial _{\nu }\widehat\omega _{\mu }-i [\widehat\omega _{\mu}\stackrel{\ast}{,}\widehat\omega _{\nu}]$ is the $SL(2,{\bf C})$ noncommutative field strength. This action does not depend on the metric of $X$.
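As a quick numerical aside (not part of the original text), the conventions chosen above, $[T_i,T_j]=2i\varepsilon_{ij}^{\ \,k}T_k$ and ${\rm Tr}(T_iT_j)=2\delta_{ij}$, are satisfied by the Pauli matrices, which are indeed the $SL(2,{\bf C})$ generators used for the explicit expansion in the next section. A minimal check in Python:

```python
import numpy as np

# Pauli matrices as candidate generators T_i in the conventions above.
sigma = [
    np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_1
    np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_2
    np.array([[1, 0], [0, -1]], dtype=complex),    # sigma_3
]

def eps(i, j, k):
    """Levi-Civita symbol epsilon_{ijk} for indices in {0, 1, 2}."""
    return round((i - j) * (j - k) * (k - i) / 2)

# Check [T_i, T_j] = 2i eps_{ijk} T_k  and  Tr(T_i T_j) = 2 delta_{ij}.
for i in range(3):
    for j in range(3):
        comm = sigma[i] @ sigma[j] - sigma[j] @ sigma[i]
        rhs = 2j * sum(eps(i, j, k) * sigma[k] for k in range(3))
        assert np.allclose(comm, rhs)
        assert np.isclose(np.trace(sigma[i] @ sigma[j]), 2.0 * (i == j))
print("sl(2,C) conventions verified for the Pauli matrices")
```

We now return to the metric independence of the noncommutative action (\[accion2cnc\]).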
Indeed, as well as the commutative one, it is given by a divergence, $$\widehat I={\rm Tr}\ \int_{X}d^{4}x\ \varepsilon^{\mu \nu \rho \sigma} \partial_\mu\left(\widehat{\omega}_{\nu}\ast\partial_\rho \widehat{\omega}_{\sigma}+ \frac{2}{3}\widehat\omega_\nu\ast\widehat\omega_\rho\ast\widehat\omega_\sigma\right). \label{accion2cnc3}$$ Thus, a variation of (\[accion2cnc\]) with respect to the noncommutative connection, will vanish identically because of the noncommutative Bianchi identities, $$\delta_{\widehat\omega} \widehat I=8{\rm Tr}\int\varepsilon^{\mu\nu\rho\sigma}\delta\widehat\omega_\mu\ast \widehat D_\mu \widehat R_{\rho\sigma}\equiv 0,$$ where $\widehat D_\mu$ is the noncommutative covariant derivative. Further, from the first Cartan structure equation, the SO(3,1) connection, and thus its $SL(2,{\bf C})$ projection $\omega_\mu^{\ i}$, can be written in terms of the tetrad and the torsion. Furthermore, from the Seiberg-Witten map, the noncommutative connection can be written as well as $\widehat\omega(e)$. Therefore, a variation of the action (\[accion2cnc\]) with respect to the tetrad of the action, can be written as $$\delta_{e} \widehat I=8{\rm Tr}\int\varepsilon^{\mu\nu\rho\sigma}\delta_e \widehat\omega_\mu(e)\ast \widehat D_\mu \widehat R_{\rho\sigma}\equiv 0,$$ hence it is topological, as the commutative one. Thus, we see from (\[accion2cnc3\]) that, in a $\theta -$power expansion of the action, each one of the resulting terms will be independent of the metrics, as well as they will be given by a divergence. Thus, unless these terms vanish identically, they will be topological. Furthermore, the whole noncommutative action, expressed in terms of the commutative fields by the Seiberg-Witten map, is invariant under the SO(3,1) transformations. Thus, each term of the expansion will be also invariant. Thus these terms will be topological invariants. The action (\[accion2cnc\]) is not real, as well as the limiting commutative action. Hence, it is not obvious that the signature (\[topop\]) will be precisely its real part. In this case we could neither say that $\widehat\chi(X)$ is given by its imaginary part. In fact we could only say that $\widehat\chi(X)$ could be obtained from the difference of (\[accion2cnc\]) and (\[topop\]). However, the real and the imaginary parts of (\[accion2cnc\]) are invariant under SL(2,[**C**]{}) and consequently under SO(3,1), and thus they are the natural candidates for $\widehat\sigma(X)$ and $\widehat\chi(X)$, as in (\[sigmac\]) and (\[eulerc\]). In order to write down these noncommutative actions as an expansion in $\theta $, we will take as generators for the algebra of $SL(2,{\bf C})$, the Pauli matrices. In this case, to second order in $\theta$, the Seiberg-Witten map for the Lie algebra valued commutative field strength ${\cal R}_{\mu\nu}={{\cal R}_{\mu\nu}}^{i}(\omega)\sigma_{i}$, is given by $$\widehat{{\cal R}}_{\mu\nu}={\cal R}_{\mu\nu}+ \theta^{\alpha\beta} {\cal R}_{\mu\nu\alpha\beta}^{(1)}+ \theta^{\alpha\beta}\theta^{\gamma\delta}{\cal R}_{\mu\nu\alpha\beta\gamma\delta}^{(2)}+ \cdots \ , \label{riemann}$$ where, from Eq. (\[swf\]) we get, $$\theta^{\rho\sigma}{\cal R}_{\mu\nu\rho\sigma}^{(1)}= \frac{1}{2}\theta^{\rho\sigma}\left[2{\cal R}_{\mu\rho}^{\ \ i}{\cal R}_{\nu\sigma i}- \omega_{\rho}^{\ \ i}\left(\partial_\sigma {\cal R}_{\mu\nu i}+ D_\sigma{\cal R}_{\mu\nu i}\right)\right]{\bf 1} , \label{r1}$$ where ${\bf 1}$ is the unity 2$\times$2 matrix. Further, by means of Eq. 
(\[e\]), we get, $$\begin{aligned} \theta^{\rho\sigma}\theta^{\tau\theta}{\cal R}_{\mu\nu\rho\sigma\tau\theta}^{(2)}= \frac{1}{4}\theta^{\rho\sigma}\theta^{\tau\theta}\bigg(\varepsilon^i_{jk}\left[i\partial_\tau {\cal R}^j_{\mu\rho}\partial_\theta {\cal R}^k_{\nu\sigma}+\partial_\tau\omega^j_\rho \partial_\theta (\partial_\sigma+D_\sigma){\cal R}^k_{\mu\nu}\right]\nonumber\\ -\omega^i_\rho\partial_\tau\omega^j_\sigma\partial_\theta {\cal R}_{\mu\nu j}+ {\cal R}^i_{\mu\rho}[2{\cal R}^j_{\nu\tau}{\cal R}_{\sigma\theta j}- \omega^j_\tau(\partial_\theta+D_\theta){\cal R}_{\nu\sigma j}]\nonumber\\ -{\cal R}^i_{\nu\rho}\left[2{\cal R}^j_{\mu\tau}{\cal R}_{\sigma\theta j}- \omega^j_\tau(\partial_\theta+D_\theta){\cal R}_{\mu\sigma j}\right]+ \frac{1}{2}\omega^j_\tau(\partial_\theta \omega_{\rho j}+{\cal R}_{\theta\rho j})(\partial_\sigma+D_\sigma){\cal R}^i_{\mu\nu}\nonumber\\ -2\omega^i_\rho \left\{2\partial_\sigma {\cal R}^j_{\mu\tau}{\cal R}_{\nu\theta j}-\partial_\sigma[\omega^j_\tau(\partial_\theta+ D_\theta){\cal R}_{\mu\nu j}]\right\}\bigg)\sigma_i. \label{r2}\end{aligned}$$ Therefore, to second order in $\theta$, the action (\[accion2cnc\]) will be given by, $$\widehat I={\rm Tr}\ \int_{X}d^{4}x\ \varepsilon^{\mu \nu \rho \sigma}\left[ {\cal R}_{\mu \nu }{\cal R}_{\rho \sigma}+ 2 \theta^{\tau\vartheta}{\cal R}_{\mu\nu}{\cal R}^{(1)}_{\rho \sigma \tau \vartheta}+ \theta^{\tau\theta}\theta^{\vartheta\zeta}\left(2{\cal R}_{\mu\nu} {\cal R}^{(2)}_{\rho\sigma\tau\theta\vartheta\zeta}+{\cal R}^{(1)}_{\mu\nu\tau\theta} {\cal R}^{(1)}_{\rho\sigma\vartheta\zeta}\right)\right]. \label{accion2cnc1}$$ Taking into account (\[r1\]), we get that the first order term is proportional to Tr$(\sigma_i)$ and thus vanishes identically. Further using (\[r2\]), we finally get, $$\begin{aligned} \widehat I&=& \int_{X}d^{4}x\ \varepsilon^{\mu \nu \rho \sigma}\bigg\{ 2{\cal R}^i_{\mu \nu }{\cal R}_{\rho \sigma i}+ \frac{1}{4}\theta^{\tau\theta}\theta^{\vartheta\zeta} \bigg[-\varepsilon_{ijk}R^i_{\mu\nu}\left[ \partial_\vartheta R^j_{\rho\tau}\partial_\zeta R^k_{\sigma\theta}- \partial_\vartheta\omega^j_\tau\partial_\zeta(\partial_\theta+D_\theta)R^k_{\rho\sigma}\right] \nonumber\\ &+&[R^i_{\mu\tau}R_{\nu\theta i}- \frac{1}{2}\omega^i_{\tau}(\partial_\theta+D_\theta)R_{i \mu\nu}] [R^j_{\rho\vartheta}R_{\sigma\zeta j}- \frac{1}{2}\omega^j_\vartheta(\partial_\zeta+D_\zeta)R_{\rho\sigma j}]\nonumber\\ &+&R^i_{\mu\nu}\big\{R_{i \sigma\theta}[2R^j_{\rho\vartheta}R_{\tau\zeta j}-\omega^j_\vartheta (\partial_\zeta+D_\zeta)R_{\rho\tau j}]+\frac{1}{4}(\partial_\theta+D_\theta)R_{\rho\sigma i} \omega^j_\vartheta(\partial_\zeta\omega_{\tau j}+R_{\zeta\tau j})\nonumber\\ &+&\omega_{\theta i}[\partial_\tau(R^j_{\rho\vartheta}R_{\sigma\zeta j})-\frac{1}{2} \partial_\tau\omega^j_\vartheta(\partial_\zeta+D_\zeta)R_{\rho\sigma j}]\big\}- \frac{1}{2}R^i_{\mu\nu}\omega_{\tau i}\partial_\vartheta\omega^j_\theta\partial_\zeta R_{\rho\sigma j}\bigg]\bigg\}, \label{accion2cnc2}\end{aligned}$$ where the second order correction does not identically vanish. Similarly to the second order term (\[r2\]), the third order term for $\widehat{\cal R}$ can be computed by means of Eq. (\[e\]). The result is given by a rather long expression, which however is proportional to the unity matrix ${\bf 1}$, like (\[r1\]). 
Thus the third order term in (\[accion2cnc1\]), given by $$2\theta^{\tau_1\theta_1}\theta^{\tau_2\theta_2}\theta^{\tau_3\theta_3} {\rm Tr} \int_{X} \varepsilon^{\mu\nu\rho\sigma}\left({\cal R}_{\mu\nu} {\cal R}^{(3)}_{\rho\sigma\tau_1\theta_1\tau_2\theta_2\tau_3\theta_3}+ {\cal R}^{(1)}_{\mu\nu\tau_1\theta_1}{\cal R}^{(2)}_{\rho\sigma\tau_2\theta_2\tau_3\theta_3}\right),$$ vanishes identically, because ${\cal R}^{(2)}$ is proportional to $\sigma_i$. Thus, (\[accion2cnc2\]) is valid to third order. In fact, it seems that all its odd order terms vanish. 1truecm Towards Noncommutative Gravitational Instantons and Anomalies ============================================================= Towards Noncommutative Gravitational Instantons ----------------------------------------------- In the Euclidean signature, the action (\[topo\]), with local Lorentz group $SO(4)$, is proportional to a linear combination of integer valued topological invariants, the Euler $\chi(X)$ and the signature $\sigma(X)$, which characterize the gravitational instantons. In fact, $\sigma(X)$ and $\chi(X)$ are the analogue of the instanton number $k$ of $SU(2)$-Yang-Mills instantons, which is a manifestation of the gauge group topology, through $k \in \pi_3(SU(2))$. These topological invariants $\chi$ and $\sigma$, should of course include the corresponding boundary and $\eta$-invariant terms. Gravitational instantons are finite action solutions of the self-dual Einstein equations, which are asymptotically Euclidean [@swh], or asymptotically locally Euclidean (ALE) [@gi], at infinity (for a review, see [@ginstantons]). Then one would ask about the possibility to get gravitational instanton solutions in noncommutative gravity. The first natural step would be to analyze the positive action conjecture [@pac], in the context of noncommutative gravity, although it would requires a more complete version of noncommutative gravity. However, it is possible to give some generic arguments, and we will focus on the description of the global aspects, by analyzing invariants $\chi$ and $\sigma$ in the noncommutative context. In order to do that, we concentrate in the spin connection dependence, leaving the explicit metrics for later analysis. In the previous section, from explicit computations of the noncommutative corrections (in the noncommutative parameter $\theta$) of the topological invariants (see Eq. (\[accion2cnc2\])), we got that they do not vanish at ${\cal O}(\theta^2)$, hence the classical topological invariants are clearly modified. Thus, the use of the Seiberg-Witten map for the Lorentz group leads to essentially modified invariants $\widehat{\chi}$ and $\widehat{\sigma}$, which would characterize ‘noncommutative gravitational instantons’. Further, the corresponding deformed equation under the Seiberg-Witten map, $\widehat{R}^+_{\mu \nu} =0,$ does admit an expansion in $\theta$ with the term at the zero order being $R^+_{\mu \nu}$. Thus these corrections should be associated to the $\theta -$corrections of the self-duality equation $R^+_{\mu \nu} = 0$. Furthermore, we could expect for the gravitational instantons similar effects as for the case of Yang-Mills instantons [@ns; @ref3], where the singularities of moduli space are resolved by the noncommutative deformations. We already know from models of the minisuperspace in quantum cosmology, that noncommutative gravity leads to a version of noncommutative minisuperspace [@ncqc]. 
Thus, one would expect some new physical effects from the moduli space of metrics of a noncommutative gravity theory, which may help to resolve spacetime singularities. Comments on Gravitational Anomalies in Noncommutative Spaces ------------------------------------------------------------ - [*A Brief Survey on Gravitational Anomalies*]{} The study of topological invariants, leads us also to other nontrivial topological effects, like the anomalies, in our gravitational case. Gravitational anomalies, as well as gauge anomalies, are classified in local and global anomalies. In this paper we will mainly focus on local anomalies, whereas global anomalies will be mentioned as reference for future work. Local anomalies are associated to the lack of invariance of the quantum one-loop effective action, under infinitesimal local transformations. There are different types of local gravitational anomalies, depending on the type of transformations, like the Lorentz (or automorphisms) anomaly, and the diffeomorphisms anomaly. Let ${\cal G}^L_0$ be the group of vertical automorphisms of the frame bundle over the spacetime $X$. In a local trivialization, the frame bundle ${\cal G}^L_0$ can be identified with the set of continuous maps from $X$ to $SO(4)$, which approach to the identity at infinity, i.e. ${\cal G}^L_0 \equiv Map_0(X,SO(4)) \equiv \{ g: X \to SO(4), \ g \ {\rm continuous}\}$. Let ${\cal W}$ be the space of gauge field configurations, which consists of all spin connections $\omega^{ab}_{\mu}(x)$ with appropriate boundary conditions, and let ${\cal B}= {\cal W}/{\cal G}^L_0$. The automorphisms group ${\cal G}^L_0$ acts on ${\cal W}$ in such a way that one can construct the gauge bundle: ${\cal G}^L_0 \to {\cal W} \buildrel{\pi}\over{\to} {\cal B}$. In spacetimes $X$ of $n=dim X = 2m$ dimensions, the existence of the local Lorentz gravitational anomaly is associated to the non-triviality of the non-torsion part of the homotopy of ${\cal B}$, i.e. $\pi_2({\cal B}) \cong \pi_1({\cal G}^L_0) = \pi_{2m + 1} (SO(2m))\not= 1.$ For instance for $X = S^4,$ we get the pure topological torsion $\pi_1({\cal G}^L_0) \cong \pi_5(SO(4)) = \pi_5(SU(2) \times SU(2)) = {\bf Z}_2 \oplus {\bf Z}_2$. Thus, in four dimensions there is no local Lorentz anomaly. However, in $n=4k +2$ dimensions, for $k=0,1,... \ $, it certainly exists. For local diffeomorphisms transformations, the moduli space involves a richer phase space structure, given by the quotient space of the Teichmüller space, and the mapping class group. These anomalies can exist only for $n=4k + 2$ dimensions for $k=0,1, 2,\cdots$. However, mixed local Lorentz and diffeomorphism anomalies can exist in $2k+2$ dimensions [@anomaly]. Global gravitational Lorentz anomalies arise from the fact that Lorentz transformations are disconnected, which is related to the nontrivial topology of the group ${\cal G}^L_{\infty} = {\cal G}^L/{\cal G}^L_0$, where ${\cal G}^L$ is the set of local Lorentz transformations which have a limit at infinity. In particular for $X=S^4$, $\pi_0({\cal G}^L_{\infty}) \cong \pi_4(SO(4)) = \pi_4(SU(2) \times SU(2)) = {\bf Z}_2 \oplus {\bf Z}_2$, and a nontrivial global Lorentz anomaly arises. Similarly, the global gravitational diffeomorphisms anomalies are related to the disconnectedness of the mapping class group $\Gamma^+_{\infty}$, i.e. $\pi_0(\Gamma^+_{\infty}) \not= 1$ [@global]. .5truecm - [*Noncommutative Local Lorentz Anomalies*]{} Let us turn to the noncommutative side. 
The noncommutative version of the Lorentz group will be denoted by $\widehat{SO(4)}$, and it is defined in terms of some suitable operator algebra on a [*real*]{} Hilbert space. Here and in the following, unless otherwise stated, the noncommutative spaces and groups corresponding to the ones in the preceding section will be denoted by hatted ones. Following [@harvey], we propose that $\widehat{SO(4)}$ will be given by the set of compact orthogonal operators ${\bf O}_{cpt}({\cal H})$, defined on the separable real Hilbert space ${\cal H}$. The compactness property avoids the Kuiper theorem, which states that the set of pure orthogonal operators ${\bf O}({\cal H})$ has trivial homotopy groups [@kuiper]. However, the restriction to subalgebras of normed orthogonal operators ${\bf O}_p({\cal H}) = \{ \alpha \ | \ \alpha = {\bf 1} + K\}$ has very important consequences. Here $K$ stands for a compact, finite rank, trace class or Hilbert-Schmidt operator. By a mathematical result [@palais], the family of normed operator algebras $({\bf O}_{p}({\cal H}),||\cdot ||_p),$ with the $L^p-$norm given by $||D||_{p} = ({\rm Tr} |D|^p)^{1/p}$, together with the set $({\bf O}_{cpt}({\cal H}), ||\cdot ||_{\infty}),$ has exactly the same stable homotopy groups as $SO(\infty)$ (defined through the Bott periodicity theorem). Further, the stable homotopy groups of $SO(\infty)$, $\pi_j(SO(\infty))$, are given by ${\bf Z}_2$ for $j=0$, ${\bf Z}_2$ for $j=1$, ${\bf Z}$ for $j=3$, and $1$ otherwise. Also, these groups have Bott periodicity mod 8, i.e. $\pi_n(SO(\infty)) = \pi_{n +8}(SO(\infty))$. Thus, the stable homotopy groups of $\widehat{SO(4)} = {\bf O}_{cpt}({\cal H})$ are in general nontrivial, and new topological effects in noncommutative gravity theories are possible. Let us turn now to the noncommutative analogue of the local Lorentz anomaly. It is determined by the nontrivial non-torsion part of the homotopy groups of a suitable noncommutative version of the Lorentz group $\widehat{\cal G}^L_0$, which could be defined as the set $\widehat{\cal G}^L_0 \equiv Map_0(X, {\bf O}_{cpt}({\cal H}))$. The noncommutative local Lorentz anomaly is detected by the homotopy group $\pi_2(\widehat{\cal B}) = \pi_1(\widehat{\cal G}^L_0) = \pi_{j}({\bf O}_{cpt}({\cal H}))\not= 1$ for $j=0,1,3$ mod 8. For $j=0,1$ we have $\pi_{j}({\bf O}_{cpt}({\cal H})) = {\bf Z}_2$, while for $j=3$, $\pi_{j}({\bf O}_{cpt}({\cal H})) = {\bf Z}$. Thus for $j=3$ a non-torsion part is detected, and therefore a local Lorentz anomaly exists. Finally, from the global perspective, the Seiberg-Witten map can be regarded as a map $SW:{\cal B} \to \widehat{\cal B}$, which preserves the infinitesimal Lorentz transformation (the gauge equivalence relation), and thus the locally Lorentz invariant observables of the theory. The Seiberg-Witten map is not well defined globally, since the spaces ${\cal B}$ and $\widehat{\cal B}$ are different, and their corresponding topologies can be different as well. However, in some specific cases the operator representation of the Seiberg-Witten map is quite useful to define the Seiberg-Witten map globally [@poly]. Concluding Remarks ================== In this paper, we propose a noncommutative version of topological gravity with quadratic actions. We start from the complex action (\[accion2cnc\]), written in terms of the self-dual and anti-self-dual connections, which contains both the signature and the Euler topological invariants (for a Riemannian manifold).
This action is then written as a $SL(2,{\bf C})$ action, whose noncommutative counterpart can be obtained in the same way as in the Yang-Mills case, by means of the Seiberg-Witten map. We compute this action up to third $\theta$-order, and we obtain that the first and the third order vanish, but the second order is different from zero. The action to this order is given by (\[accion2cnc2\]). It seems that all odd $\theta$-orders vanish identically. The noncommutative signature and the Euler topological invariants are given by the real and imaginary parts of (\[accion2cnc\]). For a Riemannian manifold, these topological invariants characterize gravitational instantons. Thus the study of noncommutative topological invariants should allow us, through the Seiberg-Witten map, to deform gravitational instantons into noncommutative versions for them. In order to make explicit computations, specific gravitational (noncommutative) metrics have to be chosen. In this context, it would be very interesting to give a noncommutative formulation for dynamical gravity, following the lines of this work. This analysis will be reported in a forthcoming paper [@ashtekar]. Similarly to the gauge theories case, we propose a definition of noncommutative local gravitational Lorentz anomaly, by a suitable definition of the noncommutative Lorentz group $\widehat{SO(4)}$ in compact spacetime of Euclidean signature. The application of these ideas to the diffeomorphism transformations connected to the identity might predict new nontrivial noncommutative gravitational effects, which should be computed explicitly as a noncommutative correction to the gravitational contribution to the chiral anomaly. The usual gravitational correction was computed for the standard commutative case in Refs. [@salam; @anomaly]. Moreover, this effect can also be regarded as a noncommutative gravitational correction of the local chiral anomaly in noncommutative gauge theory. This latter case of the pure noncommutative gauge field was discussed recently in Refs. [@ncanomaly]. It would be very interesting to pursue this way and compare with the results given recently by Perrot [@perrot]. Regarding noncommutative global Lorentz anomalies, in order to understand them, we would need to specify the connected components of the corresponding group $\widehat{\cal G}^L_{\infty}$. In this case one would have to compute $\pi_1(\widehat{\cal W}/\widehat{\cal G}^L_{\infty}) = \pi_0(\widehat{\cal G}^L_{\infty}) \not=1$. Of course a suitable operator definition of $\widehat{\cal G}^L_{\infty}$ is necessary like in the case of the local Lorentz anomaly. This is a difficult open problem. Finally, the ALE gravitational instantons is an important case of gravitational instantons, which can be obtained as smooth resolutions of [**A-D-E**]{} orbifold singularities ${\bf C}^2 /\Gamma$, with $\Gamma$ being an [**A-D-E**]{} finite subgroup of $SU(2)$. These gravitational instantons are classified through the Kronheimer construction [@kronheimer], which is the analogue construction to the ADHM construction of Yang-Mills instantons. There is a proposal to extend the ADHM construction to the noncommutative case [@ns]. Thus, it would be interesting to give the noncommutative analogue of the Kronheimer construction of ALE instantons. 1truecm **Acknowledgments** This work was supported in part by CONACyT México Grants Nos. 37851E and 33951E, as well as by the sabbatical grants 020291 (C.R.) and 020331 (O.O.). 2truecm H. Snyder, [Phys. Rev.]{} [**71**]{} (1947) 38. A. 
Connes, [*Noncommutative Geometry*]{}, Academic Press (1994). M.R. Douglas and N.A. Nekrasov, Rev. Mod. Phys. [**73**]{} (2002), 977. R.J. Szabo, “Quantum Field Theory on Noncommutative Spaces”, hep-th/0109162. A. Connes, M. R. Douglas, and A. Schwarz, [JHEP]{} [**9802:003**]{} (1998). N. Seiberg and E. Witten, [JHEP]{} [**9909:032**]{} (1999). F. Ardalan, H. Arafaei, M.R. Garousi and Ghodsi, "Gravity on noncommutative D-branes, hep-th/0204117. A.H. Chamseddine, [Commun. Math. Phys.]{} [**218**]{} (2001) 283. A.H. Chamseddine, [Phys. Lett.]{} B [**504**]{} (2001) 33. A.H. Chamseddine, “Invariant Actions for Noncommutative Gravity”, hep-th/0202137. J.W. Moffat, [Phys. Lett.]{} B [**491**]{} (2000) 345; [Phys. Lett.]{} B [**493**]{} (2000) 142. M. Bañados, O. Chandia, N. Grandi, F.A. Schaposnik and G.A. Silva, Phys. Rev. D [**64**]{} (2001) 084012. H. Nishino and S. Rajpoot, Phys. Lett. B [**532**]{} (2002) 334. V.P. Nair, “Gravitational Fields on a Noncommutative Space”, hep-th/0112114. S. Cacciatori, D. Klemm, L. Martucci and D. Zanon, Phys. Lett. B [**536**]{} (2002) 101. S. Cacciatori, A.H. Chamseddine, D. Klemm, L. Martucci, W.A. Sabra and D. Zanon, “Noncommutative Gravity in Two Dimensions”, hep-th/0203038. J. Madore, S. Schraml, P. Schupp and J. Wess, [Eur. Phys. J. C]{} [**16**]{} (2000) 161. B. Jurco, S. Schraml, P. Schupp and J. Wess, [Eur. Phys. J. C]{} [**17**]{} (2000) 521. B. Jurco, P. Schupp and J. Wess, [Nucl. Phys.]{} B [**604**]{} (2001) 148. J. Wess, [Commun. Math. Phys.]{} [**219**]{} (2001) 247. B. Jurco, L. Moller, S. Schraml, P. Schupp and J. Wess, [Eur. Phys. J. C]{} [**21**]{} (2001) 383. X. Calmet, B. Jurco, P. Schupp, J. Wess and M. Wohlgenannt, [*Eur. Phys. J. C*]{} [**23**]{} (2002) 363. C.K. Zachos, Int. J. Mod. Phys. A [**17**]{} (2002) 297. M. Kontsevich, “Deformation Quantization of Poisson Manifolds I”, q-alg/9709040. S. Deser, M. J. Duff, and C. J. Isham, [Phys. Lett. B]{} [**93**]{} (1980) 419. A. Ashtekar, A. P. Balachandran, and So Jo, [Int. J. Mod. Phys. A]{} [**4**]{} (1989) 1493. L. Smolin, [J. Math. Phys.]{} [**36**]{} (1995) 6417. J. A. Nieto, O. Obregón, and J. Socorro, [Phys. Rev. D]{} [**50**]{} (1994) R3583. H. García-Compeán, O. Obregón, C. Ramírez, and M. Sabido, [Phys. Rev. D]{} [**61**]{} (2000) 085022. S.W. Hawking, Phys. Lett. A [**60**]{} (1977) 81. T. Eguchi and A.J. Hanson, Phys. Lett. B [**74**]{} (1978) 249; G.W. Gibbons and S.W. Hawking, Phys. Lett. B [**78**]{} (1978) 430. T. Eguchi and A.J. Hanson, Ann. Phys. [**120**]{} (1979) 82; T. Eguchi, P.B. Gilkey and A.J. Hanson, Phys. Rep. [**66**]{} (1980) 213; [*Euclidean Quantum Gravity*]{}, eds. G.W. Gibbons and S.W. Hawking, World Scientific, Singapore (1993). G.W. Gibbons, S.W. Hawking and M.J. Perry, Nucl. Phys. B [**138**]{} (1978) 141; G.W. Gibbons and C.N. Pope, Commun. Math. Phys. [**66**]{} (1979) 267; R. Schoen and S.T. Yau, Phys. Rev. Lett. [**42**]{} (1979) 547; E. Witten, Commun. Math. Phys. [**80**]{} (1981) 381. N. Nekrasov and A. Schwarz, Commun. Math. Phys. [**198**]{} (1998) 689. H. García-Compeán, O. Obregón and C. Ramírez, Phys. Rev. Lett. [**88**]{} (2002) 161301. L. Alvarez-Gaumé and E. Witten, Nucl. Phys. B [**234**]{} (1983) 269. E. Witten, Commun. Math. Phys. [**100**]{} (1985) 197. J.A. Harvey, “Topology of the Gauge Group in Noncommutative Gauge Theory”, hep-th/0105242. N.H. Kuiper, Topology [**3**]{} (1965) 19. R.S. Palais, Topology [**3**]{} (1965) 271. P. Kraus and M. Shigemori, “Noncommutative Instantons and the Seiberg-Witten Map”, hep-th/0110035; A.P. 
Polychronakos, “Seiberg-Witten Map and Topology”, hep-th/0206013. H. García-Compeán, O. Obregón, C. Ramírez, and M. Sabido, to appear. R. Delbourgo and A. Salam, Phys. Lett. B [**40**]{} (1972)381; T. Eguchi and P.G.O. Freund, Phys. Rev. Lett. [**37**]{} (1976) 1251. F. Ardalan and N. Sadooghi, Int. J. Mod. Phys. A [**16**]{} (2001) 3151; J.M. Gracia-Bondi and C.P. Martin, Phys. Lett. B [**479**]{} (2000) 321. D. Perrot, J. Geom. Phys. [**39**]{} (2001) 82. P.B. Kronheimer, J. Diff. Geom. [**29**]{} (1989) 665.
{ "pile_set_name": "ArXiv" }
--- abstract: 'In this era of “big” data, not only does the large amount of data keep motivating distributed computing, but concerns about data privacy also put the emphasis on distributed learning. To conduct feature selection and to control the false discovery rate in a distributed setting with multiple [*machines*]{} or [*institutions*]{}, an efficient aggregation method is necessary. In this paper, we propose an adaptive aggregation method called ADAGES which can be flexibly applied to any machine-wise feature selection method. We will show that our method is capable of controlling the overall FDR with a theoretical foundation while maintaining power comparable to that of the Union aggregation rule in practice.' author: - 'Yu Gui[^1]' bibliography: - 'ygbib.bib' title: 'ADAGES: adaptive aggregation with stability for distributed feature selection' --- Introduction ============ In recent decades, the idea of distributed learning and data decentralization has been frequently discussed. On one hand, the notion of distributed learning is motivated by the advanced techniques of data collection and storage, which lead to a large amount of accessible data. Distributed storage and parallel computing are put forward to address these concerns, which further requires statistical learning methods in this distributed scenario. On the other hand, statisticians focus on distributed learning since privacy protection is of main interest nowadays. A representative example is the collaborative clinical research among different hospitals on certain diseases, where hospitals will not share patients’ data for privacy protection. Therefore, statisticians have to deal with certain “encoded” statistics collected from distributed institutions. Many recent works focusing on different statistical perspectives have contributed to this field. Estimation is the most fundamental topic in statistics; some works adopt the divide-and-conquer algorithm for distributed estimation and also study the accuracy of estimation under various contexts, among which are [@battey2015distributed], [@JMLR:v16:zhang15d], [@zhao2014general] and [@cai2020distributed]. Distributed hypothesis testing is discussed in works such as [@ramdas], [@sreekumar2018distributed], [@gilani2019distributed] and is also covered in [@battey2015distributed] and [@zhao2014general]. Specifically, [@su2015communicationefficient], [@Emery2019ControllingTF] and [@nguyen2020aggregation] have studied aggregated feature selection based on multiple knockoffs. Since communication constraints and privacy constraints originate from applications, they ought to be taken into consideration; [@zhangandberger], [@10.1145/2897518.2897582] and [@cai2020distributed] study the tradeoff between communication constraints and estimation accuracy. In addition, many other works contribute to distributed learning theories, such as [@garg2014communication], [@dobriban2018distributed], [@doi:10.1080/01621459.2018.1429274] and [@kipnis2019mean]. **Controlled feature selection.** In addition to feature selection methods such as regularized regression (e.g. [@10.2307/2346178], [@doi:10.1198/016214501753382273]), controlled feature selection aims to select important features and reduce false selections under some criteria. In this paper, we focus on a fundamental criterion in feature selection: the false discovery rate ($\FDR$). The notion of $\FDR$ is introduced in [@benjamini1995controlling].
With the definition of the subset $\cS \subset \{1,\dots,d\}$ of relevant features, feature selection is equivalent to recovering $\cS$ based on observations. When the estimated set $\hat{\cS}$ is produced, the false discoveries can be denoted as $\hat{\cS} \cap \cS^{c}$ and the [*false discovery proportion*]{} ($\FDP$) is defined in the form $$\begin{aligned} {\rm FDP} = \frac{|\hat{\cS} \cap \cS^{c}|}{|\hat{\cS}|}.\end{aligned}$$ The expectation of $\FDP$ is called the [*false discovery rate*]{} ($\FDR$), i.e. $ {\rm FDR} = \mathbb{E}\left[{\rm FDP}\right]$. In addition, the power of feature selection measures the ability to recover true features and is thus defined as $$\begin{aligned} {\rm Power} = \EE[\frac{|\hat{\cS} \cap \cS|}{|\cS|}],\end{aligned}$$ which is the expected number of true discoveries over the total number of true features $|\cS |$. A series of $\FDR$-based methods originate from the invention of $\FDR$ in [@benjamini1995controlling], which utilizes the rank of z-scores for selecting important features. Based on this, [@benjamini2001control] relaxes the independence assumption as an extension. The knockoff filter is introduced in [@barber2015controlling] with exact control of $\FDR$ and is extended in a model-free way in [@Cands2016PanningFG]. Recently, methods based on mirror statistics have been put forward under this topic: [@xing2019gm] creates Gaussian mirror variables for all features, which get rid of the conditional correlation within each mirrored pair; [@dai2020false] utilizes data splitting and multiple splitting techniques to ensure the recovery of feature importance with stability. ![image](Plot0.pdf){width="\textwidth"} **Stability selection.** As an improvement to general feature selection methods, the notion of stability selection is introduced by [@Meinshausen2008StabilityS], which conducts subsampling of size $\left[n/2\right]$ and identifies the most frequently selected features. The idea is close to a “voting process” where each sub-sample votes for each feature once, and it is in line with our belief that important features will stably stand out by receiving more votes. The spirit of stability selection later motivates works such as [@Shah2011VariableSW], [@Hofner2015ControllingFD] and also stimulates our idea of adaptive aggregation in distributed feature selection. **Our contribution.** With the belief in the future of data decentralization, in this paper we consider the topic of distributed feature selection with a controlled error rate. We present a general aggregation method for distributed feature selection called ADAGES (**AD**aptive **AG**gr**E**gation with **S**tability) that can be applied with any controlled feature selection method. Without looking into the original datasets, we operate on Boolean vectors in $\left\{0,1\right\}^{d}$, which are equivalent to subsets of the $d$ features. Therefore, there is no complex communication or privacy concern in this context. Unlike [@su2015communicationefficient], [@Emery2019ControllingTF] and [@nguyen2020aggregation], which transfer knockoff statistics for aggregation, ADAGES does not depend on any specific feature selection method and is thus more flexible in application. Besides, in this paper we assume the feature selection procedures of all the machines are independent of each other, i.e. as random Boolean vectors, $\hat{\cS}_i \perp \hat{\cS}_j$ for all $i,j \in [k]$. It is noticeable that, in practice, dependence exists due to the overlap of samples for different machines, e.g.
the common patients for different hospitals. The generalized case that studies this dependence is a promising topic for future work. **Outline.** We begin with the problem formulation in section \[back\], and then in section \[method\] we introduce the details of ADAGES as an adaptive improvement on empirical rules. In section \[result\], the main theorem is established to guarantee the exact control of the overall $\FDR$, theoretical proofs of which are given in section \[proof\]. The results of numerical experiments are shown in section \[simu\]. **Notations.** Suppose each of the $n$ observations has $d$ features, i.e. $\bx \in \RR^{d}$. Define $\cS \subset \left\{1,\dots,d\right\}$ as the subset of true features of interest. There are $k$ different machines or institutions contributing to the problem, and we denote them as $M_1, \dots, M_k$. For each $i \in \left\{1,\dots,k\right\}$, machine $M_i$ produces an estimated subset $\hat{\cS}_i$ before aggregation, and our goal is to obtain $\hat{\cS}$ based on $\left\{\hat{\cS}_1,\dots,\hat{\cS}_k\right\}$. The notation $\hat{\cS}_{(c)}$ refers to the subset produced by the aggregation method with threshold $c$, which will be introduced in section \[method\]. Also, $\hat{\cS}_{I}$ and $\hat{\cS}_{U}$ are the aggregated subsets of the Intersection rule and the Union rule, respectively. Background {#back} ========== In the context of distributed learning, imagine there is a central machine (the yellow one in Figure \[fig:plot0\]) and $k$ machines $\left\{M_i:i=1,\dots,k\right\}$, which can be $k$ hospitals or servers. In the current task, the dataset of interest is distributed over all $k$ machines due to concerns of privacy or distance, and we assume the $i$th machine deals with a sub-dataset $D_i$ with $n_i$ observations. All the machines share the same set of features in the same task, i.e. $\left\{X_j:j=1,\dots,d\right\}$, and they focus on $\FDR$ control with the universal pre-defined level $q \in \left(0,1\right)$. Suppose the selection result for the $i$th machine is $\hat{\cS}_i$. We should note that the feature selection method adopted by each machine can be arbitrary; the only requirement is that the method should be capable of exact $\FDR$ control. With our adaptive aggregation with stability, we produce the final selection result $\hat{\cS}$ based on the controlled selections $\left\{\hat{\cS}_i:i=1,\dots,k\right\}$. For each machine $M_i,i=1,\dots,k$, we define ${\rm FDR}_i = \EE\left[\frac{|\hat{\cS}_i \cap \cS^{c}|}{|\hat{\cS}_i|}\right]$. Empirical aggregation methods for distributed feature selection --------------------------------------------------------------- First, three empirical aggregation methods are introduced, and we will later cover them as special cases of a generalized family. Define $z^{\left(i\right)}_j = \bfm{1}_{\left\{j \in \hat{\cS}_i\right\}}$ for each feature; then $\hat{\cS}_i$ is equivalent to an indicator vector $\bz^{\left(i\right)} = \left(z^{\left(i\right)}_1,\dots,z^{\left(i\right)}_d\right)^\top$, and aggregation algorithms can be viewed as operation rules for Boolean variables. Also, in the sense of privacy protection, the selected subset $\hat{\cS}_i$, as a statistic carrying less sensitive information, can be publicly transferred to the “central machine” for aggregation. Among aggregation methods, union and intersection of sets are usually adopted empirically.
As the simplest rule, similar to the OR rule in Boolean operations, we obtain the Union rule $$\begin{aligned} \hat{\cS}_{U} = \bigcup_{i=1}^{k} \hat{\cS}_i.\end{aligned}$$ Also, the intersection of all selected subsets produces the Intersection rule: $$\begin{aligned} \hat{\cS}_{I} = \bigcap_{i=1}^{k} \hat{\cS}_i.\end{aligned}$$ The Union rule is the least strict and thus requires stricter machine-wise $\FDR$ control. It indicates that if each machine has $\FDR$ control at $q$, then the overall $\FDR$ may far exceed the pre-defined level. On the other hand, the Intersection rule is far stricter and will result in a loss of power in aggregation. This phenomenon is illustrated in the left plot of Figure \[fig:plot1\]. We will show that these two rules have a more general representation and are thus included in a family of threshold-based aggregation rules. Generalized threshold-based aggregation --------------------------------------- As an extension of the operations on Boolean variables, we first define $$\begin{aligned} m_j = \sum_{i=1}^{k} \bfm{1}_{\left\{j \in \hat{\cS}_{i}\right\}},\;\;j=1,\dots,d.\end{aligned}$$ Then the [*threshold-based rule*]{} is conducted as $$\begin{aligned} \hat{\cS}_{\left(c\right)} = \left\{j \in \left[d\right]: m_j \geq c\right\}\end{aligned}$$ for an integer $c$. We should notice that [*the Union rule*]{} is a special case of [*the threshold-based rule*]{} with $\hat{\cS}_{U} = \hat{\cS}_{\left(c=1\right)}$, and for [*the Intersection rule*]{}, $\hat{\cS}_{I} = \bigcap_{i=1}^{k} \hat{\cS}_i = \hat{\cS}_{\left(c=k\right)}$. Lying between the Intersection and the Union rules, the threshold $c = \left[\left(k+1\right)/2\right]$ can be adopted as a mild rule, and we call it “median-aggregation”. However, we rarely have prior information to determine a universal threshold $c$, and the suitable threshold may also vary in different cases. Therefore, we introduce ADAGES, the adaptive aggregation method, in the following section. Adaptive aggregation for distributed feature selection {#method} ====================================================== Based on the definition of $\hat{\cS}_{\left(c\right)}$, $\hat{\cS}_{\left(c_1\right)} \subseteq \hat{\cS}_{\left(c_2\right)}$ for any $c_1 \geq c_2$; thus $|\hat{\cS}_{\left(c\right)}|$ is a decreasing function of $c$. Further, adaptive information aggregation from $k$ machines utilizes a data-driven threshold which is determined conditionally on $\left\{\hat{\cS}_i,i=1,\dots,k\right\}$, so it is meaningful to look into the behavior of $\hat{\cS}_{\left(c\right)}|\left(\hat{\cS}_1,\dots,\hat{\cS}_k\right)$. Denote $\bar{s} = \frac{1}{k} \sum_{i=1}^{k} |\hat{\cS}_i|$ and $M = \max_{i=1}^{k} |\hat{\cS}_i|$. Candidate region for threshold ------------------------------ Restricting the size of $\hat{\cS}_{(c)}$ is one traditional way to regularize model complexity, and in the first step we determine the candidate region for the threshold $c$ by restricting the model complexity measure $|\hat{\cS}_{(c)}|$. In contrast to the usual upper bounds for model complexity, we use the mean $\bar{s} = \frac{1}{k} \sum_{i=1}^{k} |\hat{\cS}_i|$ as a lower bound for $|\hat{\cS}_{\left(c\right)}|$, which is in line with the purpose of power maintenance in multiple testing.
We define $c_0$ as an upper bound as $$\begin{aligned} \label{c0} c_0 = \max \left\{c: |\hat{\cS}_{\left(c\right)}| \geq \bar{s}\right\},\end{aligned}$$ and it is trivial that $c_0 \geq 1$ since $$\begin{aligned} |\hat{\cS}_{U}| = |\hat{\cS}_{\left(c=1\right)}| \geq \max \left\{|\hat{\cS}_i|:i=1,\dots,k\right\} \geq \bar{s}.\end{aligned}$$ Therefore, we can choose any integer $c \leq c_0$ as a mild threshold for aggregation, but meanwhile the threshold ought to be chosen to balance the tradeoff between false discovery rate and power. ![image](Plot1.pdf){width="\textwidth"} ![image](Freq_c1.pdf){width="\textwidth"} Choice of threshold for recovery accuracy ----------------------------------------- Besides, to improve the tradeoff between $\FDR$ and Power, we adopt the following rule emphasizing stable recovery. With $c_0$ as an upper bound, a smaller threshold leads to higher selection power as well as more false discoveries. **Complexity ratio.** First, we consider the complexity ratio $$\eta_c = \begin{cases} \frac{|\hat{\cS}_{\left(c\right)}|}{|\hat{\cS}_{\left(c+1\right)}|},& |\hat{\cS}_{\left(c+1\right)}| > 0,\\ \infty,& |\hat{\cS}_{\left(c+1\right)}| = 0, \end{cases}$$ for thresholds decreasing from $c_0$; a minimum of the complexity ratio is a sign of stable and accurate recovery. Then, the adaptive threshold $c^*$ for aggregation can be chosen by $$\begin{aligned} c^* = {\rm argmin} \left\{\eta_c: 1 \leq c \leq c_0\right\}.\end{aligned}$$ In practice, to avoid infinite values, we can also use the surrogate $(1+|\hat{\cS}_{\left(c\right)}|)/(1+|\hat{\cS}_{\left(c+1\right)}|)$. As is shown in a simple example in the right plot of Figure \[fig:plot1\] with true $|\cS | = 20$, the threshold with the minimum ratio $\eta_c$ produces a more stable recovery of the true $\cS$; in this figure we adopt the modified form $20 \times {\rm log}\left(\eta_c\right)$ to represent the magnitude of the ratio. The idea behind the complexity ratio is similar to the eigenvalue ratio used in PCA for determining the number of meaningful eigen-components. We can also consider a toy example where, for each $j \in \cS$, $m_j \sim B(k,p)$ with $p = \PP(j \in \hat{\cS}_i \mid j \in \cS)$. In this case, minimizing the ratio $\eta_c$ approximately locates the mode of this binomial distribution, recovering the threshold in line with the most likely selection frequency of important features. It is noticeable that another rule with theoretical intuition for choosing the threshold is given by $$\begin{aligned} \widetilde{c} = {\rm argmin}_{1 \leq c \leq c_0} c |\hat{\cS}_{(c)}|,\end{aligned}$$ which explicitly focuses on the tradeoff between the magnitude of the threshold and the size of the selected subset. As we will show in Lemma \[lem:shrink\], the power shrinkage term $\left(\frac{c|\hat{\cS}_{(c)}|}{k|\cS|} \cdot {\rm FDP}\right)$ plays the leading role in the lower bound for the true positive proportion. Then, for ${\rm FDP}$ at a certain level, minimizing the product $c |\hat{\cS}_{(c)}|$ amounts to maximizing this lower bound on the true positive proportion. Details of numerical simulations will be discussed in section \[simu\], and the implementation of adaptive aggregation based on the complexity ratio is shown in Algorithm \[alg1\]. Aggregated feature selection is an initial case dealing with binary variables. It is more exciting to extend this threshold-based aggregation method to estimation and inference based on the communication of more informative statistics, and we leave this for future work.
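As a complement to Algorithm \[alg1\] stated next, the following Python fragment is a minimal sketch of the same steps; the function name `adages` and the input format are our own illustrative choices, not part of any released implementation.

```python
import numpy as np

def adages(selected_sets, d):
    """Minimal sketch of the ADAGES aggregation (Algorithm 1).

    selected_sets : list of k collections of selected feature indices in {0,...,d-1}
    d             : total number of features
    Returns the indices of the aggregated selection S_(c*).
    """
    k = len(selected_sets)
    sets = [np.asarray(sorted(set(S_i)), dtype=int) for S_i in selected_sets]

    m = np.zeros(d, dtype=int)                   # m_j = #{i : j in S_i}
    for S_i in sets:
        m[S_i] += 1

    s_bar = np.mean([len(S_i) for S_i in sets])  # mean machine-wise size
    sel_size = np.array([(m >= c).sum() for c in range(1, k + 1)])   # |S_(c)|

    # c0 = max{c : |S_(c)| >= s_bar}; c = 1 always qualifies since |S_U| >= s_bar
    c0 = int(np.max(np.nonzero(sel_size >= s_bar)[0]) + 1)

    # complexity ratios with the (.+1)/(.+1) surrogate, and eta_k = infinity
    eta = [(sel_size[c - 1] + 1.0) / (sel_size[c] + 1.0) for c in range(1, k)]
    eta.append(np.inf)

    c_star = int(np.argmin(eta[:c0]) + 1)        # adaptive threshold
    return np.nonzero(m >= c_star)[0]

# Example call (toy input): adages([{0, 1, 2}, {1, 2, 3}, {1, 2, 5}], d=10)
```

The explicit loop over thresholds mirrors the steps of the algorithm; with the machine-wise selections stacked as a $k \times d$ Boolean matrix, the same computation reduces to a few vectorized lines.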
**Input** $\left\{\hat{\cS}_i:i=1,\dots,k\right\}$: $\hat{\cS}_i \subset \left[d\right]$ is the selected subset for the $i$th machine **Output** $\hat{\cS} = \hat{\cS}_{\left(c^*\right)}$ as an estimate of $\cS$ Calculate $m_j = \sum_{i=1}^{k} \bfm{1}_{\left\{j \in \hat{\cS}_{i}\right\}}$, $\forall j \in \left\{1,\dots,d\right\}$ Calculate $\bar{s} = \frac{1}{k} \sum_{i=1}^{k} |\hat{\cS}_i|$ $\hat{\cS}_{\left(c\right)} = \left\{j \in \left[d\right]: m_j \geq c\right\}$ Calculate the complexity ratio $\eta_c = \frac{|\hat{\cS}_{\left(c\right)}|+1}{|\hat{\cS}_{\left(c+1\right)}|+1}$, $c \leq k-1$; $\eta_k = \infty$ Determine $c_0 = \max \left\{c: |\hat{\cS}_{\left(c\right)}| \geq \bar{s}\right\}$ Produce the adaptive threshold $c^* = {\rm argmin} \left\{\eta_c: 1 \leq c \leq c_0\right\}$\ $\hat{\cS} = \hat{\cS}_{(c^*)} = \left\{j \in \left[d\right]: m_j \geq c^*\right\}$ Main result {#result} =========== In this section, we present the theoretical properties of ADAGES for adaptive aggregation in the setting of distributed feature selection. First, we obtain control of the overall false discovery rate in theorem \[thm1\]; we then establish the connection between the overall power and the machine-wise power: theorem \[thm2\] shows the simultaneous control of $\FDR$ and a power shrinkage term, and theorem \[thm3\] compares the power of ADAGES with the “optimal” power produced by the Union rule. Distributed FDR control ----------------------- Based on the adaptive threshold for aggregation, ADAGES controls the overall false discovery rate as follows. \[thm1\] For a pre-defined level $q \in \left(0,1\right)$, suppose machine-wise ${\rm FDR}_i \leq q$ for $i=1,\dots,k$ and $\lambda \geq \max_{1\leq i \leq k}\frac{|\hat{\cS}_i|}{c^*} \sum_{j=1}^{k} \frac{1}{|\hat{\cS}_j|}$. Then, ADAGES with $c^* \in \left[1,c_0\right] \cap \ZZ$ produces $$\begin{aligned} {\rm FDR}_{\left(c^*\right)} = \EE\left[\frac{|\hat{\cS}_{\left(c^*\right)} \cap \cS^{c}|}{|\hat{\cS}_{\left(c^*\right)}|}\right] \leq \lambda q.\end{aligned}$$ Next, we discuss two special cases with the fixed thresholds $c=1$ and $c=k$ respectively, which may reveal their shortcomings to some extent. \[union\] For a pre-defined level $q \in \left(0,1\right)$, if machine-wise ${\rm FDR}_i \leq q$ for all $i=1,\dots,k$, the Union rule produces $$\begin{aligned} {\rm FDR}_{U} = \EE\left[\frac{|\hat{\cS}_{U} \cap \cS^{c}|}{|\hat{\cS}_{U}|}\right] \leq kq.\end{aligned}$$ More generally, as pointed out in [@xie2019aggregated], if there is a sequence of pre-defined FDR levels $\left(q_1,\dots,q_k\right)$ such that ${\rm FDR}_i \leq q_i$ for all $i \in \left[k\right]$, then the overall $\FDR$ can be exactly controlled at level $q=\sum_{i=1}^{k} q_i$. If we would like the overall FDR to be controlled at level $q$, this requires $\sum_{i=1}^{k} q_i = q$; a simple choice is $q_i = q/k$ for all $k$ machines.
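For a concrete sense of the factor $\lambda$ appearing in Theorem \[thm1\], it can be evaluated directly from the selected subsets; the helper below is an illustrative sketch (the function name is ours), and the resulting $\bar{\lambda} q$ can be set against the Union-rule bound $kq$ above and the Intersection-rule bound $\kappa q$ derived next.

```python
def lambda_bar(selected_sets, c_star):
    """Tight factor of Theorem [thm1]: (max_i |S_i| / c*) * sum_j 1/|S_j| (assumes every |S_j| > 0)."""
    sizes = [len(S) for S in selected_sets]
    return max(sizes) / c_star * sum(1.0 / s for s in sizes)

# e.g. lambda_bar(subsets, c_star) for the toy subsets and adaptive threshold computed above;
# the overall-FDR guarantees then read lambda_bar(...) * q for ADAGES versus k * q for the Union rule.
```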
Besides, in the case with $c = k$, based on $k|\hat{\cS}_{I} \cap \cS^{c}| \leq \sum_{i=1}^{k} |\hat{\cS}_{i} \cap \cS^{c}|$, we have the following proposition: \[intersec\] For a pre-defined level $q \in \left(0,1\right)$, if machine-wise ${\rm FDR}_i \leq q$ for $i=1,\dots,k$ and there is a constant $\kappa \geq 1$ such that $\max_{i \in \left[k\right]} \frac{|\hat{\cS}_i|}{|\hat{\cS}_{I}|} \leq \kappa$, then the Intersection rule produces $$\begin{aligned} {\rm FDR}_{I} = \EE\left[\frac{|\hat{\cS}_{I} \cap \cS^{c}|}{|\hat{\cS}_{I}|}\right] \leq \kappa q.\end{aligned}$$ Comparing the overall $\FDR$ bounds, the Union rule, as the least strict aggregation rule, only guarantees $\FDR$ control at a level as high as $kq$. Instead, the Intersection rule is the most conservative and has theoretical $\FDR$ control at $q$ multiplied by the factor $\kappa$. With an adaptive threshold, ADAGES summarizes the machine-wise information more efficiently and controls the overall $\FDR$ at level $\lambda q$. As an illustration, we compare the magnitudes of $k,\lambda,\kappa$ to contrast the $\FDR$-control abilities of the three methods. First, if $c^*/k$ has a positive lower bound such that $c^* \geq b\cdot k$ and $\max_{i \in [k]} \frac{|\hat{\cS}_i|}{|\hat{\cS}_j|} = O(1)$ for all $j$, then we obtain $\lambda = o(k)$. The comparison between $\lambda$ and $\kappa$ is of more interest and is summarized in the following proposition. Denote the tight bounds $\bar{\lambda} = \max_{i \in [k]}\frac{|\hat{\cS}_i|}{c^*} \sum_{j=1}^{k} \frac{1}{|\hat{\cS}_j|}$ and $\bar{\kappa} = \max_{i \in \left[k\right]} \frac{|\hat{\cS}_i|}{|\hat{\cS}_{I}|}$. Then, we have $$\begin{aligned} \frac{\bar{\lambda}}{\bar{\kappa}} = \frac{1}{c^*} \sum_{j=1}^{k} \frac{|\hat{\cS}_{I}|}{|\hat{\cS}_j|}.\end{aligned}$$ Further, if $(1-\epsilon)c^* < \sum_{j=1}^{k} \frac{|\hat{\cS}_{I}|}{|\hat{\cS}_j|} < (1+\epsilon)c^*$ for any $\epsilon \in (0,1)$, then $|\frac{\bar{\lambda}}{\bar{\kappa}} - 1| < \epsilon$. Power analysis -------------- We also establish a lower bound for the overall power based on $\left\{ {\rm Power}_i\right\}$, $i=1,\dots,k$, as well as a comparison with the power produced by the Union rule. Before that, we introduce a basic lemma connecting the overall true positive proportion (TPP) with the machine-wise $\TPP_i$, $i=1,\dots,k$. \[lem:shrink\] Based on the ADAGES algorithm, we obtain $$\begin{aligned} \TPP \geq \frac{1}{k} \sum_{i=1}^{k} \TPP_i - \frac{c^*}{k} \frac{| \cS^c \cap \hat{\cS}_{(c^*)} |}{| \cS |}.\end{aligned}$$ The second term $\frac{c^*}{k} \frac{| \cS^c \cap \hat{\cS}_{(c^*)} |}{| \cS |}$ acts as a “power shrinkage” term and can be connected with ${\rm FDP}$ in the form $$\begin{aligned} {\rm Power~shrinkage} = \frac{c^*|\hat{\cS}_{(c^*)} |}{k| \cS |}{\rm FDP},\end{aligned}$$ which involves a tradeoff between $c^*$ and $|\hat{\cS}_{(c^*)} |$. Therefore, with a proper restriction on $|\hat{\cS}_{(c^*)} |$, i.e. a proper choice of $c^*$, we can simultaneously control $\FDR$ and the power shrinkage term, as shown in theorem \[thm2\]. \[thm2\] Denote by ${\rm Power}_i$ the selection power of the $i$th machine. Suppose there exists a constant $\gamma \in (0,1/2)$ such that $|\hat{\cS}_{(c^*)} | \leq (1+\gamma)|\cS|$ and $c^* \leq k/2$.
If the overall $\FDR$ is controlled at level $q \in (0,1)$, then for a constant $\alpha \leq 3/4$, we have $$\begin{aligned} {\rm Power} \geq \frac{1}{k} \sum_{i=1}^{k} {\rm Power}_i - \alpha q.\end{aligned}$$ Note that the power produced by the Union rule is the maximum power an aggregation method can achieve. Denote ${\rm diff} = |(\hat{\cS}_{U}\cap \cS)\backslash(\hat{\cS}\cap \cS)| = |(\hat{\cS}_{U}\cap \cS)| - |(\hat{\cS}\cap \cS)|$, with which we obtain the following theorem. \[thm3\] Suppose we have a uniform lower bound for ${\rm Power}_i$, $i\in [k]$, namely $\PP(j \in \cS, j \in \hat{\cS}_i) \geq \eta_{n,d}$ for $i\in [k]$, $j \in[d]$. If we further have $c^* \leq k/2$, then $\exists \xi \leq 2$ such that $$\begin{aligned} \EE[{\rm diff}] \leq \xi (1-\eta_{n,d})|\cS|.\end{aligned}$$ Further, if the selection method has the property that $\eta_{n,d} \rightarrow 1$ as $n,d \rightarrow \infty$, we have $|{\rm Power} - {\rm Power}_{U}| \rightarrow 0$ as $n,d \rightarrow \infty$. Numerical simulation {#simu} ==================== In this section, we study the performance of our adaptive aggregation method by comparing it with the empirical Union, Intersection and median-aggregation rules in simulations. We also compare with the aggregation method in [@xie2019aggregated], which is a modified version of the Union rule. In the numerical simulations, we use model-X knockoffs with the second-order construction on each machine, which produces exact machine-wise $\FDR$ control; the overall procedure can thus be referred to as “model-X knockoffs + ADAGES”. In this case, we are also interested in the comparison between our algorithm-free ADAGES and the knockoff-based aggregation method AKO of [@nguyen2020aggregation]. We consider AKO with the BY step-up, which has a theoretical guarantee, and use $\gamma = 0.3$ as adopted in [@nguyen2020aggregation]. In the experiments, ADAGES refers to our adaptive method with $c^* = {\rm argmin} \left\{\eta_c: 1 \leq c \leq c_0\right\}$ while ${\rm ADAGES}_m$ is the modified method with threshold $\widetilde{c} = {\rm argmin}_{1 \leq c \leq c_0} c |\hat{\cS}_{(c)}|$. A simple linear model is adopted for feature selection: $$\begin{aligned} \mathbf{y} = \mathbf{X} \bfm{\beta} + \mathbf{\epsilon},\end{aligned}$$ where $\mathbf{X} \in \RR^{n \times d}$ is the design matrix whose rows are drawn [*i.i.d.*]{} from $\cN(\mathbf{0},\bSigma)$, with $\bSigma \in \RR^{d \times d}$ and $\bSigma_{ls} = \rho^{|l-s|}$ for all $l,s \in [d]$. $\mathbf{y} \in \RR^{n}$ is the vector of $n$ responses, and the elements of the noise vector $\mathbf{\epsilon}$ are drawn [*i.i.d.*]{} from the standard Gaussian distribution. Feature importance is encoded in $\mathbf{\beta}$ and $\cS = \left\{j \in \left[d\right]: \beta_j \neq 0\right\}$. Comparisons are conducted along the following two directions, with repetition number $r=100$ and $\rho = 0.25$. We use the averaged FDP and the averaged power as sample versions of FDR and power, respectively. ![image](Num_k2.pdf){width="\textwidth"} Varying the number of machines $k$ ---------------------------------- Since the number of machines is a vital factor in the context of distributed learning, in the first experiment we vary $k$ among $\left\{1,2,5,8,10,20\right\}$ with $n=1000$, $d=50$ and $s=|\cS | = 20$ fixed. Here the nonzero elements of the true $\mathbf{\beta}$ are drawn [*i.i.d.*]{} and uniformly from $\left\{\pm 2\right\}$. From Figure \[fig:num\_k\], we can see that ADAGES obtains a desirable tradeoff between the averaged FDP and power.
As an adaptive aggregation method, ADAGES controls the FDP under $q=0.2$ while achieving power nearly as high as that of the Union rule, which meets the goal of power maintenance for controlled feature selection. For the three empirical methods, although the Union rule maintains power at the highest level, it produces FDP exceeding the pre-defined level $q=0.2$; the Intersection rule has conservative control of FDP but results in a serious loss of power in feature selection, while the power loss of median-aggregation occurs earlier than that of ADAGES. As an improvement of the Union rule in terms of $\FDR$ control, the method in [@xie2019aggregated] obtains FDP comparable to the Intersection rule; but since the pre-defined level for each machine becomes $q_i = q/k$, this method sacrifices power, as shown in Figure \[fig:num\_k\], and is thus limited in application. On the other hand, in this case without ultra-high dimension or strict sparsity, the AKO, which aggregates more informative “p-values”, is capable of controlling the averaged FDP around the level $\kappa q$, where $\kappa \leq 3.24$ is given in [@nguyen2020aggregation]; the power of AKO is lower than that of the other algorithm-free methods when $k < 10$, but remains stable as $k$ increases. However, the modified ADAGES with $\widetilde{c} = {\rm argmin}_{1 \leq c \leq c_0} c |\hat{\cS}_{(c)}|$ does not produce higher power in the experiments, since the power shrinkage term reflects the tradeoff between FDP and $c |\hat{\cS}_{(c)}|$. Here, FDP is itself a function of $c$, which ought not to be ignored in the choice of $\widetilde{c}$. ![image](Dim2.pdf){width="\textwidth"} Varying dimension $d$ --------------------- In the second experiment, we vary the dimension $d$ in the set $\left\{15,30,45,60,75,90\right\}$ while fixing the model parameters $n=1000$, $k=10$ and $|\cS | = 10$. The true signal $\beta_j$ is generated in the same way as above. In Figure \[fig:dim\], ADAGES, median-aggregation and the Intersection rule all have exact FDP control under $q=0.2$, but the Union rule suffers from “uncensored” aggregation and cannot control the overall FDP. Partly due to the properties of the feature selection method adopted on each machine, the power goes down as $d$ increases. It is noticeable, however, that the Union rule always achieves the highest power after aggregation, and ADAGES shows comparable performance thanks to the adaptive threshold based on $\eta_c$ over a bounded candidate interval. In addition, the aggregation method in [@xie2019aggregated] tends to make no discoveries, that is $|\hat{\cS}| = 0$, which trivially controls the $\FDR$ at 0 but also has no power. Similar to our findings with varying $k$, the AKO performs better than the empirical aggregation methods as $d$ increases, especially in power; but ADAGES shows better performance in both averaged FDP and empirical power. Discussion ========== In this paper, we present an adaptive aggregation method called ADAGES for distributed false discovery rate control. Our method utilizes the selected subsets from all machines to determine the aggregation threshold and shows a better tradeoff between $\FDR$ control and power maintenance compared with empirical aggregation rules. ADAGES is algorithm-free, which means it can be applied on top of any machine-wise feature selection method, and is thus more flexible than aggregation rules based on specific statistics produced by each machine-wise method.
It would be worthwhile to further study the modified method based on the power shrinkage term, which has a theoretical motivation in terms of power maintenance but requires a good estimate of the overall FDP. Besides, as potential extensions, this adaptive method emphasizing stability can be adopted in other statistical aspects of distributed learning. Selected subsets are binary vectors carrying limited but private information, and we can further take communication constraints and privacy into consideration; this is left for future work. More importantly, there is a tradeoff between information communication and selection power, so it is meaningful to study aggregation methods in which machines transfer encoded but more informative statistics. As distributed settings become more common in the statistical community, efficient aggregation methods are necessary to promote inter-institutional collaboration, for distributed computing as well as for privacy protection. With the idea of adaptive aggregation, the collaboration can adapt to specific scenarios while each institution only needs to focus on its own statistical problem, which contributes to this new collaboration mode in data science. Finally, another direction for future research is to relax the independence assumption among institutions in the learning procedure and to study the influence of inter-institutional dependence in the statistical context. An R implementation of ADAGES is available; the raw code can be accessed at <https://github.com/yugjerry/ADAGES/blob/master/code_ADAGES.R>. Technical proofs are presented in the following sections. Technical proofs {#proof} ================ In this section, we present the proofs of the main theorems and propositions of this paper. **Proof for Theorem \[thm1\].** With $\FDR_i = \EE\left[\frac{|\hat{\cS}_i \cap \cS^{c}|}{|\hat{\cS}_i|}\right] \leq q$, consider the overall $\FDR$: $$\begin{aligned} \FDR_{\left(c^*\right)} &= \EE\left[\frac{|\hat{\cS}_{(c^*)} \cap \cS^{c}|}{|\hat{\cS}_{(c^*)}|}\right] \nonumber\\&= \EE\left\{\EE\left[\frac{|\hat{\cS}_{(c^*)} \cap \cS^{c}|}{|\hat{\cS}_{(c^*)}|}\Big|\left(\hat{\cS}_1,\dots,\hat{\cS}_k\right)\right]\right\}.\end{aligned}$$ First, we have $c^* |\hat{\cS}_{(c^*)}| \leq \sum_{j:m_j \geq c^*} m_j \leq \sum_{j=1}^{d} m_j = \sum_{i=1}^{k} |\hat{\cS}_i|$, and similarly for features $j \in \cS^{c}$, $$\begin{aligned} c^* |\hat{\cS}_{(c^*)} \cap \cS^{c}| \leq \sum_{j \in \cS^{c}: m_j \geq c^*} m_j \leq \sum_{j \in \cS^{c}} m_j = \sum_{i=1}^{k} |\hat{\cS}_i \cap \cS^{c}|.\end{aligned}$$ Therefore, the overall $\FDR$ can be linked to the machine-wise $\FDR$’s as $$\begin{aligned} \FDR_{\left(c^*\right)} &= \EE\left\{\EE\left[\frac{|\hat{\cS}_{(c^*)} \cap \cS^{c}|}{|\hat{\cS}_{(c^*)}|}\Big|\left(\hat{\cS}_1,\dots,\hat{\cS}_k\right)\right]\right\} \nonumber\\&\leq \EE\left\{\frac{1}{c^*}\sum_{i=1}^{k} \EE\left[\frac{|\hat{\cS}_i \cap \cS^{c}|}{|\hat{\cS}_{(c^*)}|}\Big|\left(\hat{\cS}_1,\dots,\hat{\cS}_k\right)\right]\right\}.\end{aligned}$$ In addition, since by the definition of $c^*$ we have $|\hat{\cS}_{(c^*)}| \geq \frac{1}{k} \sum_{i=1}^{k} |\hat{\cS}_i| \geq \frac{k}{\sum_{i=1}^{k} 1/|\hat{\cS}_i|}$, we obtain $$\begin{aligned} \FDR_{\left(c^*\right)} & \leq \EE\left\{\frac{1}{c^*}\sum_{i=1}^{k} \EE\left[\frac{|\hat{\cS}_i \cap
\cS^{c}|}{|\hat{\cS}_{(c^*)}|}\Big|\left(\hat{\cS}_1,\dots,\hat{\cS}_k\right)\right]\right\} \nonumber\\ &\leq \EE\left\{\frac{1}{k c^*}\sum_{j=1}^{k} \sum_{i=1}^{k} \EE\left[\frac{|\hat{\cS}_i \cap \cS^{c}|}{|\hat{\cS}_j|}\Big|\left(\hat{\cS}_1,\dots,\hat{\cS}_k\right)\right]\right\}\nonumber\\ & = \EE\left\{\frac{1}{k c^*}\sum_{1 \leq i,j \leq k} \EE\left[\frac{|\hat{\cS}_i \cap \cS^{c}|}{|\hat{\cS}_i|} \cdot \frac{|\hat{\cS}_i|}{|\hat{\cS}_j|}\Big|\{\hat{\cS}_l\}_{l=1}^{k}\right]\right\}\nonumber\\ & = \EE\left\{\frac{1}{k c^*}\sum_{i=1}^{k} \sum_{j=1}^{k}\EE\left[{\rm FDP}_i\frac{|\hat{\cS}_i|}{|\hat{\cS}_j|} \Big|\left(\hat{\cS}_1,\dots,\hat{\cS}_k\right)\right]\right\} \nonumber\\ & \leq \sum_{i = 1}^{k} \frac{1}{kc^*} {\rm FDP}_i \cdot \left(\max_{1\leq i \leq k}|\hat{\cS}_i| \sum_{j=1}^{k} \frac{1}{|\hat{\cS}_j|}\right)\nonumber\\ & \leq \sum_{i = 1}^{k} \frac{1}{kc^*} {\rm FDP}_i \cdot \lambda c^* \leq \lambda q.\end{aligned}$$ Here, $\lambda$ is a bound that $$\begin{aligned} \lambda \geq \max_{1\leq i \leq k}\frac{|\hat{\cS}_i|}{c^*} \sum_{j=1}^{k} \frac{1}{|\hat{\cS}_j|}.\end{aligned}$$ $\ep$\ \ **Proof for theorem \[thm2\].** We consider the expected number of true discoveries $\EE| \hat{\cS} \cap \cS |$ and denote ${\rm TPP}_i = \frac{|\hat{\cS}_i \cap \cS|}{| \cS |},~i \in \left[k\right]$ and $\TPP = \frac{|\hat{\cS}_{(c^*)} \cap \cS|}{| \cS |}$. Then we have $$\begin{aligned} |\cS| \sum_{i=1}^{k} \TPP_i &= \sum_{i=1}^{k} | \hat{\cS}_i \cap \cS| \nonumber\\&= \sum_{i=1}^{k} \sum_{j \in \cS} \bfm{1}_{\left\{j \in \hat{\cS}_i\right\}} = \sum_{j \in \cS} m_j \nonumber\\&= \sum_{j \in \cS \cap \hat{\cS}_{(c^*)}} m_j + \sum_{j \in \cS^c \cap \hat{\cS}_{(c^*)}} m_j \nonumber\\&\leq k| \cS \cap \hat{\cS}_{(c^*)} | + c^* | \cS^c \cap \hat{\cS}_{(c^*)} |,\end{aligned}$$ which is equivalent to $$\begin{aligned} \TPP \geq \frac{1}{k} \left(\sum_{i=1}^{k} \TPP_i - c^* \frac{| \cS^c \cap \hat{\cS}_{(c^*)} |}{| \cS |} \right).\end{aligned}$$ Based on the assumption with an upper bound on $\hat{\cS}_{(c^*)}$ with $\gamma \in (0,1/2)$, $$\begin{aligned} |\hat{\cS}_{(c^*)}|\leq (1+\gamma) |\cS|,\end{aligned}$$ we take expectation for the inequality and then obtain $$\begin{aligned} {\rm Power} \geq \frac{1}{k} \sum_{i=1}^{k} {\rm Power}_i - \alpha q,\end{aligned}$$ where $\alpha = \frac{c^*}{k}(1+\gamma) < \frac{3}{4}$. $\ep$\ \ **Proof for theorem \[thm3\].** We can write explicitly that $$\begin{aligned} {\rm diff} = \sum_{j \in \cS} \bfm{1}_{\left\{0 < m_j < c^*\right\}}.\end{aligned}$$ Then, for positive $m_j$, $\EE[{\rm diff}] = \sum_{j \in \cS} \PP(m_j < c^*)$ with $$\begin{aligned} \PP(m_j < c^*, j \in \cS) &\leq \frac{k - \EE[m_j|j \in \cS]}{k - c^*} \nonumber\\&= \frac{k - \EE[\sum_{i=1}^{k} \bfm{1}_{\{j \in \hat{\cS}_i\}}|j \in \cS]}{k - c^*} \nonumber\\&\leq \frac{k(1-\eta_{n,d})}{k - c^*} \nonumber\\ &\leq \xi (1-\eta_{n,d}),\end{aligned}$$ where $c^* \leq k/2$ by definition and thus $\xi \leq 2$. $\ep$\ \ **Proof for proposition \[union\].** With the Union rule, $\hat{\cS}_{U} = \bigcup_{i=1}^{k} \hat{\cS}_i$ and thus $\hat{\cS}_i \subset \hat{\cS}_{U}$ for all $i=1,\dots,k$. 
Since $|\bigcup_{i=1}^{k} A_i| \leq \sum_{i=1}^{k} |A_i|$, we apply this fact to $\hat{\cS}_{U} \cap \cS^c = \bigcup_{i=1}^{k} (\hat{\cS}_i \cap \cS^c)$ and consider the overall FDR: $$\begin{aligned} \FDR &= \EE\left[\frac{|\hat{\cS}_{U} \cap \cS^c|}{|\hat{\cS}_{U}|}\right] \leq \EE\left[\sum_{i=1}^{k} \frac{|\hat{\cS}_{i} \cap \cS^c|}{|\hat{\cS}_{U}|}\right] \nonumber\\&\leq \sum_{i=1}^{k} \EE\left[\frac{|\hat{\cS}_{i} \cap \cS^c|}{|\hat{\cS}_{i}|} \right] \leq \sum_{i=1}^{k} \FDR_i \nonumber\\&\leq \sum_{i=1}^{k} q_i.\end{aligned}$$ $\ep$\ \ **Proof for proposition \[intersec\].** With $\hat{\cS}_{I} = \bigcap_{i=1}^{k} \hat{\cS}_i$, we have $m_j = \sum_{i=1}^{k} \bfm{1}_{\left\{j \in \hat{\cS}_{i}\right\}} = k$ for $j \in \hat{\cS}_{I}$. Therefore, we have $$\begin{aligned} k|\hat{\cS}_{I} \cap \cS^c| &= \sum_{j \in \hat{\cS}_{I} \cap \cS^c} m_j \leq \sum_{j \in \cS^c} m_j \nonumber\\&= \sum_{j \in \cS^c} \sum_{i=1}^{k} \bfm{1}_{\left\{j \in \hat{\cS}_{i}\right\}} = \sum_{i=1}^{k} |\hat{\cS}_{i} \cap \cS^c|.\end{aligned}$$ We then consider the overall FDR, $$\begin{aligned} \FDR &= \EE\left[\frac{|\hat{\cS}_{I} \cap \cS^c|}{|\hat{\cS}_{I}|}\right] \leq \EE\left[\frac{1}{k}\sum_{i=1}^{k}\frac{|\hat{\cS}_{i} \cap \cS^c|}{|\hat{\cS}_{I}|}\right]\nonumber\\ &\leq \EE\left[\frac{1}{k}\sum_{i=1}^{k}\frac{|\hat{\cS}_{i} \cap \cS^c|}{|\hat{\cS}_{i}|} \cdot \frac{|\hat{\cS}_{i}|}{|\hat{\cS}_{I}|}\right] \nonumber\\&\leq \frac{\kappa}{k} \sum_{i=1}^{k} \EE\left[\frac{|\hat{\cS}_{i} \cap \cS^c|}{|\hat{\cS}_{i}|}\right]\leq \kappa q.\end{aligned}$$ Here $\kappa \geq 1$ is a constant such that $\max_{i \in \left[k\right]} \frac{|\hat{\cS}_i|}{|\hat{\cS}_{I}|} \leq \kappa$. $\ep$ Illustration of the aggregation process ======================================= In this part, results in four cases are provided to illustrate the connection between the overall FDR/power and the machine-wise ones. The $k$ grey bars in each plot are the $\FDR_i$’s or ${\rm Power}_i$’s of the $k$ machines. From the four cases, together with the simulation results in our paper, we can see that ADAGES offers a better tradeoff than the other methods (the Union rule, the Intersection rule, median-aggregation and the method in [@xie2019aggregated]). For FDR, all methods except the Union rule produce exact control whenever the machine-wise FDR is controlled at the pre-defined level. The Union rule, however, as shown in proposition \[union\], is only able to control the FDR at a higher level. When $k$ or the dimension $d$ is large, strict aggregation methods cause a loss of power, as seen in the results of the Intersection rule and the method in [@xie2019aggregated]. We should note that “strict” refers to strict pre-defined levels for each machine as well as to strict aggregation rules. As shown in the results, ADAGES produces power very close to that of the Union rule, which is the highest power an aggregation method can achieve.
![Representation of the aggregation process: barplot of machine-wise FDR(left)/power(right) and aggregation results under different rules ($q=0.2,k=5,d=20,n=1000,n_i=200$).[]{data-label="fig:p1"}](Bar5_20.pdf){width="80.00000%"} ![Representation of the aggregation process: barplot of machine-wise FDR(left)/power(right) and aggregation results under different rules ($q=0.2,k=5,d=80,n=1000,n_i=200$).[]{data-label="fig:p2"}](Bar5_80.pdf){width="80.00000%"} ![Representation of the aggregation process: barplot of machine-wise FDR(left)/power(right) and aggregation results under different rules ($q=0.2,k=10,d=20,n=1000,n_i=100$).[]{data-label="fig:p3"}](Bar10_20.pdf){width="80.00000%"} ![Representation of the aggregation process: barplot of machine-wise FDR(left)/power(right) and aggregation results under different rules ($q=0.2,k=10,d=80,n=1000,n_i=100$).[]{data-label="fig:p4"}](Bar10_80.pdf){width="80.00000%"} [^1]: The author finished this paper when he was an undergraduate at the University of Science and Technology of China
--- abstract: | Exceptional sequences of vector bundles over a variety $X$ are special generators of the triangulated category $D^b(Coh\,X)$. Kapranov proved the existence of tilting bundles over homogeneous varieties for the general linear group. King conjectured the existence of tilting sequences of vector bundles on projective varieties which are obtained as quotients of Zariski open subsets of affine spaces. The goal of this paper is to give further examples of strong exceptional sequences of vector bundles on certain projective varieties. These are obtained as geometric invariant quotients of affine spaces by linear actions of reductive groups, as in King’s conjecture. author: - Mihai Halic title: Strong exceptional sequences of vector bundles on certain Fano varieties --- Introduction {#introduction .unnumbered} ============ The concept of derived categories was introduced by Grothendieck and developed further by Verdier. However, their work remained within a very general and abstract setting, and people wished to have concrete examples arising from geometry. In algebraic geometry one of the essential objects associated to a projective variety is the (bounded) derived category of coherent sheaves over it. Its knowledge allows one to recover all the cohomological data of the variety. Beilinson made the first major step by proving that the line bundles $ \mathcal O_{\mathbb P^n},\mathcal O_{\mathbb P^n}(1),\!..., \mathcal O_{\mathbb P^n}(n) $ generate $D^b({\rm Coh}\,\mathbb P^n)$, and actually form a tilting sequence. Afterwards, several other examples of varieties admitting (strong and complete) exceptional sequences of vector bundles appeared. One of the most notable results in this direction has been obtained by Kapranov [@ka]. He explicitly constructed tilting sequences of vector bundles over homogeneous varieties for ${\rm Gl}(n)$, that is over Grassmannians and flag manifolds. Further examples, which are based on Kapranov’s result, have been obtained in [@cm]. In the unpublished preprint [@ki], King conjectured that there are tilting bundles over projective varieties which are obtained as invariant quotients of affine spaces for linear actions of reductive groups. Observe that flag varieties for $Gl(n,\mathbb C)$ and toric varieties are special cases of such quotient varieties. The answer to King’s conjecture is negative in general. Hille and Perling gave in [@hp] an example of a toric variety ($\mbb P^2$ blown-up successively three times) with the property that it does not admit a tilting object formed by line bundles. However, it is still a very interesting problem to find classes of examples for which the conjecture holds. In the paper [@ah], Altmann and Hille proved the existence of (partial) strong exceptional sequences on toric varieties arising from thin representations of quivers, but their construction gives sequences of very short length. The goal of this paper is to give further examples of strong exceptional sequences of vector bundles over certain Fano varieties. The varieties considered in this paper are obtained as geometric quotients of open subsets of affine spaces by linear actions of reductive groups. For the comfort of the reader, we recall that a sequence of vector bundles $(\cal F_1,\ldots,\cal F_z)$ over a variety $Y$ is called [*strongly exceptional*]{} if the following two conditions are fulfilled: 1. $H^0\bigl(Y,{\mathop{\rm Hom}\nolimits}(\cal F_j,\cal F_i)\bigr)=0, \;\forall\,1\les i< j\les z$; 2.
$H^q\bigl(Y,{\mathop{\rm Hom}\nolimits}(\cal F_j,\cal F_i)\bigr)=0, \;\forall\,i,j=1,\ldots,z$, and $\forall\,q>0$. A [*tilting sequence*]{} is a strongly exceptional sequence $(\cal F_1,\ldots,\cal F_z)$ with the property that $\cal F_1,\ldots,\cal F_z$ generate $D^b({\rm Coh}\,Y)$. Consider an algebraically closed field $K$ of characteristic zero, a connected, reductive group $G$ over $K$, and a representation $\rho:G\rar{{\rm Gl}}(V)$. Let ${{\mbb V}}:={\mathop{\rm Spec}\nolimits}\bigl({\mathop{\rm Sym}\nolimits}^\bullet V^\vee\bigr)$ be the affine space corresponding to $V$. We denote by $\chi_{{\rm ac}}=\chi_{{\rm ac}}(G,V)$ the weight of the $G$-module ${{\rm det}}V$. We make the following assumptions: 1. the ring of invariants $K[{{\mbb V}}]^T\!=\!K$, where $T$ is the maximal torus of $G$; 2. ${\rm codim}_{{{\mbb V}}}{{\mbb V}}^{{\rm us}}(G,\chi_{{\rm ac}})\ges 2$, and $G$ acts freely on the semi-stable locus ${{\mbb V}}^{{\rm ss}}(G,\chi_{{\rm ac}})$. We denote by $Y:={{\mbb V}}{{\slash\kern-0.65ex\slash}}_{\chi_{{\rm ac}}}G$ the invariant quotient. The main ingredient that we use for constructing exceptional sequences over $Y$ is the set $\{\cal E_1,\ldots,\cal E_N\}$ of ‘extremal’ nef vector bundles over $Y$ (see section \[sct:nef-vb\]). They enjoy good cohomology vanishing properties, which are required by the definition of exceptional sequences. The first main result of this paper is the following: The estimates appearing in this theorem are not strong enough to recover Kapranov’s construction for partial flag varieties. We have to go further and exploit the fibre bundle structure. The optimal result would be the following: - Consider a fibre bundle $Y\srel{\phi}{\rar}X$. Suppose that $(\cal F_i)_{i\in I}$ is a strong exceptional sequence of vector bundles on $X$, and that $(\cal E_j)_{j\in J}$ is a sequence of vector bundles on $Y$ whose restrictions to the fibres of $\phi$ give rise to strong exceptional sequences relative to $\phi$. Then $(\phi^*\cal F_i\otimes\cal E_j)_{(i,j)\in I\times J}$ is a strong exceptional sequence on $Y$. Unfortunately, such a statement is overoptimistic in general. The content of our second main result is that the statement above becomes true under suitable restrictive hypotheses on the fibration $\phi$. More precisely, we place ourselves in the following framework: 1. There is a quotient group $H$ of $G$ with kernel $G_0$, and a quotient $H$-module $W$ of $V$ with kernel $V_0$, such that the natural projection ${\mathop{\rm pr}\nolimits}^{{{\mbb V}}}_{\mbb W}:{{\mbb V}}\rar\mbb W$ has the following property: ${\mathop{\rm pr}\nolimits}^{{{\mbb V}}}_{\mbb W}\bigl(\;{{\mbb V}}^{{\rm ss}}\bigl(G,\chi_{{\rm ac}}(G,V)\bigr)\;\bigr) \subseteq\mbb W^{{\rm ss}}\bigl(G,{\chi_{{\rm ac}}}(H,W)\bigr).$ We denote by $Y\srel{\phi}{\rar}X$ the induced morphism at the quotient level. 2. The unstable loci have codimension at least two, and both quotients ${{\mbb V}}^{{\rm ss}}(G,\chi_{{\rm ac}}(G,V))\rar Y\;$ and $\;\mbb W^{{\rm ss}}(H,\chi_{{\rm ac}}(H,W))\rar X$ are principal bundles. 3. The nef cone of the total space $Y$ is the sum of the nef cones of the base $X$ and of the fibre: $\crl N(G,V)=\crl N(H,W)+\crl N(G_0,V_0)$. Denote by ${{\cal V\kern-.41ex\euf B}}^+(X)$ and ${{\cal V\kern-.41ex\euf B}}^+_0$ the corresponding sets of extremal nef vector bundles. 4. The maximal torus $T_0\subset G_0$ has exactly $\dim T_0$ weights on $V_0$.
Our main result in the relative case is the following: We point out that in both cases the question under which hypotheses these sequences are, or extend to, [*tilting*]{} objects [*remains open*]{}. However, we remark that, taking into account the example constructed in [@hp], a [*general answer*]{} concerning the (non-)existence of tilting vector bundles over quotients of affine spaces must be rather involved. The definition of an exceptional set involves two conditions. Accordingly, the paper is divided into two main parts, each focusing on one of the two conditions: – The sections \[sct:stability\] and \[sct:conseq-stab\] form the first part: we prove a stability result for associated vector bundles, and define an order on the set of irreducible $G$-modules for which there are no homomorphisms from a ‘larger’ vector bundle into a ‘smaller’ one (see theorem \[thm:h000\]). – The sections \[numer-crit\], \[sct:nef-vb\] and \[cohom-nef\] have a preparatory character: we introduce the ‘extremal’ nef vector bundles, and study their cohomological properties. – The second part of the article consists of the sections \[sct:main\] and \[sct:main2\]: they contain the proofs of the main results. The main tool used for proving the vanishing of the higher cohomology groups is a result due to Manivel (see [@ma]) and Arapura (see [@ar]). However, this general result is not sufficient to address the relative case, and we have to dwell on our particular context. In theorem \[thm:direct-image\] we prove the following nefness property, which is an essential ingredient in the proof of Theorem B. – Finally, in section \[sct:expl\], we illustrate the general theory. On the one hand, we recover Kapranov’s construction for the Grassmannian and for flag varieties by using our results. On the other hand, we give further examples of strong exceptional sequences over quiver varieties. The very pleasant feature is that we obtain these examples by an almost algorithmic procedure, which applies to any quiver variety. Some of the results have been presented at the HOCAT 2008 Conference, held at Centre de Recerca Matemàtica, Bellaterra, Spain. A stability property ==================== [\[sct:stability\]]{} The symbol $\mbb Q$ will always denote the field of rational numbers, and $K$ will be an algebraically closed field of characteristic zero. Throughout the paper, $G$ will always denote a connected, reductive group over $K$, and $T$ will be the maximal torus of $G$. We consider a faithful representation $\rho:G\rar{{\rm Gl}}(V)$, and denote by ${{\mbb V}}:={\mathop{\rm Spec}\nolimits}({\mathop{\rm Sym}\nolimits}^\bullet V^\vee)$ the corresponding affine space. We shall assume that the ring of invariants $K[{{\mbb V}}]^T=K$; it follows automatically that $K[{{\mbb V}}]^G=K$. [\[lm:Zneq0\]]{} Let $V$ be a non-zero $G$-module such that $K[{{\mbb V}}]^T=K$. Then: 1. There is a 1-PS $\l\in\cal X_*(T)$ such that all its weights on $V$ are strictly positive. 2. $G$ is not semi-simple. We fix once and for all ${{\mfrak l}}\in\cal X_*(T)\otimes_{\mbb Z}\mbb R$ such that its weights on $V$ are all positive, and moreover it has ‘irrational slope’, that is ${{\rm Ker}}({{\mfrak l}}:\cal X^*(T)\rar\mbb R)=\{0\}.$ \(i) Let $\Phi$ denote the set of weights of the $T$-module $V$. Then the set of weights of $T$ on $K[{{\mbb V}}]$ is the ‘cone’ $\underset{\eta\in\Phi}\sum\mbb N\eta$. Since $K[{{\mbb V}}]^T=K$, this cone is strictly convex. Otherwise, one could construct a non-trivial $T$-invariant monomial.
It follows that there is $\l\in\cal X_*(T)$ with $\langle\eta,\l\rangle>0$ for all $\eta\in\Phi$. \(ii) Assume that $G$ is a semi-simple group. The previous step implies that $K[{{\mbb V}}^m]^T=K$, hence $K[{{\mbb V}}^m]^G=K$ for all $m\ges 1$. Since $G$ is semi-simple, it has an open orbit in ${{\mbb V}}^m$. For large $m$ we get a contradiction. Let $\th\in\cal X^*(G)$ be a character. We denote: $${\label{G-sst}} \begin{array}{l} K[{{\mbb V}}]^G_\th:=\{f\in K[{{\mbb V}}]\mid f(g\times y)=\th(g)\cdot f(y), \,\forall y\in{{\mbb V}}\} \\[2ex] K[{{\mbb V}}]^{G,\th}:=K\oplus\underset{n\ges 1}\bigoplus K[{{\mbb V}}]^G_{\th^n}, \\[2ex] {{\mbb V}}^{{\rm ss}}(G,\th):=\{y\in{{\mbb V}}\mid\exists n\ges 1\text{ and } f\in K[{{\mbb V}}]^G_{\th^n}\text{ s.t. }f(y)\neq 0\}. \end{array}$$ We say that $\th$ is [*effective*]{} if there is $n\ges 1$ such that $K[{{\mbb V}}]^G_{\th^n}\neq 0$, that is ${{\mbb V}}^{{\rm ss}}(G,\th)\neq\emptyset$. We define the [*anti-canonical character*]{} of the $G$-module $V$ to be the character of the $G$-module ${{\rm det}}V$. Explicitly: decompose $V=\underset{\og\in\cal X}\bigoplus M_\omega^{\oplus m_\og}$ into its $G$-isotypical components. Let $\chi_\og$ be the character by which $Z(G)^\circ$ acts on $M_\og$, and denote $d_\og:=\dim M_\og$. Then $\; \chi_{{\rm ac}}(G,V):= \mbox{$\underset{\og\in\cal X}\sum$} m_\og d_\og\chi_\og\in\cal X^*(G). $ For shorthand, we will write $\chi_{{\rm ac}}=\chi_{{\rm ac}}(G,V)$. [\[lm:effective\]]{} Assume that $m_\og \ges d_\og$. Then the character $\chi_\og$ is effective. Moreover, if $m_\og > d_\og$ for all $\og$, then $\chi_{{\rm ac}}$ is effective, and the $\chi_{{\rm ac}}$-unstable locus has codimension at least two. We view $V$ as $\bigoplus_{\og\in\cal X} {\mathop{\rm Hom}\nolimits}(K^{m_\og }, M_\og)$. Since $m_\og \ges d_\og$, we can associate to an element ${\mathop{\rm Hom}\nolimits}(K^{m_\og }, M_\og)$ the $d_\og\times d_\og$-minor corresponding to the first $d_\og$ columns. This defines a regular function $f_\og$ which is $d_\og\chi_\og$-equivariant; moreover, $f_\og$ does not vanish on surjective homomorphisms. It follows that $d_\og\chi_\og$, and therefore $\chi_\og$, is effective for all $\og$. If a point belongs to the unstable locus, then all the minors $f_\og$ have to vanish. Since $m_\og \ges d_\og+1$, this implies the vanishing of at least two independent minors. Now we prove a general stability result of independent interest. It is well known that the tangent bundle of the projective space is stable, and more generally the tautological bundles over Grassmannians are stable. Our goal is to generalize these facts. We denote ${\{G_j\}}_{j\in J}$ the simple factors of $G$, and let $\gamma_j:G\rar G_j$, be the corresponding quotient morphisms. Using the $\gamma_j$’s we extend the structural group of $\O\rar Y$, and obtain the principal $G_j$-bundles $\O(G_j)\rar Y$. The main result of this section is: [\[thm:stab-bdl\]]{} Assume that $G$ acts freely on $\O\!:=\!{{\mbb V}}^{{\rm ss}}(G,\th)$, for some $\th\in\cal X^*(G)$, and let $Y$ be the quotient. Assume that $m_\omega >\dim M_\og$ holds for all $\omega\in\cal X$. Then the principal $G_j$-bundles $\O(G_j)\rar Y\!$, $j\in J$, obtained by extending the structural group are semi-stable. We fix $j\in J$, and a maximal parabolic subgroup $P_j\subset G_j$; denote $P:=\gamma_j^{-1}P_j$: it is a maximal parabolic subgroup of $G$. We observe that the associated homogeneous bundles $\bigl(\O(G_j)\bigr)(G_j/P_j)$ and $\O\bigl(G/P\bigr)$ are isomorphic. 
We denote $H=\underset{\omega }{\prod}\,H_\omega := \underset{\og }{\prod}\,{{\rm Gl}}_K(m_\omega )$: it acts naturally on ${{\mbb V}}$; the $G$- and $H$-actions on ${{\mbb V}}$ commute. It follows that $H$ still acts on $\O(G/P)$ by $ H\times\O(G/P)\rar\O(G/P),\; h\times[y,gP]:=[hy,gP]. $ We will prove that whenever there is a reduction of the structural group $ s:Y^o\rar\bigl(\O(G_j)\bigr)(G_j/P_j)=\O\bigl(G/P\bigr), $ with $Y^o\subset Y$ open and ${{\rm codim}}_Y(Y\sm Y^o)\ges 2,$ holds $\deg_{Y}\big(s^*{\sf T}_{\O(G/P)/Y}\big)\ges 0.$ Equivalently, the reduction $s$ can be viewed as a $G$-equivariant morphism $S:\O^o=q^{-1}(Y^o)\rar G/P$. The idea is to move $s$ using the $H$-action on $\O(G/P)$. Let $\hat y\in Y$ be a generic point, and consider $y\in\O$ over $\hat y$. We define the following subgroups of $H$: $\quad K_{\hat y}:={\rm Stab}_H(y)$, and $$H_{\hat y}:=\{h\in H\mid \exists\,g_h\in G\text{ s.t. } h\times y=\rho(g_h^{-1})y\} =\mbox{$\underset{\og}\prod$} H_{\og ,\hat y}.$$ We observe that $K_{\hat y}$ does not depend on the choice of $y\in q^{-1}(\hat y)$. Since $G$ acts freely on $\O$, the assignment $h\mt g_h$ defines a group homomorphism $\rho_{\hat y}:H_{\hat y}\rar G$ whose kernel is $K_{\hat y}$. We move the section $s$ using the action of $H_{\hat y}$. For $h\in H_{\hat y}$ define a new section $s_h$ as follows: $$s_h(\hat x):=[x,S(h^{-1}\times x)] \quad \text{(equivalently, $S_h(x):=S(h^{-1}\times x)$).}$$ Observe that as $h\in H_{\hat y}$ varies, $s_h(\hat y)=h\times s(\hat y)$ moves in the vertical direction. $H_{\hat y}/K_{\hat y}\rar G/Z(G)^\circ$ is surjective. Write $y={(y_\omega )}_\omega $ w.r.t. the direct sum decomposition of $V$; for each $\omega\in\cal X$, $y_\omega =(y_{\omega 1},\ldots,y_{\omega m_\omega})$. Since $y\in\O$ is chosen generically, and $m_\omega>\dim M_\og=:d_\omega$, we may assume that for each $\omega\in\cal X$ the vectors $y_{\omega 1},\ldots,y_{\omega m_\omega}$ span $M_\og$. Equivalently, we may view $y_\omega$ as a surjective homomorphism $K^{m_\omega}\rar M_\og$. For $g\in G$ holds $\rho(g)y={(\rho_\omega (g)y_\omega)}_\omega$. Using that $m_\omega> d_\omega$, we deduce that for each $\omega\in\cal X$ there is $h_\omega\in{{\rm Gl}}_K(m_\omega)$ such that $h_\omega y_\omega=\rho_\omega(g^{-1})y_\omega$. For $h:={(h_\omega)}_\omega$ we have $hy=\rho(g^{-1})y$, that is $g\in{\rm Image}\bigl(H_{\hat y}/K_{\hat y}\rar G\bigr)$. Back to our proof: the infinitesimal action of $H_{\hat y}$ preserves the restriction to the fibre $q^{-1}(\hat y)=\{[y,gP]\mid g\in G\}\cong G/P$ of the relative tangent bundle ${\sf T}_{\O(G/P)/Y}$. By this isomorphism the relative tangent bundle corresponds to ${\sf T}_{G/P}\rar G/P$. The claim implies that the infinitesimal action ${\mathop{{\cal L}ie}\nolimits}(H_{\hat y})\rar{\sf T}_{\O(G/P)/Y, s(\hat y)}$ is surjective. Hence there is a section $\si\in H^0(Y^o,s^*{{\rm det}}{\sf T}_{\O(G/P)/Y})$ which does not vanish at $\hat y$. It follows $\deg_Y\big(s^*{\sf T}_{\O(G/P)/Y}\big)\ges 0$. [\[cor:stab-bdl\]]{} Assume $\th\in\cal X^*(G)$ has the property that $G$ acts freely on $\O:={{\mbb V}}^{{\rm ss}}(G,\chi)$, and let $Y$ be the quotient. Let $E$ be an irreducible $G$-module, and denote by $\cal E:=\O(E)$ the associated vector bundle over $Y$. Assume that $m_\omega>\dim M_\og$ holds for all $\omega\in\cal X$. Then $\cal E\rar Y$ is slope semi-stable with respect to the polarization induced by the character $\th$. We may assume that $G=Z(G)^\circ\times\bigl(\underset{j\in J}{\times}G_j\bigr)$. 
Since each $\O(G_j)$ is semi-stable, $\O\rar Y$ itself is semi-stable. The homomorphism $\rho_\omega:G\rar{{\rm Gl}}(E)$ maps $Z(G)^\circ$ into the centre of ${{\rm Gl}}(E)$. By using [@rr theorem 3.18], we deduce that $\cal E=\O(E)\rar Y$ is semi-stable. The $H^0$ spaces ================ [\[sct:conseq-stab\]]{} Assume that $E$ is a $G$-module. We will denote by $\cal E$ the vector bundle over $Y$ associated to it. More precisely, $\cal E$ corresponds to the module of covariants $\bigl(K[{{\mbb V}}]\otimes_K E^\vee\bigr)^G$. The classical Schur lemma says that for two irreducible $G$-modules $E$ and $F$, the space ${\mathop{\rm Hom}\nolimits}(E,F)$ consists either of scalars (if $E=F$), or vanishes (if $E\neq F$). In this section we will prove that a similar result holds for the associated vector bundles $\cal E$ and $\cal F$. For warming-up, we start with a special case. We have proved in corollary \[cor:stab-bdl\] that $\cal E\rar Y$ is a semi-stable vector bundle w.r.t. any polarization on $Y$, as soon as the multiplicities $m_\og> d_\og$ for all $\og$. Its first Chern class equals $\dim(E)\cdot\chi_E$, where $\chi_E$ denotes the character of $Z(G)^\circ$ on $E$. Let $\th\in\cal X^*(G)$ be an ample class on $Y$; the slope of $\cal E$ w.r.t. $\th$ equals $$\mu_\th(\cal E)= \frac{\deg_\th\cal E}{\dim E}= \langle \chi_\og\cdot\th^{\dim Y-1},[Y] \rangle.$$ [\[defn:order1\]]{} Let $\th$ be a polarization of $Y$. We define the order $<_\th$ on $\cal X^*\bigl(Z(G)^\circ\bigr)$ as follows: we declare that $\chi<_\th\eta$ if holds: $$\mu_\th(\chi):=\langle\chi\cdot\th^{\dim Y-1},[Y]\rangle < \mu_\th(\eta):=\langle\eta\cdot\th^{\dim Y-1},[Y]\rangle.$$ Observe that, by the hard Lefschetz property, we can choose $\th$ in such a way that $\chi=\eta\Leftrightarrow\mu_\th(\chi)=\mu_\th(\eta)$. [\[prop:h00\]]{} We assume that $m_\og> d_\og$ holds for all $\og$. Let $E$ and $F$ be two distinct irreducible $G$-modules, such that $Z(G)^\circ$ acts on them by two different characters $\chi_E$ and $\chi_F$ respectively, such that $\mu_\th(\cal E)<\mu_\th(\cal F)$. Then $H^0\bigl(Y,{\mathop{\rm Hom}\nolimits}(\cal F,\cal E)\bigr)=0$. This is an immediate consequence of the semi-stability property of $\cal E$ and $\cal F$. The proposition has two shortcomings: first, we have imposed the condition on the multiplicities; second, there are distinct representations $E$ and $F$ such that the characters $\chi_E$ and $\chi_F$ coincide. So we need to sharpen our result. [\[not-eff\]]{} Assume that ${\rm codim}_{{\mbb V}}{{\mbb V}}^{{\rm us}}(G,\chi_{{\rm ac}})\ges 2$. Let $E$ be an irreducible $G$-module, and let $\cal E\rar Y$ be the associated vector bundle. Suppose that there is a weight $\veps$ of $T$ on $E$ which is not $T$-effective (that is ${{\mbb V}}^{{\rm ss}}(T,\veps)=\emptyset$). Then $H^0(Y,\cal E)=0$. Recall that $H^0(Y,\cal E)={\mathop{\rm Mor}\nolimits}(\mbb V\rar E)^G$, where $$(g\times S)(y)=g\times S(g^{-1}\times y),\quad\forall\,g\in G \text{ and }{{\mbb V}}\srel{S}{\rar}E.$$ Assume that there is a non-zero $G$-equivariant morphism $S:\mbb V\rar E$. Then the linear span $\langle S\rangle:=\langle S(y), y\in\mbb V\rangle$ is actually a $G$-submodule of $E$. Since $E$ is irreducible and $S\neq 0$, we deduce $\langle S\rangle=E$. On the other hand, $\veps$ is a weight of $T$ on $E$ which is not effective. We choose a one dimensional $T$-submodule $E_\veps\subset E$, and consider [*the function*]{} $S_\veps:={\mathop{\rm pr}\nolimits}^E_{E_\veps}\circ\, S$. 
Then $ S_\veps(t\times y)=\veps(t)\cdot S_\veps(y),\; \forall t\in T,\,y\in\mbb V. $ Since $\veps$ is not effective, the function $S_\veps$ must vanish. This implies that the image of the morphism $S$, and consequently its linear span $\langle S\rangle$, is contained in the complement $E'$ of $E_\veps$. This contradicts $\langle S\rangle=E$; hence no non-zero $G$-equivariant morphism $S$ exists, that is $H^0(Y,\cal E)=0$. In order to check that a sequence of vector bundles forms an exceptional sequence, one has to prove that there are no non-trivial homomorphisms from ‘larger’ bundles into ‘smaller’ ones. Now we define the total order required for this property. [\[defn:order2\]]{} Consider ${{\mfrak l}}\in\cal X_*(T)$ as in lemma \[lm:Zneq0\]. 1. For any irreducible $G$-module $E$, we define $$\mfrak l(E):= \max\{\langle\eta,\mfrak l\rangle\mid\eta\text{ is a weight of $T$ on $E$}\}.$$ Equivalently:\ $\mfrak l(E)=\langle\eta_E,\mfrak l\rangle$, where $\eta_E$ is the dominant weight of $E$ (with respect to $\mfrak l$). 2. Let $E$ and $F$ be two irreducible $G$-modules. We say that $E<_{{{\mfrak l}}}\,F$ if $\mfrak l(E)\,<\,\mfrak l(F)$. Since $\mfrak l$ has irrational slope, for any two irreducible $G$-modules $E$ and $F$ holds: $\mfrak l(E)=\mfrak l(F)\Rightarrow E=F$. Hence $<_{{{\mfrak l}}}$ is a total order relation. The following result can be viewed as a generalization of Schur’s lemma. [\[thm:h000\]]{} Assume that ${\rm codim}_{{\mbb V}}{{\mbb V}}^{{\rm us}}(G,\chi_{{\rm ac}})\ges 2$. 1. Let $E$ be an irreducible $G$-module. Then $H^0\bigl(Y,{\mathop{\rm End}\nolimits}(\cal E)\bigr)=K.$ 2. Let $E$ and $F$ be two irreducible $G$-modules such that $E<_{{{\mfrak l}}}\,F$. Then $H^0\bigl(Y,{\mathop{\rm Hom}\nolimits}(\cal F,\cal E)\bigr)=0.$ \(i) A section $s\!\in\! H^0\bigl(Y,{\mathop{\rm End}\nolimits}(\cal E)\bigr)$ corresponds to a $G$-equivariant morphism $S:{{\mbb V}}\rar{\mathop{\rm End}\nolimits}(E)$, where the action on ${\mathop{\rm End}\nolimits}(E)$ is by conjugation. (Here we use the hypothesis on the codimension of the unstable set: regular maps defined on the semi-stable locus extend to the whole affine space.) We will prove that the morphism $S$ is a scalar multiple of the identity. The origin $0\in{{\mbb V}}$ is fixed under $G$. Since $S$ is $G$-equivariant, the homomorphism $S_0\in{\mathop{\rm End}\nolimits}(E)$ is ${{\rm Ad}}_G$-invariant. Schur’s lemma implies that $S_0=c\cdot{{1\kern-0.57ex\rm l}}_{E}$, with $c\in K$. By lemma \[lm:Zneq0\], there is a 1-PS $\l\in\cal X_*(T)$ such that all its weights on $V$ are strictly positive. In particular $\disp\lim_{t\rar 0}\l(t)y=0$ for all $y\in{{\mbb V}}$. The $G$-equivariance implies $S_{\l(t)y}={{\rm Ad}}_{\l(t)}\circ S_y$, hence $\,\disp\lim_{t\rar 0}{{\rm Ad}}_{\l(t)}\circ S_y =S_0=c{{1\kern-0.57ex\rm l}}_{E}.$ The $\l(t)$-action on $E$ can be diagonalized in an appropriate basis formed by weight vectors. We denote by ${\{E_i\}}_{i\in I}$ the weight spaces of $E$. We order the elements of $I$ in decreasing order, and consider the corresponding basis in $E$. Then w.r.t. this basis, $S_y$ has the following block-matrix shape: $$S_y = \left( \begin{array}{c|c|c} c{{1\kern-0.57ex\rm l}}& *&*\\ \hline 0&c{{1\kern-0.57ex\rm l}}&*\\ \hline 0&0&c{{1\kern-0.57ex\rm l}}\end{array} \right) \;\text{or equivalently}\; S_y-c{{1\kern-0.57ex\rm l}}= \left( \begin{array}{c|c|c} 0 & *&*\\ \hline 0&0&*\\ \hline 0&0&0 \end{array} \right), \;\forall\,y\in{{\mbb V}}$$ Let $\mfrak N_\l$ be the vector space formed by the matrices having this shape ($\mfrak N_\l$ is actually a nilpotent Lie algebra).
Intrinsically, $$\disp\mfrak N_\l=\{A\in{\mathop{\rm End}\nolimits}(E)\mid\lim_{t\rar 0}{{\rm Ad}}_{\l(t)}\circ A=0\}.$$ We denote $ {{\rm Ker}}(\mfrak N_\l):=\kern-1ex \mbox{$\underset{N\in\mfrak N_\l}\bigcap$}\kern-1ex{{\rm Ker}}(N). $ By Engel’s theorem, ${{\rm Ker}}(\mfrak N_\l)$ is a non-zero vector subspace of $E$. Applying the $G$-equivariance once more, we deduce that for any $g\in G$ holds: $$Ad_{g^{-1}}\circ \bigl( S_y-c{{1\kern-0.57ex\rm l}}\bigr) =S_{g^{-1}y}-c{{1\kern-0.57ex\rm l}}\in\mfrak N_\l.$$ It follows that for all $g\in G$, $${{\rm Ker}}\bigl( S_y-c{{1\kern-0.57ex\rm l}}\bigr) \supset g\cdot{{\rm Ker}}(\mfrak N_\l) \;\Longrightarrow\; {{\rm Ker}}\bigl( S_y-c{{1\kern-0.57ex\rm l}}\bigr) \!\supset\kern-.7ex \mbox{$\underset{g\in G}\sum$} g\cdot{{\rm Ker}}(\mfrak N_\l).$$ Note that the right-hand-side is a non-zero $G$-submodule of $E$. Since $E$ is irreducible, it follows that ${{\rm Ker}}\bigl(S_y-c{{1\kern-0.57ex\rm l}}\bigr)=E$, that is $S_y=c{{1\kern-0.57ex\rm l}}$ for all $y\in{{\mbb V}}$. \(ii) The $G$-module ${\mathop{\rm Hom}\nolimits}(F,E)=F^\vee\otimes E$ contains the difference $\veps:=\eta_E-\eta_F$ of the corresponding dominant characters. Since $E<_{{\mfrak l}}\,F$, i.e. ${{\mfrak l}}(E)-{{\mfrak l}}(F)<0$, the weight $\veps$ is not $T$-effective. The conclusion follows from theorem \[not-eff\]. Numerical criteria for semi-stability ===================================== [\[numer-crit\]]{} In this section we review some numerical criteria for semi-stability, needed later on. The following convention is used throughout this section: the letters $E, V, W$ denote $G$-modules, while the symbols $\mbb{E, V, W}$ will denote the corresponding affine spaces: [*e.g.*]{} $\mbb E:={\mathop{\rm Spec}\nolimits}\bigl({\mathop{\rm Sym}\nolimits}^\bullet E^\vee\bigr)$. For a $G$-module $W$, let $\eta_1,\ldots,\eta_R$ be the weights of the maximal torus $T\subset G$ on $W$. We define: $\;m:\mbb W\times \cal X_*(G)_{\mbb R}\rar \mbb R,$ $$\begin{array}{l} m(w,\l):= \min \biggl\{\; j\;\biggl|\; \begin{array}{l} \text{the $t^j$-isotypical component of $w$ w.r.t. $\,\l$} \\ \text{does not vanish} \end{array} \biggr.\biggr\}. \end{array}$$ Observe that for $\l\in\cal X_*(T)$ holds: $$m(w,\l):= \min \biggl\{ \langle\eta_j,\l\rangle \;\biggl|\; \begin{array}{l} \text{the $\eta_j$-isotypical component of $w$} \\ \text{does not vanish} \end{array} \biggr\}.$$ We fix a norm $|\cdot|$ on $\cal X_*(T)$, invariant under the Weyl group of $G$. For a character $\th\in\cal X^*(G)$, the Hilbert-Mumford criterion for $(G,\th)$-(semi-)stability reads: $$\begin{aligned} {\label{fct-m}} \begin{array}{rl} w\!\in\!\mbb W^{{{\rm s}}\,{\rm (resp. }\,{{\rm ss}}\rm)}(G,\th) &\Leftrightarrow m(w)\!:=\! \inf \left\{ \frac{\langle \th,\l\rangle}{|\l|} \,\biggl|\, m(w,\l)\ges 0\biggr. \right\} \underset{(\ges)}{>} 0 \\[2ex] &\Leftrightarrow \biggl[\, m(w,\l)\ges 0 \,\Rightarrow\, \langle \th,\l\rangle\underset{(\ges)}{>} 0 \,\biggr]. \end{array}\end{aligned}$$ For $w\in\mbb W$ we define: $$\begin{array}{rl} S(w):=& \{\eta_j\mid \text{the $\eta_j$-isotypical component of $w$ does not vanish} \} \\[1.5ex] \crl C_w=& \underset{\eta\in S(w)}\sum\mbb R_{\ges 0}\eta \\[1.5ex] \L^G_w:=& \{\l\in\cal X_*(G)\mid m(w,\l)\ges 0\} \\[1.5ex] \L^T_w:=& \{\l\in\cal X_*(T)\mid m(w,\l)\ges 0\} \\[1.5ex] =& \{\l\in\cal X_*(T)\mid \langle\eta,\l\rangle\ges 0,\;\forall\eta\in\crl C_w \} =\crl C_w^\vee. \end{array}$$ Note that $\crl C_w$ and $\L^T_w$ are convex, polyhedral cones.
Since there are finitely many $\eta$’s, only finitely many cones $\crl C_w$ and $\L_w^T$ occur as $w$ varies in $\mbb W^s(G,\th)$. We are interested in the [*minimal*]{} cones $\crl C_w$. Let $\th$ be a character of $G$. A subset $S\subset\{\eta_1,\ldots,\eta_R\}$ is [*minimal for $\th$*]{} if $ \th\in \mbox{$\underset{\eta\in S}\sum$} \mbb R_{\ges 0}\eta $ and $ \th\not\in \mbox{$\underset{\eta\in S\sm\{\eta_0\}}\sum$} \mbb R_{\ges 0}\eta $ for all $\eta_0\in S$. We denote $S_1,\ldots,S_z$ the (finitely many) minimal sets for $\th$, and the corresponding cones by $\crl C_j$ and $\L_j:=\crl C_j^\vee$, $j=1,\ldots,z$, respectively. The Weyl group of $G$ operates by permutations on them. Observe that $\L^G_w= \underset{g\in G}\bigcup{{\rm Ad}}_{g^{-1}}\bigl(\L^T_{gw}\bigr)$. As $\th$ is ${{\rm Ad}}_G$-invariant, the numerical criterion can be reformulated as follows: $${\label{intersect}} \th\in\cal X^*(G)\cap{\rm int.} \biggl(\, \bigcap_{w\in\mbb W^{{\rm s}}(G,\th)} \kern-2ex\crl C_w \biggl) = {\rm int.}\biggl( \cal X^*(G)\cap\crl C_1\cap\ldots\cap\crl C_z \biggr).$$ For two $G$-modules $V, E$, we define the $K^\times\times G$-module $W_m:=E\times V^{m}$, $m\ges 1$, with the module structure given by $ (t,g)\times\bigl(\vphi,(v_j)_j\bigr) := \bigl(t\cdot(g\times\vphi),(g\times v_j)_j\bigr). $ Consider $l>0$, and define $\th_m:=l\chi_t+m\chi_{{\rm ac}}\in\cal X^*(K^\times\times G)$. The numerical functions on $\mbb V^m$ and $\mbb W_m$ are the following: $$\begin{array}{l} \disp m(\unl v,\l) \,=\, \min_j m(v_j,\l),\quad \forall\,\unl v=(v_j)_j\in\mbb V^{m}, \\[2.5ex] \disp m((\vphi,\unl v),t^\veps\l) =\min_{1\les j\les m}\{\veps+m(\vphi,\l),m(v_j,\l)\}, \quad \forall\,(\vphi,\unl v)\in\mbb W_{m}. \end{array}$$ The stability criterion for $\mbb W_m$ reads: a point $w=(\vphi,\unl v)$ is stable w.r.t. $(K^\times\times G,\,\th_m)$ if and only if $${\label{abc}} \kern-.5ex\left\{ \begin{array}{lrlr} (A)& m(\vphi,\l)\ges 0,\, m(\unl v,\l)\ges 0 &\Rightarrow& \langle \chi_{{\rm ac}},\l\rangle > 0; \\[1ex] (B)& 1+m(\vphi,\l)\ges 0,\, m(\unl v,\l)\ges 0 &\Rightarrow& l+m\cdot \langle \chi_{{\rm ac}},\l\rangle > 0; \\[1ex] (C)& -1+m(\vphi,\l)\ges 0,\, m(\unl v,\l)\ges 0 &\Rightarrow& -l+m\cdot\langle \chi_{{\rm ac}},\l\rangle > 0. \end{array} \right.$$ Note that $\crl C_{(\vphi,\unl v)} =\crl C_\vphi+\bigl(\mbb R\times\crl C_{\unl v}\bigr)$ for all $(\vphi,\unl v)\in\mbb W_m$; moreover, for $\unl v=(v_1,\ldots,v_m)$, then $\crl C_{\unl v}=\crl C_{v_1}+\ldots+\crl C_{v_m}$. We deduce that as [*both*]{} $m$ and $(\vphi,\unl v)\in\mbb W_m$ vary, there will be only [*finitely many*]{} dual cones: $${\label{fcs}} \L_{(\vphi,\unl v)} = \L_\vphi\cap\bigl(\mbb R\times\L_{\unl v}\bigr) = \L_\vphi\cap\bigl(\mbb R\times(\L_{v_1}\cap\ldots\cap\L_{v_m})\bigr).$$ We denote by $\L_1',...\,,\L_Z'$ the various intersections of $\L_1,...\,,\L_z$ defined above, corresponding to the [*fixed*]{} representation $G\rar{{\rm Gl}}(V)$. [\[prop:large-m\]]{} Let us assume that the $G$-module $V$ has the property:\ $({{\mbb V}}^m)^{{\rm ss}}(G,\chi_{{\rm ac}})=({{\mbb V}}^m)^{{\rm s}}(G,\chi_{{\rm ac}})\;$ for all $\;m\ges 1$. Then there is a constant $a_0(E)$ depending on $E$ such that for $\frac{m}{l}>a_0(E)$: $ \bigl(\mbb E\times\mbb V^{m}\bigr)^{{\rm s}}(K^\times\times G,l\chi_t+m\chi_{{\rm ac}}) =\bigl(\mbb E\sm\{0\}\bigr)\times \bigl(\mbb V^{m}\bigr)^{{\rm s}}(G,\chi_{{\rm ac}}). $ Equivalently, $\chi_t+r\chi_{{\rm ac}}$ is an ample class on $\mbb P(\cal E)$ for $r>a_0(E)$. 
‘$\supset$’ Let $(\vphi,\unl v)\in (\mbb E\sm\{0\})\times (\mbb V^m)^{{\rm s}}(G,\chi_{{\rm ac}})$. By definition, this means:$\;$ $ m(\unl v,\l)\ges 0 \;\Rightarrow\; \langle\chi_{{\rm ac}},\l\rangle > 0. $\ The conditions $(A)$ and $(B)$ are automatically fulfilled. We prove that for large $m$ the condition $(C)$ holds too. Let $\l_0$ be such that $m(\vphi,\l_0)\ges 1$ and $m(\unl v,\l_0)\ges 0$. Recall that only finitely many cones $\L_{\unl v}$ will appear when both $m$ and $\unl v\in(\mbb V^m)^{{\rm s}}$ vary. On each such cone, the linear function $\langle\chi_{{\rm ac}},\,\cdot\,\rangle$ is strictly positive. We choose $a_1>0$ such that $ \langle\chi_{{\rm ac}},\l\rangle\ges a_1|\l|,\; \forall\, \l\in\L_1'+\ldots+\L_Z'. $ For fixed $\vphi$, the function $m(\vphi,\cdot)$ is piecewise linear. As $\vphi$ varies, $m(\vphi,\cdot)$ depends only on the weights of $T$ on $E$. Overall we find a constant $a_2(E)>0$ [*independent of*]{} $\vphi$ such that $|m(\vphi,\l)|\les a_2(E)\cdot|\l|$ for all $\l\in\cal X_*(T)$. Back to the proposition: $$\begin{array}{lr} &a_2(E)\cdot|\l_0|\ges m(\vphi,\l_0)\ges 1 \;\Rightarrow\; |\l_0|\ges\frac{1}{a_2(E)}. \\[2ex] \text{Hence:}\, & -l+m\cdot \langle \chi_{{\rm ac}},\l_0\rangle \ges -l+m\cdot a_1|\l_0| \ges -l+m\cdot \frac{a_1}{a_2(E)}. \end{array}$$ We conclude that for $\frac{m}{l}>\frac{a_2(E)}{a_1}$ the condition $(C)$ is satisfied. ‘$\subset$’ We prove that $$\bigl(\mbb E\times\mbb V^{m}\bigr)^{{\rm us}}(K^\times\times G,l\chi_t+m\chi_{{\rm ac}}) \supset \bigl(\mbb E\sm\{0\}\bigr)\times \bigl(\mbb V^{m}\bigr)^{{\rm us}}(G,\chi_{{\rm ac}}) \;\text{ for }m\gg0.$$ The conclusion follows from the hypothesis $(\mbb V^{m})^{{\rm ss}}(G,\chi_{{\rm ac}}) =(\mbb V^{m})^{{\rm s}}(G,\chi_{{\rm ac}})$. Recall from (\[fct-m\]) that $\unl v\!\in\!(\mbb V^{m})^{{\rm us}}(G,\chi_{{\rm ac}})$ if and only if $m(\unl v)\!<\!0$. The value $m(\unl v)$ is reached at the ‘worst’ destabilizing $\l\in\cal X_*(G)$ (see [@ke2]). For variable $m$, there are only finitely many combinatorial strata in $(\mbb V^{m})^{{\rm us}}(G,\chi_{{\rm ac}})$, hence only finitely many possible values for $m(\unl v)$. It follows that $$-\mu:=\max\bigl\{ m(\unl v)\mid m\ges 1,\,\unl v\in(\mbb V^{m})^{{\rm us}}(G,\chi_{{\rm ac}}) \bigr\}<0.$$ Now consider $(\vphi,\unl v)\in \bigl(\mbb E\sm\{0\}\bigr)\times \bigl(\mbb V^{m}\bigr)^{{\rm us}}(G,\chi_{{\rm ac}})$, and its worst destabilizing $\l\in\cal X_*(G)$. After possibly moving $\unl v$ by an element in $G$, we may assume that $\l\in\cal X_*(T)$. Then holds: $m(\unl v,\l)\ges 0$, and $\frac{\langle\chi_{{\rm ac}},\l\rangle}{|\l|}=m(\unl v)\les-\mu$. We distinguish the following cases: – If $m(\vphi,\l)=0\text{ resp.}>0$, then (A) and (B) imply that $(\vphi,\unl v)$ is $l\chi_t+m\chi_{{\rm ac}}$ unstable. – If $m(\vphi,\l)<0$, we normalize $\l$ such that $m(\vphi,\l)=-1$. We claim that $l+m\langle\chi_{{\rm ac}},\l\rangle\les 0$ for $m$ large enough. Otherwise we deduce: $$\phantom{xxxxxxxxxx} \begin{array}{l} \mu|\l|\les|\langle\chi_{{\rm ac}},\l\rangle|<l/m \\[1ex] 1=|m(\vphi,\l)|\les a_2(E)\cdot|\l| \end{array} \biggr\}\,\Rightarrow\, \frac{m}{l}<\frac{a_2(E)}{\mu}. \phantom{xxxxxxxxxx}$$ The nef vector bundles ====================== [\[sct:nef-vb\]]{} In this section we define a finite set of ‘extremal’ nef vector bundles, which will be the building blocks of the exceptional sequences. We keep the notation of the previous section.
Consider the following Weyl group invariant cone: $${\label{nef-cone}} \crl N=\crl N(G,V):=\crl C_1\cap\ldots\cap\crl C_z ={(\L_1+\ldots+\L_z)}^\vee.$$ When $G$ is a torus, $\crl N$ is the nef cone of the quotient, which is a toric variety. In our context, $\crl N$ can be viewed as the nef cone of $\mbb V^{{\rm ss}}(T,\chi_{{\rm ac}})/T$. Its importance relies on the following: [\[prop:nef-vb\]]{} We consider a $G$-module $V$ which has the following property: ${{\mbb V}}^{{\rm ss}}(G,\chi_{{\rm ac}})={{\mbb V}}^{{\rm s}}(G,\chi_{{\rm ac}})$. Let $E$ be a $G$-module, and $\cal E\rar Y$ be its associated vector bundle. Then $\cal E$ is nef if and only if all the weights of $T\!$ on $E\!$ belong to the cone $\crl N\!$. We call a module with this property [*a nef module*]{}. ($\Leftarrow$) Let us assume that the weights of $E$ belong to $\crl N$. We prove that, on $\mbb P(\cal E^\vee)$, the class $\chi_t$ is nef, it means $\chi_t+r\chi_{{\rm ac}}$ is ample for all $r>0$. This translates into the following condition: $$\bigl(\mbb E^\vee\times\mbb V\bigr)^{{\rm s}}(K^\times\times G,\chi_t+r\chi_{{\rm ac}}) =\bigl(\mbb E^\vee\sm\{0\}\bigr)\times \mbb V^{{\rm s}}(G,\chi_{{\rm ac}}), \quad\forall r>0.$$ ‘$\supset$’ The conditions (A) and (B) are trivially satisfied. We show that the case (C) does not occur. Take $(\psi,v)\in(\mbb E^\vee\sm\{0\})\times\mbb V^{{\rm s}}(G,\chi_{{\rm ac}})$, and suppose that there is $\l_0$ with $m(\psi,\l_0)\!\ges 1$ and $m(v,\l_0)\!\ges 0$. Then $ \l_0\!\in{\rm int.}(\crl C_\psi)^\vee \!\subset\! -{\rm int.}\crl N^\vee $ and also $\l_0\!\in\crl C_v^\vee\!\subset\!\crl N^\vee$. Contradiction. ‘$\subset$’ For shorthand, we denote $\cal S_L$ resp. $\cal S_R$ the left- and the right-hand-side above. Note that the quotient $\cal S_R\bigr/K^\times\kern-.7ex\times G$ exists, and equals $\mbb P(\cal E^\vee)$; let $Z:=\cal S_L/(K^\times\!\times G)$ be the quotient. By previous step, there is a morphism $\phi:\mbb P(\cal E^\vee)\rar Z$. Since $\phi$ is open and $\mbb P(\cal E^\vee)$ is projective, $\phi$ is surjective. Recall from [@mfk Theorem 1.10], that $K^\times\!\times G$ acts with closed orbits on $\cal S_L$, and the quotient $\cal S_L\rar Z$ is geometric. Since $\mbb P(\cal E^\vee)\rar Z$ is surjective, the inclusion $\cal S_L\supset\cal S_R$ must be an equality. Otherwise we find closed orbits in $\cal S_L$, which are not contained in $\cal S_R$. ($\Rightarrow$) Assume that $\cal E\rar Y$ is nef, that means $\chi_t$ is a nef class on $\mbb P(\cal E^\vee)$. By inspecting the conditions we deduce: $$\raise.15ex\hbox{$\not$}\exists\,\psi\in\mbb E^\vee\sm\{0\},\; v\in{{\mbb V}}^{{\rm s}}(G,\chi_{{\rm ac}}),\; \l\in\cal X_*(T)\text{ s.t. } \left\{\begin{array}{l} m(\psi,\l)\ges 1,\\ m(v,\l)\ges 0. \end{array}\right.$$ We choose $\psi=\vphi^\vee$, with $\vphi\in\mbb E$ of weight $\veps$. The previous condition implies: $\;\raise.15ex\hbox{$\not$}\exists\, \l\in\cal X_*(T)$ such that $\langle\veps,\l\rangle<0$, and $\l\in\bigl(-\mbb R_+\veps+\crl N\bigr)^\vee.$ This happens only for $\veps\in\crl N$. There is also an effective procedure to produce ‘the smallest’ such modules. 
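Before turning to that procedure, note that the criterion of proposition \[prop:nef-vb\] is easy to test in practice: since $\crl N={(\L_1+\ldots+\L_z)}^\vee$, a weight lies in $\crl N$ exactly when it pairs non-negatively with every generator of every $\L_j$. The following sketch is only an illustration and is not part of the construction; the data format and helper names are ours, with characters and one-parameter subgroups encoded as integer tuples in coordinates on $\cal X^*(T)$ and $\cal X_*(T)$ respectively.

```python
def pairing(xi, lam):
    """Integral pairing <xi, lam> between a character and a one-parameter subgroup."""
    return sum(a * b for a, b in zip(xi, lam))


def is_nef_module(weights_of_E, lambda_generators):
    """Nefness test of proposition [prop:nef-vb]: every T-weight of the module E
    must lie in N = (Lambda_1 + ... + Lambda_z)^vee, i.e. pair non-negatively
    with every generator of every Lambda_j."""
    return all(pairing(xi, lam) >= 0
               for xi in weights_of_E
               for lam in lambda_generators)


# toy sanity check: N^vee generated by the coordinate axes, so N is the first quadrant
print(is_nef_module([(1, 0), (1, 1)], [(1, 0), (0, 1)]))   # True
print(is_nef_module([(1, -1)], [(1, 0), (0, 1)]))          # False
```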
Let us consider the set of weights: $${\label{n1}} \crl N_1=\crl N_1(G,V):= \left\{ \xi\,\biggl|\, \begin{array}{l} \mbb R_+\xi\text{ is an extremal ray of }\crl N,\\ \xi\text{ generates }\mbb R_+\xi\cap\cal X^*(T) \text{ over }\mbb Z_{\ges0} \end{array} \right\}.$$ It is a Weyl-invariant set, and therefore it makes sense considering the irreducible $G$-modules whose dominant weights belong to $\crl N_1$. These modules will be the building blocks for constructing exceptional sequences. We denote $${\label{eqn:nef-vb}} {{\cal V\kern-.41ex\euf B}}^+(Y):= \left\{ E \;\biggl|\; \begin{array}{l} \text{the dominant weight of the $G$-module $E$} \\ \text{belongs to }\crl N_1 \end{array}\biggr. \right\}.$$ Equivalently, denote $\mfrak W_G^+$ the closure of the positive Weyl chamber of $G$. Then ${{\cal V\kern-.41ex\euf B}}^+(Y)$ can be identified with $$\crl N_1^+(G,V):=\mfrak W_G^+\cap\crl N_1(G,V).$$ The set ${{\cal V\kern-.41ex\euf B}}^+(Y)$ is finite. For any $E\in{{\cal V\kern-.41ex\euf B}}^+(Y)$, the weights of $T$ on $E$ belong to the cone $\crl N$. As $\crl N_1$ is finite, ${{\cal V\kern-.41ex\euf B}}^+(Y)$ is the same. Let $\xi$ be the dominant weight of $E$. The weights of $T$ on $E$ belong to the convex hull of the images of $\xi$ under the Weyl group. But all of them generate rays of $\crl N\!$. Hence the convex hull of the images of $\xi$ is contained in $\crl N\!$. [\[prop:+comb\]]{} Let $M$ be an irreducible, nef $G$-module. Then there are $E_1,\ldots,E_n\in{{\cal V\kern-.41ex\euf B}}^+(Y)$, and $c_1,\ldots,c_n\ges 1$ such that $M\!\subset\! \mbox{$\overset{n}{\underset{j=1}\bigotimes}$}{\mathop{\rm Sym}\nolimits}^{c_j}E_j.$ We say that $M$ is [*a positive combination*]{} of extremal nef modules. Since the $G$-module $M$ is nef, its highest weight $\xi_M$ belongs to the cone $\crl N$. Then $\xi_M$ is a positive combination of $\xi_1,\ldots,\xi_n\in\crl N_1\,$:\ $\xi_M=\mbox{$\overset{n}{\underset{j=1}\sum}$}c_j\xi_j,\;c_j\ges 1.$ \ Each $\xi_j$ is conjugated to some $\xi^+_j\in\crl N_1^+$, since the Weyl group acts transitively on the Weyl chambers. The irreducible $G$-module $E_j$ with highest weight $\xi^+_j$ belongs to ${{\cal V\kern-.41ex\euf B}}^+(Y)$. Now observe that $\xi_M$ appears among the weights of $\overset{n}{\underset{j=1}\bigotimes}{\mathop{\rm Sym}\nolimits}^{c_j}E_j$. Hence the whole module $M$ is contained in it. [\[lm:chi\]]{} Consider the set ${{\cal V\kern-.41ex\euf B}}^+(Y)$ of extremal nef vector bundles on $Y$, defined in . Then the anti-canonical character $\chi_{{\rm ac}}(G,V)$ is a positive linear combination of ${{\rm det}}E$, with $E\in{{\cal V\kern-.41ex\euf B}}^+(Y)$: $$\chi_{{\rm ac}}=\mbox{$\underset{E\in{{\cal V\kern-.41ex\euf B}}^+(Y)}\sum$}\kern-1ex m_{E}\!\cdot{{\rm det}}(E),\quad\text{with }m_E\ges 0.$$ Let $\{\xi_j\}_j$ be the elements of $\crl N_1$. Since $\chi_{{\rm ac}}$ belongs to the interior of $\crl N$, there are positive numbers $c_j$ such that $ \chi_{{\rm ac}}\!=\!\underset{j}\sum c_j\xi_j \!=\!\underset{j}\sum c_j\xi_j^\circ +\underset{j}\sum c_j\xi_j'. $ We decompose $\cal X^*(T)_{\mbb Q} \!=\!\cal X^*(Z(G)^\circ)_{\mbb Q}\oplus\cal X^*(T')_{\mbb Q}$. Accordingly, each $\xi_j$ decomposes into $\xi_j=\xi_j^\circ+\xi_j'$, and each $\xi_j$ is conjugated to some $\xi_j^+\in\crl N_1^+$. Let $E_j\in{{\cal V\kern-.41ex\euf B}}^+(Y)$ be the irreducible $G$-module with highest weight $\xi^+_j$. Note that $Z(G)^\circ$ acts on $E_j$ by the character $\xi_j^\circ$. 
Since $\chi_{{\rm ac}}$ is trivial on the semi-simple part of $G$, we deduce that $ \chi_{{\rm ac}}=\mbox{$\underset{j}\sum$} c_j\xi_j^\circ =\mbox{$\underset{j}\sum$} \frac{c_j}{\dim E_j}{{\rm det}}E_j.$ Cohomological properties of nef vector bundles ============================================== [\[cohom-nef\]]{} In section \[sct:nef-vb\] we have introduced the set of nef vector bundles associated to representations of $G$. In this section we are going to study their cohomological properties. [\[thm:hq-nef\]]{} Let $E$ be a nef $G$-module. Then $H^q(Y,\cal E)=0$ for all $q>0$. Using the projection formula, $H^q(Y,\cal E)=H^q(\mbb P(\cal E^\vee),\cal O_{\mbb P}(1))$, and $\cal O_{\mbb P}(1)\rar \mbb P(\cal E^\vee)$ is a nef line bundle. The vanishing of the latter cohomology group is a consequence of the Hochster-Roberts theorem (see [@ke]). We place ourselves in the following framework: $${\label{rel-situation}} \left\{\text{ \begin{minipage}{27.5em} (i) There is a quotient group $H$ of $G$ with kernel $G_0$ (note that $G_0$ and $H$ are still reductive), and a quotient $H$-module $W$ of $V$ with kernel $V_0$, such that the natural projection ${\mathop{\rm pr}\nolimits}^{{{\mbb V}}}_{\mbb W}\!:\!{{\mbb V}}\!\rar\!\mbb W$ has the property ${\mathop{\rm pr}\nolimits}^{{{\mbb V}}}_{\mbb W}\bigl(\;{{\mbb V}}^{{\rm ss}}\bigl(G,\chi_{{\rm ac}}(G,V)\bigr)\;\bigr) \subseteq\mbb W^{{\rm ss}}\bigl(H,{\chi_{{\rm ac}}}(H,W)\bigr).$\break We denote $Y\srel{\phi}{\rar}X$ the induced morphism. \\[1.5ex] (ii) Both unstable loci have codimension at least two. \\[1.5ex] (iii) $G$ and $H$ act freely on ${{\mbb V}}^{{\rm ss}}\bigl(G,\chi_{{\rm ac}}(G,V)\bigr)$ and \phantom{MMM} $\mbb W^{{\rm ss}}\bigl(H,{\chi_{{\rm ac}}}(H,W)\bigr)$ respectively. \end{minipage} }\right.$$ Now let us study the positivity properties of direct images of nef vector bundles. [\[lm:phiE\]]{} Suppose that we are in the situation , and that $E$ is a $G$-module such that its associated vector bundle $\cal E\rar Y$ is nef. Then $\phi_*\cal E\rar X$ is a vector bundle, and it is associated to the $H$-module $\phi_*E:={\mathop{\rm Mor}\nolimits}({{\mbb V}}_0,E)^{G_0}=H^0({{\mbb V}}_0{{\slash\kern-0.65ex\slash}}_{\chi_{{\rm ac}}}G_0,\cal E)$. The restriction of $\cal E$ to the fibres of $\phi$ are nef. By applying theorem \[thm:hq-nef\], we obtain that $R^q\phi_*\cal E=0$ for all $q>0$, and therefore $\phi_*\cal E\rar X$ is locally free. Observe that both $V_0$ and $H$ are actually $G$-modules, and $V=V_0\oplus W$; the kernel $G_0$ is acting trivially on $W$. For an $H$-invariant open set $O\subset\mbb W$, holds: $ \!\begin{array}[b]{ll} H^0(O{{\slash\kern-0.65ex\slash}}H,\phi_*\cal E) &= H^0\bigl((\mbb V_0\times O){{\slash\kern-0.65ex\slash}}G,\cal E\bigr) ={\mathop{\rm Mor}\nolimits}\bigl(\mbb V_0\times O,E\bigr)^G\! \\ &= \kern-.4ex{\bigl({\mathop{\rm Mor}\nolimits}(\mbb V_0\times O,E)^{G_0}\bigr)}^H \kern-.8ex=\! {{\mathop{\rm Mor}\nolimits}\bigl(O,{\mathop{\rm Mor}\nolimits}(\mbb V_0,E)^{G_0}\bigr)}^H \kern-.5ex. \end{array} $ [\[thm:direct-image\]]{} Assume that [($\!$]{}\[rel-situation\][)]{} holds, and let $E$ be a nef $G$-module. Then the $H$-module $\phi_*E$ is still nef. (The direct image $\phi_*\cal E\rar X$ is a nef vector bundle.) Mourougane proves in [@mou] a similar statement for adjoint bundles. The proof below follows [*ad litteram*]{} his proof ([*loc.cit.*]{} section 3), with the necessary changes. By lemma \[lm:phiE\], $\phi_*\cal E\rar X$ is locally free. 
: Construct the tensor powers $(\phi_*\cal E)^{\otimes n}$.\ Let $Y^{(n)}=Y\times_X\ldots\times_XY$ be the fibre product, and $\phi^{(n)}:Y^{(n)}\rar X$ be the projection. Note that the vector bundle $\cal E^{(n)}:=\cal E\times_X\ldots\times_X\cal E$ on $Y^{(n)}$ is nef. Its direct image is $\phi^{(n)}_*\cal E^{(n)}=(\phi_*\cal E)^{\otimes n}$. Moreover, $Y^{(n)}$ is the quotient of the affine space $\mbb V^{(n)}$ by the action of the group $G^{(n)}$, and $\cal E^{(n)}$ is associated to the $G^{(n)}$-module $E^{\oplus n}$: – $V^{(n)}:= \{(v_1,\ldots,v_n)\in V^{\oplus n}\mid {\mathop{\rm pr}\nolimits}^V_W(v_1)=\ldots={\mathop{\rm pr}\nolimits}^V_W(v_n)\}; $ – The group $G^{(n)}:=G\times_H\ldots\times_HG$ is still reductive. : Let $A\rar X$ be a very ample line bundle, associated to some character of $H$. Then $(\phi_*\cal E)^{\otimes n}\otimes A^{\dim X+1}$ is globally generated.\ We replace $Y$ by $Y':=Y^{(n)}$, $\phi$ by $\phi':=\phi^{(n)}$, and $\cal E$ by $\cal E':=\cal E^{(n)}$. By the Castelnuovo-Mumford criterion, in order to prove that $\phi'_*\cal F\otimes A^{\dim X+1}$ is globally generated, it is enough to check that $H^q(X,\phi'_*\cal E'\otimes A^{\dim X+1-q})=0$ for all $q>0$. Since the higher direct images of $\cal E'$ vanish, the projection formula gives: $ H^q(X,\phi'_*\cal E'\otimes A^{\dim X+1-q})= H^q(Y',\cal E'\otimes {(\phi')}^*A^{\dim X+1-q}). $ But $Y'$ is still a quotient of an affine space, $\cal E'$ is associated to a nef $G$-module, and ${(\phi')}^*A$ corresponds to a nef character of $G$. We apply theorem \[thm:hq-nef\] to $\cal E'\otimes {(\phi')}^*A^{\dim X+1-q}$, and deduce that its higher cohomology groups vanish. : According to the previous step $(\phi_*\cal E)^{\otimes n}\otimes A^{\dim X+1}$ is globally generated for all $n>0$, and therefore $\phi_*\cal E$ is nef. We use this result to describe more precisely the nef cone $\crl N(G,V)$. We consider the projective variety $${{\rm Flag}}(Y):=\O_G/B=\bigl(\O_G\times(G/B)\bigr)\bigr/G,$$ and denote $\pi:{{\rm Flag}}(Y)\rar Y$ the projection. It is a $G/B$-fibre bundle over $Y\!$, justifying the notation ${{\rm Flag}}(Y)$. For any $\xi\in\cal X^*(T)=\cal X^*(B)$, we denote by $\cal L_\xi\rar{{\rm Flag}}(Y)$ the line bundle $(\O_G\times K)/B$, where $B$ acts on $K$ by $\xi$. [\[LE\]]{} Let $\xi\in\cal X^*(T)$ be a dominant character, and let $E_\xi$ be the corresponding irreducible $G$-module. Then holds: 1. $\cal E_\xi=\pi_*\cal L_\xi$; 2. $\cal E_\xi\rar Y$ is nef if and only if $\cal L_\xi\in{\mathop{\rm Pic}\nolimits}^+\bigl({{\rm Flag}}(Y)\bigr):=$ the nef cone of ${{\rm Flag}}(Y)$. \(i) The equality is a direct consequence of the Borel-Weil theorem, which says that $H^0(G/B,\cal L_\xi)=E_\xi$. \(ii) Assume that $\cal L_\xi$ is nef. The Borel-Weil theorem implies that the higher direct images $R^{>0}\pi_*\cal L_\xi=0$. By the same argument of the theorem \[thm:direct-image\], we deduce that $\cal E_\xi=\pi_*\cal L_\xi\rar Y$ is still nef. Conversely, assume that $E_\xi$ is nef, hence ${{\mbb V}}^{{\rm ss}}(T,\chi_{{\rm ac}})\subset{{\mbb V}}^{{\rm ss}}(T,\xi)$. We claim that some tensor power of $\cal L_\xi$ is globally generated, and therefore $\cal L_\xi$ is nef. Let $B$ be the Borel subgroup of $G$ for which $\xi$ is dominant. 
Our hypothesis implies that $$\begin{array}{r} {{\mbb V}}^{{\rm ss}}(G,\chi_{{\rm ac}})= \mbox{$\underset{g\in G}\bigcap$}g{{\mbb V}}^{{\rm ss}}(T,\chi_{{\rm ac}}) \subset \mbox{$\underset{b\in B}\bigcap$}b{{\mbb V}}^{{\rm ss}}(T,\chi_{{\rm ac}}) \subset \mbox{$\underset{b\in B}\bigcap$} b{{\mbb V}}^{{\rm ss}}(T,\xi) \\= {{\mbb V}}^{{\rm ss}}(B,\xi). \end{array}$$ Observe that $B$ is solvable, [*not reductive*]{}, and therefore the standard invariant theory does not apply. The $B$-semi-stable locus ${{\mbb V}}^{{\rm ss}}(B,\xi)$ is defined exactly as in , in terms of the algebra $K[{{\mbb V}}]^{B,\xi}$. Its [*finite generacy*]{} has been proved by Grosshans (see [*e.g.*]{} [@gross Corollary 9.5]). We deduce that for some $n>0$, ${{\mbb V}}^{{\rm ss}}(B,\xi)$ can be covered by a finite number of sets $\{y\mid f(y)\neq 0\}$, with $f\in K[{{\mbb V}}]^B_{\xi^n}$. Altogether, we find at each point $y\in{{\mbb V}}^{{\rm ss}}(G,\chi_{{\rm ac}})$ a function which is $(B,\xi^n)$-equivariant, and does not vanish at $y$. Hence $\cal L^n_{\xi}$ is globally generated. [\[cor:+comb\]]{} Suppose that holds. Let $E$ be a nef $G$-module, and $M$ an irreducible $H$-submodule of $\phi_*E$. Then $M$ is a direct summand in a $H$-module of the form $\underset{F\in{{\cal V\kern-.41ex\euf B}}^+(X)}\bigotimes\kern-1.2ex{\mathop{\rm Sym}\nolimits}^{c_F}F.$ The push-forward $\phi_*\cal E\rar X$ is nef, and therefore all its weights belong to the cone $\crl N(H,W)$. We deduce that $M$ is nef too, and the conclusion follows from proposition \[prop:+comb\]. Consider the Grassmannian $X:={{\rm Grass}}(K^m,d)$ of $d$-dimensional quotients, and denote $\cal Q$ the tautological quotient on it. Note that the variety ${{\rm Flag}}(X)$ is the variety of full quotient flags of $\cal Q$. The cone $\mfrak W^+\cap{\mathop{\rm Pic}\nolimits}^+\bigl({{\rm Flag}}(X)\bigr)$ is generated by $d$ elements which correspond to the characters $\tau_1$, $\tau_1+\tau_2$,$\ldots$,$\tau_1+\ldots+\tau_d$ (here the $\tau_j$’s denote the obvious characters of the maximal torus in ${{\rm Gl}}(d)$). We deduce that for any nef ${{\rm Gl}}(d)$-module $F$, its associated vector bundle $\cal F\rar {{\rm Grass}}(K^m,d)$ is a direct summand in a tensor product of the form ${\mathop{\rm Sym}\nolimits}^{c_1}(\cal Q)\otimes{\mathop{\rm Sym}\nolimits}^{c_2}(\overset{2}\bigwedge\cal Q) \otimes\ldots\otimes{\mathop{\rm Sym}\nolimits}^{c_d}(\overset{d}\bigwedge\cal Q)$. This is in agreement with the fact that this tensor product contains the Schur power $\mbb S^{\alpha}\cal Q$, where $\alpha=(\alpha_1\ges\ldots\ges\alpha_{d}\ges0)$, and the positive integers $c_j$ satisfy $\alpha_j=c_j+\ldots+c_d$ for $j=1,...,d$. The main result: the absolute case ================================== [\[sct:main\]]{} In this section we prove our first main result. We consider a $G$-module $V$, and the character $\chi_{{\rm ac}}=\chi_{{\rm ac}}(G,V)$. Assume that the codimension of the $\chi_{{\rm ac}}$-unstable locus is at least two, and $G$ acts freely on the semi-stable locus. It follows that $Y:={{\mbb V}}{{\slash\kern-0.65ex\slash}}_{\chi_{{\rm ac}}}G={{\mbb V}}^{{\rm ss}}(G,\chi_{{\rm ac}})/G$ is a projective Fano variety. Observe that lemma \[lm:effective\] implies that $\chi_{{\rm ac}}=\chi_{{\rm ac}}(G,V)$ is effective as soon as $m_\og > d_\og$ for all $\og\in\cal X$ (the result below does not require this hypothesis). We define a Young diagram $\l$ of length $d$ to be an array of decreasing integers $(\l_1\ges\!\ldots\!\ges\l_d)$. 
We denote $\l_{{\rm max}}:=\l_1$, $\l_{{\rm min}}:=\l_d$, ${\rm length}(\l):=d$. For arrays consisting of positive integers, we visualize the Young diagrams, and the parameters as in the figure:\ ![image](yd.eps) We introduce the following shorthand notation: for a Young diagram $\l$, let $\l\pm\fbox{$c$}$ be the diagram obtained by adding/subtracting the integer $c$ to/from the entries of $\l$. For a vector space $E$ and a Young diagram $\l$ of length $\dim E$, we will denote $\mbb S^\l E$ its usual Schur power (for $\l_{{\rm min}}\ges 0$), or $\mbb S^{\l- \raise.3ex\hbox{\tiny$\begin{array}{|c|} \hline\kern-1.5ex \l_{{\rm min}}\kern-1.5ex\\[.3ex] \hline \end{array}$} }\otimes({{\rm det}}E)^{\l_{{\rm min}}}$ (for arbitrary $\l$). For two positive numbers $m,d$ we define the following sets: $$\begin{aligned} \begin{array}{l} \widetilde{\cal Y}_{d}:= \text{ the set of Young diagrams $\l$ with }{\rm length}(\l)= d; \\[2ex] \cal Y_{m,d}:= \bigl\{ \l\in\widetilde{\cal Y}_d\mid 0\les \l_{{\rm min}}\les\l_{{\rm max}}\les m \bigr\}; \\[2ex] \cal Y_{d}:=\underset{m\ges 0}\bigcup\cal Y_{m,d}\,; \quad \cal Y^+_{d}:= \underset{m\ges 0}\bigcup \bigl\{ \l\in\cal Y_{m,d}\mid\l_d\ges{\rm length} \bigl( \l-\fbox{$\l_d$} \,\bigr)\, \bigr\}. \end{array}\end{aligned}$$ Roughly speaking, our main result is that certain Schur powers of the extremal nef bundles on $Y$ form a strong exceptional sequence. The main technical tool that will be used is the following cohomology vanishing theorem, proved by Manivel for Kählerian varieties (see [@ma]), and Arapura for projective ones (see [@ar]). Next comes our first main result. [\[thm:main\]]{} Let $V$ be a $G$-module such that $K[\mbb V]^T=K$. Assume that the unstable locus has codimension at least two, and that $G$ acts freely on ${{\mbb V}}^{{\rm ss}}(G,\chi_{{\rm ac}})$; we denote by $Y:={{\mbb V}}^{{\rm ss}}(G,\chi_{{\rm ac}})/G$ the quotient. We consider the order $<_{\mfrak l}$ defined in \[defn:order2\]. Let $E_1,\ldots,\kern-.1exE_N\!$ be the elements of ${{\cal V\kern-.41ex\euf B}}^+\!(Y\kern-.1ex)$,$\kern-.2ex$ and denote $d_j\!:=\!\dim E_j$. We write $\chi_{{\rm ac}}=\overset{N}{\underset{j=1}\sum}m_j\cdot{{\rm det}}(E_j)$, with $m_j\ges 0$ as in lemma \[lm:chi\], and assume that all the numbers $m_j$ are integers. Consider the set $$\begin{array}[t]{ll} \euf{ES}(Y):= & \text{the set of all irreducible $G$-modules contained in} \\[1ex] & \mbb S^{\l^{\bullet}}E_\bullet \!:= \mbb S^{\l^{(1)}} E_1 \otimes\ldots\otimes \mbb S^{\l^{(N)}} E_N, \text{ where }\l^{(j)}\!\in\!\cal Y_{m_j-d_j,d_j}. \end{array}$$ Then the vector bundles $\cal E\rar Y$ associated to the modules $E\in\euf{ES}(Y)$ form a strong exceptional sequence over $Y$ w.r.t. the order $<_{\mfrak l}$. The condition on $H^0({\mathop{\rm Hom}\nolimits}(\cal U',\cal U''))$ for two elements $\cal U',\cal U''\in\euf{ES}(Y)$ is implied by theorem \[thm:h000\]. It remains to prove the vanishing of the higher cohomology groups. First of all we observe that, by definition, the vector bundles $\cal{U', U''}$ are direct summands of $\mbb S^{\l^\bullet}\cal E_\bullet$. Therefore it is enough to prove that vanishing of $H^q\bigl(Y, {\mathop{\rm Hom}\nolimits}(\mbb S^{\l^\bullet}\cal E,\mbb S^{\mu^\bullet}\cal E) \bigr)$, $q>0$. 
Using the Littlewood-Richardson rules, we decompose $${\mathop{\rm Hom}\nolimits}(\mbb S^{\l^\bullet}\cal E_\bullet,\mbb S^{\mu^\bullet}\cal E_\bullet) = \mbox{ ${\underset{\alpha^\bullet=(\alpha^{(1)},\ldots,\alpha^{(N)})}\bigoplus}$ } \kern-2ex\mbb S^{\alpha^\bullet}\cal E_\bullet\,,$$ and observe that $\alpha^{(j)}\!=\!\bigl(m_j-d_j\ges \alpha^{(j)}_1\ges\ldots\ges\alpha^{(j)}_{d_j} \ges -m_j+d_j\bigr)$. For each direct summand holds: $$\begin{array}{ll} H^q\bigl( Y,\mbb S^{\alpha^\bullet}\cal E_\bullet \bigr) &= H^q\biggl( Y,\kappa_Y\otimes\mbox{$\overset{N}{\underset{j=1}\bigotimes}$} \bigl( \mbb S^{\alpha^{(j)}}\cal E_j\otimes {{\rm det}}(\cal E_j)^{m_j} \bigr) \biggr) \\& = H^q\biggl( Y,\kappa_Y\otimes\mbox{$\overset{N}{\underset{j=1}\bigotimes}$} \, \mbb S^{\alpha^{(j)}+ \tiny\begin{array}{|c|} \hline\kern-1.5ex m_j\kern-1.5ex\\ \hline \end{array} } \cal E_j \biggr). \end{array}$$ Note that $ \alpha^{(j)}+ \small\begin{array}{|c|} \hline m_j\\ \hline \end{array} = \underbrace{ \alpha^{(j)}+\small\begin{array}{|c|} \hline -\alpha^{(j)}_{d_j}+d_j-1\\ \hline \end{array}}_{:=\bar\alpha^{(j)}} \,+\, \small\begin{array}{|c|} \hline \alpha^{(j)}_{d_j}+m_j-d_j+1\\ \hline \end{array},$ and $$\left\{\begin{array}{ll} \bar\alpha^{(j)}_{d_j}=d_j-1\ges{\rm length} \bigl(\bar\alpha^{(j)}-\small\fbox{$d_j-1$} \,\bigr) &\text{ and} \\[1.5ex] \bar a_j:=\alpha^{(j)}_{d_j}+m_j-d_j+1\ges 1. \end{array}\right.$$ Since $E_1,\ldots,E_N$ are [*all*]{} the extremal nef bundles, it follows that the $A:=\overset{N}{\underset{j=1}\bigotimes}{{\rm det}}(\cal E_j)^{\bar a_j}$ is an ample line bundle over $Y$. The theorem cited above implies that the higher cohomology of $S^{\alpha^\bullet}\cal E_\bullet$ vanishes. Assume that the $G$-module $V$ has the property that the multiplicities $m_\og > d_\og$ for all $\og\in\cal X$. Then the exceptional sequence constructed above is formed by semi-stable vector bundles. It is an immediate consequence of the corollary \[thm:stab-bdl\]. [\[rmk:length\]]{} It is important to observe that $\kappa_Y^{-1}$ is ample, and it becomes increasingly positive as we increase the multiplicities $m_\og $ of the isotypical components of $V$. It follows that the effect of increasing the $m_\og $’s is that of [*simultaneously*]{} increasing the dimension of the quotient, and that of the length of the exceptional sequence. In other words, for our construction we will always have a [*lower bound*]{} for $$\frac{\text{length of exceptional sequence on $Y$}} {\text{Euler characteristic of } Y}.$$ Compare this construction with the one discussed in subsection \[ssect:AH\]. The main result: the relative case ================================== [\[sct:main2\]]{} Theorem \[thm:main\] is too weak for fibred varieties. By applying it directly, one looses many terms of the exceptional sequences (see subsections \[ssect:kapranov\] and \[ssect-A3\]). The goal of this section is to address the relative case described in . The additional hypothesis which will be imposed in may look overabundant, but in many concrete cases they are naturally fulfilled (especially for quiver representations). Denote $T_0$ and $T_H$ the maximal tori of $G_0$ and $H$ respectively. The exact sequence $1\!\rar\! G_0\!\rar\! G\!\rar\! H\!\rar\!1$ induces a natural splitting $\cal X^*(T)_{\mbb Q}=\cal X^*(T_0)_{\mbb Q}\oplus\cal X^*(T_H)_{\mbb Q}$. 
We will denote by $\crl N(G_0,V_0)$ respectively $\crl N(H,W)$ the nef cones of the $G_0$-module $V_0$ and $H$-module $W$, corresponding to $\chi_{{\rm ac}}(G_0,V_0)={\chi_{{\rm ac}}(G,V)|}_{G_0}$ and $\chi_{{\rm ac}}(H,W)$. Throughout this section we will assume: $${\label{rel-situation2}} \left\{\text{ \begin{minipage}{27.5em} (i) The situation described in \eqref{rel-situation} holds. \\[1ex] (ii) $\crl N(G,V)=\crl N(G_0,V_0)+\crl N(H,W)$.\\ (We use the shorthand notation $\crl N=\crl N_0+\crl N_H$.) \\[1ex] (iii) The maximal torus $T_0\subset G_0$ has exactly $\dim T_0$ weights\\ on $V_0$. \end{minipage} }\right.$$ Let us make a few comments related to the assumptions: – The condition (ii) means that there is a partition ${{\cal V\kern-.41ex\euf B}}^+(Y)={{\cal V\kern-.41ex\euf B}}^+(X)\,\dot\cup\,{{\cal V\kern-.41ex\euf B}}^+(\text{fibre}).$ The set ${{\cal V\kern-.41ex\euf B}}^+(X)$ can always be viewed as a subset of ${{\cal V\kern-.41ex\euf B}}^+(Y)$ via the pull-back ${{\mbb V}}\srel{\phi}{\rar}\mbb W$. What we assume is that the ‘extremal’ nef bundles on the fibres extend to ‘extremal’ nef bundles on the whole $Y$. For shorthand, we will write ${{\cal V\kern-.41ex\euf B}}^+_0:={{\cal V\kern-.41ex\euf B}}^+(\text{fibre})$. – $T_0$ has always at least $\dim T_0$ linearly independent weights on $V_0$. The assumption (iii) is equivalent to any of the following: (iii$'$) For any $\xi\in\cal X^*(T_0)$, $\xi$ is $T_0$-nef on $V_0$ if and only if $\xi$ is $T_0$-effective on $V_0$; (iii$''$) The quotient ${{\mbb V}}_0{{\slash\kern-0.65ex\slash}}T_0$ is a product of projective spaces. Observe that by lemma \[lm:chi\], we can express $ \begin{array}{l} \chi_{{\rm ac}}(H,W)=\kern-1ex \mbox{$\underset{F\in{{\cal V\kern-.41ex\euf B}}^+(X)}\sum$}\kern-1.9ex m_F\,\cdot\,{{\rm det}}F\;\;(m_F\ges 0), \quad\text{and} \\[2ex] \chi_{{\rm ac}}(G_0,V_0)=\kern-1ex \mbox{$\underset{E\in{{\cal V\kern-.41ex\euf B}}^+_0}\sum$}\kern-.4ex m_E\,\cdot\,{{\rm det}}E\;\;(m_E\ges 0). \end{array} $ [\[prop:tech\]]{} Assume that holds, and denote $d_F:=\dim F$, and $d_E:=\dim E$. Suppose that $(a_E)_{E\in{{\cal V\kern-.41ex\euf B}}^+_0}$ and $(b_F)_{F\in{{\cal V\kern-.41ex\euf B}}^+(X)}$ are integers having the following property: for all $q>0$, and all Young diagrams $\alpha^{E}\in\wtld{\cal Y}_{d_E}$ resp. $\beta^{F}\in\wtld{\cal Y}_{d_F}$, such that $\alpha^{E}_{{\rm min}}\ges -a_E$ and $\beta^{F}_{{\rm min}}\ges -b_F$, holds: $$\begin{aligned} {\label{eqn:a}} H^q\biggl( V_0{{\slash\kern-0.65ex\slash}}_{\chi_{{\rm ac}}(G_0,V_0)}G_0, \mbox{$\underset{E\in{{\cal V\kern-.41ex\euf B}}^+_0}\bigotimes$} \mbb S^{\alpha^{E}}\cal E \biggr)=0, \\ {\label{eqn:b}} H^q\biggl( X,\mbox{$\underset{F\in{{\cal V\kern-.41ex\euf B}}^+(X)}\bigotimes$}\kern-1ex \mbb S^{\beta^{F}}\cal F \biggr)=0.\hspace{5em}\end{aligned}$$ Then $H^q\biggl( Y,\underset{F\in{{\cal V\kern-.41ex\euf B}}^+(X)}\bigotimes\kern-1.7ex \phi^*\,\mbb S^{\beta^{F}}\cal F \;\otimes \underset{E\in{{\cal V\kern-.41ex\euf B}}^+_0}\bigotimes\kern-.5ex \mbb S^{\alpha^{E}}\cal E \biggr)=0$ for all $q>0$, and for all Young diagrams $\beta^{F}\in\wtld{\cal Y}_{d_F}$ and $\alpha^{E}\in\wtld{\cal Y}_{d_E}$ with $\beta^{F}_{{\rm min}}\ges -b_F$ and $\alpha^{E}_{{\rm min}}\ges -a_E$ respectively. The condition is fulfilled for $a_E:=m_E-d_E$, $\forall E\in{{\cal V\kern-.41ex\euf B}}^+_0$. The condition is fulfilled for $b_F:=m_F-d_F$, $\forall F\in{{\cal V\kern-.41ex\euf B}}^+(X)$. 
$\!$(i) The hypothesis implies that the higher direct images of $\!\underset{E\in{{\cal V\kern-.41ex\euf B}}^+_0}\bigotimes\kern-1ex\mbb S^{\alpha^{E}}\cal E$ vanish. By using the projection formula we deduce: $$\begin{aligned} H^q\biggl( Y,\mbox{$\underset{F\in{{\cal V\kern-.41ex\euf B}}^+(X)}\bigotimes$}\kern-1.7ex \phi^*\mbb S^{\beta^{F}}\cal F \;\otimes \mbox{$\underset{E\in{{\cal V\kern-.41ex\euf B}}^+_0}\bigotimes$}\kern-.5ex \mbb S^{\alpha^{E}}\cal E \biggr)\hspace{10em} \\ =H^q\biggl( X,\mbox{$\underset{F\in{{\cal V\kern-.41ex\euf B}}^+(X)}\bigotimes$}\kern-1.7ex \mbb S^{\beta^{F}}\cal F \otimes\; \phi_*\biggl( \mbox{$\underset{\;E\in{{\cal V\kern-.41ex\euf B}}^+_0}\bigotimes$}\kern-.5ex \mbb S^{\alpha^{E}}\cal E \biggr)\biggr).\end{aligned}$$ Let us write $\cal V^0:= \underset{E\in{{\cal V\kern-.41ex\euf B}}^+_0}\bigotimes\kern-.5ex \mbb S^{\alpha^{E}}\cal E$, and decompose it into the direct sum corresponding to the irreducible $G$-modules appearing in the tensor product: $\cal V^0=\bigoplus\cal V^0_j$. The cohomology group breaks up into the direct sum of the ‘smaller’ cohomology groups. For each component $\cal V^0_j$ there are two possibilities: There is a weight of $T_0$ on $V^0_j$ which is not effective. In this case $\phi_*\cal V^0_j\!=\!0$ ([*c.f.*]{} theorem \[not-eff\]), and we discard it from the direct sum. All the weights of $T_0$ on $V^0_j$ are effective. In this case the hypotheses (ii)+(iii) imply that the weights of $V^0_j$ are nef, and therefore $\cal V^0_j\rar Y$ is nef itself. Using theorem \[thm:direct-image\] and proposition \[prop:+comb\], we deduce that $\phi_*\cal V^0_j\rar X$ is nef, and is actually contained in $\underset{F\in{{\cal V\kern-.41ex\euf B}}^+(X)}\bigotimes\kern-1.5ex{\mathop{\rm Sym}\nolimits}^{c_F}\cal F$, with $c_F\ges 0$. The Littlewood-Richardson rules imply that the tensor product $\mbox{$\underset{F\in{{\cal V\kern-.41ex\euf B}}^+(X)}\bigotimes$}\kern-1.5ex \mbb S^{\beta^{F}}\cal F\,\otimes \mbox{$\underset{F\in{{\cal V\kern-.41ex\euf B}}^+(X)}\bigotimes$}\kern-1.5ex {\mathop{\rm Sym}\nolimits}^{c_F}\cal F$ breaks up into the direct sum of $\underset{F\in{{\cal V\kern-.41ex\euf B}}^+(X)}\bigotimes\kern-1.5ex \mbb S^{\bar\beta^{F}}\cal F$, with $\bar\beta^F_{{\rm min}}\ges\beta^F_{{\rm min}}+c_F\ges b_F$. By the hypothesis, their higher cohomology vanishes. (ii$_1$) Note that ${\kappa_{Y/X}^{-1}\bigr|}_{\rm fibre} \!=\!\underset{E\in{{\cal V\kern-.41ex\euf B}}_0^+}\sum\! m_E\cdot{{\rm det}}E$. Consider Young diagrams $(\alpha^E)_{E\in{{\cal V\kern-.41ex\euf B}}^+_0}$ with $\alpha^E_{{\rm min}}\ges d_E-m_E$ for all $E$. 
It holds: $$\begin{array}{ll} \mbox{$\underset{E\in{{\cal V\kern-.41ex\euf B}}^+_0}\bigotimes$}\kern-1ex \mbb S^{\alpha^{E}}\cal E\otimes\,\kappa_{Y/X}^{-1} \biggr|_{\rm fibre} &= \mbox{$\underset{\;E\in{{\cal V\kern-.41ex\euf B}}^+_0}\bigotimes$}\kern-1ex \bigl( \mbb S^{\alpha^{E}}\cal E\otimes({{\rm det}}\cal E)^{m_E} \bigr) \biggr|_{\rm fibre} \\ &= \mbox{$\underset{E\in{{\cal V\kern-.41ex\euf B}}^+_0}\bigotimes$}\kern-1ex {\mbb S^{{\alpha^{E}+ \tiny\begin{array}{|c|} \hline\kern-1.5ex m_E\kern-1.5ex\\ \hline \end{array} }}\,\cal E} \biggr|_{\rm fibre}, \end{array}$$ and ${\alpha^{E}+ \small\begin{array}{|c|} \hline m_E\\ \hline \end{array} } = \underbrace{ \alpha^{E}+\small\begin{array}{|c|} \hline -\alpha^{E}_{{{\rm min}}}+d_E-1\\ \hline \end{array}}_{:=\bar\alpha^{E}} \,+\, \small\begin{array}{|c|} \hline \alpha^{E}_{{{\rm min}}}+m_E-d_E+1\\ \hline \end{array}\,$ with $$\left\{\begin{array}{l} \bar\alpha^{E}_{{{\rm min}}}=d_E-1\ges{\rm length} \bigl(\bar\alpha^{E}-\small\fbox{$d_E-1$} \,\bigr)_{\,\mbox{,}} \\[1.5ex] \bar a_E:=\alpha^{E}_{{{\rm min}}}+m_E-d_E+1\ges 1. \end{array}\right.$$ Manivel and Arapura’s theorem implies that $R^q\phi_*(\mbb S^{\alpha^\bullet}\cal E_\bullet)=0$, for all $q>0$. (ii$_2$) Consider Young diagrams $(\beta^F)_{F\in{{\cal V\kern-.41ex\euf B}}^+(X)}$ with $\beta^F_{{\rm min}}\ges d_F-m_F$ for all $F$. Then holds: $$\mbox{$\underset{F\in{{\cal V\kern-.41ex\euf B}}^+(X)}\bigotimes$}\kern-2ex \mbb S^{\beta^{F}}\cal F\otimes\,\kappa_X^{-1} =\!\! \mbox{$\underset{F\in{{\cal V\kern-.41ex\euf B}}^+(X)}\bigotimes$}\kern-2ex \bigl( \mbb S^{\beta^{F}}\cal F\otimes({{\rm det}}\cal F)^{m_F} \bigr) =\!\! \mbox{$\underset{F\in{{\cal V\kern-.41ex\euf B}}^+(X)}\bigotimes\,$}\kern-2ex {\mbb S^{{\beta^{F}+ \tiny\begin{array}{|c|} \hline\kern-1.5ex m_F\kern-1.5ex\\ \hline \end{array} }}\,\cal F}.$$ We deduce the vanishing of the higher cohomology as in (ii$_1$). [\[thm:main2\]]{} Assume that the conditions are satisfied, and that there are integers $(b_F)_{F\in{{\cal V\kern-.41ex\euf B}}^+(X)}$ which fulfill the property . Then the elements of the set ${\euf{ES}}(Y)$ defined below form a strong exceptional sequence of vector bundles over $Y$: $$\begin{aligned} \begin{array}{ll} {\euf{ES}}(Y):= & \text{all the direct summands, corresponding to irreducible} \\ &\text{$G$-modules contained in } \\[0.5ex] & \phi^*\bigl(\mbb S^{\l^{\bullet}}\cal F_\bullet\bigr) \otimes \mbb S^{\nu^{\bullet}}\cal E_\bullet := \phi^*\bigl( \mbox{\kern-2ex$\underset{\tiny F\in{{\cal V\kern-.41ex\euf B}}^+(X)}\bigotimes$} \kern-1.7ex\mbb S^{\l^{F}}\cal F\, \bigr) \otimes\kern-.2ex \mbox{$\underset{E\in{{\cal V\kern-.41ex\euf B}}^+_0}\bigotimes$\,} \kern-.5ex\mbb S^{\nu^{E}}\cal E, \end{array}\end{aligned}$$ with $\l^F\in\cal Y_{b_F,\,d_F},$ and $\nu^{E}\in\cal Y_{m_E-d_E,\,d_E}$. Moreover, it holds: $H^q\biggl( Y,\underset{F\in{{\cal V\kern-.41ex\euf B}}^+(X)}\bigotimes\kern-1.7ex \phi^*\,\mbb S^{\beta^{F}}\cal F \,\otimes \underset{E\in{{\cal V\kern-.41ex\euf B}}^+_0}\bigotimes\kern-.5ex \mbb S^{\alpha^{E}}\cal E \biggr)\!=\!0$ for all $q>0$, and all Young diagrams $\beta^{F}\in\wtld{\cal Y}_{d_F}$ and $\alpha^{E}\in\wtld{\cal Y}_{d_E}$, with $\beta^{F}_{{\rm min}}\ges -b_F$ and $\alpha^{E}_{{\rm min}}\ges -(m_E-d_E)$ respectively. Let $\cal U'$ and $\cal U''$ be two elements of ${\euf{ES}}(Y)$. The condition on the $H^0({\mathop{\rm Hom}\nolimits}(\cal U',\cal U''))$ follows again from theorem \[thm:h000\]. 
It remains to prove the vanishing of $H^q({\mathop{\rm Hom}\nolimits}(\cal U',\cal U''))$, for $q\ges 1$. By using the Littlewood-Richardson rules, we deduce that ${\mathop{\rm Hom}\nolimits}(\cal U',\cal U'')$ is direct summand in $\mbox{$\underset{\alpha^\bullet,\beta^\bullet}\bigoplus$} \phi^*\bigl(\mbb S^{\beta^{\bullet}}\cal F_\bullet\bigr) \otimes\mbb S^{\alpha^{\bullet}}\cal E_\bullet,$ with $$\left\{\begin{array}{l} \hskip2.75em b_F\ges\beta^F_{{\rm max}}\ges\beta^F_{{\rm min}}\ges-b_F, \\[1ex] m_E-d_E\ges\alpha^E_{{\rm max}}\ges\alpha^E_{{\rm min}}\ges-m_E+d_E. \end{array}\right.$$ The conclusion of the theorem follows from proposition \[prop:tech\](ii$_1$). An immediate consequence of the previous theorem is the following: [\[cor:tower\]]{} Assume the following assumptions hold: 1. There is a sequence of quotients $G\!\rar\! G_1\!\rar\!\ldots\!\rar G_k\!\rar\! 1$, with $\Gamma_{j}\!:=\!{{\rm Ker}}(G_{j}\!\rar\! G_{j+1})$. 2. $V=W_1\oplus\ldots\oplus W_k$, where $W_j$ is a $G_j$-module for all $j$. We define $V_j:=W_j\oplus\ldots W_k$ for all $j$. 3. The projections ${\mathop{\rm pr}\nolimits}_j:V_j\rar V_{j+1}$ satisfy the conditions . The induced morphisms are denoted by $$\phi_j\!:\!\mbb V_{j}{{\slash\kern-0.65ex\slash}}_{\!\chi_{{\rm ac}}(G_{j},V_{j})} G_{j} \rar \mbb V_{j+1}{{\slash\kern-0.65ex\slash}}_{\!\chi_{{\rm ac}}(G_{j+1},V_{j+1})} G_{j+1},\; \text{for all }\,1\les j\les k-1.$$ Let us write $\chi_{{\rm ac}}(\Gamma_j,W_j) =\kern-.9ex \underset{E\in{{\cal V\kern-.41ex\euf B}}^+(\mbb W_j/\!/ \Gamma_j)}\sum\kern-3ex m_{j,E}\cdot{{\rm det}}E$ ([*c.f.*]{} \[lm:chi\]), and denote ${{\cal V\kern-.41ex\euf B}}^+_j:={{\cal V\kern-.41ex\euf B}}^+(\mbb W_j/\!/ \Gamma_j)$. Then the elements of the set ${\euf{ES}}(Y)$ defined below form a strong exceptional sequence of vector bundles over ${{\mbb V}}{{\slash\kern-0.65ex\slash}}H$: $$\begin{array}{ll} {\euf{ES}}(Y):= & \text{all the direct summands, corresponding to irreducible} \\ & \text{$G$-modules contained in } \ouset{j=1}{k}{\bigotimes} \biggl( \mbox{$\underset{\tiny E\in{{\cal V\kern-.41ex\euf B}}^+_j}\bigotimes$} \mbb S^{\alpha^{j,E}}\cal E \biggr), \\[1ex] & \text{with }\;\alpha^{j,E}\in\cal Y_{m_{j,E}-d_E\,,\,d_E}. \end{array}$$ Assume moreover that the multiplicity condition in corollary \[cor:stab-bdl\] is fulfilled. Then $\euf{ES}(Y)$ consists of semi-stable vector bundles over $Y$. Examples ======== [\[sct:expl\]]{} In this section we are going to present a few particular cases, in order to illustrate the general discussion. We concentrate on quiver varieties because they are a source of infinitely many examples, and are also very convenient: for generic choices of the dimension vector, the semi-stability and stability concepts agree. Therefore the quotients which will appear are geometric, as we wish. Even more remarkably, the procedure of constructing exceptional sequences of vector bundles over quiver varieties is [*almost algorithmic*]{}. Let $Q=(Q_0,Q_1,h,t)$ be a quiver, and $\unl d=(d_q)_{q\in Q_0}$ be a dimension vector. We adopt the following convention: suppose that $q,q'$ are two vertices, and there is (at least) one arrow from $q$ to $q'$; then we draw [*only one*]{} arrow $a$, and we denote by $m_a$ its [*multiplicity*]{} (that is how many times the arrow is repeated). 
In other words, we consider the group $G=\kern-.5ex \underset{q\in Q_0}{\hbox{\Large$\times$}}\kern-.5ex{{\rm Gl}}(d_q)$, and the $G$-module $V=\underset{a\in Q_1}\bigoplus\kern-.5ex {{\mathop{\rm Hom}\nolimits}(K^{d_{t(a)}},K^{d_{h(a)}})}^{\oplus m_a}\!$. The construction of exceptional sequences involves the following steps: Compute the anti-canonical character: $$\begin{array}{rl} \chi_{{\rm ac}}=& \mbox{$\underset{a\in Q_1}\sum$} m_a\cdot \bigl( d_{t(a)}{{\rm det}}_{h(a)}-d_{h(a)}{{\rm det}}_{t(a)} \bigr) \\[2ex] =& \mbox{$\underset{q\in Q_0}\sum$} \biggl( \mbox{$\underset{a\in Q_1^{\rm in}(q)}\sum$} \kern-1ex m_ad_{t(a)} - \mbox{$\underset{a\in Q_1^{\rm out}(q)}\sum$} \kern-1ex m_ad_{h(a)} \biggr)\cdot{{\rm det}}_q. \end{array}$$ Note that the multiplicative group, embedded diagonally in $G$, acts trivially on $V$, and the quotient $G/{(K^\times)}_{\rm diag}$ acts effectively on $V$. Moreover, for generic choices of the multiplicities $m_a$ (w.r.t. the dimension vector $\unl d$), the $\chi_{{\rm ac}}$-semi-stable locus of ${{\mbb V}}$ coincides with the stable locus (see [*e.g.*]{} [@king proposition 3.1]). For such a generic choice, there is a natural ‘Euler sequence’ over the quotient $Y$: $$\vspace{-1.5ex} 0 \lar \cal O_Y^{^{\oplus\dim\hat G}} \lar \mbox{$\underset{a\in Q_1}\bigoplus$} {\cal Hom\bigl(\cal E_{t(a)},\cal E_{h(a)}\bigr)}^{\oplus m_a} \lar T_Y \lar 0.$$ It follows that the anti-canonical class of the quotient is $\kappa_Y^{-1}=\chi_{{\rm ac}}$. It consists in determining the ‘extremal bundles’ in the set ${{\cal V\kern-.41ex\euf B}}^+(Y)$ (see ), and expressing $\chi_{{\rm ac}}$ as a positive combination of their determinants (see lemma \[lm:chi\]). Actually this step is responsible for the use of the word ‘almost’ above: the computation of the extremal nef bundles is algorithmic, but involves the maximal torus of $G$, and is therefore tedious. Denote $\cal E_1,\ldots,\cal E_N$ the extremal bundles above, and take tensor products of their Schur powers $ \mbb S^{\l_1,\ldots,\l_N}\cal E\kern-.5ex := \mbb S^{\l_1}\cal E_1\otimes\ldots\otimes\mbb S^{\l_N}\cal E_N. $ The third step consists in determining the sizes of the Young diagrams $\l_1,\ldots,\l_N$ which fulfill the requirements of theorem \[thm:main\]. Search for fibrations coming from a sub-quiver. More precisely, we are looking for a sub-quiver $R\subset Q$ having the property: $$\begin{array}{rcr} \forall\,(A_a)_{a\in Q_1}\in{{\mbb V}}^{{\rm ss}}\bigl(G,\chi_{{\rm ac}}(V)\bigr) &\quad\Longrightarrow\quad& (A_a)_{a\in R_1}\in\mbb W^{{\rm ss}}\bigl(H,\chi_{{\rm ac}}(W)\bigr), \\[1.5ex] G=\underset{v\in Q_0}\prod{{\rm Gl}}(v) && H=\underset{v\in R_0}\prod{{\rm Gl}}(v). \end{array}$$ Here $V$ and $W$ denote the representation spaces of $Q$ and $R$ respectively. In such a situation there is a natural projection map $Y\rar X$ between the corresponding quotients. Moreover, if $R$ is chosen appropriately, the numerous hypotheses in are naturally fulfilled. Very often one obtains better bounds for the sizes of the Young diagrams involved in the Schur powers than those which are obtained by applying the step 3 directly (see subsections \[ssect:kapranov\] and \[ssect-A3\] below). Kapranov’s construction ----------------------- [\[ssect:kapranov\]]{} Let us start by reviewing Kapranov’s examples of tilting bundles over the Grassmannian, and over the flag variety for ${{\rm Gl}}(m)$. 
We show that by using our approach we automatically recover the vector bundles which appear in the tilting objects constructed by Kapranov over the Grassmannian, and over partial flag manifolds. They are the quiver varieties associated respectively to: $$\entrymodifiers={++[o][F-]} \xymatrix@+.7em@R=.9em{ m \ar[r]^-{B} & *++[o][F=]{_{\phantom{i}}d_{\phantom{i}}} &*\txt{}&*\txt{} &*\txt{\kern-1ex with $m>d$.} \\ m \ar[r]^-{A_{k}} & *++[o][F=]{d_k} \ar[r]^-{\!A_{{k-1}}} &*\txt{$\;$\ldots}\ar[r]^-{A_{1}}& *++[o][F=]{d_1} & *\txt{\kern-1ex with $m>d_k>\ldots>d_1$.} }$$ A doubled circle means that the corresponding linear group acts at that entry (we have factored out the diagonal $K^\times$-action). ### The case of the Grassmannian Let us consider the Grassmannian $Y:={{\rm Grass}}(\mbb C^m,d)$ of $d$-dimensional quotients of $K^m$. Its anti-canonical class is $\kappa_{{{\rm Grass}}(K^m,d)}^{-1}=({{\rm det}}\cal Q)^m$, where $\cal Q$ denotes the universal quotient bundle. The cone $\crl N$ is generated by the characters $t_1,\ldots,t_d$ of ${{\rm Gl}}(d)$, and $\crl N_1^+=\{t_1\}$. Hence the set ${{\cal V\kern-.41ex\euf B}}^+(Y)$ of extremal nef bundles ${{\cal V\kern-.41ex\euf B}}^+(Y)$ consists of $\cal Q$ only. Theorem \[thm:main\] says that the elements of the set $\{\mbb S^\l\cal Q\mid \l\in\cal Y_{m-d,d}\}$ form a strong exceptional sequence of vector bundles on ${{\rm Grass}}(K^m\!,d)$. Indeed, this is what Kapranov proves in [*loc.cit.*]{}. Let us remark that he actually proves that they form a tilting sequence. ### The case of flag manifolds We denote by $\mbb F_k:={{\rm Flag}}(K^m,d_k,\ldots,d_1)$ the variety of quotient $k$-flags of $K^m$. Let $\cal Q_1,\ldots,\cal Q_k$ be the tautological quotient bundles over $\mbb F_k$ with ${{\rm rank\,}}\cal Q_j\!=\!d_j$. The anti-canonical class is $\kappa_{\mbb F_k}^{-1} \!\!=\!\! \hbox{$\overset{k}{\underset{j=1}\bigotimes}$} {({{\rm det}}\cal Q_j)}^{d_{j+1}-\,d_{j-1}}\!\!.$ The cone $\crl N$ is generated by the characters $t^{(j)}_1,...,t^{(j)}_{d_j}$, $j=1,...,k$, and $\crl N_1^+=\{t^{(1)}_1,\ldots,t^{(k)}_1\}$. We deduce that ${{\cal V\kern-.41ex\euf B}}^+(\mbb F_k)=\{\cal Q_1,\ldots,\cal Q_k\}$. By applying theorem \[thm:main\] directly, we obtain that the elements of $$\left\{ \begin{array}{l} \mbb S^{\l_\bullet}\cal Q_\bullet^\vee:= \mbb S^{\l_k}\cal Q_k\otimes\ldots\otimes\mbb S^{\l_1}\cal Q_1, \quad \l_\bullet=(\l_k,\ldots,\l_1), \\[1ex] \text{with }\l_\bullet\in \cal Y_{m-d_k-d_{k-1},d_k} \times\ldots\times \cal Y_{d_3-d_2-d_{1},d_2}\times\cal Y_{d_2-d_1,d_1}\! \end{array} \right\}$$ form a strong exceptional sequence over $\mbb F_k$. The problem is that these bounds are very weak, and this set can be empty! At this point Step 4 becomes useful. There is a natural projection from the $k$-flag onto the $(k-1)$-flag variety $$\mbb F_k\srel{\phi}{\lar}\mbb F_{k-1},\quad [A_k,\ldots,A_2,A_1]\lmt[A_k,\ldots,A_2].$$ One checks easily that all the conditions of are fulfilled. By applying corollary \[cor:tower\] we deduce that the elements of the set $$\left\{ \begin{array}{l} \mbb S^{\l_\bullet}\cal Q_\bullet^\vee:= \mbb S^{\l_k}\cal Q_k^\vee\otimes\ldots\otimes\mbb S^{\l_1}\cal Q_1^\vee \\[1ex] \text{with }\l_\bullet=(\l_k,\ldots,\l_1) \in \cal Y_{m-d_k,d_k}\times\ldots\times\cal Y_{d_2-d_1,d_1} \end{array} \right\}$$ form a strong exceptional sequence of vector bundles over $\mbb F_k$. 
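Before passing to the next family of examples, let us illustrate the first of the steps described above on these quivers: the anti-canonical character can be read off mechanically from the quiver data. The sketch below is only an illustration (the encoding of the quiver and the helper name are ours); vertices on which no linear group acts — the single circles in the diagrams — are simply excluded from the output.

```python
from collections import defaultdict

def chi_ac(arrows, dims, gauged):
    """Coefficient of det_q in the anti-canonical character, for each gauged vertex q.

    arrows : list of triples (tail, head, multiplicity m_a)
    dims   : dict  vertex -> dimension d_q
    gauged : set of vertices carrying a Gl(d_q)-factor (the doubled circles)
    """
    coeff = defaultdict(int)
    for t, h, m in arrows:
        if h in gauged:
            coeff[h] += m * dims[t]   # incoming arrows contribute +m_a * d_{t(a)}
        if t in gauged:
            coeff[t] -= m * dims[h]   # outgoing arrows contribute -m_a * d_{h(a)}
    return dict(coeff)

# Grassmannian quiver  m -> d, with only Gl(d) acting:  chi_ac = m * det
print(chi_ac([("s", "q", 1)], {"s": 5, "q": 2}, {"q"}))                 # {'q': 5}

# two-step flag quiver  m -> d_2 -> d_1:  (m - d_1) * det_2 + d_2 * det_1
print(chi_ac([("s", "q2", 1), ("q2", "q1", 1)],
             {"s": 5, "q2": 3, "q1": 1}, {"q2", "q1"}))                 # {'q2': 4, 'q1': 3}
```

The second output agrees with the formula for $\kappa_{\mbb F_k}^{-1}$ above for $k=2$ and $(m,d_2,d_1)=(5,3,1)$, with the convention $d_0=0$, $d_{k+1}=m$. As a sanity check of the counting, for ${{\rm Grass}}(K^4,2)$ the collection $\{\mbb S^\l\cal Q\mid\l\in\cal Y_{2,2}\}$ consists of the six Schur powers with $\l\in\{(0,0),(1,0),(1,1),(2,0),(2,1),(2,2)\}$, which is exactly the rank of the Grothendieck group of the Grassmannian, in agreement with Kapranov’s tilting statement.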
$A_3$-type quiver with multiple arrows -------------------------------------- [\[ssect-A3\]]{} Interesting phenomena occur already for $A_3$-type quivers, as soon as we increase the multiplicities of the arrows. Consider the quiver $$\begin{array}{l} \entrymodifiers={++[o][F-]} \xymatrix@+.9em{ m \ar[rr]^-{B} &*\txt{}& *++[o][F=]{d_2} \ar[rr]^-{{\mbb A}=(A_1,\ldots,A_{n})} &*\txt{}& *++[o][F=]{d_1} & *\txt{with $m>d_2>d_1$,} } \\[2ex] V={(K^{d_2})}^{\oplus m}\oplus{{\mathop{\rm Hom}\nolimits}(K^{d_2},K^{d_1})}^{\oplus n}, \quad G={{\rm Gl}}(d_1)\times{{\rm Gl}}(d_2). \end{array}$$ Let $Y:={{\mbb V}}{{\slash\kern-0.65ex\slash}}_{\chi_{{\rm ac}}}G$ be the corresponding quiver variety. The flag variety ${{\rm Flag}}(K^m,d_2,d_1)$ corresponds to the case $n=1$. We denote the vector bundles over $Y$ associated to the $G$-modules $K^{d_1}$ and $K^{d_2}$ by $\cal E_1$ and $\cal E_2$ respectively. The anti-canonical character is $$\chi_{{\rm ac}}=nd_2\cdot{{\rm det}}_1+(m-nd_1)\cdot{{\rm det}}_2 \!=\! n\cdot\bigl[ d_2\cdot{{\rm det}}{{\cal E}}_1+(r-d_1)\cdot{{\rm det}}{{\cal E}}_2 \bigr], \; r:=\!\frac{m}{n}.$$ We are going to see that the effect of introducing the parameter $n$ is that of obtaining several types of quotients. Observe that for generic choices of $m$ and $n$, the semi-stable and the stable loci coincide; this happens for $$\begin{array}{c} \text{gcd}(nd_2,m-nd_1)=\text{gcd}(nd_2,m-nd_1,md_2)=1. \end{array}$$ For details about semi-stability criteria for quiver representations, the reader may consult [@king]. ### Case $r\!>\!d_1$ The $\chi_{{\rm ac}}$-semi-stability condition for $(B,\mbb A)\!\in\!\mbb V$ is: $$\begin{aligned} \begin{array}{l} \left\{ \begin{array}[c]{l} U_2\subset K^{d_2}\text{ and }U_1\subset K^{d_1}\text{ s.t. } \mbb A(U_2)\subset U_1 \\ \dim(U_2)=d_2'\text{ and }\dim(U_1)=d_1' \end{array} \right\} \\[3ex] \hspace{3ex}\Longrightarrow d_2d_1'+(r-d_1)d_2'\ges rd_2\text{ for }(d_2',d_1')\neq(d_2,d_1). \end{array}\end{aligned}$$ The set of extremal nef vector bundles is ${{\cal V\kern-.41ex\euf B}}^+(Y)=\{\cal E_1,\cal E_2\}$, and the anti-canonical class is $\kappa_Y^{-1}=({{\rm det}}\cal E_2)^{m-nd_1}\otimes({{\rm det}}\cal E_1)^{nd_2}$. Theorem \[thm:main\] implies that the elements of the set $$\{ \mbb S^\l\cal E_1\otimes \mbb S^\mu\cal E_2 \mid \l\in\cal Y_{nd_2-d_1,d_1}\text{ and } \mu\in\cal Y_{m-nd_1-d_2,d_2} \}$$ form a strong exceptional sequences of vector bundles over $Y$. We illustrate again the role of Step 4 described at the beginning of this section: by using an appropriate fibre bundle structure on $Y$, we will increase the number of elements in the exceptional sequence. Observe that both $B$ and $\mbb A\in{\mathop{\rm Hom}\nolimits}(K^{d_2}\otimes K^n,K^{d_1})$ are surjective, for any $\chi_{{\rm ac}}$-semi-stable point $(B,\mbb A)$. Indeed: by inserting $d_1'=d_1$ we obtain $d_2'\ges d_2$, and by inserting $d_2'=d_2$ we obtain $d_1'\ges d_1$. It follows that there is a natural projection $\phi:\!Y\!\rar{{\rm Grass}}(K^m,d_2)$, whose fibres are isomorphic to ${{\rm Grass}}(K^{nd_2},d_1)$. The group ${{\rm Gl}}(m)\times{{\rm Gl}}(n)$ acts on $Y$, and the projection is equivariant for the ${{\rm Gl}}(m)$-action. However $Y$ is not the $2$-flag variety. We observe that the projection $V\rar{\mathop{\rm Hom}\nolimits}(K^m\!,K^{d_2})$ fulfills the conditions , and moreover ${{\cal V\kern-.41ex\euf B}}^+\bigl({{\rm Grass}}(K^m\!,d_2)\bigr)=\{\cal E_2\}$, and ${{\cal V\kern-.41ex\euf B}}^+_0=\{\cal E_1\}$. 
Applying corollary \[cor:tower\] to $\phi$ we deduce that the elements of the following set form a strong exceptional set of vector bundles over $Y$: $$\begin{aligned} \{ \mbb S^\l\cal E_1\otimes \mbb S^\mu\cal E_2 \mid \l\in\cal Y_{nd_2-d_1,d_1}\text{ and } \mu\in\cal Y_{m-d_2,d_2} \}.\end{aligned}$$ ### Case $r\!<\!d_1$ The $\chi_{{\rm ac}}$-semi-stability condition for $(B,\mbb A)\!\in\!\mbb V$ is: $$\begin{aligned} {\label{ss2}} \begin{array}{l} \left\{\kern-.5ex \begin{array}[c]{l} U_2\subset K^{d_2}\text{ and }U_1\subset K^{d_1}\text{ s.t. } \mbb A(U_2)\subset U_1 \\ \dim(U_2)=d_2'\text{ and }\dim(U_1)=d_1' \end{array}\kern-.5ex \right\} \\[3ex] \hspace{3ex} \Longrightarrow \left\{\begin{array}{rll} \rm (i)\;& d_2d_2'-(d_1-r)d_1'\ges 0&\text{for }(d_2',d_1')\neq(0,0), \\ \rm (ii)\;& d_2d_1'-(d_1-r)d_2'\ges rd_2&\text{for }(d_2',d_1')\neq(d_2,d_1). \end{array}\right. \end{array}\end{aligned}$$ Now we determine the extremal nef vector bundles. Since $r-d_1<0$, the situation differs from the previous case; now we will have ${{\cal V\kern-.41ex\euf B}}^+(Y)=\bigl\{\cal E_2, \cal H\bigr\}$, with $\cal H:={\mathop{\rm Hom}\nolimits}(\cal E_2,\cal E_1)$. We express the anti-canonical class as a positive combination of the extremal bundles: $\kappa_Y^{-1}=({{\rm det}}\cal E_2)^m\otimes({{\rm det}}\cal H)^n$. Theorem \[thm:main\] implies that $$\begin{aligned} \{\mbb S^\l\cal E_2\otimes\mbb S^\mu\cal H \mid \l\in\cal Y_{m-d_2,d_2}\text{ and } \mu\in\cal Y_{n-d_1d_2,d_1d_2} \}\end{aligned}$$ is a strong exceptional sequence of vector bundles over $Y$. Let us interpret the result. We consider the sub-quiver formed by the two rightmost vertices, and let $W:={\mathop{\rm Hom}\nolimits}(K^{d_2}\otimes K^n,K^{d_1})$ be its representation space. The anti-canonical character is $\chi_{{\rm ac}}(W)=d_2{{\rm det}}_1-d_1{{\rm det}}_2$. The symmetry group which is acting (effectively) is $G/{(K^\times)}_{\rm diag}$. The condition (i) implies that if $(B,\mbb A)$ is $\chi_{{\rm ac}}$-semi-stable, then $\mbb A$ is $\chi_{{\rm ac}}(W)$-semi-stable. Hence there is a natural morphism $$Y\srel{\phi}{\lar} X:= {\mathop{\rm Hom}\nolimits}(K^{d_2}\otimes K^n\!,K^{d_1}){{\slash\kern-0.65ex\slash}}_{\chi_{{\rm ac}}(W)}\,G,$$ which is a projective bundle, with fibre isomorphic to $\mbb P(K^{md_2})$. The conditions are fulfilled, and we may apply corollary \[cor:tower\]. However, in this case we do not improve the previous bound. Altman and Hille’s examples --------------------------- [\[ssect:AH\]]{} In the article [@ah] the authors present the following construction: consider a quiver $Q$ without oriented cycles, and a thin and faithful representation space $V$ of it. This means that the dimension vector of the representation space is $\unl d={(1)}_{q\in Q_0}$, and the symmetry group which is acting is the torus $T=\underset{q\in Q_0}\prod K^\times\bigl/{(K^\times)}_{\rm diag}$. (Altmann, Hille) [ *Assume that ${{\mbb V}}^{{\rm ss}}(T,\chi_{{\rm ac}})={{\mbb V}}_{(0)}^{{\rm s}}(T,\chi_{{\rm ac}})$. Then the tautological line bundles ${(\cal L_q)}_{q\in Q_0}$ form an exceptional sequence over the toric variety $Y:={{\mbb V}}^{{\rm ss}}(T,\chi_{{\rm ac}})/T$.* ]{} We wish to remark that this construction fits into a more general framework: we consider a quiver $Q=(Q_0,Q_1,h,t)$ without oriented cycles, and we fix a dimension vector $\unl d=(d_q)_{q\in Q_0}$; we denote $V$ the corresponding representation space. For $m\ges 1$, we denote $Q^{(m)}$ the quiver obtained from $Q$ by multiplying each arrow $m$ times. 
The representation space of $Q^{(m)}$ with dimension vector $\unl d$ is $V^m$, and the symmetry group which is acting is $G=\underset{q\in Q_0}\prod {{\rm Gl}}(d_q)\bigl/K^\times_{\rm diagonal}$. [\[prop:general-AH\]]{} Assume that ${({{\mbb V}}^m)}^{{{\rm ss}}}(G,\chi_{{\rm ac}})={({{\mbb V}}^m)}^{{{\rm s}}}(G,\chi_{{\rm ac}})$, and denote $Y_m$ the quotient by the $G$-action. For $q\in Q_0$, we denote $\cal E_q$ the tautological bundle over $Y_m$, associated to $G\rar{{\rm Gl}}(d_q)$. Then there is a constant $m(Q)\ges 1$ such that for all $m> m(Q)$, the set $\{\cal E_q\}_{q\in Q_0}$ is a strong exceptional sequence of vector bundles over $Y_m$ (with respect to an appropriate ordering). Moreover, these vector bundles are semi-stable. For two vertices $p,q\in Q_0$, let $E_{pq}:={\mathop{\rm Hom}\nolimits}(E_p, E_q)$, and $\cal E_{pq}$ the associated vector bundle over $Y_m$, and let $e_{pq}:=\dim E_{pq}=d_pd_q$. The condition on $H^0(Y_m,\cal E_{pq})$ follows from theorem \[thm:h000\]. It remains to prove the vanishing of the higher cohomology. We compute $H^{n-i}\bigl(Y_m,\cal E_{pq}\bigr)$, $n=\dim Y$, by using the relative duality for $\mbb P(\cal E_{qp})\srel{{\mathop{\rm pr}\nolimits}}{\rar}Y_m$; it equals: $$H^{(e_{pq}-1)+i}\bigl(\mbb P({{\cal E}}_{qp}), {\mathop{\rm pr}\nolimits}^*(\kappa_{Y_m}\otimes({{\rm det}}{{\cal E}}_{pq})^{-1}) \otimes{{\cal O}}_{\mbb P({{\cal E}}_{qp})}(-e_{pq}-1) \bigr)^\vee.$$ The Kodaira vanishing theorem implies that $H^j(Y_m,\cal E_{pq})$ vanishes for all $j\ges 1$, as soon as ${\mathop{\rm pr}\nolimits}^*(\kappa_{Y_m}^{-1}\otimes({{\rm det}}{{\cal E}}_{pq})) \otimes{{\cal O}}_{\mbb P({{\cal E}}_{qp})}(e_{pq}+1)$ is ample over $\mbb P({{\cal E}}_{qp})$. By proposition \[prop:large-m\], there is a number $m_{pq}$ such that this property holds for all $m>m_{pq}$. Consider now $m(Q):=\max\{m_{qp},e_{pq}\mid p,q\in Q_0\}$. The isotypical components of $V^m$ are ${\mathop{\rm Hom}\nolimits}(E_{t(a)},E_{h(a)})$, $a\in Q_1$. Note that $m>m(Q)$ implies $m>\dim{\mathop{\rm Hom}\nolimits}(E_p,E_q)$, and the semi-stability of the tautological bundles $\cal E_q$ follows from corollary \[cor:stab-bdl\]. We wish to point out the following shortcoming: in this construction the length of the exceptional sequence equals the number of vertices of $Q$, which is independent of the multiplicity $m$. It follows that for large $m$ this sequence is certainly [*not*]{} a tilting object for $Y_{m}$ (compare this with remark \[rmk:length\]). [00]{} K. Altmann, L. Hille, [*Strong exceptional sequences provided by quivers*]{}, Algebr. Represent. Theory [**2**]{} (1999) 1-17. D. Arapura, [*A class of sheaves satisfying Kodaira’s vanishing theorem*]{}, Math. Ann. [**318**]{} (2000) 235-253. L. Costa, R.M. Miró-Roig, [*Tilting sheaves on toric varieties*]{}, Math. Z. [**248**]{} (2004) 849-865. F. Grosshans, [*Algebraic Homogeneous Spaces and Invariant Theory*]{}, Lect. Notes Math. [**1673**]{}, Springer, 1997. L. Hille, M. Perling, [*A counterexample to King’s conjecture*]{}, Compos. Math. [**142**]{} (2006) 1507-1521. M. Kapranov, [*On the derived categories of coherent sheaves on some homogeneous spaces*]{}, Invent. Math. [**92**]{} (1988) 479-508. G. Kempf, [*The Hochster-Roberts theorem of invariant theory*]{}, Michigan Math. J. [**26**]{} (1979) 19-32. G. Kempf, [*Instability in invariant theory*]{}, Ann. Math. [**108**]{} (1978) 299-316. A. King, [*Moduli of representations of finite dimensional algebras*]{}, Quarterly J. Math. [**45**]{} (1994) 515-530. A. 
King, [*Tilting bundles on some rational surfaces*]{}, unpublished preprint available at: http://www.maths.bath.ac.uk/$\sim$masadk/papers/. L. Manivel, [*Vanishing theorems for ample vector bundles*]{}, Invent. Math. [**127**]{} (1997) 401-416. Ch. Mourougane, [*Images Directes de Fibres en Droites Adjoints*]{}, Publ. RIMS Kyoto Univ. [**33**]{} (1997) 893-916. D. Mumford, J. Fogarty, F. Kirwan, [*Geometric Invariant Theory*]{}, 3$^{\rm rd}$ edition, Springer-Verlag Berlin New York, 1994. S. Ramanan, A. Ramanathan, [*Some remarks on the instability flag*]{}, Tôhoku Math. J. [**36**]{} (1984) 269-291.
{ "pile_set_name": "ArXiv" }
--- abstract: 'In this work, we demonstrate the existence of universal adversarial audio perturbations that cause mis-transcription of audio signals by automatic speech recognition (ASR) systems. We propose an algorithm to find a single quasi-imperceptible perturbation, which when added to any arbitrary speech signal, will most likely fool the victim speech recognition model. Our experiments demonstrate the application of our proposed technique by crafting audio-agnostic universal perturbations for the state-of-the-art ASR system – Mozilla DeepSpeech. Additionally, we show that such perturbations generalize to a significant extent across models that are not available during training, by performing a transferability test on a WaveNet based ASR system.' address: | $^1$UC San Diego Department of Computer Science\ $^2$UC San Diego Department of Electrical and Computer Engineering\ $^3$UC San Diego Department of Music\ Equal contribution bibliography: - 'mybib.bib' title: Universal Adversarial Perturbations for Speech Recognition Systems --- **Index Terms**: speech recognition, adversarial examples, speech processing, computer security Introduction ============ Machine learning agents serve as the backbone of several speech recognition systems, widely used in personal assistants of smartphones and home electronic devices (e.g. Apple Siri, Google Assistant). Traditionally, Hidden Markov Models (HMMs) [@baum1967; @baum1970maximization; @hmm1; @ahadi1997combined; @bahl1986maximum; @Rabiner89-ATO] were used to model sequential data but with the advent of deep learning, state-of-the-art speech recognition systems are based on Deep Neural Networks (DNNs) [@deepspeech2; @parallelwavenet; @wavenet; @mozilladeepspeech]. However, several studies have demonstrated that DNNs are vulnerable to adversarial examples [@goodfellow6572explaining; @obfuscated-gradients; @Carlini2017TowardsET; @atscale; @limitations]. An adversarial example is a sample from the classifier’s input domain which has been perturbed in a way that is intended to fool a victim machine learning (ML) model. While the perturbation is usually imperceptible, such an adversarial input can mislead neural network models deployed in real-world settings causing it to output an incorrect class label with higher confidence. A vast amount of past research in adversarial machine learning has shown such attacks to be successful in the image domain  [@intriguing; @limitations; @transferibility; @papernot1; @papernot2; @advpatch; @harnessing]. However, few works have addressed attack scenarios involving other modalities such as audio. This limits our understanding of system vulnerabilities of many commercial speech recognition models employing DNNs, such as Amazon Alexa, Google Assistant, and home electronic devices like Amazon Echo and Google Home. Recent studies that have explored attacks on automatic speech recognition (ASR) systems [@asrblack; @targetattacks; @hidden; @DBLP:journals/corr/abs-1810-11793], have demonstrated that adversarial examples exist in the audio domain. The authors of [@targetattacks] proposed targeted attacks where an adversary designs a perturbation that can cause the original audio signal to be transcribed to any phrase desired by the adversary. However, calculating such perturbations requires the adversary to solve an optimization problem for each data-point they wish to mis-transcribe. 
This makes the attack inapplicable in real time, since the adversary would need to re-solve the data-dependent optimization problem from scratch for every new data-point. Universal Adversarial Perturbations [@universal] have demonstrated that there exist universal *image-agnostic* perturbations which, when added to any image, will cause the image to be mis-classified by a victim network with high probability. The existence of such perturbations poses a threat to machine learning models in real-world settings, since the adversary may simply add the same pre-computed universal perturbation to a new image and cause mis-classification. **Contributions:** In this work, we seek to answer the question “Do universal adversarial perturbations exist for neural networks in the audio domain?” We demonstrate the existence of universal audio-agnostic perturbations that can fool DNN based ASR systems.[^1] We propose an algorithm to design such universal perturbations against a victim ASR model in the *white-box setting*, where the adversary has access to the victim’s model architecture and parameters. We validate the feasibility of our algorithm by crafting such perturbations for Mozilla’s open source implementation of the state-of-the-art speech recognition system DeepSpeech [@mozilladeepspeech]. Additionally, we discover that the generated universal perturbation is transferable to a significant extent across different model architectures. Particularly, we demonstrate that a universal perturbation trained on DeepSpeech can cause significant transcription error on a WaveNet [@wavenet] based ASR model. Related Work ============ **Adversarial Attacks in the Audio Domain:** Adversarial attacks on ASR systems have primarily focused on *targeted attacks* to embed carefully crafted perturbations into speech signals, such that the victim model transcribes the input audio into a specific malicious phrase, as desired by the adversary [@asrblack; @targetattacks; @mfccattack; @hidden; @usenixaudio]. Prior works [@hidden; @usenixaudio] demonstrate successful attack algorithms targeting traditional speech recognition models based on HMMs and GMMs that operate on the Mel Frequency Cepstral Coefficient (MFCC) representation of audio. For example, in Hidden Voice Commands [@hidden], the attacker uses inverse feature extraction to generate obfuscated audio that can be played over-the-air to attack ASR systems. However, obfuscated samples sound like random noise rather than normal human-perceptible speech and therefore come at the cost of being fairly perceptible to human listeners. Additionally, these attack frameworks are not end-to-end, which renders them impractical for studying the vulnerabilities of modern ASR systems based on DNNs. In more recent work [@targetattacks], Carlini *et al.* propose an end-to-end white-box attack technique to craft adversarial examples, which transcribe to a target phrase. Similar to the work in images, they propose a gradient-based optimization method that replaces the cross-entropy loss function used for classification with a Connectionist Temporal Classification (CTC) loss [@graves2006connectionist] which is optimized for time-sequences. The CTC-loss between the target phrase and the network’s output is backpropagated through the victim neural network and the MFCC computation to update the additive adversarial perturbation.
The adversarial samples generated by this work are quasi-perceptible, motivating a separate work [@psychoacoustic] to minimize the perceptibility of the adversarial perturbations using psychoacoustic hiding. Designing adversarial perturbations using all the above-mentioned approaches requires the adversary to solve a data-dependent optimization problem for each input audio signal the adversary wishes to mis-transcribe, making them ineffective in a real-time attack scenario. In other words, targeted attacks must be customized for each segment of audio, a process that cannot yet be done in real time. The existence of universal adversarial perturbations (described below) can pose a more serious threat to ASR systems in real-world settings since the adversary may simply add the same pre-computed universal adversarial perturbation to any input audio and fool the DNN based ASR system. **Universal Adversarial Perturbations:** The authors of [@universal] craft a single universal perturbation vector which can fool a victim neural network to predict a false classification output on the majority of validation instances. Let $\hat{k}(x)$ be the classification output for an input $x$ that belongs to a distribution $\mu$. The goal is to find a perturbation $v$ such that: $\hat{k}(x+v) \neq \hat{k}(x)$ for “most” $x \in \mu$. This is formulated as an optimization problem with constraints to ensure that the universal perturbation is within a specified p-norm and is also able to fool the desired number of instances in the training set. The proposed algorithm iteratively goes over the training dataset to build a universal perturbation vector that pushes each data point to its decision boundary. The authors demonstrate that it is possible to find a quasi-imperceptible universal perturbation that pushes most data points outside the correct classification region of a victim model. More interestingly, the work demonstrates that the universal perturbations are transferable across models with different architectures. The perturbation produced using one network such as VGG-16 can also be used to fool another network, e.g. GoogLeNet, showing that their method is doubly universal. *Universal adversarial perturbations* for images focus on the goal of mis-classification and cannot directly be applied to the more challenging goal of mis-transcription by a speech recognition system. In our work, we address this challenge and solve an alternative optimization problem to adapt the method for designing universal adversarial perturbations for ASR systems. Methodology =========== Threat Model ------------ ![Threat Model: We aim to find a single perturbation which, when added to any arbitrary audio signal, will most likely cause an error in transcription by a victim Speech Recognition System[]{data-label="fig:my_label"}](figures/modelDiag.pdf){width="1.0\columnwidth"} \[NED\] We aim to find a universal audio perturbation, which when added to any speech waveform, will cause an error in transcription by a speech recognition model with high probability. For the success of the attack, the error in the transcription should be high enough so that the transcription of the perturbed signal (adversarial transcription) is incomprehensible and the original transcription cannot be deduced from the adversarial transcription. As discussed in [@targetattacks], the transcription *“test sentence”* mis-spelled as *“test sentense”* does little to help the adversary.
To make the adversary’s goal challenging, we report success only when the Character Error Rate (CER) or the normalized Levenshtein distance *(Edit Distance)* [@editdistance] between the original and adversarial transcription is greater than a particular threshold. Formally, we define our threat model as follows: Let $\mu$ denote a distribution of waveforms and $C$ be the victim speech recognition model that transcribes a waveform $x$ to $C(x)$. The goal of our work is to find perturbations $v$ such that: $$\mathit{CER}( C(x), C(x + v)) > t \text{ for ``most'' } x \in \mu$$ Here, $\mathit{CER}(x, y)$ is the edit distance between the strings $x$ and $y$ normalized [@editdistance] by the length of $x$, i.e., $$\mathit{CER}(x,y) = \frac{\mathit{EditDistance}(x, y)}{\mathit{length}(x)}$$ The threshold $t$ is chosen as 0.5 for our experiments, i.e., we report success only when the original transcription has been *edited* by at least $50\%$ of its length using character *removal, insertion, or substitution* operations. The universal perturbation signal $v$ is chosen to be of a fixed length and is cropped or zero-padded at the end to make its length equal to that of the signal $x$. Distortion Metric {#distmetric} ----------------- To quantify the distortion introduced by some adversarial perturbation $v$, an $l_\infty$ metric is commonly used in the space of images. Following the same convention, in the audio domain [@Carlini2017TowardsET], the loudness of the perturbation can be quantified using the $\mathit{dB}$ scale, where $\mathit{dB}(v) = \max_i\bigl(20\cdot\log_{10} (v_i)\bigr).$ We calculate $dB_x(v)$ to quantify the relative loudness of the universal perturbation $v$ with respect to an original waveform $x$ where: $$\mathit{dB}_x(v) = \mathit{dB}(v) - \mathit{dB}(x)$$ Since the perturbation introduced is quieter than the original signal, $\mathit{dB}_x(v)$ is a negative value, where smaller values indicate quieter distortions. In our results, we report the average relative loudness $\mathit{dB}_x(v)$ across the whole test set to quantify the distortion introduced by our universal perturbation. Problem Formulation and Algorithm --------------------------------- Our goal is to find a quasi-imperceptible universal perturbation vector $v$ such that it mis-transcribes *most* data points sampled from a distribution $\mu$. Mathematically, we want to find a perturbation vector $v$ that satisfies: 1. $ \|v\|_\infty < \epsilon$ 2. $\underset{x \sim \mu}{P} \left( \mathit{CER}(C(x), C(x+v)) > t \right) \geq 1 - \delta.$\ Here $\epsilon$ is the maximum allowed $l_\infty$ norm of the perturbation, $\delta$ is the desired success rate and $t$ is the threshold CER chosen to define our success criterion. To solve the above problem, we adapt the universal adversarial perturbation algorithm proposed by [@universal] to find universal adversarial perturbations for the goal of *mis-transcription* of speech waveforms instead of *mis-classification* of data (images). Let $X = \{x_1, x_2, \ldots, x_m\}$ be a set of speech signals sampled from the distribution $\mu$. Our Algorithm (\[algorithm1\]) goes over the data-points in $X$ iteratively and gradually builds the perturbation vector $v$. At each iteration $i$, we seek a minimum perturbation $\Delta v_i$ that causes an error in the transcription of the current perturbed data point $x_i + v$. We then add this additional perturbation $\Delta v_i$ to the current universal perturbation $v$ and clip the new perturbation $v$, if necessary, to satisfy the constraint $ \|v\|_\infty < \epsilon$.
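Before stating the algorithm, note that the success criterion and distortion measure defined above can be computed directly from transcriptions and waveforms. The following minimal Python sketch (ours, for illustration; it is not part of the original implementation) makes the definitions of $\mathit{CER}$ and $\mathit{dB}_x(v)$ concrete; the small constant inside the logarithm is an assumption added to avoid $\log 0$ on silent inputs.

```python
import numpy as np

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between strings a and b (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def cer(original: str, adversarial: str) -> float:
    """Character error rate: edit distance normalized by the length of the original."""
    return edit_distance(original, adversarial) / max(len(original), 1)

def db(x: np.ndarray) -> float:
    """Loudness in dB: max_i 20*log10(|x_i|)."""
    return 20.0 * np.log10(np.max(np.abs(x)) + 1e-12)  # epsilon term is our assumption

def relative_loudness(v: np.ndarray, x: np.ndarray) -> float:
    """dB_x(v) = dB(v) - dB(x); negative when the perturbation is quieter than x."""
    return db(v) - db(x)

def is_success(original: str, adversarial: str, t: float = 0.5) -> bool:
    """Attack success criterion: CER above the threshold t = 0.5."""
    return cer(original, adversarial) > t
```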
**Algorithm 1**
**input:** Training Data Points $X$, Validation Data Points $X_v$, Victim Model $C$, allowed distortion level $\epsilon$, desired success rate $\delta$
**output:** Universal Adversarial Perturbation vector $v$
Initialize $v \gets 0$. For each $x_i \in X$: find a minimal additional perturbation $\Delta v_i$ that mis-transcribes $x_i + v$ (Eq. \[eq:eq1\]); update and clip universal perturbation $v$. \[algorithm1\]

At each iteration we need to solve the following optimization problem, which seeks a minimum (under the $l_2$ norm) additional perturbation $\Delta v_i$ to mis-transcribe the current perturbed audio signal $x_i + v$: $$\label{eq:eq1} \Delta v_i \gets \arg\min_{r} \| r \|_2 \text{ s.t. } \mathit{CER}( C(x_i + v + r), C(x_i)) > t$$ It is non-trivial to solve the above optimization in its current form. In [@universal], the authors try to solve a similar optimization problem for the goal of *mis-classification* of data points. They approximate its solution using DeepFool [@deepfool] which finds a minimum perturbation vector that pushes a data point to its decision boundary. Since we are tackling the more challenging goal of *mis-transcription* of signals, where we have decision boundaries for each audio frame across the time axis, the same idea cannot be directly applied. Therefore, we approximate the solution to the optimization problem given by Equation \[eq:eq1\] by solving a more tractable optimization problem: $$\begin{split} & \text{Minimize } J(r) \text{ where}\\ & J(r) = c\|r\|^2 + L(x_i+v+r, C(x_i)) \\ & \text{s.t. } \|v + r\|_\infty < \epsilon\\ & \text{where }L(x, y) = -\mathit{CTCLoss}(f(x), y) \end{split} \label{eq2}$$ In other words, to mis-transcribe the signal, we aim to maximize the CTC-Loss between the predicted probability distributions of the perturbed signal $f(x_i+v+r)$ and the original transcription $C(x_i)$ while having a regularization penalty on the $l_2$ norm of $r$. Since this is a non-convex optimization problem, we approximate its solution using the iterative gradient sign method [@iterativeFGSM]: $$\label{eq3} \begin{split} & r_0 = \overrightarrow{0} \\ & r_{N+1} = \mathit{Clip}_{r+v,\epsilon} \{ r_N - \alpha\, \mathit{sign}( \nabla_{r_N} J(r_N)) \}\\ \end{split}$$ Note that the error $J$ is back-propagated through the entire neural network and the MFCC computation to the perturbation vector $r$. We iterate until we reach the desired CER threshold $t$ for a particular data point $x_i$. The regularization constant $c$ is chosen through hyper-parameter search on a validation set to find the maximum success rate for a given magnitude of allowed perturbation. Experimental Details ==================== We demonstrate the application of our proposed attack algorithm on the pre-trained *Mozilla DeepSpeech* model [@mozilladeepspeechgit; @mozilladeepspeech]. We train our algorithm on the Mozilla Common Voice Dataset [@mozilladeepspeech], which contains 582 hours of audio across 400,000 recordings in English. We train on a randomly selected set *X* containing 5,000 audio files from the training set and evaluate our model on both the training set $X$ and the entire unseen validation set of the Mozilla Common Voice Dataset. We analyze the effect of the size of the set $X$ below. The length of our universal adversarial perturbation is fixed to 150,000 samples, which corresponds to around 9 seconds of audio at 16 kHz. The universal adversarial perturbations are trained using our proposed algorithm \[algorithm1\] with a learning rate $\alpha = 5$ and the regularization parameter $c$ set to 0.5.
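Putting Algorithm \[algorithm1\] together with the relaxation in Eq. \[eq2\] and the iterative update in Eq. \[eq3\], a compressed sketch of the training loop is given below. It is illustrative only and reuses the `cer` helper sketched earlier; `transcribe` and `ctc_loss_and_grad` are hypothetical callables standing in for the victim model's decoder and for back-propagation of the CTC loss through the network and the MFCC computation, and do not correspond to a specific DeepSpeech API. The default values of $\epsilon$, $\alpha$ and $c$ mirror those stated in the text, while the iteration counts are arbitrary placeholders.

```python
import numpy as np

def pad_or_crop(v: np.ndarray, n: int) -> np.ndarray:
    """Crop or zero-pad the perturbation at the end to match a waveform of length n."""
    return v[:n] if len(v) >= n else np.pad(v, (0, n - len(v)))

def universal_perturbation(X, transcribe, ctc_loss_and_grad,
                           eps=200.0, alpha=5.0, c=0.5, t=0.5,
                           perturb_len=150000, inner_steps=50, epochs=5):
    """Sketch of Algorithm 1.

    transcribe(x) -> str                               (hypothetical victim decoder)
    ctc_loss_and_grad(x, target) -> (loss, dloss/dx)   (hypothetical helper)
    """
    v = np.zeros(perturb_len, dtype=np.float32)
    for _ in range(epochs):
        for x in X:
            target = transcribe(x)                      # original transcription C(x_i)
            vx = pad_or_crop(v, len(x))
            if cer(target, transcribe(x + vx)) > t:     # already mis-transcribed: skip
                continue
            r = np.zeros_like(x)
            for _ in range(inner_steps):                # iterative gradient-sign update
                _, grad = ctc_loss_and_grad(x + vx + r, target)
                # gradient of J(r) = c*||r||^2 - CTCLoss with respect to r
                r = r - alpha * np.sign(2.0 * c * r - grad)
                r = np.clip(r, -eps - vx, eps - vx)     # keep ||v + r||_inf <= eps
                if cer(target, transcribe(x + vx + r)) > t:
                    break
            # add Delta v_i to the universal perturbation and clip to the eps-ball
            v = np.clip(v + pad_or_crop(r, perturb_len), -eps, eps)
    return v
```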
**Evaluation:** We utilize two metrics to evaluate our universal adversarial perturbations: *i) Mean CER*, the Character Error Rate averaged over the entire test set, and *ii) Success Rate*. We report success on a particular waveform if the *CER* between the original and adversarial transcription (Section \[NED\]) is greater than 0.5. The amount of perturbation is quantified using the mean relative distortion $dB_x(v)$ over the test set (refer to Section \[distmetric\]). Results ======= Table \[table:table2\] shows the results of our algorithm for different allowed magnitudes of the universal adversarial perturbation on both the training set $X$ and the unseen test set. Both the success rate and the Mean Character Error Rate (CER) increase with the maximum allowed perturbation. We achieve a success rate of 89.06% on the validation set, with the mean distortion metric $dB_x(v) \approx -32 dB$. To interpret the results in context, $-32 dB$ is roughly the difference between ambient noise in a quiet room and a person talking [@dbnoise; @targetattacks]. We encourage the reader to listen to our adversarial samples and their corresponding transcriptions on our web page (link in the footnote of the first page). Figure \[fig:sizechart\] shows the success rate and mean edit distance as a function of the size of the training set $X$ for maximum allowed perturbation $ \|v\|_\infty=200$ (Mean $dB_x(v) = -36.01$). We observe that it is possible to train our proposed algorithm on very few examples and achieve reasonable success rates on unseen data. For example, training on just 1000 examples can achieve a success rate of 80.47% on the test set. ![Attack Success Rate on the test set vs. the number of audio files in the training set X[]{data-label="fig:sizechart"}](figures/sizeChartLatest.pdf){width="0.9\columnwidth"} Effectiveness of universal perturbations ---------------------------------------- In order to assess the vulnerability of the victim Speech Recognition System to our attack algorithm, we compare our universal perturbation with a random (uniform) perturbation having the same magnitude of distortion (same $\|v\|_\infty$) as our universal adversarial perturbation. Figure \[fig:baseline\] shows the plot of success rate vs. the magnitude of the perturbation for each of these perturbations. It can be seen that universal adversarial perturbations are able to achieve a high success rate with a very low magnitude of distortion as compared to a random noise perturbation. For example, for allowed perturbation $ \|v\|_\infty = 100$, our universal perturbation achieves a success rate of $65\%$, which is substantially higher than the success rate of random noise. This implies that for the same magnitude of distortion, distorting an audio waveform in a random direction is significantly less likely to cause mis-transcription as compared to distorting the waveform in the direction of the universal perturbation. Our results support the hypothesis discussed in [@universal], demonstrating that universal adversarial perturbations exploit geometric correlations in the decision boundaries of the victim model.
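For completeness, the two evaluation quantities (success rate and mean CER), together with the mean relative loudness, reduce to a short loop over the test set. Again this is an illustrative sketch built on the helpers above, with `transcribe` a hypothetical stand-in for the victim decoder.

```python
import numpy as np

def evaluate(test_set, v, transcribe, t=0.5):
    """Return (success rate, mean CER, mean dB_x(v)) for a universal perturbation v."""
    cers, dbs = [], []
    for x in test_set:
        vx = pad_or_crop(v, len(x))
        original = transcribe(x)
        adversarial = transcribe(x + vx)
        cers.append(cer(original, adversarial))
        dbs.append(relative_loudness(vx, x))
    cers = np.asarray(cers)
    return float(np.mean(cers > t)), float(cers.mean()), float(np.mean(dbs))
```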
![Success Rate vs $ \|v\|_\infty$ of universal and random perturbations.[]{data-label="fig:baseline"}](figures/baselineExcel.pdf){width="0.9\columnwidth"} Cross-model Transferability --------------------------- We perform a study on the transferability of adversarial samples to deceive ML models that have not been used for training the universal adversarial perturbation, i.e., their parameters and network structures are not revealed to the attacker. We train universal adversarial perturbations for Mozilla DeepSpeech and evaluate the extent to which they are valid for a different ASR architecture based on WaveNet [@wavenet]. For this study, we use a publicly available pre-trained model of WaveNet [@asrwavenet] and evaluate the transcriptions obtained using clean and adversarial audio for the same unseen validation dataset as used in our previous experiments. Our results in Table \[transfertable\] indicate that our attack is transferable to a significant extent for this particular setting. Specifically, when the mean $\mathit{dB}_x(v)=-29.82$, we are able to achieve a 63.28% success rate while attacking the WaveNet based ASR model. This result demonstrates the practicality of such adversarial perturbations, since they are able to generalize well across data points and architectures. Conclusion ========== In this work, we demonstrate the existence of audio-agnostic adversarial perturbations for speech recognition systems. We demonstrate that our audio-agnostic adversarial perturbation generalizes well across unseen data points and to some extent across unseen networks. Our proposed end-to-end approach can be used to further understand the vulnerabilities and blind spots of deep neural network based ASR systems and provide insights for building more robust neural networks. [^1]: Sound Examples: <https://universal-audio-perturbation.herokuapp.com>
[Strong coupling in the Kondo problem in the\ low-temperature region]{} **Yu.N. Ovchinnikov and A.M. Dyugaev** [L.D. Landau Institute for Theoretical Physics,\ 117940 Moscow, Russia]{} Abstract ======== The magnetic field dependence of the average spin of a localized electron coupled to conduction electrons with an antiferromagnetic exchange interaction is found for the ground state. In the magnetic field range $\mu H\sim 0.5 T_c$ ($T_c$ is the Kondo temperature) there is an inflection point, and in the strong magnetic field range $\mu H\gg T_c$, the correction to the average spin is proportional to $(T_c/\mu H)^2$. In zero magnetic field, the interaction with conduction electrons also leads to the splitting of doubly degenerate spin impurity states. Introduction ============ In the low-temperature and weak magnetic field region, even a weak interaction of magnetic impurities with a degenerate electron gas becomes strong$^{1-3}$. In this region, perturbation theory is violated. Two scenarios are possible in such a situation. First, an assumption can be made that in the low-temperature region, an increase in the magnetic field takes the system out of a strongly coupled state and into the region of applicability of perturbation theory. This nonobvious conjecture was used in Bethe’s ansatz method in the problem under consideration. As a result, in a strong magnetic field $\mu_eH\gg T_c$ ($T_c$ is the Kondo temperature), the correction to the mean spin impurity value has logarithmic behavior$^3$, $\langle S_z \rangle =\frac{1}{2} \Bigl ( 1-\frac{1}{2\ln (\mu_eH/T_c)} \Bigr )$. Such a dependence of the spin on the magnetic field is too slow and is inconsistent with the experimental data$^4$, which yield power-like behavior. The level of spin saturation in the magnetic field in Ref. 4 (Fig. 8) can be reached according to the expression given above only at the magnetic field value $H\approx 50$ T, instead of the experimental value of 6 T. The second scenario is connected with the assumption that an increase only in the magnetic field value does not move the system from a strongly coupled state to a weakly perturbed state. The second conjecture is supported by the fact that the correction to the wave function of a system consisting of a magnetic impurity plus a degenerate Fermi gas, in some state with low energy, contains corrections of two types obtained with the help of perturbation theory. The norm of one of them decreases in an increasing magnetic field, whereas the norm of the other is divergent in the limit $T\to 0$ for a finite magnetic field. Consideration of the norm of states in the problem involved is very useful, because it contains direct information about the average value of the magnetic spin. Below we consider in detail the second conjecture and confirm it. In the low-temperature region $(T \ll T_c)$, the average spin of magnetic impurities is found for an arbitrary value of the external magnetic field. States for both signs of the interaction constant are investigated. The strongly coupled state arises in both cases, but the magnetic field dependence of the average value of the spin is substantially different. The definition of the Kondo temperature $T_c$ is also slightly different for different signs of the interaction constant. The model ========= We will suppose that the interaction of the magnetic impurity with the Fermi sea of electrons has an exchange nature.
Then the Hamiltonian $\hat H$ of the system under consideration can be taken in the form $$\hat H=\hat H_0+\int d^3r_1d^3r_2V(r_1-r_2) \chi^{+}_{\alpha}(r_1)\varphi^{+}_{\beta}(r_2) \chi_{\beta}(r_2)\varphi_{\alpha}(r_1)$$ $$-\frac{\mu H}{2} \int \Bigl ( \varphi^{+}_{\uparrow}(r_1) \varphi_{\uparrow}(r_1)- \varphi^{+}_{\downarrow}(r_1)\varphi_{\downarrow}(r_1) \Bigr ) d^3r_1.$$ In Eq. (1), operators $\varphi^{+}_{\beta},~\chi^{+}_{\alpha}$ are creation operators of an electron in a localized state on a magnetic impurity and in the continuum spectrum respectively. For simplicity, we consider the case with one unpaired electron in the localized state (spin 1/2). The first term in Eq. (1) describes the degenerate electron gas in some external field that leads to creation of one localized state. The spin interaction of electrons in the continuum spectrum with magnetic field leads only to small renormalization of the magnetic moment of a localized electron, and a small shift in the kinetic energy of electrons with spin up and down in such a way that they have the same value of chemical potential (no gap for transfer of electron with spin flip over the Fermi level). For this reason we omit this term in Hamiltonian (1). The last term gives the interaction energy of a localized electron with the magnetic field. Consider now the limiting case as $T\to 0$ and $H$ finite. We search for the lowest-energy eigenfunction $|\psi\rangle$ of Hamiltonian (1) in Fock space in the form $$|\psi\rangle =|10;11;11;..\rangle+ \sum C^{2L-1}_{2K}|01;\stackrel{2K}{10};\stackrel{2L-1}{10}\rangle+ \sum C^{2L-1}_{2K-1}|10;\stackrel{2K-1}{01};\stackrel{2L-1}{10};\rangle$$ $$+\sum C^{2L}_{2K}|10;\stackrel{2K}{10};\stackrel{2L}{01}\rangle + \sum_{K_1<K}C^{2L_1~2L-1}_{2K_1~2K} \hat N|01;\stackrel{2K_1}{10};\stackrel{2K}{10}; \stackrel{2L_1}{01};\stackrel{2L-1}{10}\rangle$$ $$+\sum C^{2L_1;2L-1}_{2K_1-1;2K} \hat N|10;\stackrel{2K_1-1}{01};\stackrel{2K}{10}; \stackrel{2L_1}{01};\stackrel{2L-1}{10}\rangle+ \sum_{L_1<L}C^{2L_1-1;2L-1}_{2K_1;2K-1} \hat N|01;\stackrel{2K_1}{10};\stackrel{2K-1}{01}; \stackrel{2L_1-1}{10};\stackrel{2L-1}{10}\rangle$$ $$+\sum_{K_1<K;L_1<L} C^{2L_1-1;2L-1}_{2K_1-1;2K-1} \hat N|10;\stackrel{2K_1-1}{01};\stackrel{2K-1}{01}; \stackrel{2L_1-1}{10};\stackrel{2L-1}{10} \rangle$$ $$+\sum_{K_1<K;L_1<L} C^{2L_1;2L}_{2K_1;2K} \hat N|10;\stackrel{2K_1}{10};\stackrel{2K}{10}; \stackrel{2L_1}{01};\stackrel{2L}{01}\rangle+...$$ In Eq. (2), all single-particle states (solutions of Eq. (1) for one particle) are ordered and numbered. Indexes $K,L$ label states under and over the Fermi surface. Each box has two places. The first one means a state with spin up, and the second with spin down. As an example, the state $|\stackrel{2K}{10};\stackrel{2L}{01}\rangle$ means that the state $2K$ (spin down) under the Fermi surface is empty and the state $2L$ (spin down) over the Fermi surface is filled. The first cell is always reserved for an electron in a localized state. The first term in Eq. (2) gives the ground state of Hamiltonian (1) without interaction $(V(r)=0)$. The number of upper (or lower) indexes in $C^{\cdot\cdot\cdot}_{\cdot\cdot\cdot}$ gives the number of excited pairs. For $P$ excited pairs, there are $2P+1$ different symbols $C^{\cdot\cdot\cdot}_{\cdot\cdot\cdot}$. Operator $\hat N$ is the ordering operator, and each rearrangement of two neighboring filled states gives a factor (-). In Eq. 
(2), in each box below the Fermi surface only one place can be empty, and in each box above the Fermi surface only one place can be filled. The equation for the wave function $|\psi\rangle$ is $$|\hat H\psi\rangle =E|\psi\rangle,$$ where $E$ is the energy of the state. Inserting expression (2) for the wave function $|\psi\rangle$ into Eq. (3), we obtain a set of linear equations for the quantities $C^{\cdot\cdot\cdot}_{\cdot\cdot\cdot}$. Due to the structure of Hamiltonian (1), each quantity $C^{\cdot\cdot\cdot}_{\cdot\cdot\cdot}$ of order $P$ is coupled only with quantities $C^{\cdot\cdot\cdot}_{\cdot\cdot\cdot}$ of order $P,P\pm 1$. From the first equation of this system, we obtain the energy of the state, $$E=E_0-\mu H/2-\delta E,$$ $$\delta E=\sum \Bigl ( I^{2L-1^*}_{2K-1}C^{2L-1}_{2K-1}- I^{2L-1^*}_{2K}C^{2L-1}_{2K} \Bigr ),$$ where $E_0$ is the energy of the ground state without interaction. For convenience, we leave the magnetic energy of the localized state out of the correction term $\delta E$. The quantities $I^{\cdot}_{\cdot}$ in Eq. (4) are the transition matrix elements. As an example, we have $$I^{2L-1}_{2K}= \int d^3r_1d^3r_2\chi^{*}_{\uparrow}(r_1) \varphi^{*}_{\downarrow}(r_2)\varphi_{\uparrow}(r_1) \chi_{\downarrow}(r_2)V(r_1-r_2).$$ The Hamiltonian (1) possesses deep symmetry properties. To see some of these, we will keep indexes on $I$ that indicate energy and spin in the initial and final states. The next three equations for the quantities $C^{\cdot}_{\cdot}$ are $$-I^{2L-1}_{2K}+ \sum C^{2L-1}_{2K_1}I^{2K_1}_{2K}- \sum C^{2L-1}_{2K_1-1}I^{2K_1-1}_{2K}- \sum C^{2L_1}_{2K}I^{2L-1}_{2L_1}$$ $$+(\mu H+\varepsilon_L-\varepsilon_K-\delta E)C^{2L-1}_{2K}+ \sum_{K_1<K}C^{2L_1;2L-1}_{2K_1;2K}I^{2K_1}_{2L_1}$$ $$-\sum_{K<K_1}C^{2L_1;2L-1}_{2K;2K_1}I^{2K_1}_{2L_1}- \sum C^{2L_1;2L-1}_{2K_1-1;2K}I^{2K_1-1}_{2L_1}=0,$$ $$I^{2L-1}_{2K-1}- \sum I^{2K_1}_{2K-1}C^{2L-1}_{2K_1}+ \sum C^{2L-1}_{2K_1-1}I^{2K_1-1}_{2K-1}- \sum I^{2L-1}_{2L_1-1}C^{2L_1-1}_{2K-1}$$ $$+(\varepsilon_L-\varepsilon_K-\delta E)C^{2L-1}_{2K-1}+ \sum_{L_1<L}C^{2L_1-1;2L-1}_{2K_1;2K-1}I^{2K_1}_{2L_1-1}$$ $$-\sum_{L<L_1}C^{2L-1;2L_1-1}_{2K_1;2K-1}I^{2K_1}_{2L_1-1}+ \sum_{K<K_1;L_1<L} C^{2L_1-1;2L-1}_{2K-1;2K_1-1}I^{2K_1-1}_{2L_1-1}- \sum_{K_1<K;L_1<L}C^{2L_1-1;2L-1}_{2K_1-1;2K-1}I^{2K_1-1}_{2L_1-1}$$ $$-\sum_{L<L_1;K<K_1}C^{2L-1;2L_1-1}_{2K-1;2K_1-1}I^{2K_1-1}_{2L_1-1}+ \sum_{K_1<K;L<L_1} C^{2L-1;2L_1-1}_{2K_1-1;2K-1}I^{2K_1-1}_{2L_1-1}=0,$$ $$-\sum I^{2L}_{2L_1-1}C^{2L_1-1}_{2K}+ (\varepsilon_L-\varepsilon_K-\delta E)C^{2L}_{2K}+ \sum_{K<K_1}C^{2L;2L_1-1}_{2K;2K_1}I^{2K_1}_{2L_1-1}$$ $$-\sum_{K_1<K}C^{2L;2L_1-1}_{2K_1;2K}I^{2K_1}_{2L_1-1}+ \sum C^{2L;2L_1-1}_{2K_1-1;2K}I^{2K_1-1}_{2L_1-1}=0.$$ In Eq. (6), the quantities $\varepsilon_{L,K}$ are the energies of single-particle states. As mentioned above, index $L$ means a state above the Fermi level and index $K$ means a state below the Fermi level. The equations for $C^{\cdot\cdot}_{\cdot\cdot}$ are given in Appendix A. Since the equations for $C^{\cdot\cdot\cdot}_{\cdot\cdot\cdot}$ have a special structure (a quantity $C^{\cdot\cdot\cdot}_{\cdot\cdot\cdot}$ of order $P$ is coupled only with quantities $C^{\cdot\cdot\cdot}_{\cdot\cdot\cdot}$ of order $P,P\pm 1$), it is possible to leave quantities $C^{\cdot\cdot\cdot}_{\cdot\cdot\cdot}$ of order $P\geq 2$ out of Eqs. (6). As a result, we obtain three equations for the quantities $C^{2L-1}_{2K},C^{2L-1}_{2K-1}$ and $C^{2L}_{2K}$.
They have the following form (from Appendix A): $$-I^{2L-1}_{2K}+\sum C^{2L-1}_{2K_1}I^{2K_1}_{2K}- \sum C^{2L-1}_{2K_1-1}I^{2K_1-1}_{2K}- \sum C^{2L_1}_{2K}I^{2L-1}_{2L_1}$$ $$+\Bigl ( \mu H+\varepsilon_L-\varepsilon_K-\delta E-\Sigma^{(1)}_{(K,L)} \Bigr ) C^{2L-1}_{2K}=A_1\Bigl ( C^{2L-1}_{2K};C^{2L-1}_{2K-1};C^{2L}_{2K} \Bigr ),$$ $$I^{2L-1}_{2K-1}-\sum C^{2L-1}_{2K_1}I^{2K_1}_{2K-1}+ \sum C^{2L-1}_{2K_1-1}I^{2K_1-1}_{2K-1}- \sum C^{2L_1-1}_{2K-1}I^{2L-1}_{2L_1-1}$$ $$+\Bigl ( \varepsilon_L-\varepsilon_K-\delta E-\Sigma_{(K,L)} \Bigr ) C^{2L-1}_{2K-1}=A_2\Bigl ( C^{2L-1}_{2K};C^{2L-1}_{2K-1};C^{2L}_{2K} \Bigr ),$$ $$-\sum I^{2L}_{2L_1-1}C^{2L_1-1}_{2K}+ \Bigl ( \varepsilon_L-\varepsilon_K-\delta E-\Sigma_{(K,L)}\Bigr ) C^{2L}_{2K}=A_3\Bigl ( C^{2L-1}_{2K};C^{2L-1}_{2K-1};C^{2L}_{2K} \Bigr ).$$ The linear operators $A_{1,2,3}$ do not contain terms proportional to the quantities $C^{2L-1}_{2K},C^{2L-1}_{2K-1},C^{2L}_{2K}$ without integral over one of variable $K,L$ with some function of $K,L$. These terms form the $\Sigma^{(1)}_{(K,L)},\Sigma_{(K,L)}$ terms in Eq. (7). All off-diagonal elements of such a form are equal to zero. The linear operators $A_{1,2,3}$ also do not contain terms proportional to the convolution of quantities $C^{\cdot}_{\cdot}$ with $I^{\cdot}_{\cdot}$ over one of variable $K,L$ without of denominator with the same variable. In Appendix B, we give the expressions for quantities $\Sigma^{(1)}_{(K,L)},~\Sigma_{(K,L)}$ in the fourth order of perturbation theory and quantities $C^{2L-1}_{2K},~C^{2L-1}_{2K-1},~C^{2L}_{2K}$ in the third order. It is easy to check that in the fourth order of perturbation theory, $$-\delta E-\Sigma_{(K,L)} \Bigl |_{\mbox~ {\rm for}~ \varepsilon_K= \varepsilon_L=\varepsilon_F} \Bigr. =0.$$ This equality holds in all the orders of perturbation theory. Below, we put $$-\delta E-\Sigma_{(K,L)} \Bigl |_{\varepsilon_K= \varepsilon_L=\varepsilon_F} =\Delta.$$ In Eq. (9), $\Delta \equiv\Delta (H)$ is some function of the magnetic field that must be determined from self-consistency. This equation is given below. Very important properties follow from the normalisation of states defined by Eqs. (2) and (7). To simplify the investigation of Eqs. (7), we give also the expression for operators $A_{1,2,3}$ in the lowest order of perturbation theory in Appendix B. All statements made above are independent of the exact form of spectrum $\varepsilon_{K_1},\varepsilon_L$ and potential $V(r)$. Wave function of the ground state ================================= The average electron spin $\langle S_z\rangle$ in a bound state at zero temperature can be found by differentiating the energy $\delta E$ with respect to $\mu H$ $$\langle S_z\rangle =\frac{1}{2}-\frac{\partial\delta E} {\partial\mu H}.$$ In accordance with quantum mechanical rules, the quantity $\langle S_z\rangle$ in the ground state is also given by an expression containing only norms of the states in expansion (2): $$\langle S_z\rangle= \frac{1}{2}\Bigl \{ \Bigr. 1+ \Bigl |C^{2L-1}_{2K-1}\Bigr |^2+ \Bigl |C^{2L}_{2K}\Bigr |^2- \Bigl |C^{2L-1}_{2K}\Bigr |^2+ \Bigl |C^{2L_1;2L-1}_{2K_1-1;2K}\Bigr |^2+ \Bigl |C^{2L_1-1;2L-1}_{2K_1-1;2K-1}\Bigr |^2$$ $$+\Bigl |C^{2L_1;2L}_{2K_1;2K}\Bigr |^2- \Bigl |C^{2L_1;2L-1}_{2K_1;2K}\Bigr |^2- \Bigl |C^{2L_1-1;2L-1}_{2K_1;2K-1}\Bigr |^2+... \Bigl. \Bigr \}~$$ $$\times\Bigl \{ \Bigr. 
1+ \Bigl |C^{2L-1}_{2K-1}\Bigr |^2+ \Bigl |C^{2L}_{2K}\Bigr |^2+ \Bigl |C^{2L-1}_{2K}\Bigr |^2+ \Bigl |C^{2L_1;2L-1}_{2K_1-1;2K}\Bigr |^2+ \Bigl |C^{2L_1-1;2L-1}_{2K_1-1;2K-1}\Bigr |^2$$ $$+\Bigl |C^{2L_1;2L}_{2K_1;2K}\Bigr |^2+ \Bigl |C^{2L_1;2L-1}_{2K_1;2K}\Bigr |^2+ \Bigl |C^{2L_1-1;2L-1}_{2K_1;2K-1}\Bigr |^2+... \Bigl. \Bigr \}^{-1}~.$$ Below we use both Eqs. (10) and (11). To solve Eqs. (7) and (9), we consider $\Delta$ as a parameter. Then the right-hand side of Eq. (7) can be taken into account in perturbation theory. In the leading approximation we obtain $$-I^{2L-1}_{2K}+\sum C^{2L-1}_{2K_1}I^{2K_1}_{2K}- \sum C^{2L-1}_{2K_1-1}I^{2K_1-1}_{2K}- \sum C^{2L_1}_{2K}I^{2L-1}_{2L_1}$$ $$+(\mu H+\varepsilon_L-\varepsilon_K+\Delta)C^{2L-1}_{2K}=0~,$$ $$I^{2L-1}_{2K-1}-\sum C^{2L-1}_{2K_1}I^{2K_1}_{2K-1}+ \sum C^{2L-1}_{2K_1-1}I^{2K_1-1}_{2K-1}- \sum C^{2L_1-1}_{2K-1}I^{2L-1}_{2L_1-1}$$ $$+(\varepsilon_L-\varepsilon_K+\Delta)C^{2L-1}_{2K-1}=0~,$$ $$-\sum I^{2L}_{2L_1-1}C^{2L_1-1}_{2K}+ (\varepsilon_L-\varepsilon_K+\Delta )C^{2L}_{2K}=0~.$$ Below we make the usual assumptions about the energy-independent value of the density of states near the Fermi surface, and that the characteristic energy in transition matrix elements $I^{\cdot}_{\cdot}$ is also the Fermi energy $\varepsilon_F$. As a result, we can put $$\sum_KI^{\cdot}_{2K}() \rightarrow g\int^{\varepsilon_F}_{0}dx(), \quad \sum_LI^{2L}_{\cdot}\rightarrow g\int^{A\varepsilon_F}_0dy(),$$ $$\varepsilon_L-\varepsilon_F=y; \qquad \varepsilon_F-\varepsilon_K=x~.$$ In Eq. (13), $g$ is the dimensionless coupling constant. The potential $V(r)$ in Hamiltonian (1) is in natural units, hence the smallness of the coupling constant $g$ is connected only to the small radius of bound state. Due to the energy independence of the transition matrix elements $I^{\cdot}_{\cdot}$, Eqs. (12) can be substantially simplified. To do this, we define new quantities that are convolutions of functions $C^{\cdot}_{\cdot}$ with overlap integral $I^{\cdot}_{\cdot}$ over only one variable, $K$ or $L$, that is $$Z_L=\sum_{K_1}I^{2K_1}_{2K}C^{2L-1}_{2K_1}, \qquad Z_K=\sum_{L_1}I^{2L}_{2L_1-1}C^{2L_1-1}_{2K}~,$$ $$Y_L=\sum_{K_1}I^{2K_1-1}_{2K}C^{2L-1}_{2K_1-1}~, \qquad Y_K=\sum_{L_1}I^{2L-1}_{2L_1-1}C^{2L_1-1}_{2K-1}~,$$ $$X_L=\sum_{L_1}I^{2L}_{2L_1-1}C^{2L_1}_{2K}, \qquad X_K=\sum_{L_1}I^{2L-1}_{2L_1}C^{2L_1}_{2K}~.$$ Inserting Eqs. (14) into Eqs. (12), we obtain $$C^{2L-1}_{2K}=\frac{1}{\mu H+y+x+\Delta} \Bigl \{ I-Z_L+Y_L+X_K \Bigr \}~,$$ $$C^{2L-1}_{2K-1}=\frac{1}{y+x+\Delta} \Bigl \{-I+Z_L-Y_L+Y_K \Bigr \}~,$$ $$C^{2L}_{2K}=\frac{1}{y+x+\Delta} Z_K~,$$ where $I$ is the value of the transition matrix element $I^{\cdot}_{\cdot}$ for states near the Fermi surface. Now from Eqs. (14) and (15) we can obtain a complete set of equations for the quantities $Z_{K,L};Y_{K,L};X_{K,L}$ only. In addition, the quantities $X_{K,L}$ are very simply related to $Z_{K,L};Y_{K,L}$. 
Eliminating them, we obtain a set equations for just the quantities $Z_{K,L};Y_{K,L}$: $$Z_L \Bigl ( 1+g\ln \frac{\varepsilon_F}{\mu H+y+\Delta} \Bigr ) -Y_Lg\ln \frac{\varepsilon_F}{\mu H+y+\Delta}=$$ $$Ig\ln\frac{\varepsilon_F}{\mu H+y+\Delta}+g^2 \int\limits^{\varepsilon_F}_0 \frac{dxZ_K\ln\frac{A\varepsilon_F}{x+\Delta}} {\mu H+y+x+\Delta}~,$$ $$Z_K \Bigl ( 1-g^2\ln \frac{A\varepsilon_F}{x+\Delta} \ln\frac{A\varepsilon_F}{\mu H+x+\Delta} \Bigr )=$$ $$Ig\ln\frac{A\varepsilon_F}{\mu H+x+\Delta}-g \int\limits^{A\varepsilon_F}_0 \frac{dy(Z_L-Y_L)} {\mu H+y+x+\Delta}~,$$ $$Y_L \Bigl ( 1+g\ln \frac{\varepsilon_F}{y+\Delta} \Bigr )- Z_Lg\ln \frac{\varepsilon_F}{y+\Delta}= -Ig\ln\frac{\varepsilon_F}{y+\Delta}+g \int\limits^{\varepsilon_F}_0 \frac{dxY_K} {y+x+\Delta}~,$$ $$Y_K \Bigl ( 1-g\ln \frac{A\varepsilon_F}{x+\Delta} \Bigr )= -Ig\ln\frac{A\varepsilon_F}{x+\Delta}+g \int\limits^{A\varepsilon_F}_0 \frac{dy(Z_L-Y_L)} {y+x+\Delta}~.$$ Equations (16) are valid for both signs of the interaction constant $g$. But its solutions are substantially different for $g<0$ and $g>0$. Consider first the case $g<0$ (attractive interaction in the Kondo problem). In such a case, the quantities $Z_L,~Y_L$ are large in comparison with $Z_K$ and $Y_K$. To obtain this, we introduce a formal definition of “Kondo” temperature $T_c$, $$|g|\ln\frac{\varepsilon_F}{T_c}=1/2.$$ Now we also put $$T_L(y)=Z_L-Y_L.$$ Eliminating terms $Z_K,~Y_K$ from (16), we obtain one equation the quantity $T_L$: $$T_{L}(y)=\frac{1} {1+g\ln\frac{\varepsilon_F}{y+\Delta}+g\ln\frac{\varepsilon_F} {\mu H+y+\Delta}}\cdot \Biggl \{ \Biggr. Ig\Bigl (\ln\frac{\varepsilon_F}{y+\Delta}+ \ln\frac{\varepsilon_F}{\mu H+y+\Delta}\Bigr )$$ $$+\frac{Ig}{2}\int\limits^{\varepsilon_F}_{0}dx \Bigl ( \frac{1}{\mu H+y+x+\Delta}+ \frac{1}{y+x+\Delta} \Bigr ) \Biggl [ \Biggr. \frac{g^2\ln\frac{A\varepsilon_F}{x+\Delta} \ln\frac{A\varepsilon_F}{\mu H+x+\Delta}} {1-g^2\ln\frac{A\varepsilon_F}{x+\Delta} \ln\frac{A\varepsilon_F}{\mu H+x+\Delta}}$$ $$+\Biggl. \frac{g\ln\frac{A\varepsilon_F}{x+\Delta}} {1-g\ln\frac{A\varepsilon_F}{x+\Delta}} \Biggr ]- \frac{g^2}{2}\int\limits^{\varepsilon_F}_0dx \Bigl ( \frac{1}{\mu H+y+x+\Delta}+ \frac{1}{y+x+\Delta} \Bigr )$$ $$\times \Biggl [ \Biggr. \frac{g\ln\frac{A\varepsilon_F}{x+\Delta}} {1-g^2\ln\frac{A\varepsilon_F}{x+\Delta} \ln\frac{A\varepsilon_F}{\mu H+x+\Delta}}$$ $$\int\limits^{A\varepsilon_F}_0 \frac{dy_1T_L(y_1)}{\mu H+y_1+x+\Delta}+ \frac{1}{1-g\ln\frac{A\varepsilon_F}{x+\Delta}} \int\limits^{A\varepsilon_F}_0 \frac{dy_1T_L(y_1)}{y_1+x+\Delta} \Biggl. \Biggr ] \Biggl. \Biggr \}$$ It can be shown that the last term in Eq. (19) can be omitted, because it is small in the parameter $(g|\ln (1/|g|)$. We then obtain from Eqs. (17) and (19) $$T_{L}(y)= \frac{1}{|g|\ln\Bigl ( \frac{(y+\Delta)(\mu H+y+\Delta)}{T^2_c}\Bigr )} \Bigl \{ -I-I\int\limits^{1/2}_0dt\Bigl (\frac{t^2}{1-t^2}-\frac{t}{1-t} \Bigr ) \Bigr \}.$$ We finally obtain $$T_L(y)= \frac{-I\beta} {|g|\ln \Bigl ( \frac{(y+\Delta)(\mu H+y+\Delta)}{T_c^2} \Bigr )}, \qquad \beta=\frac{1}{2}\ln 3+\ln(3/2).$$ Inserting Eqs. (18) and (21) into Eq. (15), we obtain expressions for coefficients $C^{2L-1}_{2K},~C^{2L-1}_{2K-1}$: $$C^{2L-1}_{2K}=- \frac{T_L(y)}{\mu H+y+x+\Delta}~, \qquad C^{2L-1}_{2K-1}= \frac{T_L(y)}{y+x+\Delta}.$$ Now we can determine the value of $\Delta$. Equations (10) and (11) should give the same value for average spin $\langle S_z\rangle$. This condition, with the help of Eqs. 
(4) and (22), gives $$\beta\int\limits^{\infty}_0 \frac{dy}{(\mu H+y+\Delta)\ln^2 \Bigl ( \frac{(y+\Delta)(\mu H+y+\Delta)}{T^2_c} \Bigr )} \Biggr / \Biggl ( 1+\frac{\beta^2}{\ln \Bigl ( \frac{\Delta (\mu H+\Delta)}{T^2_c} \Bigr )} \Biggr )=$$ $$\frac{\partial\Delta}{\partial\mu H}\cdot \frac{1}{\ln\frac{\Delta (\mu H+\Delta)}{T^2_c}}+ \int\limits^{\infty}_0 \frac{dy}{(\mu H+y+\Delta)\ln^2 \Bigl (\frac{(y+\Delta)(\mu H+y+\Delta)}{T^2_c} \Bigr )}.$$ We seek a solution of Eq. (23) in the form $$\Delta (\mu H+\Delta)=T^2_c(1+\gamma), \qquad 0<\gamma\ll 1.$$ Terms proportional to $\gamma^{-1}$ cancel on the right-hand side of Eq. (23). This condition yields $$\frac{\partial\Delta}{\partial\mu H}+ \frac{T^2_c}{(\mu H+\Delta)(\mu H+2\Delta)}=0.$$ The solution of this equation is $$\Delta (\mu H+\Delta )=T^2_c,$$ $$\quad\qquad\qquad\qquad \Delta =-\frac{\mu H}{2}+ \Biggl ( \Bigl ( \frac{\mu H}{2} \Bigr )^2+T^2_c \Biggr )^{1/2}, \quad\qquad\qquad\qquad\qquad\qquad (26a)$$ and confirms our conjecture (24) about it. Of course, Eqs. (24) have two solutions for $\Delta$. One is given by Eq. (26a) (ground state), and the other is $$\quad\qquad\qquad\qquad \Delta = -\frac{\mu H}{2}-\Biggl ( \Bigl ( \frac{\mu H}{2} \Bigr )^2+T^2_c \Biggr )^{1/2}. \quad\qquad\qquad\qquad\qquad\qquad (26b)$$ Solution (26b) for $\Delta$ corresponds to the excited state. In the limit $\mu H\gg T_c$, this state transforms to a state with spin orientation along the magnetic field. The excited state is separated from the ground state by a “gap” $2\Biggl ( \Bigl ( \frac{\mu H}{2} \Bigr )^2+T^2_c \Biggr )^{1/2}$. The gap results in the independence of the position of the maximum of impurity heat capacity from the magnetic field in the range $\mu H\ll T_c$ (Schottky anomaly). Such a residual Schottky anomaly is always present in experiments$^5$. In the Sec. 5 we will show that renormalization of the term $\mu H$ in (7) leads to a change from $\mu H$ in Eq. (27) to $\mu\tilde H$ defined by Eq. (43). As a result, we obtain the mean spin $\langle S_z\rangle$ as an implicit function of the magnetic field $\mu H$. An attempt to obtain such an equation at nonzero temperature was made in Ref. 7. But the mean field approximation used there is incorrect for the problem considered. In Appendix D we show that the right-hand side of (7) leads to renormalization of the coefficients in Eq. 16, but does not alter the main result of the paper, Eqs. (27) and (43). Of course, renormalization changes Eq. (17) for the Kondo temperature. The quantity $\gamma$ can be found only from correction terms to Eqs. (20) and (22). Fortunately, we do not need these correction terms, because in the leading approximation, $\gamma$ also drops out of Eq. (11) for the spin value. With the help of Eqs. (11), (22), and (24), we obtain $$\langle S_z\rangle=\frac{\mu H}{2}\cdot \int\limits^{\infty}_0 \frac{dy}{(y+\Delta)(y+\mu H+\Delta)(\gamma +y(\mu H+2\Delta)/T^2_c)^2} \Biggr / 1/\gamma$$ $$=\frac{\mu H}{4 \Bigl ( T^2_c+(\mu H/2)^2\Bigr )^{1/2}}.$$ Equation (27) is in good agreement with the experimental data of Ref. 4. Ferromagnetic case ${\bf (g>0)}$ ================================ As mentioned above, Eqs. (16) are valid for both signs of the “interaction” constant $g$. In the case $g>0$, we can define the characteristic energy of the problem to be the Kondo temperature $T_c$ by the relation $$g\ln \frac{A\varepsilon_F}{T_c}=1.$$ For $g>0$, the quantities $Z_K,~Y_K,~X_K$ are large in comparison with $Z_L,~Y_L,~X_L$. We can eliminate $Z_L,~Y_L$ from Eqs. (16). 
As a result, we have $$Z_K(1-g^2\ln\frac{A\varepsilon_F}{x+\Delta} \ln\frac{A\varepsilon_F}{\mu H+x+\Delta}= Ig\ln\frac{A\varepsilon_F}{\mu H+x+\Delta}$$ $$-g\int\limits^{A\varepsilon_F}_0 \frac{dy}{\mu H+y+x+\Delta} \cdot \frac{1}{1+g\ln\frac{\varepsilon_F}{y+\Delta}+ g\ln\frac{\varepsilon_F}{\mu H+y+\Delta}} \cdot \Biggl [ \Biggr. Ig\ln \frac{\varepsilon^2_F}{(y+\Delta)(\mu H+y+\Delta)}$$ $$+g^2\int\limits^{\varepsilon_F}_0 \frac{dx_1Z_K(x_1)\ln\frac{A\varepsilon_F}{x_1+\Delta}} {\mu H+y+x_1+\Delta}-g\int\limits^{\varepsilon_F}_0 \frac{dx_1Y_K(x_1)}{y+x_1+\Delta} \Biggl. \Biggr ]~,$$ $$Y_K(1-g\ln\frac{A\varepsilon_F}{x+\Delta})= -Ig\ln\frac{A\varepsilon_F}{x+\Delta}$$ $$+g\int\limits^{A\varepsilon_F}_0 \frac{dy}{y+x+\Delta} \cdot \frac{1}{1+g\ln\frac{\varepsilon_F}{y+\Delta}+ \ln\frac{\varepsilon_F}{\mu H+y+\Delta}} \cdot \Biggl [ \Biggr. Ig\ln \frac{\varepsilon^2_F}{(y+\Delta)(\mu H+y+\Delta)}$$ $$+g^2\int\limits^{\varepsilon_F}_0 \frac{dx_1Z_K(x_1)\ln\frac{A\varepsilon_F}{x_1+\Delta}} {\mu H+y+x_1+\Delta}-g\int\limits^{\varepsilon_F}_0 \frac{dx_1Y_K(x_1)}{y+x_1+\Delta} \Biggl. \Biggr ]~.$$ In the range $x\ll \varepsilon_F$, Eqs. (29) yield the following values for $Y_K,~Z_K$: $$Y_K=-\frac{ID}{g\ln \bigl ( \frac{x+\Delta}{T_c} \bigr )}~, \qquad Z_K=\frac{ID}{g\ln \bigl ( \frac{(x+\Delta)(\mu H+x+\Delta)} {T^2_c} \bigr )}~,$$ where $D$ is a number of order unity. Inserting Eq. (30) into Eq. (15), we obtain $$C^{2L-1}_{2K}=\frac{1}{\mu H+y+x+\Delta}\cdot \frac{ID}{g\ln \bigl ( \frac{(x+\Delta)(\mu H+x+\Delta)} {T^2_c} \bigr )}~,$$ $$C^{2L-1}_{2K-1}=-\frac{1}{y+x+\Delta}\cdot \frac{ID}{g\ln \bigl ( \frac{x+\Delta} {T_c} \bigr )}~,$$ $$C^{2L}_{2K}=\frac{1}{y+x+\Delta}\cdot \frac{ID}{g\ln \bigl ( \frac{(x+\Delta)(\mu H+x+\Delta)} {T^2_c} \bigr )}~.$$ In the same way as in the case $g<0$, with the help of Eqs. (10), (11), and (31), we obtain $$\frac{\partial\Delta}{\partial\mu H} \Biggl [ \frac{1}{\ln\frac{\Delta}{T_c}}+ \frac{1}{\ln\frac{\Delta (\mu H+\Delta)}{T^2_c}} \Biggr ]=$$ $$-\Biggl [ 1-\frac{D^2} {1+D^2 \Bigl ( \frac{1}{\ln\Delta\bigl / T_c}+ \frac{1}{\ln \frac{\Delta (\mu H+\Delta)}{T^2_c}} \Bigr )} \Biggr ] \int\limits^{\infty}_0 \frac{dx}{(x+\Delta+\mu H)\ln^2 \Bigl ( \frac{(x+\Delta)(\mu H+x+\Delta)}{T^2_c} \Bigr )}.$$ The solution of this equation is $$\Delta\equiv T_c.$$ Equation (33) means that in the leading approximation, the spin value in the magnetic field is saturated, $$\langle S_z\rangle =\frac{1}{2}.$$ Correction terms to Eq. (34) come only from an energy range $\varepsilon$ of order $\varepsilon\sim\varepsilon_F\exp (-1/g^2)$. Note that a similar energy scale also arises in the problem considered by Nozieres and Dominicis$^6$. Our conjecture is that in temperature range $$T^2_c/\varepsilon_F~\ll T\ll T_c,$$ the leading correction to the average spin arises from the cutoff of integrals with respect to energy in expression (11) over an energy range of order $T$. 
If such an assumption is true, then the average spin in the magnetic field $\mu H \gg T$ is $$\langle S_z\rangle=\frac{1}{2}-\frac{T}{T_c} \int\limits^{\infty}_0 \frac{dx} {(x+1+\mu H/T_c)\ln^2\Bigl ( (1+x)(x+1+\mu H/T_c) \Bigr )}$$ $$=\frac{1}{2}-\frac{T}{4T_c} \int\limits^{\infty}_{\ln(1+\mu H/T_c)} \frac{dz} {\Bigl [ z+1/2\ln(1-(\mu H/T_c)e^{-z})\Bigr ]^2}.$$ In the limiting cases of weak $(\mu H \ll T_c)$ and strong $(\mu H \gg T_c)$ magnetic fields, the average spin is $$\langle S_z\rangle =\frac{1}{2} \Bigl ( 1-\frac{T}{\mu H} \Bigr ), \qquad T\ll \mu H\ll T_c,$$ $$\langle S_z\rangle =\frac{1}{2} \Bigl ( 1-\frac{T} {2T_c\ln \bigl ( \frac{\mu H}{T_c} \bigr )} \Bigr ), \qquad \mu H\gg T_c.$$ Self-energy terms ${\bf \Sigma^{(1)}_{(K,L)},~\Sigma_{(K,L)}}$ in perturbation theory ===================================================================================== As mentioned in Sec. 2, there are two self-energy terms in the problem under consideration, $\Sigma^{(1)}_{(K,L)}$ and $\Sigma_{(K,L)}$. In second-order perturbation theory, they coincide. They start to differ in third order in the coupling constant. In third-order perturbation theory, we obtain from Appendix A $$\Sigma^{(1)}_{(K,L)}-\Sigma_{(K,L)}= I^{2K_1}_{2L_1}I^{2L_1}_{2L_2}I^{2L_2}_{2K_1}$$ $$\times\Biggl ( \frac{1} {(\mu H+\varepsilon_L+\varepsilon_{L_1}-\varepsilon_K-\varepsilon_{K_1}) (\mu H+\varepsilon_L+\varepsilon_{L_2}-\varepsilon_K-\varepsilon_{K_1})}$$ $$-\frac{1} {(\varepsilon_L+\varepsilon_{L_1}-\varepsilon_K-\varepsilon_{K_1}) (\varepsilon_L+\varepsilon_{L_2}-\varepsilon_K-\varepsilon_{K_1})} \Biggr ).$$ A simple calculation of sums in Eq. (38) leads to $$\Bigl ( \Sigma^{(1)}_{(K,L)}-\Sigma_{(K,L)}\Bigr )_ {\varepsilon_L=\varepsilon_K=\varepsilon_F}= -\mu Hg^3\ln^2\Bigl (\frac{\varepsilon_F}{\varepsilon_c} \Bigr ),$$ where $\varepsilon_c$ is the cutoff energy. In Appendix C, we obtain the following term in expansion (39) for the self-energy: $$\Bigl (\Sigma^{(1)}_{(K,L)}-\Sigma_{(K,L)}\Bigr ) _{\varepsilon_K=\varepsilon_L=\varepsilon_F}= -\mu Hg^3\ln^2 \Bigl (\frac{\varepsilon_F}{\varepsilon_c} \Bigr ) + 2\mu Hg^4\ln^3 \Bigl (\frac{\varepsilon_F}{\varepsilon_c} \Bigr )-...$$ Comparison with the expression for $\delta E$ obtained in perturbation theory shows that $$\delta\Sigma= \Bigl ( \Sigma^{(1)}_{(K,L)}-\Sigma_{(K,L)}\Bigr )_ {\varepsilon_K=\varepsilon_L=\varepsilon_F}$$ $$=\mu Hg\ln \Bigl (\frac{\varepsilon_F}{\varepsilon_c}\Bigr ) \Biggl [ -\frac{\partial\delta E}{\partial\mu H} \Biggr ]$$ $$=-\frac{\mu H}{2} \Bigl ( -\frac{1}{2}+\langle S_z \rangle \Bigr )~.$$ To obtain Eq. (41) we used Eqs. (17) and (10), and an assumption that $\varepsilon_c\sim T_c$. Equation (41) means that some corrections should be made in the first of Eqs. (12). Specifically, $\mu H$ in the first of Eqs. (12) should be corrected by $\delta\Sigma$: $$\mu H \to \mu H-\delta\Sigma =\mu \tilde H$$ The main result of this correction is a decrease in the initial slope of the magnetic field dependence of the average spin value to 3/4 of its value. This phenomenon was probably found in the experimental Ref. 4 (Figs. 8 and 9). The average spin $\langle S_z\rangle$ is given by Eq. (27) with the substitution $$\mu H \to \mu\tilde H=\mu H - \frac{\mu H}{2} \Bigl ( \frac{1}{2}-\langle S_z\rangle \Bigr ).$$ This equation determines $\langle S_z\rangle$ as an implicit function of $\mu H$. From Eqs. (27) and (43), we find that $\langle S_z\rangle$ as a function of $\mu H$ has an inflection point at $\mu H/(2T_c)=0.2426$.
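As a purely numerical cross-check (not part of the derivation above), the implicit dependence of $\langle S_z\rangle$ on $\mu H$ defined by Eqs. (27) and (43) can be evaluated by fixed-point iteration, and the inflection point can then be located from a finite-difference second derivative. A short Python sketch of such a check, in units of $T_c$:

```python
import numpy as np

def spin(h, tol=1e-12, max_iter=200):
    """Solve S = h_eff / (4*sqrt(1 + (h_eff/2)^2)) with h_eff = h*(3/4 + S/2),
    i.e. Eqs. (27) and (43) in units of T_c (h = mu*H/T_c), by fixed-point iteration."""
    s = 0.0
    for _ in range(max_iter):
        h_eff = h * (0.75 + 0.5 * s)
        s_new = h_eff / (4.0 * np.sqrt(1.0 + (h_eff / 2.0) ** 2))
        if abs(s_new - s) < tol:
            return s_new
        s = s_new
    return s

h = np.linspace(1e-3, 2.0, 2001)
s = np.array([spin(x) for x in h])
d2 = np.gradient(np.gradient(s, h), h)               # crude estimate of S''(h)
cross = np.where((d2[:-1] > 0) & (d2[1:] <= 0))[0]   # sign change of S''
if cross.size:
    print(h[cross[0]] / 2.0)  # estimate of mu*H/(2*T_c) at the inflection point,
                              # to be compared with the value quoted in the text
```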
Such an inflection point was obtained in Ref. 4. Conclusion ========== Thus, we show that at zero temperature and finite magnetic field $\mu H \ll \varepsilon_F$, a singularity exists in the convolution of amplitudes $C^{2L-1}_{2K_1}$ and $C^{2L-1}_{2K_1-1}$ over energy $\varepsilon_{K_1}$ with amplitude $I^{2K_1}_{2K}$. As a result, in the high magnetic field region $\mu H\gg T_c$, the correction to the impurity spin value is proportional to $(T_c/\mu H)^2$ instead of $1/\ln (\mu H/T_c)$, as predicted in Refs. 1-3. We also find that the renormalization of the magnetic field discussed in Sec. 5 leads to an inflection point in the dependence of the impurity spin on the magnetic field. The initial slope is a function of $z$, which enters into the definition of the Kondo temperature (see Appendix D). Our consideration shows that the interaction of the spin of an impurity with an electron gas does not lead to the appearance of a localized state, as assumed in Refs. 8-10. The Kondo temperature $T_c$ is given by Eq. (D.7), where $z$ is the root of the equation $$f(z)=0.$$ We find here three terms in the expansion of $f$ in a Taylor series (Eq. (D.8)). This equation was also studied in Refs. 8 and 11. Our result for the first two terms in Eq. (44) coincides with the result of Ref. 11, because this is also the result of the parquet approximation. However, our consideration (Eq. (44)) is conceptually closer to Ref. 8. The difference even in the second term is probably related to the assumption of Ref. 8 that in the problem under consideration there is a localized state with spin 1/2. In fact, such a localized state does not exist. Without interaction there are two states associated with the impurity spin 1/2. In zero magnetic field, these two states are degenerate. The interaction removes such a degeneracy, and the splitting energy is $2T_c$. Of course, the interaction does not change the number of states; this requirement is satisfied in our consideration, but it is not fulfilled in Ref. 8. Note also that the driving term is missing in Refs. 8-10. Nevertheless, the average value of the impurity spin $\langle S_z\rangle$ as a function of the magnetic field found in Refs. 9 and 10 coincides with our result except for the effect of the renormalization of the magnetic field (Sec. 5) and the expression for the Kondo temperature. The authors thank Prof. P. Fulde and Prof. A.I. Larkin for helpful discussions. We thank Prof. P. Fulde for hospitality at the Max-Planck-Institute for Complex Systems (Dresden). The research of Yu.N.O. was supported by CRDF Grant RP1-194. The research of A.M.D. is supported by INTAS and the Russian Foundation for Basic Research (Grant 95-553).
Appendix ======== The wave function of a system consisting of one localized electron plus degenerate electron gas can be taken in the form $$|\psi\rangle =|10;11;11...\rangle + \sum C^{2L-1}_{2K}|01;\stackrel{2K}{10};\stackrel{2L-1}{10}\rangle$$ $$+\sum C^{2L-1}_{2K-1}|10;\stackrel{2K-1}{01};\stackrel{2L-1}{10}\rangle+ \sum C^{2L}_{2K}|10;\stackrel{2K}{10};\stackrel{2L}{01}\rangle$$ $$+\sum_{K_1<K} C^{2L_1;2L-1}_{2K_1;2K} \hat N|01;\stackrel{2K_1}{10};\stackrel{2K}{10}; \stackrel{2L_1}{01};\stackrel{2L-1}{10}\rangle+ \sum C^{2L_1;2L-1}_{2K_1-1;2K} \hat N|10;\stackrel{2K_1-1}{01};\stackrel{2K}{10}; \stackrel{2L_1}{01};\stackrel{2L-1}{10}\rangle$$ $$+\sum_{L_1<L} C^{2L_1-1;2L-1}_{2K_1;2K-1} \hat N|01;\stackrel{2K_1}{10};\stackrel{2K-1}{01}; \stackrel{2L_1-1}{10};\stackrel{2L-1}{10}\rangle$$ $$+\sum_{K_1<K;L_1<L} C^{2L_1-1;2L-1}_{2K_1-1;2K-1} \hat N|10;\stackrel{2K_1-1}{01};\stackrel{2K-1}{01}; \stackrel{2L_1-1}{10};\stackrel{2L-1}{10}\rangle$$ $$+\sum_{K_1<K;L_1<L} C^{2L_1;2L}_{2K_1;2K} |10;\stackrel{2K_1}{10};\stackrel{2K}{10}; \stackrel{2L_1}{01};\stackrel{2L}{01}\rangle$$ $$+\sum_{K_2<K_1<K;L_2<L_1} C^{2L_1;2L_1;2L-1}_{2K_2;2K_1;2K} \hat N|01;\stackrel{2K_2}{10};\stackrel{2K_1}{10}; \stackrel{2K}{10};\stackrel{2L_2}{01}\stackrel{2L_1}{01}; \stackrel{2L-1}{10}\rangle$$ $$+\sum_{K_1<K;L_2<L_1} C^{2L_2;2L_1;2L-1}_{2K_2-1;2K_1;2K} \hat N|10;\stackrel{2K_2-1}{01};\stackrel{2K_1}{10}; \stackrel{2K}{10};\stackrel{2L_2}{01}\stackrel{2L_1}{01}; \stackrel{2L-1}{10}\rangle$$ $$+\sum_{K_2<K;L_1<L} C^{2L_2;2L_1-1;2L-1}_{2K_2-1;2K_1;2K-1} \hat N|10;\stackrel{2K_2-1}{01};\stackrel{2K_1}{10}; \stackrel{2K-1}{01};\stackrel{2L_2}{01}\stackrel{2L_1-1}{10}; \stackrel{2L-1}{10}\rangle$$ $$+\sum_{K_2<K_1;L_1<L} C^{2L_2;2L_1-1;2L-1}_{2K_2;2K_1;2K-1} \hat N|01;\stackrel{2K_2}{10};\stackrel{2K_1}{10}; \stackrel{2K-1}{01};\stackrel{2L_2}{01}\stackrel{2L_1-1}{10}; \stackrel{2L-1}{10}\rangle$$ $$+\sum_{K_1<K;L_2<L_1<L} C^{2L_2-1;2L_1-1;2L-1}_{2K_2;2K_1-1;2K-1} \hat N|01;\stackrel{2K_2}{10};\stackrel{2K_1-1}{01}; \stackrel{2K-1}{01};\stackrel{2L_2-1}{10}\stackrel{2L_1-1}{10}; \stackrel{2L-1}{10}\rangle$$ $$+\sum_{K_2<K_1<K;L_2<L_1<L} C^{2L_2-1;2L_1-1;2L-1}_{2K_2-1;2K_1-1;2K-1} \hat N|10;\stackrel{2K_2-1}{01};\stackrel{2K_1-1}{01}; \stackrel{2K-1}{01};\stackrel{2L_2-1}{10}\stackrel{2L_1-1}{10}; \stackrel{2L-1}{10}\rangle$$ $$+\sum_{K_2<K_1<K;L_2<L_1<L} C^{2L_2;2L_1;2L}_{2K_2;2K_1;2K} \hat N|10;\stackrel{2K_2-1}{10};\stackrel{2K_1-1}{10}; \stackrel{2K-1}{10};\stackrel{2L_2}{01}\stackrel{2L_1}{01}; \stackrel{2L}{01}\rangle+ ...$$ The notation here is the same as in the text. As we note above, there are $(2P+1)$ different symbols $C^{\cdot\cdot\cdot}_{\cdot\cdot\cdot}$ of order $P$. Inserting Eq. (A.1) into Eq. (3) for the wave function, some simple but tedions calculations yield a set of equations for the coefficients $C^{\cdot\cdot}_{\cdot\cdot}$. 
The five equations for the $C^{\cdot\cdot}_{\cdot\cdot}$ are $$C^{2L-1}_{2K}I^{2L_1}_{2K_1}-C^{2L-1}_{2K_1}I^{2L_1}_{2K}- C^{2L_1}_{2K}I^{2L-1}_{2K_1}+C^{2L_1}_{2K_1}I^{2L-1}_{2K}$$ $$+(\mu H+\varepsilon_L+\varepsilon_{L_1}-\varepsilon_K-\varepsilon_{K_1}- \delta E)C^{2L_1;2L-1}_{2K_1;2K}$$ $$+\Biggl ( \sum_{K_2<K}C^{2L_1;2L-1}_{2K_2;2K}I^{2K_2}_{2K_1}- \sum_{K<K_2}C^{2L_1;2L-1}_{2K;2K_2}I^{2K_2}_{2K_1} \Biggr )$$ $$+\Biggl ( \sum_{K_1<K_2}C^{2L_1;2L-1}_{2K_1;2K_2}I^{2K_2}_{2K}- \sum_{K_2<K_1}C^{2L_1;2L-1}_{2K_2;2K_1}I^{2K_2}_{2K} \Biggr )$$ $$-\sum C^{2L_2;2L-1}_{2K_1;2K}I^{2L_1}_{2L_2}+ \Biggl ( \sum_{L_2<L_1}C^{2L_2;2L_1}_{2K_1;2K}I^{2L-1}_{2L_2}- \sum_{L_1<L_2}C^{2L_1;2L_2}_{2K_1;2K}I^{2L-1}_{2L_2} \Biggr )$$ $$-\Biggl ( \sum C^{2L_1;2L-1}_{2K_2-1;2K}I^{2K_2-1}_{2K_1}- \sum C^{2L_1;2L-1}_{2K_2-1;2K_1}I^{2K_2-1}_{2K} \Biggr )$$ $$-\sum_{K_1<K<K_2;L_2<L_1} C^{2L_2;2L_1;2L-1}_{2K_1;2K;2K_2}I^{2K_2}_{2L_2}$$ $$+\sum_{K_1<K_2<K;L_2<L_1} C^{2L_2;2L_1;2L-1}_{2K_1;2K_2;2K}I^{2K_2}_{2L_2}- \sum_{K_2<K_1;L_2<L_1} C^{2L_2;2L_1;2L-1}_{2K_2;2K_1;2K}I^{2K_2}_{2L_2}$$ $$+\sum_{K_1<K<K_2;L_1<L_2} C^{2L_1;2L_2;2L-1}_{2K_1;2K;2K_2}I^{2K_2}_{2L_2}$$ $$-\sum_{K_1<K_2<K;L_1<L_2} C^{2L_1;2L_2;2L-1}_{2K_1;2K_2;2K}I^{2K_2}_{2L_2}+ \sum_{K_2<K_1<K;L_1<L_2} C^{2L_1;2L_2;2L-1}_{2K_2;2K_1;2K}I^{2K_2}_{2L_2}$$ $$+\sum_{L_2<L_1;K_1<K} C^{2L_2;2L_1;2L-1}_{2K_2-1;2K_1;2K}I^{2K_2-1}_{2L_2}- \sum_{L_1<L_2;K_1<K} C^{2L_1;2L_2;2L-1}_{2K_2-1;2K_1;2K}I^{2K_2-1}_{2L_2}=0~,$$ $$-I^{2L_1}_{2K_1-1}C^{2L-1}_{2K}+ C^{2L_1}_{2K}I^{2L-1}_{2K_1-1}- \sum_{K_2<K}C^{2L_1;2L-1}_{2K_2;2K}I^{2K_2}_{2K_1-1}+ \sum_{K<K_2}C^{2L_1;2L-1}_{2K;2K_2}I^{2K_2}_{2K_1-1}$$ $$+(\varepsilon_L+\varepsilon_{L_1}-\varepsilon_K-\varepsilon_{K_1}-\delta E) C^{2L_1;2L-1}_{2K_1-1;2K}+ \sum C^{2L_1;2L-1}_{2K_2-1;2K}I^{2K_2-1}_{2K_1-1}- \sum C^{2L_1;2L_2-1}_{2K_1-1;2K}I^{2L-1}_{2L_2-1}$$ $$+\sum_{L_2<L} C^{2L_2-1;2L-1}_{2K;2K_1-1}I^{2L_1}_{2L_2-1}- \sum_{L<L_2} C^{2L-1;2L_2-1}_{2K;2K_1-1}I^{2L_1}_{2L_2-1}$$ $$+\sum_{L_2<L;K<K_2} C^{2L_1;2L_2-1;2L-1}_{2K;2K_2;2K_1-1}I^{2K_2}_{2L_2-1}- \sum_{L_2<L;K_2<K} C^{2L_1;2L_2-1;2L-1}_{2K_2;2K;2K_1-1}I^{2K_2}_{2L_2-1}$$ $$-\sum C^{2L_1;2L-1;2L_2-1}_{2K;2K_2;2K_1-1}I^{2K_2}_{2L_2-1}+ \sum_{L<L_2;K_2<K} C^{2L_1;2L-1;2L_2-1}_{2K_2;2K;2K_1-1}I^{2K_2}_{2L_2-1}$$ $$-\sum_{L_2<L;K_1<K_2} C^{2L_1;2L_2-1;2L-1}_{2K_1-1;2K;2K_2-1}I^{2K_2-1}_{2L_2-1}+ \sum_{L_2<L;K_2<K_1} C^{2L_1;2L_2-1;2L-1}_{2K_2-1;2K;2K_1-1}I^{2K_2-1}_{2L_2-1}$$ $$+\sum_{L<L_2;K_1<K_2} C^{2L_1;2L-1;2L_2-1}_{2K_1-1;2K;2K_2-1}I^{2K_2-1}_{2L_2-1}- \sum_{L<L_2;K_2<K_1} C^{2L_1;2L-1;2L_2-1}_{2K_2-1;2K;2K_1-1}I^{2K_2-1}_{2L_2-1} =0~,$$ $$C^{2L-1}_{2K-1}I^{2L_1-1}_{2K_1}-C^{2L_1-1}_{2K-1}I^{2L-1}_{2K_1}+ \sum C^{2L_2;2L-1}_{2K-1;2K_1}I^{2L_1-1}_{2L_2}- \sum C^{2L_2;2L_1-1}_{2K-1;2K_1}I^{2L-1}_{2L_2}$$ $$+(\mu H+\varepsilon_L+\varepsilon_{L_1}-\varepsilon_K-\varepsilon_{K_1}- \delta E)C^{2L_1-1;2L-1}_{2K_1;2K-1}+ \sum C^{2L_1-1;2L-1}_{2K_2;2K-1}I^{2K_2}_{2K_1}$$ $$+\sum_{K<K_2}C^{2L_1-1;2L-1}_{2K-1;2K_2-1}I^{2K_2-1}_{2K_1}- \sum_{K_2<K}C^{2L_1-1;2L-1}_{2K_2-1;2K-1}I^{2K_2-1}_{2K_1}+ \sum_{K_1<K_2} C^{2L_2;2L_1-1;2L-1}_{2K_1;2K_2;2K-1}I^{2K_2}_{2L_2}$$ $$-\sum_{K_2<K_1}C^{2L_2;2L_1-1;2L-1}_{2K_2;2K_1;2K-1}I^{2K_2}_{2L_2}- \sum_{K<K_2}C^{2L_2;2L_1-1;2L-1}_{2K-1;2K_1;2K_2-1}I^{2K_2-1}_{2L_2}$$ $$+\sum_{K_2<K} C^{2L_2;2L_1-1;2L-1}_{2K_2-1;2K_1;2K-1}I^{2K_2-1}_{2L_2}=0~,$$ $$(\varepsilon_L+\varepsilon_{L_1}-\varepsilon_K-\varepsilon_{K_1}-\delta E) C^{2L_1;2L}_{2K_1;2K}+ \sum C^{2L;2L_2-1}_{2K_1;2K}I^{2L_1}_{2L_2-1}- \sum C^{2L_1;2L_2-1}_{2K_1;2K}I^{2L}_{2L_2-1}$$ 
$$-\sum_{K_1<K<K_2} C^{2L_1;2L;2L_2-1}_{2K_1;2K;2K_2}I^{2K_2}_{2L_2-1}+ \sum_{K_1<K_2<K} C^{2L_1;2L;2L_2-1}_{2K_1;2K_2;2K}I^{2K_2}_{2L_2-1}$$ $$-\sum_{K_2<K_1<K} C^{2L_1;2L;2L_2-1}_{2K_2;2K_1;2K}I^{2K_2}_{2L_2-1}+ \sum C^{2L_1;2L;2L_2-1}_{2K_2-1;2K_1;2K}I^{2K_2-1}_{2L_2-1}=0 ~,$$ $$-I^{2L_1-1}_{2K_1-1}C^{2L-1}_{2K-1}+ C^{2L-1}_{2K_1-1}I^{2L_1-1}_{2K-1}+ C^{2L_1-1}_{2K-1}I^{2L-1}_{2K_1-1}- C^{2L_1-1}_{2K_1-1}I^{2L-1}_{2K-1}$$ $$+(\varepsilon_L+\varepsilon_{L_1}-\varepsilon_K-\varepsilon_{K_1}-\delta E) C^{2L_1-1;2L-1}_{2K_1-1;2K-1}$$ $$-\Biggl ( \sum C^{2L_1-1;2L-1}_{2K_2;2K-1}I^{2K_2}_{2K_1-1}- \sum C^{2L_1-1;2L-1}_{2K_2;2K_1-1}I^{2K_2}_{2K-1} \Biggr )$$ $$+\Biggl ( \sum_{K_2<K} C^{2L_1-1;2L-1}_{2K_2-1;2K-1}I^{2K_2-1}_{2K_1-1}- \sum_{K_2<K_1} C^{2L_1-1;2L-1}_{2K_2-1;2K_1-1}I^{2K_2-1}_{2K-1} \Biggr )$$ $$-\Biggl ( \sum_{K<K_2} C^{2L_1-1;2L-1}_{2K-1;2K_2-1}I^{2K_2-1}_{2K_1-1}- \sum_{K_1<K_2} C^{2L_1-1;2L-1}_{2K_1-1;2K_2-1}I^{2K_2-1}_{2K-1} \Biggr )$$ $$-\Biggl ( \sum_{L_2<L} C^{2L_2-1;2L-1}_{2K_1-1;2K-1}I^{2L_1-1}_{2L_2-1}- \sum_{L_2<L_1} C^{2L_2-1;2L_1-1}_{2K_1-1;2K-1}I^{2L-1}_{2L_2-1} \Biggr )$$ $$+\Biggl ( \sum_{L<L_2} C^{2L-1;2L_2-1}_{2K_1-1;2K-1}I^{2L_1-1}_{2L_2-1}- \sum_{L_1<L_2} C^{2L_1-1;2L_2-1}_{2K_1-1;2K-1}I^{2L-1}_{2L_2-1} \Biggr )$$ $$-\sum_{L_2<L_1<L} C^{2L_2-1;2L_1-1;2L-1}_{2K_2;2K_1-1;2K-1}I^{2K_2}_{2L_2-1}+ \sum_{L_1<L_2<L} C^{2L_1-1;2L_2-1;2L-1}_{2K_2;2K_1-1;2K-1}I^{2K_2}_{2L_2-1}$$ $$-\sum_{L_1<L<L_2} C^{2L_1-1;2L-1;2L_2-1}_{2K_2;2K_1-1;2K-1} I^{2K_2}_{2L_2-1}+ \sum_{L_2<L_1<L;K<K_2} C^{2L_2-1;2L_1-1;2L-1}_{2K_1-1;2K-1;2K_2-1} I^{2K_2-1}_{2L_2-1}$$ $$-\sum_{K_1<K_2<K;L_2<L_1} C^{2L_2-1;2L_1-1;2L-1}_{2K_1-1;2K_2-1;2K-1} I^{2K_2-1}_{2L_2-1}+ \sum_{K_2<K_1;L_2<L_1} C^{2L_2-1;2L_1-1;2L-1}_{2K_2-1;2K_1-1;2K-1} I^{2K_2-1}_{2L_2-1}$$ $$-\sum_{K<K_2;L_1<L_2<L} C^{2L_1-1;2L_2-1;2L-1}_{2K_1-1;2K-1;2K_2-1} I^{2K_2-1}_{2L_2-1}+ \sum_{K_1<K_2<K;L_1<L_2<L} C^{2L_1-1;2L_2-1;2L-1}_{2K_1-1;2K_2-1;2K-1} I^{2K_2-1}_{2L_2-1}$$ $$-\sum_{K_2<K_1;L_1<L_2<L} C^{2L_1-1;2L_2-1;2L-1}_{2K_2-1;2K_1-1;2K-1} I^{2K_2-1}_{2L_2-1}+ \sum_{K_1<K<K_2;L<L_2} C^{2L_1-1;2L-1;2L_2-1}_{2K_1-1;2K-1;2K_2-1} I^{2K_2-1}_{2L_2-1}$$ $$-\sum_{L<L_2;K_1<K_2<K} C^{2L_1-1;2L-1;2L_2-1}_{2K_1-1;2K_2-1;2K-1} I^{2K_2-1}_{2L_2-1}+ \sum_{K_2<K_1;L<L_2} C^{2L_1-1;2L-1;2L_2-1}_{2K_2-1;2K_1-1;2K-1} I^{2K_2-1}_{2L_2-1}=0$$ Equations (A.2) are exact. Appendix ======== Our purpose is to obtain an expression for the self-energy terms $\Sigma^{(1)}_{(K,L)}$ and $\Sigma_{(K,L)}$ in fourth-order perturbation theory. To do this we should obtain equations on the quantities $C^{\cdot\cdot\cdot}_{\cdot\cdot\cdot}$ in the “leading” approximation. That is, we can omit in such a system of equations the terms corresponding to “scattering” of terms $C^{\cdot\cdot\cdot}_{\cdot\cdot\cdot}$ and connection terms with quantities $C^{\cdot\cdot\cdot\cdot}_{\cdot\cdot\cdot\cdot}$. Really, we need only six equations in the six quantities entering into Eqs. (A.2). The required system can be obtained from Eqs. (3) and (A.1). These equations are $$(\mu H+\varepsilon_L+\varepsilon_{L_1}+\varepsilon_{L_2}- \varepsilon_K-\varepsilon_{K_1}-\varepsilon_{K_2}) C^{2L_2;2L_1;2L-1}_{2K_2;2K_1;2K}$$ $$-\Bigl \{ C^{2L_1;2L-1}_{2K_1;2K}I^{2L_2}_{2K_2}- C^{2L_1;2L-1}_{2K_2;2K}I^{2L_2}_{2K_1}+ C^{2L_1;2L-1}_{2K_2;2K_1}I^{2L_2}_{2K} \Bigr.$$ $$-\Bigl. 
C^{2L_2;2L-1}_{2K_1;2K}I^{2L_1}_{2K_2}+ C^{2L_2;2L-1}_{2K_2;2K}I^{2L_1}_{2K_1}- C^{2L_2;2L-1}_{2K_2;2K_1}I^{2L_1}_{2K} \Bigr \}$$ $$-\Bigl \{ C^{2L_2;2L_1}_{2K_1;2K}I^{2L-1}_{2K_2}- C^{2L_2;2L_1}_{2K_2;2K}I^{2L-1}_{2K_1}+ C^{2L_2;2L_1}_{2K_2;2K_1}I^{2L-1}_{2K} \Bigr \} =0~,$$ $$(\varepsilon_L+\varepsilon_{L_1}+\varepsilon_{L_2}- \varepsilon_K-\varepsilon_{K_1}-\varepsilon_{K_2}) C^{2L_2;2L_1;2L-1}_{2K_2-1;2K_1;2K}$$ $$+C^{2L_2;2L_1}_{2K_1;2K}I^{2L-1}_{2K_2-1}+ \Bigl \{ C^{2L_1;2L-1}_{2K_1;2K}I^{2L_2}_{2K_2-1}- C^{2L_2;2L-1}_{2K_1;2K}I^{2L_1}_{2K_2-1} \Bigr \} =0~,$$ $$(\mu H+\varepsilon_L+\varepsilon_{L_1}+\varepsilon_{L_2}- \varepsilon_K-\varepsilon_{K_1}-\varepsilon_{K_2}) C^{2L_2;2L_1-1;2L-1}_{2K_2;2K_1;2K-1}$$ $$-\Bigl \{ C^{2L_2;2L-1}_{2K-1;2K_1}I^{2L_1-1}_{2K_2}- C^{2L_2;2L-1}_{2K-1;2K_2}I^{2L_1-1}_{2K_1}-\Bigr. \Bigl. C^{2L_2;2L_1-1}_{2K-1;2K_1}I^{2L-1}_{2K_2}+ C^{2L_2;2L_1-1}_{2K-1;2K_2}I^{2L-1}_{2K_1} \Bigr \}$$ $$-\Bigl \{ C^{2L_1-1;2L-1}_{2K_1;2K-1}I^{2L_2}_{2K_2}- C^{2L_1-1;2L-1}_{2K_2;2K-1}I^{2L_2}_{2K_1} \Bigr \} =0~,$$ $$(\varepsilon_L+\varepsilon_{L_1}+\varepsilon_{L_2}- \varepsilon_K-\varepsilon_{K_1}-\varepsilon_{K_2}) C^{2L_2;2L_1-1;2L-1}_{2K_2-1;2K_1;2K-1}$$ $$+\Bigl \{ C^{2L_2;2L-1}_{2K-1;2K_1}I^{2L_1-1}_{2K_2-1}- C^{2L_2;2L-1}_{2K_2-1;2K_1}I^{2L_1-1}_{2K-1}-\Bigr. \Bigl. C^{2L_2;2L_1-1}_{2K-1;2K_1}I^{2L-1}_{2K_2-1}+ C^{2L_2;2L_1-1}_{2K_2-1;2K_1}I^{2L-1}_{2K-1} \Bigr \}$$ $$+\Bigl \{ C^{2L_1-1;2L-1}_{2K_1;2K-1}I^{2L_2}_{2K_2-1}- C^{2L_1-1;2L-1}_{2K_1;2K_2-1}I^{2L_2}_{2K-1} \Bigr \} =0~,$$ $$(\mu H+\varepsilon_L+\varepsilon_{L_1}+\varepsilon_{L_2}- \varepsilon_K-\varepsilon_{K_1}-\varepsilon_{K_2}) C^{2L_2-1;2L_1-1;2L-1}_{2K_2;2K_1-1;2K-1}$$ $$-\Bigl \{ C^{2L_1-1;2L-1}_{2K_1-1;2K-1}I^{2L_2-1}_{2K_2}- C^{2L_2-1;2L-1}_{2K_1-1;2K-1}I^{2L_1-1}_{2K_2} + C^{2L_2-1;2L_1-1}_{2K_1-1;2K-1}I^{2L-1}_{2K_2} \Bigr \} =0~,$$ $$(\varepsilon_L+\varepsilon_{L_1}+\varepsilon_{L_2}- \varepsilon_K-\varepsilon_{K_1}-\varepsilon_{K_2}) C^{2L_2-1;2L_1-1;2L-1}_{2K_2-1;2K_1-1;2K-1}$$ $$+\Bigl \{ C^{2L_1-1;2L-1}_{2K_1-1;2K-1}I^{2L_2-1}_{2K_2-1}- C^{2L_1-1;2L-1}_{2K_2-1;2K-1}I^{2L_2-1}_{2K_1-1} + C^{2L_1-1;2L-1}_{2K_2-1;2K_1-1}I^{2L_2-1}_{2K-1} \Bigr.$$ $$-C^{2L_2-1;2L-1}_{2K_1-1;2K-1}I^{2L_2-1}_{2K_2-1}+ C^{2L_2-1;2L-1}_{2K_2-1;2K-1}I^{2L_1-1}_{2K_1-1} - C^{2L_2-1;2L-1}_{2K_2-1;2K_1-1}I^{2L_1-1}_{2K-1}$$ $$+\Bigl. C^{2L_2-1;2L_1-1}_{2K_1-1;2K-1}I^{2L-1}_{2K_2-1}- C^{2L_2-1;2L_1-1}_{2K_2-1;2K-1}I^{2L-1}_{2K_1-1} + C^{2L_2-1;2L_1-1}_{2K_2-1;2K_1-1}I^{2L-1}_{2K-1} \Bigr \} =0~.$$ Equations (B.1) can easily be supplemented by scattering terms $C^{\cdot\cdot\cdot}_{\cdot\cdot\cdot}\rightarrow C^{\cdot\cdot\cdot}_{\cdot\cdot\cdot}$, and Eqs. (7), (A.1), and (B.1) will still form a complete set. The structure of interaction Hamiltonian (1) is such that scattering leads to connection of the given term only with itself and with two (or one) neighboring terms. These terms can be obtained from the given one by a change of parity of one of the upper or lower indexes. The relationships among the various terms $C^{\cdot\cdot}_{\cdot\cdot}$ are presented in Fig. 1. Appendix ======== We are now able to obtain the self-energy parts $\Sigma^{(1)}_{(K,L)}$ and $\Sigma_{(K,L)}$ in fourth-order perturbation theory. Straightforward elimination of terms in $C^{\cdot\cdot\cdot}_{\cdot\cdot\cdot}$ with $P\geq 2$ from Eqs. (6) using Eqs. 
(A.2) and (B.1) gives $$\Sigma^{(1)}_{(K,L)}= \frac{I^{2K_1}_{2L_1}} {\mu H+\varepsilon_4(L,L_1,K,K_1)- |I^{2L_2}_{2K_2-1}|^2/ \varepsilon_6- |I^{2L_2}_{2K_2}|^2/ (\mu H+\varepsilon_6)-\delta E}$$ $$\times \Biggl \{ I^{2L_1}_{2K_1}- \frac{I^{2K_2}_{2K_1}}{\mu H+\varepsilon_4(L,L_1,K,K_1)} \Biggl ( I^{2L_1}_{2K_2}- \frac{I^{2K_3}_{2K_2}I^{2L_1}_{2K_3}} {\mu H+\varepsilon_4(L,L_1,K,K_3)}$$ $$+\frac{I^{2L_1}_{2L_2}I^{2L_2}_{2K_2}} {\mu H+\varepsilon_4(L,L_2,K,K_2)}- \frac{I^{2K_3-1}_{2K_2}I^{2L_1}_{2K_3-1}} {\varepsilon_4(L,L_1,K,K_3)} \Biggr )- \frac{I^{2L_1}_{2L_2}} {\mu H+\varepsilon_4(L,L_2,K,K_1)}$$ $$\times\Biggl (-I^{2L_2}_{2K_1}+ \frac{I^{2K_2}_{2K_1}I^{2L_2}_{2K_2}} {\mu H+\varepsilon_4(L,L_2,K,K_2)}- \frac{I^{2L_2}_{2L_3}I^{2L_3}_{2K_1}} {\mu H+\varepsilon_4(L,L_3,K,K_1)}+ \frac{I^{2K_2-1}_{2K_1}I^{2L_2}_{2K_2-1}} {\varepsilon_4(L,L_2,K,K_2)} \Biggr )$$ $$-\frac{I^{2K_2-1}_{2K_1}} {\varepsilon_4(L,L_1,K,K_2)} \Biggl ( I^{2L_1}_{2K_2-1}- \frac{I^{2K_3}_{2K_2-1}I^{2L_1}_{2K_3}} {\mu H+\varepsilon_4(L,L_1,K_3,K)}- \frac{I^{2K_3-1}_{2K_2-1}I^{2L_1}_{2K_3-1}} {\varepsilon_4(L,L_1,K,K_3)} \Biggr )$$ $$-\frac{I^{2K_2}_{2L_2}} {\mu H+\varepsilon_6} \Biggl ( \frac{I^{2L_1}_{2K_2}I^{2L_2}_{2K_1}}{\mu H+\varepsilon_4(L,L_2,K,K_1)}+ \frac{I^{2L_2}_{2K_1}I^{2L_1}_{2K_2}} {\mu H+\varepsilon_4(L_1,L,K,K_2)}$$ $$-\frac{I^{2L_1}_{2K_1}I^{2L_2}_{2K_2}} {\mu H+\varepsilon_4(L,L_2,K,K_2)} \Biggr )- \frac{I^{2K_2-1}_{2L_2}I^{2L_1}_{2K_2-1}I^{2L_2}_{2K_1}} {\varepsilon_6(\mu H+\varepsilon_4(L,L_2,K,K_1)} \Biggr \}$$ $$+\frac{I^{2K_1-1}_{2L_1}} {\varepsilon_4(L,L_1,K,K_1)- |I^{2K_2}_{2L_2-1}|^2/ (\mu H+\varepsilon_6)- |I^{2K_2-1}_{2L_2-1}|^2/ \varepsilon_6-\delta E}$$ $$\times\Biggl \{ I^{2L_1}_{2K_1-1}- \frac{I^{2K_2}_{2K_1-1}}{\mu H+\varepsilon_4(L,L_1,K,K_2)} \Biggl ( I^{2L_1}_{2K_2}- \frac{I^{2K_3}_{2K_2}I^{2L_1}_{2K_3}} {\mu H+\varepsilon_4(L,L_1,K,K_3)}$$ $$+\frac{I^{2L_1}_{2L_2}I^{2L_2}_{2K_2}} {\mu H+\varepsilon_4(L,L_2,K,K_2)}- \frac{I^{2K_3-1}_{2K_2}I^{2L_1}_{2K_3-1}} {\varepsilon_4(L,L_1,K,K_3)} \Biggr )- \frac{I^{2K_2-1}_{2K_1-1}} {\varepsilon_4(L,L_1,K,K_2)}$$ $$\times\Biggl (I^{2L_1}_{2K_2-1}- \frac{I^{2K_3}_{2K_2-1}I^{2L_1}_{2K_3}} {\mu H+\varepsilon_4(L,L_1,K,K_3)}- \frac{I^{2K_3-1}_{2K_2-1}I^{2L_1}_{2K_3-1}} {\varepsilon_4(L,L_1,K,K_3)} \Biggr )$$ $$+\frac{I^{2L-1}_{2L_2-1}I^{2L_2-1}_{2L_3}I^{2L_3}_{2K_1-1}} {(\mu H+\varepsilon_4(L,L_2,K,K_1)) \varepsilon_4(L,L_3,K,K_1)}- \frac{I^{2K_2-1}_{2L_2-1}I^{2L_2-1}_{2K_1-1}I^{2L_1}_{2K_2-1}} {\varepsilon_6\varepsilon_4(L,L_1,K,K_2)} \Biggr \}~,$$ $$\Sigma_{(K,L)}= \frac{I^{2K_1}_{2L_1-1}} {\mu H+\varepsilon_4(L,L_1,K,K_1)-\delta E- |I^{2L_2}_{2K_2-1}|^2/ \varepsilon_6- |I^{2L_2}_{2K_2}|^2/ (\mu H+\varepsilon_6)}$$ $$\times\Biggl \{ I^{2L_1-1}_{2K_1}- \frac{I^{2K_2}_{2K_1}}{\mu H+\varepsilon_4(L,L_1,K,K_2)} \Biggl ( I^{2L_1-1}_{2K_2}- \frac{I^{2K_3}_{2K_2}I^{2L_1-1}_{2K_3}} {\mu H+\varepsilon_4(L,L_1,K,K_3)}$$ $$-\frac{I^{2K_3-1}_{2K_2}I^{2L_1-1}_{2K_3-1}} {\varepsilon_4(L,L_1,K,K_3)} \Biggr )- \frac{I^{2K_2-1}_{2K_1}} {\varepsilon_4(L,L_1,K,K_2)} \Biggl ( I^{2L_1-1}_{2K_2-1}- \frac{I^{2K_3}_{2K_2-1}I^{2L_1-1}_{2K_3}} {\mu H+\varepsilon_4(L,L_1,K,K_3)}$$ $$+\frac{I^{2L_1-1}_{2L_2-1}I^{2L_2-1}_{2K_2-1}} {\varepsilon_4(L,L_2,K,K_2)}- \frac{I^{2K_3-1}_{2K_2-1}I^{2L_1-1}_{2K_3-1}} {\varepsilon_4(L,L_1,K,K_3)} \Biggr )+ \frac{I^{2L_1-1}_{2L_2}I^{2L_2}_{2L_3-1}I^{2L_3-1}_{2K_1}} {\varepsilon_4(L,L_2,K,K_1)(\mu H+\varepsilon_4(L,L_3,K,K_1))}$$ $$-\frac{I^{2K_2}_{2L_2}I^{2L_2}_{2K_1}I^{2L_1-1}_{2K_2}} {(\mu H+\varepsilon_6)(\mu 
H+\varepsilon_4(L,L_1,K,K_2))} \Biggr \}$$ $$+\frac{I^{2K_1-1}_{2L_1-1}} {\varepsilon_4(L,L_1,K,K_1)- |I^{2K_2}_{2L_2-1}|^2 \bigl / (\mu H+\varepsilon_6) - |I^{2K_2-1}_{2L_2-1}|^2 \bigl / \varepsilon_6-\delta E}$$ $$\times\Biggl \{ I^{2L_1-1}_{2K_1-1}- \frac{I^{2K_2}_{2K_1-1}}{\mu H+\varepsilon_4(L,L_1,K,K_2)} \Biggl ( I^{2L_1-1}_{2K_2}- \frac{I^{2K_3}_{2K_2}I^{2L_1-1}_{2K_3}} {\mu H+\varepsilon_4(L,L_1,K,K_3)}$$ $$-\frac{I^{2K_3-1}_{2K_2}I^{2L_1-1}_{2K_3-1}} {\varepsilon_4(L,L_1,K,K_3)} \Biggr )- \frac{I^{2K_2-1}_{2K_1-1}} {\varepsilon_4(L,L_1,K,K_2)} \Biggl ( I^{2L_1-1}_{2K_2-1}$$ $$-\frac{I^{2K_3}_{2K_2-1}I^{2L_1-1}_{2K_3}} {\mu H+\varepsilon_4(L,L_1,K,K_3)} + \frac{I^{2L_1-1}_{2L_2-1}I^{2L_2-1}_{2K_2-1}} {\varepsilon_4(L,L_2,K,K_2)}- \frac{I^{2K_3-1}_{2K_2-1}I^{2L_1-1}_{2K_3-1}} {\varepsilon_4(L,L_1,K,K_3)} \Biggr )$$ $$+\frac{I^{2L_1-1}_{2L_2-1}} {\varepsilon_4(L,L_2,K,K_1)} \Biggl ( I^{2L_2-1}_{2K_1-1}- \frac{I^{2K_2}_{2K_1-1}I^{2L_2-1}_{2K_2}} {\mu H+\varepsilon_4(L,L_2,K,K_2)}$$ $$+\frac{I^{2L_2-1}_{2L_3-1}I^{2L_3-1}_{2K_1-1}} {\varepsilon_4(L,L_3,K,K_1)}- \frac{I^{2K_2-1}_{2K_1-1}I^{2L_2-1}_{2K_2-1}} {\varepsilon_4(L,L_2,K,K_2)} \Biggr )+ \frac{I^{2K_2-1}_{2L_2-1}} {\varepsilon_6}$$ $$\times\Biggl ( \frac{I^{2L_1-1}_{2K_1-1}I^{2L_2-1}_{2K_2-1}} {\varepsilon_4(L,L_2,K,K_2)}- \frac{I^{2L_1-1}_{2K_2-1}I^{2L_2-1}_{2K_1-1}} {\varepsilon_4(L,L_2,K,K_1)} - \frac{I^{2L_2-1}_{2K_1-1}I^{2L_1-1}_{2K_2-1}} {\varepsilon_4(L,L_1,K,K_2)} \Biggr )$$ $$-\frac{I^{2K_2}_{2L_2-1}I^{2L_1-1}_{2K_2}I^{2L_2-1}_{2K_1-1}} {(\mu H+\varepsilon_6)\varepsilon_4(L,L_2,K,K_1)} \Biggr \}.$$ From Eqs. (6) and (A.2), the quantities $C^{2L-1}_{2K}$ and $C^{2L-1}_{2K-1}$ can easily be obtained in the third order of perturbation theory. We do not give these expressions here because only one statement is essential for us: direct comparison of the quantities $\delta E$ (Eq. (4)) and self-energy $\Sigma_{K,L}$ (Eq. (C.2)) shows that $$\delta E+\Sigma (K,L)_{\varepsilon_K=\varepsilon_L=\varepsilon_F}=0.$$ Equation (C.3) is valid for arbitrary spectrum $\varepsilon_K, \varepsilon_L$ and arbitrary transition matrix elements $I^{\cdot}_{\cdot}$. Our conjecture is that Eq. (C.3) holds in all orders of perturbation theory, and hence we can put $$\delta E +\Sigma (K,L)_{\varepsilon_K=\varepsilon_L=\varepsilon_F}= -\Delta,$$ where $\Delta$ is exponentially small and can be considered an order parameter. We also obtain from Eqs. (C.1) and (C.2) that self-energies $\Sigma^{(1)}$ and $\Sigma$ coincide only in the second order of perturbation theory. They start to be different in the third order of perturbation theory. In the fourth order of perturbation theory, we obtain from Eqs. (C.1) and (C.2) $$\Sigma^{(1)}_{(K,L)}-\Sigma_{(K,L)}= I^{2K_1}_{2L_1}I^{2L_1}_{2L_2}I^{2L_2}_{2K_1} \Biggl ( \frac{1}{(\mu H+\varepsilon_4(L,L_1,K,K_1) (\mu H+\varepsilon_4(L,L_2,K,K_1))}$$ $$-\frac{1}{\varepsilon_4(L,L_1,K,K_1)\varepsilon_4(L,L_2,K,K_1)} \Biggr ) -I^{2K_2}_{2K_1}I^{2K_1}_{2L_1}I^{2L_1}_{2L_2}I^{2L_2}_{2K_2}$$ $$\times\Biggl \{ \Biggr. 
\Biggl (\frac{1}{\mu H+\varepsilon_4(L,L_1,K,K_1)}+ \frac{1}{\varepsilon_4(L,L_1,K,K_1)} \Biggr )$$ $$\times\Biggl ( \frac{1}{(\mu H+\varepsilon_4(L,L_2,K,K_2)) (\mu H+\varepsilon_4(L,L_1,K,K_2))}$$ $$-\frac{1}{\varepsilon_4(L,L_1,K,K_2) \varepsilon_4(L,L_2,K,K_2)} \Biggr )+ \Biggl (\frac{1}{\mu H+\varepsilon_4(L,L_2,K,K_2)}$$ $$+\frac{1}{\varepsilon_4(L,L_2,K,K_2)} \Biggr ) \Biggl (\frac{1}{(\mu H+\varepsilon_4(L,L_2,K,K_1)) (\mu H+\varepsilon_4(L,L_1,K,K_1))}$$ $$-\frac{1}{\varepsilon_4(L,L_2,K,K_1)\varepsilon_4(L,L_1,K,K_1)} \Biggr )- \Biggl ( \frac{1} {\mu H+\varepsilon_4(L,L_2,K,K_1)}- \frac{1}{\varepsilon_4(L,L_2,K,K_1)} \Biggr )$$ $$\times\Biggl ( \frac{1} {(\mu H+\varepsilon_4(L,L_3,K,K_1)) (\mu H+\varepsilon_4(L,L_1,K,K_1)}$$ $$+\frac{1} {\varepsilon_4(L,L_3,K,K_1) \varepsilon_4(L,L_1,K,K_1)} \Biggr )$$ $$+\Biggl ( \frac{1} {\varepsilon_6(\mu H+\varepsilon_4(L,L_2,K,K_1)) (\mu H+\varepsilon_4(L,L_1,K,K_1))}$$ $$-\frac{1} {(\mu H+\varepsilon_6)\varepsilon_4(L,L_2,K,K_1) \varepsilon_4(L,L_1,K,K_1)} \Biggr )$$ $$+\Biggl ( \frac{1}{(\mu H+\varepsilon_6)(\mu H+\varepsilon_4(L,L_1,K,K_1)} \Biggr ) \cdot \Biggl ( \frac{1} {\mu H+\varepsilon_4(L,L_2,K,K_1)}$$ $$-\frac{1}{\mu H+\varepsilon_4(L,L_2,K,K_2)} \Biggr )- \Biggl ( \frac{1} {\varepsilon_4(L,L_2,K,K_1)}- \frac{1}{\varepsilon_4(L,L_2,K,K_2)} \Biggr )$$ $$\times\frac{1}{\varepsilon_6\varepsilon_4(L,L_1,K,K_1)} \Biggr \},$$ where $$\varepsilon_4(L,L_1,K,K_1)\equiv\varepsilon_L+\varepsilon_{L_1}- \varepsilon_K-\varepsilon_{K_1}~,$$ $$\varepsilon_6\equiv\varepsilon_L+\varepsilon_{L_1}+\varepsilon_{L_2}- \varepsilon_K-\varepsilon_{K_1}-\varepsilon_{K_2}.$$ Straightforward calculation of the integrals in Eq. (C.5) leads to Eqs. (40) and (41). Both Eqs. (40) and (41) are proved in two orders of perturbation theory. Our conjecture is that Eq. (41) is exact. Appendix ========= In this appendix we consider the role of the right-hand side of Eqs. (7) for a negative value of the coupling constant, $g<0$. 
In the first order of perturbation theory, we obtain from (A.2) $$C^{2L_1;2L-1}_{2K_1;2K}= \frac{1}{\mu\tilde H+\varepsilon_4(L,L_1,K,K_1)+\Delta}$$ $$\times\Biggl [ C^{2L-1}_{2K_1}I^{2L_1}_{2K}-C^{2L-1}_{2K}I^{2L_1}_{2K_1}+ C^{2L_1}_{2K}I^{2L-1}_{2K_1}-C^{2L_1}_{2K_1}I^{2L-1}_{2K} \Biggr ]~;$$ $$C^{2L_1;2L-1}_{2K_1-1;2K}= \frac{1}{\varepsilon_4(L,L_1,K,K_1)+\Delta} \Biggl [ I^{2L_1}_{2K_1-1}C^{2L-1}_{2K}-C^{2L_1}_{2K}I^{2L-1}_{2K_1-1} \Biggr ]~;$$ $$C^{2L_1-1;2L-1}_{2K_1;2K-1}= \frac{1}{\mu\tilde H+\varepsilon_4(L,L_1,K,K_1)+\Delta} \Biggl [ C^{2L_1-1}_{2K-1}I^{2L-1}_{2K_1}-C^{2L-1}_{2K-1}I^{2L_1-1}_{2K_1} \Biggr ]~;$$ $$C^{2L_1-1;2L-1}_{2K_1-1;2K-1}= \frac{1}{\varepsilon_4(L,L_1,K,K_1)+\Delta}$$ $$\times\Biggl [ I^{2L_1-1}_{2K_1-1}C^{2L-1}_{2K-1}-C^{2L-1}_{2K_1-1}I^{2L_1-1}_{2K-1}+ C^{2L_1-1}_{2K_1-1}I^{2L-1}_{2K-1}-C^{2L_1-1}_{2K-1}I^{2L-1}_{2K_1-1} \Biggr ].$$ Inserting (D.1) into (6), we obtain $$A_1\Bigl ( C^{2L-1}_{2K};C^{2L-1}_{2K-1},C^{2L}_{2K} \Bigr )=- \sum \frac {I^{2K_1}_{2L_1}\Bigl ( C^{2L-1}_{2K_1}I^{2L_1}_{2K}+ C^{2L_1}_{2K}I^{2L-1}_{2K_1}- C^{2L_1}_{2K_1}I^{2L-1}_{2K} \Bigr )} {\mu\tilde H+\varepsilon_4(L,L_1,K,K_1)+\Delta}$$ $$-\sum \frac {I^{2K_1-1}_{2L_1}I^{2L-1}_{2K_1-1}C^{2L_1}_{2K}} {\varepsilon_4(L,L_1,K,K_1)+\Delta}~,$$ $$A_2\Bigl ( C^{2L-1}_{2K};C^{2L-1}_{2K-1},C^{2L}_{2K} \Bigr )=$$ $$-\sum \frac {I^{2K_1-1}_{2L_1-1}\Bigl ( C^{2L-1}_{2K_1-1}I^{2L_1-1}_{2K-1}- C^{2L_1-1}_{2K_1-1}I^{2L-1}_{2K-1}+ C^{2L_1-1}_{2K-1}I^{2L-1}_{2K_1-1} \Bigr )} {\varepsilon_4(L,L_1,K,K_1)+\Delta}$$ $$-\sum \frac {I^{2K_1}_{2L_1-1}I^{2L-1}_{2K_1}C^{2L_1-1}_{2K-1}} {\mu\tilde H+\varepsilon_4(L,L_1,K,K_1)+\Delta}~,$$ $$A_3\Bigl ( C^{2L-1}_{2K};C^{2L-1}_{2K-1},C^{2L}_{2K} \Bigr )= \sum \frac {I^{2K_1}_{2L_1-1}\Bigl ( C^{2L_1-1}_{2K_1}I^{2L}_{2K}- C^{2L_1-1}_{2K}I^{2L}_{2K_1}- C^{2L}_{2K_1}I^{2L_1-1}_{2K} \Bigr )} {\mu\tilde H+\varepsilon_4(L,L_1,K,K_1)+\Delta}$$ $$-\sum \frac {I^{2K_1-1}_{2L_1-1}I^{2L}_{2K_1-1}C^{2L_1-1}_{2K}} {\varepsilon_4(L,L_1,K,K_1)+\Delta}$$ The quantities $\varepsilon_4,~\varepsilon_6$ here are the same as in Eq. (C.6). As before, only convolutions $Z_L,Y_L$ are large for $g<0$. Furthermore, $$|Z_L+Y_L|\sim g^2|Z_L-Y_L|.$$ As the result, Eqs. (7) can be reduced to just one equation: $$\Bigl ( Z_L-Y_L \Bigr ) \Biggl [ 1+g\ln\frac{\varepsilon_F}{y+\Delta}+ g\ln\frac{\varepsilon_F}{\mu\tilde H+y+\Delta}$$ $$+\frac{g^3}{2}\Biggl ( \frac{I_1}{g\ln\varepsilon_F \bigl /(\mu\tilde H+y+\Delta)}+ \frac{I_2}{g\ln\varepsilon_F \bigl /(y+\Delta)} \Biggr ) \Biggr ]$$ $$=Ig \Biggl ( \ln\frac {\varepsilon_f}{\mu\tilde H+y+\Delta}+ \ln \frac{\varepsilon_F}{y+\Delta} \Biggr )+ g\int dx \Biggl ( \frac{X_K}{\mu\tilde H+y+x+\Delta}- \frac{Y_K}{y+x+\Delta} \Biggr ),$$ where $$I_1=\int\frac{dxdydx_1} {(\mu\tilde H+y+x+\Delta) (\mu\tilde H+y+x_1+\Delta) (\mu\tilde H+y+x+y_1+x_1+\Delta)},$$ $$I_2=\int\frac{dxdydx_1} {(y+x+\Delta) (y+x_1+\Delta) (y+x+y_1+x_1+\Delta)}.$$ A simple calculation of the integrals (D.5) gives $$I_1=\frac{1}{3}\ln^3\Biggl ( \frac{\varepsilon_F}{\mu\tilde H+y+\Delta} \Biggr )~, \qquad$$ $$I_2=\frac{1}{3}\ln^3\Biggl ( \frac{\varepsilon_F}{y+\Delta} \Biggr ).$$ Now we can define the Kondo temperature $T_c$ to be $$|g|\ln \frac{\varepsilon_F}{T_c}=z,$$ where $z$ is a root of the ???quadratic equation $$1-2z+\frac{z^2}{3}=0; \qquad z=3-\sqrt{6}\approx 0.5505.$$ From Eq. (D.4) we obtain $$Z_L-Y_L=- \frac{I\tilde\beta} {|g|(1-z/3)\ln \Biggl ( \frac {(\mu\tilde H+y+\Delta)(y+\Delta)}{T^2_c}\Biggr )},$$ where $\tilde\beta$ is a number of order 1. Instead of Eqs. 
(41) and (42) we have now $$\mu\tilde H=\mu H-\delta\Sigma; \qquad \delta \Sigma=-\mu Hz(-1/2+\langle S_z\rangle ).$$ As before, the average spin $\langle S_z\rangle$ is given by Eq. (27) with the replacement $\mu H\rightarrow \mu\tilde H$: $$\langle S_z\rangle = \frac{\mu\tilde H}{4\bigl (T^2_c+(\mu\tilde H/2)^2\bigr )^{1/2}}.$$ The magnetic field dependence of the average spin $\langle S_z\rangle$ (Eqs. (D.10) and (D.11)) is given in Fig. 2. Dots are the experimental results of Ref. 4. [99]{} A. A. Abrikosov and A. A. Migdal, J. of Low Temp. Phys. [**3**]{}, 519 (1970). A. M. Tsvelick and P. B. Wigmann, Advances in Physics [**32**]{}, 453 (1983). N. Andrei, K. Furuya, and J. H. Lowenstein, Rev. Mod. Phys. [**55**]{}, 331 (1983). W. Felsch, Z. Phys. [**B29**]{}, 212 (1978). S. D. Bader, N. E. Phillips, M. B. Haple, and C. A. Luengo, Solid State Commun. [**16**]{}, 1263 (1975). P. Nozieres and T.C. de Dominicis, Phys. Rev. [**178**]{}, 1097 (1969). Yu. N. Ovchinnikov, A. M. Dyugaev, P. Fulde, and V. Z. Kresin, JETP Lett. [**66**]{}, 184 (1997). K. Yosida, Phys. Rev. [**147**]{}, 223 (1966). Hiroumi Ishii, Prog. Theor. Phys. [**40**]{}, 201 (1968). Hiroumi Ishii, Prog. Theor. Phys. [**43**]{}, 578 (1970). M. Fowler, A. Zawadowskii, Solid St. Comm. [**9**]{}, 471 (1971).
{ "pile_set_name": "ArXiv" }
--- abstract: 'In paper IV (solv-int/9704013) we have considered a string living in the infinite lattice that was, in a sense, generated by a “particle”. Here we show how to construct multi-string eigenstates generated by several particles. It turns out that, at least in some cases, this allows us to bypass the difficulties of constructing multi-particle states. We also present and discuss the “dispersion relations” for our particles–strings.' author: - 'I.G. Korepanov' date: May 1997 title: | Some eigenstates for a model associated with solutions of tetrahedron equation.\ V. Two cases of string superposition --- Introduction {#introduction .unnumbered} ============ Let us recall that we have introduced in paper [@I] some “one-particle” eigenstates for the model based upon solutions of the tetrahedron equation. In the same paper, we have also constructed some “two-particle” states. However, a special condition arose in this construction, and the superposition of two [*arbitrary*]{} one-particle states was not achieved. Even the “creation operators” of paper [@II] did not give a clear answer concerning multi-particle states. On the other hand, in paper [@IV] we have brought in correspondence to a one-particle state a new state that could be called “one open string”. It was done using a special “kagome transfer matrix”. Here we will show that the superposition of such one-string states is easier to construct, because of the degeneracy of the kagome transfer matrix: it turns into zero the “obstacles” that hampered the construction of multi-particle states. The scheme of string—particle “marriage” in [@IV] was as follows: take a one-particle state from [@I; @II], and apply to it a kagome transfer matrix with boundary conditions corresponding to the presence of two string tails at the infinity, e.g. like this: [figure: a single open string stretching horizontally across the lattice, with both tails going off to infinity]. In this paper, we are going to complicate this scheme in the following way: the boundary conditions will correspond to the presence of an even number of string tails at the infinity, and instead of a one-particle state, we will use some special multi-particle vector $\Psi$. Its peculiarity will be in the fact that $\Psi$ is [*no longer an eigenstate*]{} of the hedgehog transfer matrix $T$ defined in [@I]. Instead, it will obey the condition T\Psi=\lambda\Psi+\Psi', \[V-int-1\] where $\lambda={\rm const}$, and $\Psi'$ is annihilated by the kagome transfer matrix of paper [@IV], which we will denote $K$. Recall that we have defined $T$ in such a way that its powers could be described geometrically as “oblique slices” of the cubic lattice. The transfer matrix $T$ can be passed through the transfer matrix $K$: TK=KT, \[V-int-2\] the boundary conditions (such as the number and form of tails at the infinity) for $K$ being intact. Define vector $\Phi$ as $$\Phi=K\Psi.$$ This together with (\[V-int-1\]) and (\[V-int-2\]) gives T\Phi=\lambda\Phi, \[V-int-4\] exactly as needed for an eigenvector. We also present in this paper the “dispersion relations” for our particles–strings in a workable form—something that was missing in papers [@I; @II]. Eigenvectors of the “several open strings” type for the infinite lattice {#secV-1} ======================================================================== Let there be $n$ one-particle amplitudes $\varphi_{\ldots}^{(1)}, \ldots, \varphi_{\ldots}^{(n)}$ of the same type as those described in the work [@I]. Let us compose an “$n$-particle vector” $\Psi$, i.e.
put in correspondence to each unordered $n$-tuple of vertices $A^{(1)},\ldots,A^{(n)}$ of the kagome lattice the symmetrized amplitude in the following way: \Psi_{A^{(1)},\ldots,A^{(n)}} = \sum_s \varphi_{A^{s(1)}}^{(1)} \cdots \varphi_{A^{s(n)}}^{(n)}, \[V-1-1\] where $s$ runs through the group of all permutations of the set $\{1,\ldots,n\}$. As for the boundary conditions for the transfer matrix $K$ described in the Introduction, let us assume that there are exactly $2n$ string tails, and they all go in positive directions, that is between the east and the north. Thus, in each of the points $A^{(1)},\ldots,A^{(n)}$ a string is created, and no string is annihilated. The vector (\[V-1-1\]) is not an eigenvector of the transfer matrix $T$ due to problems arising when two or more points $A^{(k)}$ get close to one another. Nevertheless, the vector $\Phi=K\Psi$ [*is*]{} an eigenvector, because for it those problems disappear due to the simple fact: [*creation of two or more strings within one triangle of the kagome lattice is geometrically forbidden*]{}. Eigenvectors of the “closed string” type for the infinite lattice {#secV-2} ================================================================= In this section, we will put in correspondence to each unordered pair of vertices of the infinite kagome lattice an “amplitude” $\Psi_{AB}$ according to the following rules. If one of the vertices, say $A$, [*precedes*]{} the other one, say $B$, in the sense that they can be linked by a path—a broken line—going along lattice edges in positive directions, namely northward, eastward, or to the north-east, then let us put \Psi_{AB}= \varphi_A \psi_B - \psi_A \varphi_B, \[V-2-1\] where $\varphi_{\ldots}$ and $\psi_{\ldots}$ are two one-particle amplitudes of the same type as in paper [@I]. If the vertices $A$ and $B$ cannot be joined by a path of such kind, let us put $$\Psi_{AB}=0.$$ The values $\Psi_{AB}$ are components of the vector $\Psi$ that belong to the two-particle subspace of the tensor product of the two-dimensional spaces situated at the kagome lattice vertices. What prevents $\Psi_{AB}$ from being an eigenvector of the hedgehog transfer matrix is the discrepancies arising near those pairs $A,B$ that lie at the “border” between such pairs where one of the vertices precedes the other (so to speak, “the interval $AB$ is timelike”), and such pairs where it does not (“the interval $AB$ is spacelike”). Those discrepancies, however, disappear for the vector $\Phi=K\Psi$, where $K$ is the kagome transfer matrix described in the Introduction with the boundary conditions reading [*no string tails at the infinity*]{}. This is because if a string cannot, geometrically, be created at the point $A$ (or $B$) and then annihilated at the point $B$ (or $A$), then the amplitude $\Psi_{AB}$ does not influence the vector $\Phi$ at all. The only thing that remains to be checked for (\[V-int-4\]) to hold is a situation where $A$ and $B$ are in the same kagome lattice triangle that will be turned inside out by one of the hedgehogs of transfer matrix $T$, as in Figure \[figV-1\].
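The definition above is easy to prototype. The following Python sketch is a toy model only: it replaces the kagome lattice by integer coordinates and takes “$A$ precedes $B$” to mean that both coordinate differences are non-negative, which is our simplified stand-in for the path condition described in the text.

```python
# A toy illustration of the "closed string" amplitude defined above.  Lattice
# vertices are modelled by integer coordinates (x, y), and "A precedes B" means
# that B can be reached from A by steps to the north, east, or north-east, i.e.
# both coordinate differences are non-negative (a simplified flat-coordinate
# model of the ordering, not the paper's construction).
import random

def precedes(A, B):
    """True if B is reachable from A by positive (N, E, NE) steps."""
    dx, dy = B[0] - A[0], B[1] - A[1]
    return (dx, dy) != (0, 0) and dx >= 0 and dy >= 0

def Psi(A, B, phi, psi):
    """Amplitude Psi_{AB}: phi_A psi_B - psi_A phi_B when one vertex precedes
    the other ('timelike' pair), and 0 otherwise ('spacelike' pair)."""
    if precedes(A, B):
        first, second = A, B
    elif precedes(B, A):
        first, second = B, A
    else:
        return 0.0
    return phi[first] * psi[second] - psi[first] * phi[second]

# Example: two arbitrary one-particle amplitudes on a few vertices.
random.seed(1)
vertices = [(0, 0), (1, 0), (0, 1), (2, 1), (1, -1)]
phi = {v: random.random() for v in vertices}
psi = {v: random.random() for v in vertices}

print(Psi((0, 0), (2, 1), phi, psi))   # timelike pair -> generically nonzero
print(Psi((1, 0), (0, 1), phi, psi))   # spacelike pair -> 0
```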
[Figure \[figV-1\]: a triangle of the kagome lattice with the vertices $A$ and $B$, and its image under the $S$-operator-hedgehog with the corresponding vertices $B'$ and $A'$.] Acting in the same manner as in Section \[sec-twop\] of work [@I], write \pmatrix{\varphi_{A'}\cr \varphi_{B'}} = \pmatrix{a&b\cr c&d} \pmatrix{\varphi_{A}\cr \varphi_{B}}, \qquad \pmatrix{\psi_{A'}\cr \psi_{B'}} = \pmatrix{a&b\cr c&d} \pmatrix{\psi_{A}\cr \psi_{B}}, \[V-2-3\] where a=-d, \qquad ad-bc=-1. \[V-2-4\] It follows from the formulas (\[V-2-3\]) and (\[V-2-4\]) that $$\varphi_A \psi_B-\varphi_B \psi_A= \varphi_{B'} \psi_{A'}-\varphi_{A'} \psi_{B'},$$ i.e.$$\Psi_{AB}=\Psi_{B'A'},$$ exactly what was needed to comply with the fact that an $S$-operator-hedgehog acts as a unity operator in the two-particle subspace. Dispersion relations {#V-sec-disp} ==================== The constructed eigenvectors of transfer matrix $T$ are of course eigenvectors for translation operators through periods of the kagome lattice as well. Let us consider here relations between the corresponding eigenvalues, starting from the simplest one-particle eigenstate. Consider once again some triangle $ABC$ of the kagome lattice, and its image $A'B'C'$ under the action of the $S$-matrix-hedgehog, as in Figure \[figV-3-1\]. [Figure \[figV-3-1\]: the triangle $ABC$ of the kagome lattice and its image $A'B'C'$ under the action of the $S$-matrix-hedgehog.] Let us write out some relations of the type (\[V-2-3\]), namely \pmatrix{\varphi_{A'}\cr \varphi_{B'}} = \pmatrix{a&b\cr c&d} \pmatrix{\varphi_{A}\cr \varphi_{B}}, \[V-3-1\] \pmatrix{\varphi_{B'}\cr \varphi_{C'}} = \pmatrix{\tilde a&\tilde b\cr \tilde c&\tilde d} \pmatrix{\varphi_{B}\cr \varphi_{C}}, \[V-3-2\] where $\varphi_{\ldots}$ is any one-particle vector, and the numbers $a, \ldots ,\tilde d$ satisfy conditions of type (\[V-2-4\]), i.e.$$\matrix{ a=-d,& \qquad ad-bc=-1, \cr \tilde a=-\tilde d,& \qquad \tilde a\tilde d-\tilde b\tilde c=-1.}$$ From (\[V-3-1\]) follows {\varphi_B\over \varphi_{B'}}= {-a(\varphi_A/\varphi_{A'})+1 \over (\varphi_A/\varphi_{A'})-a}, \[V-3-3\] and from (\[V-3-2\]) follows $${\varphi_C\over \varphi_{C'}}= {-\tilde a(\varphi_B/\varphi_{B'})+1 \over (\varphi_B/\varphi_{B'})-\tilde a}.$$ Surely, the numbers $a$ and $\tilde a$ depend on an $S$-operator-hedgehog. On the other hand, the latter is parameterized by exactly two parameters. So, it seems that it makes sense to take $a$ and $\tilde a$ as those parameters. We can take for the eigenvalue of the hedgehog transfer matrix $T$ either $\varphi_{A'}/\varphi_A$, or $\varphi_{B'}/\varphi_B$, or $\varphi_{C'}/\varphi_C$. These variants correspond, strictly speaking, to different definitions of $T$, but each of them is consistent with the requirement that the powers of $T$ must be represented graphically as “oblique layers” of the cubic lattice (the difference being that, with the three different definitions, the action of transfer matrix $T$ corresponds to the shifts through cubic lattice periods along three different axes).
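The two consequences written above—the invariance $\Psi_{AB}=\Psi_{B'A'}$ and the ratio relation (\[V-3-3\])—can be spot-checked numerically. The Python sketch below takes the linear relations (\[V-2-3\]) and (\[V-3-1\]) at face value, with random coefficients subject to $a=-d$, $ad-bc=-1$, and random amplitudes; it is an illustration, not part of the argument.

```python
# Numerical spot-check of the two statements above, using primed amplitudes
# obtained from the unprimed ones by a 2x2 matrix with a = -d and ad - bc = -1:
#   (i)  phi_A psi_B - phi_B psi_A = phi_B' psi_A' - phi_A' psi_B'   [Psi_AB = Psi_B'A']
#   (ii) phi_B (phi_A - a phi_A') = phi_B' (phi_A' - a phi_A),
#        which is (V-3-3) after dividing both sides by phi_A' phi_B'.
import random

random.seed(0)
for _ in range(5):
    a = random.uniform(-2.0, 2.0)
    b = random.uniform(0.1, 2.0)
    d = -a
    c = (a * d + 1.0) / b                       # enforces ad - bc = -1
    phi_A, phi_B = random.uniform(-1, 1), random.uniform(-1, 1)
    psi_A, psi_B = random.uniform(-1, 1), random.uniform(-1, 1)

    phi_Ap, phi_Bp = a * phi_A + b * phi_B, c * phi_A + d * phi_B
    psi_Ap, psi_Bp = a * psi_A + b * psi_B, c * psi_A + d * psi_B

    lhs = phi_A * psi_B - phi_B * psi_A
    rhs = phi_Bp * psi_Ap - phi_Ap * psi_Bp
    assert abs(lhs - rhs) < 1e-10               # (i)

    assert abs(phi_B * (phi_A - a * phi_Ap)
               - phi_Bp * (phi_Ap - a * phi_A)) < 1e-10   # (ii), cf. (V-3-3)

print("both identities hold for random a, b and random amplitudes")
```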
Our goal is to express the eigenvalues of translation operators acting within the kagome lattice for a given one-particle state through, say, $\varphi_{A'}/\varphi_A$. If we speak about translation through one lattice period [*to the right*]{} in the sense of Figures \[figV-3-1\] and \[figV-3-2\], then this eigenvalue is $\varphi_D/\varphi_A$. [Figure \[figV-3-2\]: a patch of the kagome lattice with the vertices $A$, $B$, $C$, $D$, $E$; the translation by one period to the right maps $A$ to $D$, and the translation by one period upward maps $E$ to $C$.] It is clear that $${\varphi_D\over \varphi_B}={\varphi_{A'}\over \varphi_{B'}}$$ —the ratios of values $\varphi_{\ldots}$ in the triangle $DBE$ are the same as in $A'B'C'$. Thus, {\varphi_D\over \varphi_A}={\varphi_{A'}\over \varphi_{B'}}\, {\varphi_B\over \varphi_A}= {\varphi_{A'}\over \varphi_A}\, {-a(\varphi_A/\varphi_{A'})+1 \over (\varphi_A/\varphi_{A'})-a}, \[V-3-6\] (we have used (\[V-3-3\])). A similar relation can be written out for the translation through one lattice period in the [*upward*]{} direction in the sense of Figures \[figV-3-1\] and \[figV-3-2\], namely {\varphi_C\over \varphi_E}= {\varphi_{B'}\over \varphi_B}\, {-\tilde a(\varphi_B/\varphi_{B'})+1 \over (\varphi_B/\varphi_{B'})-\tilde a}, \[V-3-7\] where one has to substitute the expression (\[V-3-3\]) for $\varphi_B/\varphi_{B'}$. It is clear that the “dispersion relations” of type (\[V-3-6\]–\[V-3-7\]) survive also for a string “created by a particle”, if we substitute the eigenvalue of transfer matrix $T$ instead of $\varphi_{A'}/ \varphi_A$, and the eigenvalues of translation operators instead of $\varphi_D/ \varphi_A$ and $\varphi_C/ \varphi_E$. As for the multi-string states, all of the eigenvalues are obtained for them as products of the corresponding eigenvalues for each string. Discussion {#sec-V-discussion} ========== We have shown in this paper that the string—particle “marriage” from paper [@IV] makes possible a simple and clear construction of at least some multi-string states. Recall that, from all the corresponding multi-particle states, we could only explicitly construct some two-particle states [@I], with an additional restriction that could be formulated as “the total momentum of two particles is zero”. As for the present paper, the momenta of “particles” generating the multi-string states of Sections \[secV-1\] and \[secV-2\] can change independently. These states have been constructed for the infinite kagome lattice. We have to recognize that constructing such states on a finite lattice remains an open problem. It is also unclear how to combine the results of Sections \[secV-1\] and \[secV-2\], i.e. construct such states with string “creation” and “annihilation” where the total number of “creating” and “annihilating” particles would be more than two. Note that in Section \[secV-1\] we use the symmetrized product of one-particle amplitudes, while in Section \[secV-2\]—the antisymmetrized one. Concerning the dispersion relations of Section \[V-sec-disp\], let us remark that perhaps there are too many of them. This is probably caused by the fact that, for now, we have managed to construct not all of the one-particle and/or one-string states. [99]{} I.G. Korepanov, [*Some eigenstates for a model associated with solutions of tetrahedron equation*]{}, solv-int/9701016, 7p. I.G. Korepanov, [*Some eigenstates for a model associated with solutions of tetrahedron equation. II. A bit of algebraization*]{}, solv-int/9702004, 8p. I.G. Korepanov, [*Some eigenstates for a model associated with solutions of tetrahedron equation. III.
Tetrahedral Zamolodchikov algebras and perturbed strings*]{}, solv-int/9703010, 7p. I.G. Korepanov, [*Some eigenstates for a model associated with solutions of tetrahedron equation. IV. String—particle marriage*]{}, solv-int/9704013, 6p.
{ "pile_set_name": "ArXiv" }
--- abstract: 'In this paper we explore extensions of the Minimal Supersymmetric Standard Model involving two $SU(2)_L$ triplet chiral superfields that share a superpotential Dirac mass yet only one of which couples to the Higgs fields. This choice is motivated by recent work using two singlet superfields with the same superpotential requirements. We find that, as in the singlet case, the Higgs mass in the triplet extension can easily be raised to $125\,{~\text{GeV}}$ without introducing large fine-tuning. For triplets that carry hypercharge, the regions of least fine tuning are characterized by small contributions to the $\mathcal T$ parameter, and light stop squarks, $m_{\tilde t_1} \sim 300-450\,{~\text{GeV}}$; the latter is a result of the $\tan\beta$ dependence of the triplet contribution to the Higgs mass. Despite such light stop masses, these models are viable provided the stop-electroweakino spectrum is sufficiently compressed.' author: - 'C. Alvarado[^1]' - 'A. Delgado[^2]' - 'A. Martin[^3]' - 'and B. Ostdiek[^4]' bibliography: - 'MyBib.bib' title: Dirac Triplet Extension of the MSSM --- Introduction {#sec:intro} ============ The Minimal Supersymmetric Standard Model (MSSM) sets $m_{Z}$ as the upper bound of the tree-level mass of the lightest [*CP*]{} even scalar in the spectrum. Since this particle is commonly identified with the Standard Model Higgs boson, either large one-loop corrections due to heavy third family squarks or a high degree of stop mixing are necessary to push $m_{h}$ up to the observed value of $\sim 125 {~\text{GeV}}$ [@Aad:2012tfa; @Chatrchyan:2012ufa]. Either of these two requirements on the stops introduces sub-percent fine tuning [@Hall:2011aa]. This occurs because both effects radiatively induce large corrections to the soft mass of the Higgs field $m_{H_u}^2$, which must be canceled off in order to stabilize the electroweak scale. In this sense, the observation of the Higgs with a 125 GeV mass makes the MSSM alarmingly fine-tuned, independent of the fact that we have not yet discovered any supersymmetric particles. A variety of techniques have been proposed to avoid such a heavy stop spectrum. The simplest possibilities are to extend the MSSM gauge group or field content, respectively modifying the $D$- and $F$-terms of the Higgs potential [@Cvetic:1997ky; @Ellwanger:2009dp; @Delgado:2010cw]. While the former necessarily alters the quartic terms in a manner dictated by the gauge group, the latter relies on raising the quartic coupling of the Higgses via the inclusion of extra superpotential couplings. A class of well-known models based on this effect is the Next-to-Minimal Supersymmetric Standard Model (NMSSM), which adds a gauge singlet field $S$ to the MSSM. Although capable of rendering the correct Higgs mass, the NMSSM does so by decoupling the scalar part of the singlet superfield. However, the soft mass of the singlet feeds back into $m^2_{H_u}, m^2_{H_d}$ at one loop via the renormalization group equations. Large singlet masses therefore can lead to large corrections to $m_{H_{u,d}}^2$, so the NMSSM solution to the Higgs mass comes at the expense of substantial fine tuning. The authors of [@Lu:2013cta] extended the NMSSM with a second singlet $\bar{S}$ which does not couple to the Higgs doublets yet has a superpotential mass term with $S$: $$W=W_{\text{Yukawa}}+(\mu+\lambda S)H_{u}H_{d}+MS\bar{S},$$ where $W_{\text{Yukawa}}$ stands for the usual MSSM Yukawa terms; due to the Dirac mass term between the singlets, the model was dubbed the DiracNMSSM.
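Before turning to the modified Higgs mass, a quick numerical look at the statement that opens this section is useful. The Python sketch below evaluates the tree-level bound $m_Z|\cos2\beta|$ for a few values of $\tan\beta$; the inputs are standard numbers, not values taken from this paper.

```python
# The MSSM tree-level mass of the lightest CP-even scalar is bounded by
# m_Z |cos(2 beta)|, so it never reaches 125 GeV on its own and large
# stop-sector corrections are required.  (Decoupling-limit expression;
# the numerical inputs are standard values, assumed here for illustration.)
import math

m_Z = 91.19   # GeV
m_h = 125.0   # GeV

print(" tan(beta)   m_h(tree) [GeV]   missing m_h^2 - m_tree^2 [GeV^2]")
for tan_beta in (1, 2, 3, 5, 10, 50):
    beta = math.atan(tan_beta)
    m_tree = m_Z * abs(math.cos(2 * beta))
    print(f"{tan_beta:10.0f}   {m_tree:15.1f}   {m_h**2 - m_tree**2:25.0f}")

# Even in the large-tan(beta) limit the tree-level value only saturates at m_Z,
# leaving roughly 125^2 - 91^2 ~ 7e3 GeV^2 to be generated radiatively.
```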
The tree level Higgs mass squared in this setup is modified, receiving a positive contribution that depends on the $\bar S$ soft mass, and a negative contribution that depends on the $S$ soft mass. Including the one-loop correction from stop loops (see for example Ref [@Carena:2011aa]), the resulting Higgs mass is $$\begin{aligned} m^2_h = &m_Z^2 \cos^2(2\beta) + (\text{stop loops}) \nonumber \\ & + \lambda^2 v^2 \sin^2(2\beta)\left(\frac{m_{\bar{S}}^2}{M^2+m_{\bar{S}}^2}\right) - \frac{\lambda^2v^2}{M^2+m_S^2}\left|A_{\lambda} \sin(2\beta) -2\mu^* \right|^2. \label{eqn:singletModel}\end{aligned}$$ To efficiently raise the Higgs mass, one takes advantage of the positive term while trying to keep the negative term as small as possible. The positive term is increased by taking the soft mass of the non coupled singlet – $m^2_{\bar S}$ – to be much larger than the supersymmetric mass term, $M$. If $M$ is also larger than $\lambda^2v^2$ then the negative term is minimized. While large singlet masses in the NMSSM come hand-in-hand with increased tuning, this does not happen here. Specifically, the authors of [@Lu:2013cta] showed that the mass of $\bar{S}$ can be raised almost indefinitely without introducing fine tuning – a clear violation of the conventional wisdom that increases to the Higgs mass require new light states. As explained in [@Lu:2013cta], the keys to this behavior are the Dirac mass term between $S$ and $\bar S$ and the absence of couplings of $\bar{S}$ with $H_{u}, H_{d}$. A detailed study of the DiracNMSSM was performed in Ref. [@Kaminska:2014wia], taking into account all corrections at one loop order and dominant two-loop corrections. Beyond fine tuning, constraints from SUSY searches and dark matter were also applied. One disadvantage of the original DiracNMSSM is that the singlet contribution to $m^2_h$ has the same $\tan\beta$ dependence as in the NMSSM. Specifically, the singlet piece is largest at low $\tan\beta$, exactly the region where the MSSM tree level Higgs mass vanishes. This can be overcome, but requires sizable coupling of the singlet to Higgses. In this paper, we examine the effects of replacing the singlets in the DiracNMSSM with triplets under $SU(2)_L$, maintaining the key features of the Dirac mass and with only one triplet coupled to the Higgses. Triplet extensions of the MSSM have been studied extensively [@Espinosa:1991wt; @FelixBeltran:2002tb; @DiChiara:2008rg; @Agashe:2011ia; @Basak:2012bd; @Delgado:2012sm; @Delgado:2013zfa; @Bandyopadhyay:2013lca; @Kang:2013wm; @deBlas:2013epa]; they offer richer phenomenology than singlet extensions, but they are also more constrained. Specifically, the neutral components of the triplets generically acquire vacuum expectation values (vev), causing tension with electroweak precision observables [@Khandker:2012zu; @Englert:2013zpa][^5]. Nonetheless, triplets offer appealing features when compared with singlets, especially in the context of the DiracNMSSM: (i) more variety due to two possible hypercharge assignments, $Y=0$ or $Y=\pm1$, and (ii) triplets with hypercharge must be included in pairs for anomaly cancellation and can only have Dirac-type superpotential mass. The rest of the paper is organized as follows. In Sec. \[sec:model\] we introduce the key superpotential interactions and give the correction to the Higgs mass for both the $Y=0$ and $Y=\pm1$ triplet models. Next, in Sec. \[sec:ft\] we analytically study the various sources of fine tuning, pinpointing the dependence of each term on the triplet parameters. 
This is followed up by a discussion of the precision electroweak $\mathcal{T}$ parameter. From this discussion, it will be clear that the $Y = \pm 1$ model works better at raising the mass of the Higgs, avoiding fine tuning, and staying within the electroweak precision constraints. In Sec. \[sec:Num\] we perform a numerical study, focusing on the $Y = \pm 1$ scenario. As one of the primary differences between singlets and triplets is the existence of additional charged and potentially light fermions, in Sec. \[sec:TripPheno\] we review the phenomenology of ‘exotic’ states, examining both direct production and indirect effects such as altered stop decays. Finally, conclusions are drawn in Sec. \[sec:conclusions\]. The Models {#sec:model} ========== There are two signature features in the DiracNMSSM [@Lu:2013cta]: a Dirac mass term between two strictly different superfields, and the fact that only one of the two singlets couples to the Higgs doublets. The extension explored here, where a pair of triplets take the role of the singlets, should maintain both properties. With this in mind, we define $\Sigma_1$ to be a $SU(2)_{L}$ triplet chiral superfield which couples to the Higgses in the superpotential, and define a second triplet $\Sigma_{2}$, which does not. This is not the most general superpotential allowed by the symmetries of the model, but we follow the setup of the original DiracNMSSM; in any case, the choice is radiatively stable, since superpotential couplings cannot be generated via radiative corrections[^6]. With the inclusion of the triplets $\Sigma_{1,2}$ the superpotential is enlarged to $$W=\mu H_{u}\cdot H_{d}+\mu_{\Sigma}\text{Tr}(\Sigma_{1}\cdot \Sigma_{2})+W_{H-\Sigma}+W_{\text{Yukawa}} \label{eqn_SuperPotential}$$ where the isospin product employs the convention $a\cdot b\equiv a_{i}\varepsilon_{ij}b_{j}$ with $\varepsilon_{21}=-\varepsilon_{12}=-1$. The parameter $\mu_{\Sigma}$ is a supersymmetric Dirac mass for the triplets, $W_{H-\Sigma}$ couples $H_{u,d}$ with $\Sigma_{1}$ in a way specified by the hypercharge assignments of the triplets, and $W_{\text{Yukawa}}$ represents the standard MSSM Yukawa couplings. We will analyze the cases $Y=0$ and $Y=\pm 1$ for the hypercharge of the triplets[^7]. When the triplets have hypercharge $Y=0$, they can couple to a combination of $H_u H_d$. This case should be seen as a simple extension of the singlet DiracNMSSM scenario, as the couplings take the same form up to factors of $\sqrt{2}$ coming from the normalization of the triplets. On the other hand, triplets with a hypercharge $Y=\pm1$ can only couple to $H_d^2$ or $H_u^2$. We examine the case where $H_u$ couples to the triplet but $H_d$ does not, since the latter will only generate an increased Higgs mass for the unphysical region of $\tan\beta<1$. Both triplet scenarios contain charged scalars and fermions that are absent in the singlet DiracNMSSM. While potentially interesting at colliders, these extra states have minimal impact on the Higgs mass or fine tuning, so we will largely ignore them here. Comments on the phenomenology of the extra states can be found in Sec. \[sec:TripPheno\]. $Y=0$ case ---------- Triplets with hypercharge $Y=0$ couple to both $H_u$ and $H_d$ and are a simple extension of the singlet case studied in [@Lu:2013cta]. The superpotential is given by Eq.
(\[eqn\_SuperPotential\]) with $$W_{H-\Sigma}=\lambda H_{d}\cdot \Sigma_{1}H_{u}.$$ Forming the scalar potential, the superpotential terms are accompanied by the soft terms $$\Delta V_{\text{soft}} =m^2_{T} \text{Tr}{|\Sigma_1|^2} + m^2_{\chi} \text{Tr}{|\Sigma_2|^2} + \left( \lambda A_{\lambda} H_d \cdot \Sigma_1 H_u + \mu_{\Sigma} B_{\Sigma} \text{Tr}{(\Sigma_1 \cdot \Sigma_2)} + \text{h.c.} \right), \label{eq:vsoft}$$ and the usual $SU(2)_{L}$ and $U(1)_{Y}$ $D$-terms. Here, $m_{T,\chi}$ are the triplet soft masses, $A_{\lambda}$ and $B_{\Sigma}$ are the trilinear and bilinear soft couplings respectively. While it is possible to give $Y=0$ triplets a non-Dirac supersymmetric mass, we ignore this possibility here as we are particularly interested in the effects of Dirac masses. Focusing on the *CP* even scalar sector of the theory, the sole difference between the triplet and singlet MSSM extensions are factors of $\sqrt{2}$ coming from the normalization of the triplet. The full *CP* even scalar potential for this scenario is shown in Appendix \[sec:appY0\]. Isospin triplets can potentially disrupt electroweak precision tests unless their vevs remain small. A simple way to mitigate the size of the triplet vevs is to take the scalar triplets to be heavier than the Higgses. In this limit, which we will assume throughout, the scalar triplets can be integrated out and are effectively replaced by combinations of lighter fields: $$\begin{aligned} \Sigma_{1,\text{neut}} \equiv T^0 & \rightarrow &\frac{\lambda}{\sqrt{2}} \frac{\mu (|H_u^0|^2 + |H_d^0|^2) - A_{\lambda} H_u^{0*} H_d^{0*}}{\mu_{\Sigma}^2 + m_T^2}+\mathcal{O}\left( \dfrac{1}{D_{T}^{2}},\dfrac{1}{D_{T}D_{\chi}},\dfrac{1}{D_{\chi}^{2}} \right) \label{eqn:defTy0} \\ \Sigma_{2,\text{neut}} \equiv \chi^0 & \rightarrow& \frac{\lambda \mu_{\Sigma}}{\sqrt{2}} \frac{H_u^0 H_d^0}{\mu_{\Sigma}^2 + m_{\chi}^2}++\mathcal{O}\left( \dfrac{1}{D_{T}^{2}},\dfrac{1}{D_{T}D_{\chi}},\dfrac{1}{D_{\chi}^{2}} \right). \label{eqn:defCy0}\end{aligned}$$ where $D_{T,\chi}\equiv \mu_{\Sigma}^{2}+m_{T,\chi}^{2}$. The resulting effective potential for the Higgses can be found in Eq. . From the effective potential, we can read off the modified tree-level *CP*-even scalar mass matrices. Taking the decoupling limit for simplicity and adding the one-loop stop contribution to lightest tree-level mass eigenvalue, we find the Higgs mass: $$\begin{aligned} m_{h}^{2} &=& m_Z^2 \cos^2(2\beta) + (\text{stop loops})+\frac{v^2 \lambda^2}{2}\sin^2(2\beta) \frac{m^2_{\chi}}{\mu^2_{\Sigma}+m^2_{\chi}} \nonumber \\ &&- \frac{v^2\lambda^2}{2} \frac{\left|2 \mu^* -A_{\lambda} \sin(2\beta)\right|^2}{\mu^2_{\Sigma}+m^2_{T}}. \label{eqn_HiggsY0}\end{aligned}$$ The expression above, with a positive (negative) piece that depends on the uncoupled (coupled) triplet soft mass is clearly reminiscent of the singlet DiracNMSSM, Eq. . As in the singlet case, the interplay between the two terms plays an important role in the fine tuning of the model. $Y=\pm1$ case ------------- Given that the superpotential should conserve hypercharge and be holomorphic, a supersymmetric mass term for a triplet with hypercharge $Y=1$ can only be included if there is a second triplet with $Y=-1$. Anomaly cancellation also rests on introducing hypercharge triplets in vector-like pairs. As in the $Y= 0$ scenario above, we assume $\Sigma_1$ is the triplet with superpotential couplings to the Higgses. 
Depending on its hypercharge $\Sigma_1$ will only be able to couple either to $H_u^2$ or $H_d^2$, which is distinct from the $Y=0$ setup. To get the largest impact from the triplet-Higgs coupling, we want it to couple as much as possible to the physical Higgs boson. At large $\tan\beta$ and large $m_A$, the Higgs boson resides primarily in $H_u$, therefore we assign $Y = -1$ to $\Sigma_1$, permitting the interaction $$W_{H-\Sigma}=\lambda H_{u}\cdot \Sigma_{1}H_{u}.$$ The second triplet $\Sigma_2$ (now with hypercharge Y = 1) has no superpotential couplings. The soft terms are as in Eq. (\[eq:vsoft\]) with the same modification to the $A_{\lambda}$ term as in the superpotential, and the complete *CP* even scalar potential is given in Appendix \[sec:appY1\]. When the triplet scalars are integrated out in this scenario, the neutral components are replaced by: $$\begin{aligned} \Sigma_{1,\text{neut}} \equiv T^0 & \rightarrow & \frac{\lambda \left(A_{\lambda} H_u^{0*}H_u^{0*}-2 \mu H_u^{0*} H_d^0 \right)}{\mu_{\Sigma}^2 + m_T^2}+\mathcal{O}\left( \frac{1}{D_\chi^2},\frac{1}{D_\chi D_T},\frac{1}{D_T^2} \right) \label{eqn:defTY1} \\ \Sigma_{2,\text{neut}} \equiv \chi^0 & \rightarrow& \frac{-\lambda \mu_{\Sigma} H_u^0 H_u^0}{\mu_{\Sigma}^2 +m_{\chi}^2}+\mathcal{O}\left( \frac{1}{D_\chi^2},\frac{1}{D_\chi D_T},\frac{1}{D_T^2} \right). \label{eqn:defCY1}\end{aligned}$$ Working with the effective Higgs potential and proceeding as in the $Y = 0$ case, we find the decoupling-limit Higgs mass to be $$\begin{aligned} m_{h}^{2} &=& m_Z^2 \cos^2(2\beta) + (\text{stop loops}) + 4 v^2 \lambda^2 \sin^4(\beta)\left( \dfrac{ m_{\chi}^2}{\mu_{\Sigma}^2+m^2_{\chi}} \right) \nonumber\\ &&-\dfrac{v^2 \lambda^2 \sin^2{(2\beta)}}{\mu^2_{\Sigma} +m^2_{T}}\left|2\mu^* - A_{\lambda} \tan{(\beta)}\right|^2 . \label{eqn_HiggsY1}\end{aligned}$$ Comparing $m^2_h$ in the two models, Eqs. and , we see similar features. In both models there is a positive contribution to the Higgs mass proportional to $m^2_{\chi}/(\mu^2_{\Sigma}+m^2_{\chi})$. This is maximized when $m_\chi^2 \gg \mu_{\Sigma}^2$, and goes to zero when $m_{\chi}^2 \ll \mu_{\Sigma}^2$, so the Higgs mass is increased the most by decoupling the scalar part of $\Sigma_2$. In Section \[sec:ft\] we will show that the decoupling of $m_{\chi}^{2}$ barely affects the fine tuning. The amplitude and $\tan \beta$ dependence of the positive term is different for the $Y=0$ triplets and the $Y=\pm1$ triplets, $$C_0(\beta)=\frac{v^2 \lambda^2}{2} \sin^2(2\beta)$$ for $Y=0$ and $$C_1(\beta)= 4 v^2 \lambda^2 \sin^4 \beta.$$ for $Y=1$. $C_0$ is maximized when $2\beta=\pi/2$, or $\tan\beta=1$. However, $C_1$ is maximal as $\beta \rightarrow \pi/2$, or $\tan\beta \rightarrow \infty$. As the $\tan\beta$ dependence of $C_{1}$ aligns with that of the MSSM, the size of the triplet contributions to the Higgs mass do not need to be as large, leading to smaller values of $\lambda$ in the $Y=\pm1$ model. Equations and also have a term which acts to lower $m^2_h$. The negative terms depend on the mass of $\Sigma_1$, the triplet which couples to the doublets. A large soft mass for $\Sigma_1$ decreases the absolute value of the negative term, raising the Higgs mass. However, $m^2_T$ also enters into the radiative corrections of the Higgs soft masses, so the $m_T$ value that minimizes the fine tuning is less clear cut and is best tackled numerically. 
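The $\tan\beta$ behaviour described above is easy to see numerically. The Python sketch below evaluates the triplet pieces of the decoupling-limit expressions (\[eqn\_HiggsY0\]) and (\[eqn\_HiggsY1\]); the parameter values, and the choice $v\simeq246$ GeV (the normalization may differ by $\sqrt 2$, which only rescales $\lambda$), are illustrative assumptions rather than benchmark points of the numerical scan, and the stop-loop contribution is omitted.

```python
# Triplet contributions to m_h^2 in the decoupling limit, for the Y=0 and Y=+-1
# models, as functions of tan(beta).  Parameter values are illustrative only.
import math

v, m_Z = 246.0, 91.19    # GeV (v-normalization assumed; may differ by sqrt(2))

def dm2_Y0(tb, lam, m_chi, mu_S, m_T, mu, A_lam):
    """Triplet piece of m_h^2 for Y=0, cf. Eq. (eqn_HiggsY0), in GeV^2."""
    s2b = math.sin(2 * math.atan(tb))
    pos = 0.5 * v**2 * lam**2 * s2b**2 * m_chi**2 / (mu_S**2 + m_chi**2)
    neg = 0.5 * v**2 * lam**2 * (2 * mu - A_lam * s2b)**2 / (mu_S**2 + m_T**2)
    return pos - neg

def dm2_Y1(tb, lam, m_chi, mu_S, m_T, mu, A_lam):
    """Triplet piece of m_h^2 for Y=+-1, cf. Eq. (eqn_HiggsY1), in GeV^2."""
    beta = math.atan(tb)
    s2b, sb = math.sin(2 * beta), math.sin(beta)
    pos = 4 * v**2 * lam**2 * sb**4 * m_chi**2 / (mu_S**2 + m_chi**2)
    neg = v**2 * lam**2 * s2b**2 * (2 * mu - A_lam * tb)**2 / (mu_S**2 + m_T**2)
    return pos - neg

# Illustrative inputs (GeV): heavy uncoupled triplet, moderate coupled triplet.
pars = dict(m_chi=5000.0, mu_S=500.0, m_T=1000.0, mu=200.0, A_lam=0.0)

print("tan(beta)   tree m_Z^2 cos^2(2b)   Y=0 piece (lam=0.8)   Y=+-1 piece (lam=0.35)")
for tb in (1.5, 2, 3, 5, 10, 30):
    c2b = math.cos(2 * math.atan(tb))
    print(f"{tb:8.1f}   {m_Z**2 * c2b**2:20.0f}   "
          f"{dm2_Y0(tb, 0.8, **pars):20.0f}   {dm2_Y1(tb, 0.35, **pars):22.0f}")
```

The table makes the point of the text explicit: the $Y=0$ piece peaks near $\tan\beta\sim1$ where the tree-level term vanishes, while the $Y=\pm1$ piece grows with $\tan\beta$, in step with the MSSM tree-level contribution, so a smaller $\lambda$ suffices there.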
Both of the negative terms also contain a factor which depends on the difference between $\mu$ and $A_{\lambda}$, $\left| 2 \mu^* - A_{\lambda} \sin(2\beta) \right| ^2$ for the $Y=0$ case and $\left| 2 \mu^* - A_{\lambda} \tan \beta \right|^2$ for $Y=\pm1$ respectively. The same expressions appear in the effective triplet vevs, Eq.(\[eqn:defTy0\], \[eqn:defCy0\]) or Eq.(\[eqn:defTY1\],\[eqn:defCY1\]) after the Higgs doublets acquire vacuum expectation values. The $\mathcal{T}$ parameter is tightly constrained by precision electroweak measurements, however, the fact that the same expressions appear in the Higgs mass and the triplet effective vevs implies that regions with the smallest negative contribution to the Higgs mass are also the regions with the smallest $\mathcal{T}$ parameter. Having shown how the Higgs mass is altered in the two Dirac Triplet scenarios and identified key parameters, we now move on to study the fine tuning. Fine tuning calculations and $\mathcal T$ parameter {#sec:ft} =================================================== Equations and show that decoupling the soft mass of $\Sigma_2$ leads to a maximal increase in the Higgs mass. Ordinarily, the introduction of large scalar masses to correct the Higgs mass increases the fine tuning. In the next subsection we show that this is not the case for this model; the fact that $\Sigma_2$ does not couple to the doublets allows it to be decoupled with small effects on the fine tuning, as in the original DiracNMSSM model. Beyond the fine tuning of the Higgs mass, triplet models are also constrained by the $\mathcal{T}$ parameter, which we examine more closely in Section \[subsec:T\]. Fine tuning of $m^2_{H_u}$ {#subsec:MH2} -------------------------- We adopt the definition of fine tuning of [@Lu:2013cta], $$\Delta = \frac{2}{m^2_h} \text{max}\left(m^2_{H_u}, m^2_{H_d}, \frac{d m^2_{H_u}}{d\log{(u)}} L, \frac{d m^2_{H_d}}{d\log{(u)}} L, \delta m_{H_{u}^{0}}^{2}, \mu B_{\mu,\text{eff}} \right) \label{eqn:fine-tuning}$$ where $L\equiv \log(\Lambda/m_{\widetilde{t}})$ accounts for the running to the SUSY breaking scale, $\log(u)$ is the running scale and $\delta m_{H_{u}^{0}}^{2}$ is the one-loop finite threshold correction from the triplets; following [@Lu:2013cta], we set $L=6$. Although we use the same definition for $\Delta$ that was used for the singlet model, we expect the triplet case to be slightly different due to larger triplet-Higgs couplings (coming from the normalization of the triplets) and the different hypercharge possibilities. Putting all of the components together and taking the maximum contribution is best done numerically. However, before launching into numerics, in this section we examine each of the different components of $\Delta$ to get a better feeling for their relative importance and to see how they depend on the triplet parameters. The first entries in $\Delta$ are $m^2_{H_u}$ and $m^2_{H_d}$, the tree-level soft masses for the Higgs doublets. These are not free parameters, rather they are set by the requirement that electroweak symmetry is broken at the minimum of the scalar potential (see Eq.  and for $Y=0$ and and for $Y=\pm1$). In solving the minimization conditions, $m^2_{H_u}$ and $m^2_{H_d}$ inherit a complicated dependence on the triplet parameters that is difficult to generalize. As these entries are typically subdominant in $\Delta$, we do not attempt to tease out the triplet parameter dependence analytically. 
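For concreteness, a minimal skeleton of Eq. (\[eqn:fine-tuning\]) is sketched below; all numerical inputs are placeholders (in the actual analysis the entries come from the minimization conditions, the RGEs, and the threshold corrections), and absolute values are taken so the measure is insensitive to signs.

```python
# Skeleton of the fine-tuning measure of Eq. (eqn:fine-tuning): Delta is
# 2/m_h^2 times the largest of the listed contributions, with
# L = log(Lambda/m_stop) set to 6 as in the text.  Inputs are placeholders.

def delta_measure(m_h2, m_Hu2, m_Hd2, dm_Hu2_dlogu, dm_Hd2_dlogu,
                  threshold_Hu2, mu_Bmu_eff, L=6.0):
    """Fine-tuning measure Delta (dimensionless); all inputs in GeV^2."""
    contributions = {
        "m_Hu^2": abs(m_Hu2),
        "m_Hd^2": abs(m_Hd2),
        "RGE m_Hu^2": abs(dm_Hu2_dlogu) * L,
        "RGE m_Hd^2": abs(dm_Hd2_dlogu) * L,
        "threshold": abs(threshold_Hu2),
        "mu*B_mu": abs(mu_Bmu_eff),
    }
    worst = max(contributions, key=contributions.get)
    return 2.0 * contributions[worst] / m_h2, worst

# Toy numbers (GeV^2), purely for illustration:
delta, worst = delta_measure(
    m_h2=125.0**2, m_Hu2=-110.0**2, m_Hd2=300.0**2,
    dm_Hu2_dlogu=-450.0**2 / 4.0, dm_Hd2_dlogu=-150.0**2 / 4.0,
    threshold_Hu2=80.0**2, mu_Bmu_eff=200.0**2)
print(f"Delta = {delta:.0f}, dominated by the {worst} term")
```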
The next components of $\Delta$ are $\frac{d m^2_{H_u}}{d\log{(u)}} L, \frac{d m^2_{H_d}}{d\log{(u)}} L$, the radiative corrections to the Higgs soft masses. While nominally one-loop effects, these radiative pieces have the potential to be important because they depend quadratically on the masses of heavy particles (stops, triplets, etc.) – objects that do not appear or are subdominant in the tree level Higgs potential. Additionally, the radiative effects are enhanced by $L$, the logarithm that encapsulates the running of soft masses down from the supersymmetry mediation scale. As a result, these radiative pieces are often the largest component of $\Delta$. To see how the triplet parameters enter, we need the renormalization group equations (RGE) governing the evolution of $m^2_{H_u}, m^2_{H_d}$: $$(Y=0) ~~~ \left\{ \begin{matrix} 16 \pi^2 \frac{d m^2_{H_u}}{dt} \supset 6 h_t^2 \left(m^2_{Q_3} + m^2_{U_3} + m^2_{H_u} \right) + 6 \lambda^2 \left(m^2_{H_u} + m^2_{H_d} + m^2_{T} + A_{\lambda}^2 \right)\\ 16 \pi^2 \frac{d m^2_{H_d}}{dt} \supset 6 h_b^2 \left(m^2_{Q_3} + m^2_{D_3} + m^2_{H_d} \right) + 6 \lambda^2 \left(m^2_{H_u} + m^2_{H_d} + m^2_{T} + A_{\lambda}^2 \right) \end{matrix} \right. \label{eqn:HuRGEY0}$$ and $$(Y=\pm1) ~~~ \left\{ \begin{matrix} 16\pi^2 \frac{dm^2_{H_u}}{dt} \supset 6 h_t^2 \left( m^2_{Q_3} + m^2_{U_3} + m^2_{H_u} \right) + 6 \lambda^2 \left( 2 m^2_{H_u} + m^2_{T} + A_{\lambda}^2 \right) \\ 16\pi^2 \frac{d m^2_{H_d}}{dt}\supset 6 h_b^2 \left(m^2_{Q_3} + m^2_{D_3} + m^2_{H_d} \right) \end{matrix} \right. . \label{eqn:HdRGEY1}$$ The large top Yukawa $h_t$ and the dependence on the stop masses needed in the MSSM to raise the Higgs mass are what drive the fine tuning. In the triplet scenario, the extra contributions to the (tree level) Higgs mass from the triplets permit lighter stops and allow for a less tuned model. The key difference between the DiracNMSSM and the traditional NMSSM is that the mass of the uncoupled state does not feed into the Higgs soft masses at loop level. This same behavior is reproduced in Eqs. (\[eqn:HuRGEY0\]) and (\[eqn:HdRGEY1\]), neither of which depends on $m_{\chi}$, the mass of $\Sigma_2$. As a result, large $m_{\chi}$ – and thereby large positive contributions to the Higgs mass – are permitted without giving rise to fine tuning. The soft mass of $\Sigma_1$ and the trilinear soft term $A_{\lambda}$ enter into the running of $m^2_{H_u}, m^2_{H_d}$, so in principle large values for them would increase $\Delta$. However, both $m_T^2$ and $A_{\lambda}^2$ enter into the beta functions multiplied by $\lambda^{2}$, hence a smaller $\lambda$ would permit these two quantities to take moderate values without dominating the fine tuning. Following the radiative piece in $\Delta$ is the threshold correction $\delta m_{H_{u}^{0}}^{2}$, the finite contribution to $m^2_{H_u}$ that emerges when heavy fields are integrated out. The threshold terms are important as they are the only place where the soft mass of the uncoupled triplet $m_{\chi}^{2}$ (or the non-coupling singlet, in the model of Ref. [@Lu:2013cta]) enters into the fine tuning. The threshold corrections, presented in full in Appendix \[sec:AppFTC\], depend on the soft masses of both triplets. However, since $m_T$ also appears in the (log-enhanced) RGE part of the tuning discussed above, keeping $m_T$ small minimizes the tuning.
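A rough numerical comparison of the two pieces of the $m^2_{H_u}$ beta function in the $Y=\pm1$ case, Eq. (\[eqn:HdRGEY1\]), illustrates this last point; the value $h_t\simeq0.94$ and the sample spectra below are assumptions made purely for illustration.

```python
# Compare the stop and triplet pieces of 16 pi^2 d(m_Hu^2)/dt for Y=+-1,
# keeping only the terms shown in Eq. (eqn:HdRGEY1).  Illustrative inputs;
# h_t ~ 0.94 at the TeV scale is an assumption, not a number from the paper.
import math

def beta_mHu2_Y1(h_t, m_Q3, m_U3, m_Hu, lam, m_T, A_lam):
    """Return (stop piece, triplet piece) of 16 pi^2 d(m_Hu^2)/dt, in GeV^2."""
    stop_piece    = 6 * h_t**2  * (m_Q3**2 + m_U3**2 + m_Hu**2)
    triplet_piece = 6 * lam**2 * (2 * m_Hu**2 + m_T**2 + A_lam**2)
    return stop_piece, triplet_piece

h_t, m_Hu, lam, A_lam, L = 0.94, 150.0, 0.35, 0.0, 6.0
for m_stop, m_T in [(400.0, 500.0), (400.0, 2000.0), (1000.0, 2000.0)]:
    stop, trip = beta_mHu2_Y1(h_t, m_stop, m_stop, m_Hu, lam, m_T, A_lam)
    norm = L / (16 * math.pi**2)
    print(f"m_stop={m_stop:6.0f}  m_T={m_T:6.0f}  "
          f"stop term*L ~ {stop*norm:9.0f}  triplet term*L ~ {trip*norm:9.0f}  GeV^2")
```

With a light coupled triplet the stop piece dominates, while pushing $m_T$ into the multi-TeV range lets the $\lambda^2 m_T^2$ term take over—the reason quoted above for keeping $m_T$ small.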
With $m_T$ kept small, the threshold correction is well approximated by the $\Sigma_2$ piece alone: $$\begin{aligned} (Y=0):~~~~ \delta m_{H_u^0}^2 & \simeq \frac{3}{2}\frac{\lambda^2 \mu_{\Sigma}^2}{16\pi^2} \log\frac{m^2_{\chi} + \mu_{\Sigma}^2}{\mu_{\Sigma}^2} \text{ and} \\ (Y=\pm1):~~~~ \delta m_{H_u^0}^2 & \simeq 6 \frac{ \lambda^2 \mu_{\Sigma}^2}{16 \pi^2} \log \frac{m^2_{\chi} + \mu_{\Sigma}^2}{\mu_{\Sigma}^2}. \label{eqn:thresholdCorrection} \end{aligned}$$ If $\mu_{\Sigma}^2 \gtrsim m_{\chi}^2$, there is little fine tuning from the threshold correction. We saw in Sec. \[sec:model\] that the most interesting parameter space – where the triplet contribution to the Higgs mass is large and positive – occurs when $m_{\chi}^2 \gg \mu^2_{\Sigma}$. For this hierarchy of parameters, the threshold contribution can be non-negligible, though only when $\mu_{\Sigma}$ is large (compared to $m_h$) as well. The final component of $\Delta$ is the dependence on $\mu$ and $B_{\mu}$. For the triplet scenario with hypercharge, this component of the tuning is identical to the MSSM. Triplets without hypercharge are slightly more complex, since the effective triplet vevs shift $\mu$ and $B_{\mu}$ from their MSSM values. The shifted values are given by $$\begin{aligned} \mu_{eff} &= \mu - \frac{\sqrt{2}}{2}\lambda \left\langle T^0 \right \rangle \text{ and} \label{eqn:muEff} \\ \mu B_{\mu,\text{eff}} &= \mu B_{\mu} - \frac{\lambda}{\sqrt{2}} \left(A_{\lambda} \left\langle T^0 \right \rangle + \mu_{\Sigma} \left \langle \chi^0 \right \rangle \right ). \label{eqn:BEff}\end{aligned}$$ Though not usually the dominant component in $\Delta$, these contributions to the fine tuning measure are inevitable as $\mu$ and $B_{\mu}$ enter directly into the tree-level mass matrix of the Higgs. After considering the individual components of the fine tuning measure, we are now ready for a full numerical study of the tuning over a range of triplet parameters. Before doing so, we first examine how the $\mathcal{T}$ parameter constrains the available parameter space. Constraints from the $\mathcal{T}$ parameter {#subsec:T} -------------------------------------------- Electroweak scalar triplets that acquire vacuum expectation values notoriously spoil the relation between $m_W$ and $m_Z$. This mass ratio is more commonly expressed as the $\mathcal T$ parameter $$\alpha \mathcal{T} = \frac{m_W^2}{m_Z^2 \cos^2 \theta_W}-1.$$ The authors of [@Baak:2012kk; @Baak:2014ora] used data from $Z$ pole measurements [@ALEPH:2005ab], the running quark masses [@Beringer:1900zz], the five-quark hadronic vacuum polarization contribution to $\alpha\left(M_Z^2\right)$, $\Delta\alpha^{(5)}_{\text{had}} \left(M_Z^2\right)$ [@Davier:2010nc], the mass and width of the $W$ [@Beringer:1900zz], the top quark mass [@ATLAS:2014wva], and Higgs mass measurements [@Aad:2014aba; @CMS:2014ega] to perform a global fit of electroweak data. A value of $\mathcal{T} = 0.09\pm 0.13$ gives the best fit of the data if all of the oblique parameters are allowed to float[^8]. Forcing the (tree-level) triplet contributions to the $\mathcal T$ parameter to lie within the 1-$\sigma$ uncertainty, we can derive a bound on the triplet model parameters. In an effective theory where we have integrated out the triplets, there are no triplet fields around to get vevs, but the $\mathcal T$ contributions are still present in the form of higher dimensional operators.
Specifically, after integrating out the triplets, the kinetic term for the $\Sigma_i$ becomes (schematically) $$\left| D_{\mu} \Sigma_i \right|^2 \xrightarrow[\text{integrated out}]{\Sigma} \frac{1}{\Lambda^2} \left| H D_{\mu} H \right|^2,$$ which, once the Higgses are set to their vevs, contributes differently to the $W$ and $Z$ masses. The operator is intentionally left vague, as the actual combinations of the $H_u$ and $H_d$ and the mass scale $\Lambda$ are different for each triplet. For the triplets with $Y=0$, this operator contributes to $\mathcal{T}$ by $$\mathcal{T}_{Y=0}=\dfrac{1}{\alpha} ~ \dfrac{4\bigl( \langle \chi^{0}\rangle^{2}+\langle T^{0}\rangle^{2} \bigr)}{v^{2}-4\bigl( \langle \chi^{0}\rangle^{2}+\langle T^{0}\rangle^{2} \bigr)} \label{eqn:TparamY0}$$ where $\langle T^0 \rangle$ and $\langle \chi^0 \rangle$ are the values of equation and after the doublets have developed vevs – what we dub ‘effective vevs’ for the triplets. The effective vevs are approximately given by $$\left\langle T^0 \right\rangle_{Y=0} \approx \dfrac{v^2 \lambda}{2\sqrt{2}} \dfrac{2 \mu^* - A_{\lambda}\sin(2\beta)}{\mu^2_{\Sigma}+m^2_{T}} \text{ and} ~~~~ \left\langle \chi^0 \right \rangle_{Y=0} \approx -\dfrac{v^2 \lambda}{2\sqrt{2}} \dfrac{\mu_{\Sigma} \sin(2\beta)}{\mu^2_{\Sigma} + m^2_{\chi}}, \label{eqn_vev0}$$ up to higher order terms in $1/(\mu_{\Sigma}^{2}+m_{T,\chi}^{2})$. For the case with hypercharge, the $\mathcal{T}$ parameter takes the form $$\mathcal{T}_{Y=\pm1} = -\frac{1}{\alpha}~ \frac{2 \left(\langle \chi^0 \rangle^2 + \langle T^0 \rangle^2 \right) }{v^2} \label{eqn_Y1}$$ with $\langle T^0 \rangle$ and $\langle \chi^0\rangle$ now coming from Eq. and once the doublets have acquired vevs, $$\left\langle T^0 \right \rangle_{Y=-1} \approx -\dfrac{v^2\lambda}{2} \dfrac{ \sin(2\beta) \left( 2 \mu^{*} -A_{\lambda} \tan{(\beta)} \right)}{ \mu^2_{\Sigma}+m^2_{T} }\text{ and} ~~~~ \left\langle \chi^0 \right\rangle_{Y=1} \approx v^2 \dfrac{ - \lambda \mu_{\Sigma} \sin^2{(\beta)}}{ \mu^2_{\Sigma}+m^2_{\chi} }. \label{eqn_vev1}$$ Inspecting these equations, we can identify several parameter combinations that dictate the size of the $\mathcal{T}$ parameter.
- $m_{\chi}^{2}$: The effective vev ${\langle}\chi^0{\rangle}\rightarrow 0$ in the limit of large $m_{\chi}$. In order to effectively raise the Higgs mass, we want $m_{\chi}^2 \gg \mu^2_{\Sigma}$. Large $m_{\chi}$ also does not add to the fine tuning (see previous subsection), so large $m_{\chi}$ is preferred for both the fine tuning and the $\mathcal{T}$ parameter.
- $m_{T}^{2}$: Similarly, the effective vev ${\langle}T^0 {\rangle}\rightarrow 0$ in the limit of large $m_{T}$. A large value for $m_{T}$ also reduces the negative term in the Higgs mass squared equations. However, $m^2_T$ enters into the tuning from the RGE running terms and can quickly dominate the fine tuning.
- $\mu_{\Sigma}$: Both ${\langle}\chi^0 {\rangle}\rightarrow 0$ and ${\langle}T^0 {\rangle}\rightarrow 0$ for large $\mu_{\Sigma}$. This is not desired as it decreases the triplet contribution to the Higgs mass and removes any interesting phenomenology of extra light states.
- $\lambda$: The $\mathcal{T}$ parameter goes as $\lambda^2$. The fact that the $Y=\pm1$ model can easily get the correct Higgs mass for lower values of $\lambda$ implies that the model with hypercharge will not be as constrained by the $\mathcal{T}$ parameter for fixed stop masses.
- $\mu$ *and* $A_{\lambda}$: One could also have a cancellation between the $\mu$ and $A_{\lambda}$ terms.
This would be a cancellation between a supersymmetric term and a soft term, which is in itself unnatural.
- $\tan \beta$: The triplets with hypercharge $Y=\pm1$ have an extra dependence on $\sin(2\beta)$ in ${\langle}T^0{\rangle}$. At large values of $\tan\beta$, this goes to 0. Large values of $\tan\beta$ were already preferred for $Y=\pm1$ in order to raise the Higgs mass as much as possible. The $Y=0$ model is not as lucky.
Considering these points, in particular the $\lambda$ and $\tan\beta$ dependence, it is clear that the $\mathcal{T}$ parameter is more constraining on the $Y=0$ model. In addition, for fixed triplet-Higgs coupling $\lambda$, the triplet contribution to the Higgs mass in the $Y = 0$ model is smaller than in the singlet DiracNMSSM scenario because of the $\sqrt 2$ factor in the normalization of the neutral components. As this scenario suffers in fine tuning and the $\mathcal{T}$ parameter without the promise of interesting phenomena, we choose to ignore the $Y=0$ Dirac triplet model for the rest of the paper and focus our numerical and phenomenological study on $Y \ne 0$. Lastly, we point out that ${\langle}T^0 {\rangle}$ and ${\langle}\chi^0{\rangle}$ contribute to the $\mathcal{T}$ parameter at tree level, and to order $\lambda^2$. To be consistent, we have also calculated the one-loop fermionic contributions to the $\mathcal{T}$ parameter to order $\lambda^2$. Because the triplet fermions are Dirac particles, and the mixing to order $\lambda^2$ keeps the entire triplet representation the same mass, there is no contribution to the $\mathcal{T}$ parameter at order $\lambda^2$. Numerical study: $Y = \pm 1$ {#sec:Num} ============================ The analytical expressions of the last section allowed us to determine the overall scheme needed to minimize fine tuning and yet maximize the triplet contributions to the Higgs mass. Focusing entirely on the $Y = \pm 1$ scenario, the preferred regions are large $\tan\beta$, large $m_{\chi}$, and small values for $m_{T}$ and the stop masses. The coupling $\lambda$ needs to be large enough to raise the Higgs mass without being so large as to induce large triplet vevs. While there are multiple free parameters at hand, we wish to keep our numerical analysis both detailed and manageable. For this reason, we limit the parameters we vary to two scans, one over $\lambda$ and $m_T$ and the other over $\mu_{\Sigma}$ and $m_{\chi}$. The other parameters are fixed to benchmark values shown in Table \[tab\_BenchmarkFT\].

  -------------------------- --------------------------------- -----------------
  $\tan{\beta} = 10$         $m_A = 300{~\text{GeV}}$          $A_t=0$
  $\mu = 250{~\text{GeV}}$   $B_{\Sigma} = 100{~\text{GeV}}$   $A_{\lambda}=0$
  -------------------------- --------------------------------- -----------------

  : Benchmark parameter values for the calculation of the fine tuning variables for the $Y=\pm1$ model. For simplicity, the gaugino masses and all squark/slepton masses other than the stop are assumed to be decoupled.[]{data-label="tab_BenchmarkFT"}

The values for the fixed parameters in Table \[tab\_BenchmarkFT\] are motivated by several considerations. First, since the Higgs mass contribution, fine tuning, and $\mathcal T$ parameter are improved at large $\tan{\beta}$, we select $\tan\beta = 10$ as a representative value. Next, the soft parameters $m_A$ and $B_{\Sigma}$ play little role in our results, so they are good parameters to fix.
The mass $m_A$ enters into the Higgs mass matrix; however, as we always assume the decoupling limit, it has little effect (so long as the value we choose is large enough to justify the decoupling limit). Similarly, the soft parameter $B_{\Sigma}$ mixes the scalars from $\Sigma_1$ and $\Sigma_2$. This mixing does not change our results, but complicates the translation between the scalar mass eigenstates and the Lagrangian parameters. Therefore we select a small $B_{\Sigma}$ for simplicity. The effective vev ${\langle}T^0 {\rangle}$ (and therefore the $\mathcal{T}$ parameter) depends on the difference between $\mu$ and $A_{\lambda}$; however, this term is suppressed at large values of $\tan\beta$. Varying $A_{\lambda}$ over a moderate range of values, we find the fine tuning does not change much. Therefore, we set $A_{\lambda}$ to 0 (together with $A_{t}$ for consistency), a choice that fits well within gauge mediated SUSY breaking scenarios [@Dine:1981gu; @Nappi:1982hm; @AlvarezGaume:1981wy; @Dine:1993yw; @Dine:1994vc; @Dine:1995ag]. The last parameter we fix is $\mu$. Since we have decoupled/ignored the wino, the chargino mass is set by $\mu$; thus, the existing LEP2 bound [@lepii] on charginos sets a lower bound of $\mu \gtrsim 100\,{~\text{GeV}}$. High $\mu$ values are also disfavored by fine tuning, so we pick an intermediate value of $\mu = 250\,{~\text{GeV}}$ for our benchmark. The contribution to the tuning for this choice is $\Delta(\mu)=8.47$; as this value is independent of the rest of the spectrum, $\Delta(\mu)$ should be regarded as the minimum tuning possible according to our measure. From the fine tuning perspective alone, a value of $\mu$ closer to the LEP2 bound would be better. However, as we will detail in section \[sec:TripPheno\], $\mu$ also plays a role in stop phenomenology. To study the fine tuning, we scan over the remaining triplet parameters, the coupling $\lambda$, the Dirac mass $\mu_{\Sigma}$, and the soft masses $m_{\chi}$ and $m_T$. Once values for these are chosen, the triplet contribution to the Higgs mass is known (see Eq.) and the stops are the only part left to enforce $m_h=125{~\text{GeV}}$. As the stop contribution to the Higgs mass depends on the masses of both stops, we must make some assumptions in order to extract the values. We study two different assumptions:
1. The left- and right-handed stops have the same mass ($m_{\tilde{Q}_3}=m_{\tilde{u}^c_3}$).
2. The right-handed stop is used to set the Higgs mass while the left-handed stop is set to 800 GeV, which is above the most stringent LHC limits [@Aad:2012xqa; @Aad:2014qaa; @Aad:2012uu; @Aad:2014nra; @Aad:2014bva; @Aad:2012ywa; @Chatrchyan:2012lia; @Chatrchyan:2013xna; @Chatrchyan:2014lfa; @CMS:2014yma; @CMS:2014wsa].
Next, we use SuSpect2 [@Djouadi:2002ze] to find the mass of the Higgs in the MSSM for the benchmark values and a given set of stop masses. The final Higgs mass squared is then the result of adding the MSSM part and the triplet contribution in quadrature. $$m_h^2 \equiv (125.5{~\text{GeV}})^2 = m_h^2 (\text{MSSM}) + m_h^2 (\text{Triplet}).$$ We vary the value of the stop mass until this relationship is achieved. Then, once the stop mass is known, we can calculate the fine tuning defined in Eq. . Knowing that the triplet contribution to the Higgs mass is largest when $m_{\chi} \gg \mu_{\Sigma}$, we first choose to fix $$m_{\chi}=10 {~\text{TeV}}~~~\text{ and } ~~~ \mu_{\Sigma}=300{~\text{GeV}}$$ and scan over values of $\lambda$ and $m_T$. The left panels of Fig.
\[fig:ftLambda\] show the values of the stop soft masses that are needed in order to set the correct Higgs mass; in Fig. \[fig:finetuningTB10BothEqualLambda\], both stop soft masses are equal, while in Fig. \[fig:finetuningTB10ChangeRightLambda\] the left-handed soft mass is fixed at $800{~\text{GeV}}$ and the right-handed soft mass is indicated by the contours. The triplets do not affect the Higgs mass in the MSSM limit that $\lambda\rightarrow0$, so very large stop masses are needed. As $\lambda$ is increased from zero, the necessary stop mass decreases. If $\lambda \gtrsim 0.35$, the triplet $F$-terms generate a Higgs mass that is always greater than the observed value. These regions are marked in green in the figures. The soft mass $m_T$ only affects the mass of the Higgs through the negative term in Eq. (\[eqn\_HiggsY1\]). For large values of $\tan\beta$, this term is negligible. The fine tuning is calculated at each point once the stop masses have been obtained. Contours of $\Delta$ are shown in the right panels of Fig. \[fig:ftLambda\]. The white, pink, and blue regions represent a fine tuning of $\Delta \le 100$, $100 < \Delta \le 1000$, and $\Delta > 1000$, respectively. The RGE running part of the fine tuning measure is dominant and depends on $h_t^2 (m^2_{Q_3} + m^2_{U_3})$ and $\lambda^2 m_T^2$. Increasing $\lambda$ lowers the stop masses, decreasing the fine tuning until $\lambda^2 m_T^2$ is comparable to $h_t^2 (m^2_{Q_3} + m^2_{U_3})$. As such, a small value of the soft mass is preferred for fine tuning, although the $\mathcal{T}$ parameter can cause issues if $m_T$ is too light. ![The left panels show contours of the stop soft mass needed in order to raise the Higgs mass to the observed value when $\mu_{\Sigma}=300{~\text{GeV}}$, $m_{\chi}=10{~\text{TeV}}$ and $\tan\beta=10$. In () both stops have the same mass while () only changes the right-handed soft mass and keeps the left-handed stop at $800{~\text{GeV}}$. The right panels show the corresponding contours of fine tuning. The dark red region marks where the vevs of the triplets cause too-large contributions to the $\mathcal{T}$ parameter. The orange region supposes an improvement in the measured $\mathcal{T}$ parameter by an order of magnitude.[]{data-label="fig:ftLambda"}](Stops_MuSig_300_mchi_10000_Both.pdf "fig:"){width="0.45\linewidth"} ![The left panels show contours of the stop soft mass needed in order to raise the Higgs mass to the observed value when $\mu_{\Sigma}=300{~\text{GeV}}$, $m_{\chi}=10{~\text{TeV}}$ and $\tan\beta=10$. In () both stops have the same mass while () only changes the right-handed soft mass and keeps the left-handed stop at $800{~\text{GeV}}$. The right panels show the corresponding contours of fine tuning. The dark red region marks where the vevs of the triplets cause too-large contributions to the $\mathcal{T}$ parameter. The orange region supposes an improvement in the measured $\mathcal{T}$ parameter by an order of magnitude.[]{data-label="fig:ftLambda"}](FT_MuSig_300_mchi_10000_Both.pdf "fig:"){width="0.45\linewidth"} ![The left panels show contours of the stop soft mass needed in order to raise the Higgs mass to the observed value when $\mu_{\Sigma}=300{~\text{GeV}}$, $m_{\chi}=10{~\text{TeV}}$ and $\tan\beta=10$. In () both stops have the same mass while () only changes the right-handed soft mass and keeps the left-handed stop at $800{~\text{GeV}}$. The right panels show the corresponding contours of fine tuning.
The dark red region marks where the vevs of the triplets cause too-large contributions to the $\mathcal{T}$ parameter. The orange region supposes an improvement in the measured $\mathcal{T}$ parameter by an order of magnitude.[]{data-label="fig:ftLambda"}](Stops_MuSig_300_mchi_10000.pdf "fig:"){width="0.45\linewidth"} ![The left panels show contours of the stop soft mass needed in order to raise the Higgs mass to the observed value when $\mu_{\Sigma}=300{~\text{GeV}}$, $m_{\chi}=10{~\text{TeV}}$ and $\tan\beta=10$. In () both stops have the same mass while () only changes the right-handed soft mass and keeps the left-handed stop at $800{~\text{GeV}}$. The right panels show the corresponding contours of fine tuning. The dark red region marks where the vevs of the triplets cause too-large contributions to the $\mathcal{T}$ parameter. The orange region supposes an improvement in the measured $\mathcal{T}$ parameter by an order of magnitude.[]{data-label="fig:ftLambda"}](FT_MuSig_300_mchi_10000.pdf "fig:"){width="0.45\linewidth"} At each point in the scan we calculate the effective triplet vevs and their contribution to the $\mathcal{T}$ parameter. The red regions show where the triplet contributions to $\mathcal{T}$ are larger than the 0.13 1-$\sigma$ uncertainty [@Baak:2014ora]. We also mark in orange what could be excluded by a new precision study of the $Z$-pole if the uncertainty on the $\mathcal{T}$ parameter were decreased by an order of magnitude. Fig. \[fig:ftLambda\] has the soft mass of $\Sigma_2$ decoupled ($m_{\chi}=10{~\text{TeV}}$), so ${\langle}\chi^0{\rangle}$ is negligible and $\mathcal{T}$ is only affected by ${\langle}T^0 {\rangle}$. The large value of $\tan\beta$ suppresses ${\langle}T^0 {\rangle}$, so the current $\mathcal{T}$ bounds can only exclude $m_T < 200 {~\text{GeV}}$ at the largest allowed values of $\lambda$. An improved measurement brings the exclusion to values of $\lambda$ as low as 0.1 and soft masses as large as $500{~\text{GeV}}$. The vev ${\langle}T^0 {\rangle}$ is proportional to $1/(\mu_{\Sigma}^2+m_T^2)$, so the reach of this exclusion region is strongly dependent on the value of $\mu_{\Sigma}$ as well, which has been kept fixed up to this point. Before discussing the differences between the two different stop assumptions, we scan over $\mu_{\Sigma}$ and $m_{\chi}$ to understand how these affect the Higgs mass, fine tuning, and $\mathcal{T}$. We choose the point $$\lambda=0.25 ~~~\text{ and }~~~ m_T = 800~{~\text{GeV}},$$ which in the first scan lies close to the contour of smallest fine tuning and is beyond the reach of the improved $\mathcal{T}$ exclusion. Figure \[fig:ft\] shows the results of the second scan, again with the stop masses in the left panels and the shaded regions the same as in Fig. \[fig:ftLambda\]. The triplet contribution to $m_h^2$ is proportional to $m_{\chi}^2/(\mu_{\Sigma}^2+m_{\chi}^2)$. Larger values of $m_{\chi}$ decrease the stop masses while larger $\mu_{\Sigma}$ decouples the effect of the triplets and forces larger stop masses. Lines of constant stop mass run along the diagonal. The right panels of Fig. \[fig:ft\] show the corresponding fine tuning measure. Over most of the parameter space, the fine tuning contours follow the stop mass contours, which implies that the RGE running term is dominating the fine tuning. This is not the case in the upper right part of the plots for large values of $m_{\chi}$ and $\mu_{\Sigma}$. In these regions the finite threshold correction piece of the fine tuning dominates.
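As a rough illustration of where this happens, the sketch below compares the approximate $Y=\pm1$ threshold correction of Eq. (\[eqn:thresholdCorrection\]) with the log-enhanced triplet part of the RGE piece, both normalized by $2/m_h^2$. The inputs (including $m_T=800{~\text{GeV}}$ and a placeholder value for $m^2_{H_u}$) are assumptions made purely for illustration, and the stop contribution to the running is omitted.

```python
import math

lam, L, mh2 = 0.25, 6.0, 125.5**2

def threshold_piece(mu_sigma, m_chi):
    """Approximate Y=+-1 finite threshold correction to m^2_{H_u} (GeV^2)."""
    return 6 * lam**2 * mu_sigma**2 / (16 * math.pi**2) * \
        math.log((m_chi**2 + mu_sigma**2) / mu_sigma**2)

def rge_triplet_piece(m_T, m_Hu=200.0, A_lam=0.0):
    """Log-enhanced triplet part of the m^2_{H_u} running, times L (GeV^2); m_Hu is a placeholder."""
    return 6 * lam**2 * (2 * m_Hu**2 + m_T**2 + A_lam**2) / (16 * math.pi**2) * L

# Illustrative points: the threshold piece grows quickly once both mu_Sigma and m_chi are large.
for mu_sigma, m_chi in [(300.0, 10000.0), (2000.0, 10000.0), (5000.0, 50000.0)]:
    thr = 2.0 / mh2 * threshold_piece(mu_sigma, m_chi)
    rge = 2.0 / mh2 * rge_triplet_piece(m_T=800.0)
    print(f"mu_Sigma={mu_sigma:>7.0f} GeV, m_chi={m_chi:>7.0f} GeV: "
          f"threshold ~ {thr:6.2f}, triplet RGE ~ {rge:6.2f}")
```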
This term is never dominant for $\mu_{\Sigma} \lesssim 1~{~\text{TeV}}$ or $m_{\chi} \lesssim 10{~\text{TeV}}$. The $\mathcal{T}$ parameter constrains more of the parameter space in this scan. In this case, $m_T$ is large so ${\langle}T^0 {\rangle}$ does not contribute much to $\mathcal{T}$. Instead, $\mathcal{T}$ is controlled by ${\langle}\chi^0 {\rangle}$ which is proportional to $\mu_{\Sigma}/(\mu_{\Sigma}^2 + m^2_{\chi})$. Keeping the triplet contributions to $\mathcal{T}$ within the 1-$\sigma$ uncertainty excludes out to $\mu_{\Sigma} \le 1.1{~\text{TeV}}$ for $m_{\chi} \lesssim 800{~\text{GeV}}$. The orange region again shows what could be excluded if the uncertainty were improved by an order of magnitude. This may be the best method for explicitly excluding parameter space and reaches out to $\mu_{\Sigma} \le 1.5{~\text{TeV}}$ for $m_{\chi} \lesssim1.2{~\text{TeV}}$. Having a low value for $\mu_{\Sigma}$ allows for a large triplet contribution to the Higgs mass without the need to worry about the finite threshold correction term in the fine tuning. In this region, the $\mathcal{T}$ parameter forces $m_{\chi}$ to large values to decrease ${\langle}\chi^0 {\rangle}$. This in turn *increases* the triplet contribution to the Higgs mass, lowering the fine tuning. ![Analogous panels to Fig.\[fig:ftLambda\], this time with varying $\mu_{\Sigma}$ and $m_{\chi}$ for fixed $\lambda=0.25$ and $m_{T}=800{~\text{GeV}}$. In section \[sec:TripPheno\], we study the phenomenology of the dashed green line.[]{data-label="fig:ft"}](ChangeBoth_TB10_NoCancel_Stops "fig:"){width="0.45\linewidth"} ![Analogous panels to Fig.\[fig:ftLambda\], this time with varying $\mu_{\Sigma}$ and $m_{\chi}$ for fixed $\lambda=0.25$ and $m_{T}=800{~\text{GeV}}$. In section \[sec:TripPheno\], we study the phenomenology of the dashed green line.[]{data-label="fig:ft"}](ChangeBoth_TB10_NoCancel "fig:"){width="0.45\linewidth"} ![Analogous panels to Fig.\[fig:ftLambda\], this time with varying $\mu_{\Sigma}$ and $m_{\chi}$ for fixed $\lambda=0.25$ and $m_{T}=800{~\text{GeV}}$. In section \[sec:TripPheno\], we study the phenomenology of the dashed green line.[]{data-label="fig:ft"}](OnlyRight_TB10_NoCancel_Stops "fig:"){width="0.45\linewidth"} ![Analogous panels to Fig.\[fig:ftLambda\], this time with varying $\mu_{\Sigma}$ and $m_{\chi}$ for fixed $\lambda=0.25$ and $m_{T}=800{~\text{GeV}}$. In section \[sec:TripPheno\], we study the phenomenology of the dashed green line.[]{data-label="fig:ft"}](OnlyRight_TB10_NoCancel "fig:"){width="0.45\linewidth"} Having discussed how the fine tuning depends on the triplet parameters, we now examine the effects of the different stop assumptions. The general results apply to both scans, but we focus only on the second scan, with $\lambda$ and $m_T$ fixed. The stop contribution to the Higgs mass depends on the geometric mean of the stop masses. At $\mu_{\Sigma}= m_{\chi} = 10{~\text{TeV}}$, the geometric mean of the stops needs to be around $800{~\text{GeV}}$. In this case, both assumptions for choosing the stop mass give $m_{\tilde{Q}_3}=m_{\tilde{u}_3^c}=800{~\text{GeV}}$ and the corresponding measure of fine tuning is around 50. Lowering the value of $\mu_{\Sigma}$ increases the triplet contributions to the Higgs mass and decreases the stop masses and fine tuning. The minimum stop mass (still along $m_{\chi}=10{~\text{TeV}}$) is reached when $\mu_{\Sigma} \le 2 {~\text{TeV}}$. When both stop soft masses are simultaneously changed, they take on a minimum mass of around $450{~\text{GeV}}$. 
The minimum fine tuning is then $\Delta\sim9$. On the other hand, when only changing the right-handed soft mass, it needs to be even lighter. Its minimum soft mass is around 260 GeV, which gives a fine tuning of 17. Although one stop mass is lighter, the RGE running (and thus the tuning) is worse because the left-handed mass is still at $800{~\text{GeV}}$. We have marked the line $m_{\chi}=10{~\text{TeV}}$ with a green dashed line and will study the phenomenology along this line in more detail in the next section. The benchmark values that we have used allow for quite low values of fine tuning for both assumptions about the stop masses. This low fine tuning comes at the cost of having light stops. In fact, for both stop mass assumptions, the minimum stop mass achieved is well below the 750-800 GeV LHC limits [@Aad:2012xqa; @Aad:2014qaa; @Aad:2012uu; @Aad:2014nra; @Aad:2014bva; @Aad:2012ywa; @Chatrchyan:2012lia; @Chatrchyan:2013xna; @Chatrchyan:2014lfa; @CMS:2014yma; @CMS:2014wsa]. In the next section we will show that these searches do not exclude all of our regions of low fine tuning. However, this does raise the question of how the model can deal with LHC SUSY searches and what other signatures to search for. Although the triplet scalars need to be heavy, their fermion counterparts – the [*tripletinos*]{} – with mass $\sim \mu_{\Sigma}$ can be light enough to be within reach of the LHC. In the next section we briefly explore the phenomenology of the tripletinos at the LHC. We will examine both the direct constraints on these particles and how tripletinos affect the decay of the stops. Triplet Fermion Phenomenology {#sec:TripPheno} ============================= (Lack of) Constraints on Tripletinos {#sec:TripConstraints} ------------------------------------ The $Y=\pm1$ triplets contain neutral fermions as well as fermions with charge $\pm1$ and $\pm2$. The neutral and singly-charged fermions mix with the neutralinos and charginos, respectively (the mass matrices of the fermions are shown in Appendix \[sec:AppMixing\]). The doubly-charged states, on the other hand, do not mix with SM particles. One might expect that strong bounds would exist for such exotic states. The tripletinos, however, are good at hiding.
1. Direct Searches: The charge $\pm 1, 0$ tripletinos are subject to MSSM electroweakino searches, which currently exclude regions where the LSP mass is less than around 150 GeV if there are no light sleptons [@Aad:2014vma; @CMS:2013dea]. These searches are most powerful if the LSP is light and if there is a large separation between the mass of the LSP and the masses of the other states. As a result, these conventional searches fail for quasi-degenerate electroweakino spectra, such as one expects in a pure Higgsino scenario or with a Higgsino-tripletino admixture. Another possibility is to look for disappearing tracks [@CMS:2014gxa] or long-lived charged particles [@Aad:2013pqd; @Chatrchyan:2013oca], though these approaches require a level of degeneracy that is atypical in the region of tripletino-Higgsino parameter space we are interested in. One potential avenue is a search focusing on the doubly charged tripletinos when $\mu_{\Sigma} < \mu$. The (lighter) mass eigenstates are then given by $$\begin{aligned} m_{\tilde{\chi}^{++}} &=\mu_{\Sigma}, \\ m_{\tilde{\chi}^{+}} &= \mu_{\Sigma} \left(1-\frac{1}{2}\frac{\lambda^2 v^2}{\mu^2} (1-\cos(2\beta)) \right), ~~\text{and} \\ m_{\tilde{\chi}^0} &= \mu_{\Sigma} \left(1-\frac{\lambda^2 v^2}{\mu^2} (1-\cos(2\beta)) \right).
\end{aligned}$$ For benchmark parameters $\mu=250{~\text{GeV}}, \lambda = 0.25$ and taking $\mu_{\Sigma}=150~{~\text{GeV}}$, the masses are 150, 145.5, and 141 GeV, respectively. The pair production cross section of the doubly charged state is $1.05$ ($2.48$) pb at the $8$ ($14$) TeV LHC. These states decay down to the neutral state through $W^{\pm}$ bosons. Although the decay products will be soft and hard to detect, the signal has 4 $W^{\pm}$ bosons which can decay leptonically. A dedicated search is beyond the scope of this paper, but the relatively large cross section along with the clean final state could motivate a search for the doubly charged particles – recoiling off a hard, initial-state jet for triggering purposes.
2. Oblique parameters: Triplet fermions have the potential to generate a loop level contribution to the $\mathcal T$ parameter. However, at $O(\lambda^2)$ we find this contribution to be zero due to the Dirac nature of the tripletinos and the near degeneracy of the states. We calculated this using mass insertions to account for Higgsino-tripletino mixing, as well as in an effective theory where the Higgsinos were integrated out. In both cases the vacuum polarization amplitudes $\Pi^{11}(0)$ and $\Pi^{33}(0)$ are non-zero, but their difference is zero.
3. Higgs observables: The addition of $SU(2)_L$ triplets to the content of the MSSM adds more charged particles which couple to the Higgs and could affect the decay of $h\rightarrow \gamma\gamma$. Unlike in more traditional triplet extensions [@Delgado:2012sm; @Delgado:2013zfa; @Kang:2013wm; @Arina:2014xya], here only one of the triplets couples to the Higgses, and in the $Y=\pm1$ Dirac Triplet extension of the MSSM, the partial width is not affected at lowest order. The only way that the triplets in this model play a role in the diphoton rate is by allowing for lower stop masses, which affect both the production and the decay of the Higgs [@Carena:2011aa; @Carena:2013iba].
Moving to direct production at the LHC, the triplet fermions are hard to detect due to the small mass splitting. Giving the triplets a Dirac mass and having only one triplet couple to the doublets makes their presence hard to find in sensitive loop level processes too. The effects of the triplets can still be seen in the efficient raising of the Higgs mass, leading to light stops. If the triplet fermions happen to be lighter than the stops, it would be possible to use stop decays to observe them. Stop Decays {#subsec:StopDecays} ----------- We have seen that the inclusion of $Y = \pm 1$ triplets with interactions inspired by the DiracNMSSM – namely where only one triplet couples to Higgses – leads to light stops. While nice from a fine-tuning perspective, light stops are constrained by the LHC, so we must make sure these ‘natural’ scenarios are not ruled out by experimental searches. As we illustrate in this section, the phenomenology of the stops depends on the hierarchy of $\mu$ and $\mu_{\Sigma}$ and whether the lightest stop is left or right-handed. In all four scenarios we sketch out the viable parameter space. In most circumstances, we find that compressed spectra are required to avoid LHC limits, such that larger values of $\mu$ are necessary; this a posteriori motivates our benchmark choice $\mu = 250\,{~\text{GeV}}$. To anchor our phenomenology study, we fix $\lambda=0.25$, $m_T=800{~\text{GeV}}$, $m_{\chi} = 10{~\text{TeV}}$, and vary $\mu_{\Sigma}$ (all other parameters are taken from Table \[tab\_BenchmarkFT\]).
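As a quick numerical cross-check of the approximate tripletino masses quoted in the previous subsection (150, 145.5, and 141 GeV), the following sketch evaluates the mass formulas above at the benchmark point $\mu=250{~\text{GeV}}$, $\lambda=0.25$, $\mu_{\Sigma}=150{~\text{GeV}}$, $\tan\beta=10$. The $v\simeq174{~\text{GeV}}$ normalization of the electroweak vev is an assumption of the sketch.

```python
import math

# Assumed conventions/inputs: v ~ 174 GeV, benchmark values from the text.
v, mu, lam, mu_sigma, tan_beta = 174.0, 250.0, 0.25, 150.0, 10.0
beta = math.atan(tan_beta)
shift = lam**2 * v**2 / mu**2 * (1 - math.cos(2 * beta))

m_pp = mu_sigma                       # doubly charged tripletino
m_p = mu_sigma * (1 - 0.5 * shift)    # singly charged state
m_0 = mu_sigma * (1 - shift)          # neutral state
print(f"m(++) = {m_pp:.1f} GeV, m(+) = {m_p:.1f} GeV, m(0) = {m_0:.1f} GeV")
# Expect roughly 150, 145.5, and 141 GeV, as quoted in the text.
```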
This parameter slice is indicated by the green dashed line in Figs. \[fig:finetuningTB10BothEqual\] and \[fig:finetuningTB10ChangeRight\] and is characterized by low fine-tuning. The spectrum of the charginos, neutralinos and stops along this line is shown below in Fig. \[fig:Spectrum\]. The solid colored lines show the chargino/neutralino masses; the sharp feature at $\mu_{\Sigma} \sim \mu = 250\,{~\text{GeV}}$ corresponds to where the composition of the lightest $\tilde{\chi}^0_i, \tilde{\chi}^+$ shifts from primarily tripletino to primarily Higgsino. ![Spectrum of the stops, neutralinos and charginos. The Higgsino mass parameter is $\mu=250{~\text{GeV}}$, while the triplet mass is along the horizontal axis. Two methods of choosing the stop mass are shown. The solid black line labeled $\tilde{t}_{1,2}$ marks changing both the left and right soft masses simultaneously. The dashed lines keep the left-handed soft mass at $800{~\text{GeV}}$ and use the right-handed mass to set the Higgs mass.[]{data-label="fig:Spectrum"}](LowSpectrum_stop_fermions_v2){width="0.45\linewidth"} The black lines in Figure \[fig:Spectrum\] indicate the stop spectra for both stop selection choices (see Sec. \[sec:Num\]). The solid line corresponds to changing both the left and the right-handed soft masses simultaneously. The dashed line, labeled $\tilde{t}_1$, and the dotted line, labeled $\tilde{t}_2$, mark the masses of the two stops when the left-handed soft mass is set to $800\,{~\text{GeV}}$ and the right-handed mass moves to accommodate the Higgs mass. The next ingredient in the stop phenomenology is the branching ratio. Using the same set of parameters as in Fig. \[fig:Spectrum\], we plot the branching ratio below in Fig. \[fig:BranchingRatio\] for both stop scenarios. In the branching ratio calculations we only keep the two-body final states. ![Branching ratios of the stops when only considering 2-body decays. The bino and wino have been completely decoupled, leaving only the Higgsino and tripletino for the stop decays. The left panel has the left-handed stop mass set to $800{~\text{GeV}}$ and uses the right-handed mass to raise the Higgs mass. The right panel has both soft masses changed to set the Higgs mass. The Higgsino mass is $\mu=250{~\text{GeV}}$. []{data-label="fig:BranchingRatio"}](Branching_caseIVA_Ronly "fig:"){width="0.45\linewidth"} ![Branching ratios of the stops when only considering 2-body decays. The bino and wino have been completely decoupled, leaving only the Higgsino and tripletino for the stop decays. The left panel has the left-handed stop mass set to $800{~\text{GeV}}$ and uses the right-handed mass to raise the Higgs mass. The right panel has both soft masses changed to set the Higgs mass. The Higgsino mass is $\mu=250{~\text{GeV}}$. []{data-label="fig:BranchingRatio"}](Branching_caseIVA_LR "fig:"){width="0.45\linewidth"} Both sets of branching ratios show a feature at $\mu_{\Sigma} \sim 250\,{~\text{GeV}}$ where the character of the electroweakinos changes. For light right-handed stops (left panel of Fig. \[fig:BranchingRatio\]) the branching fraction for $\tilde t_1 \to b\,\chi^+_1$ is $\sim100\%$ over a wide range of $\mu_{\Sigma}$ because the triplet states do not couple directly to the stops and the stop mass in this scenario is nearly the same as our benchmark Higgsino (LSP) mass.
In the right panel, where both left and right-handed stops have the same mass, there is more variety in the branching ratios because the stops are heavy enough to undergo both $\tilde t \to t\, \chi^0_i$ and $\tilde t \to b\, \chi^+_i$ decays. From Figs. \[fig:Spectrum\] and \[fig:BranchingRatio\], we can see the phenomenology naturally splits up into four categories, $\mu < \mu_{\Sigma}$ or $\mu > \mu_{\Sigma}$ for either $m_{\tilde t_1} \ll m_{\tilde t_2}$ or $m_{\tilde t_1} \cong m_{\tilde t_2}$, which we discuss in more detail below:\
[*Case $m_{\tilde t_1} \ll m_{\tilde t_2}$:*]{} Here the left-handed stop mass is fixed to $800\,{~\text{GeV}}$ and the right-handed stop mass is varied to satisfy the Higgs mass. For $\mu_{\Sigma} \lesssim 2\,{~\text{TeV}}$, $m_{\tilde t_1} \sim 300\,{~\text{GeV}}$.
- $\mu_{\Sigma} > \mu$: Here the tripletinos play little role, and the low energy states are simply stops and Higgsinos. These scenarios are tightly constrained unless the Higgsino mass $\mu$ is nearly the same as the stop mass and the only two-body decay mode is $\tilde t \to b\,\chi^{+}_1$. As $\mu$ approaches $m_{\tilde t_1}$, the $b$ and subsequent $\chi^+_1$ decay products become soft and conventional stop searches become inefficient. For $m_{\tilde t_1} = 300\,{~\text{GeV}}$, a Higgsino mass of $\mu \gtrsim 180\,{~\text{GeV}}$ is needed [@Aad:2014nra; @CMS:2014yma; @CMS:2014wsa; @Kribs:2013lua] to avoid current LHC bounds.
- $\mu_{\Sigma} < \mu$: In this case the tripletinos are lighter than the Higgsinos, so stop decays proceed in two steps: stop decaying to Higgsino, then Higgsino decaying to tripletino. The visibility of this setup depends on the $\mu_{\Sigma} - \mu$ difference. If the two scales are sufficiently separated, the Higgsino decays are energetic and will be picked up by standard stop searches, regardless of how degenerate $\mu$ and $m_{\tilde t_1}$ are. Therefore, for this scenario to be viable, all three scales $m_{\tilde t_1}, \mu$ and $\mu_{\Sigma}$ must be nearby; for the benchmark value $\mu = 250\,{~\text{GeV}}$, we estimate $\mu_{\Sigma} \gtrsim 200\,{~\text{GeV}}$ is required.
[*Case $m_{\tilde t_1} \sim m_{\tilde t_2}$:*]{} In this case, the stop masses are changed together to accommodate the Higgs mass. The stops have a mass of around $450{~\text{GeV}}$ for $\mu_{\Sigma}\lesssim 2\,{~\text{TeV}}$. For larger $\mu_{\Sigma}$, the triplet contribution to $m^2_h$ shrinks and the stops quickly increase in mass.
- $\mu_{\Sigma} > \mu$: The stop now has phase space to decay through a top quark and does so around 30$\%$ of the time. Searches for this mode include the leptonic decays and all hadronic decays [@Aad:2014bva; @Aad:2012ywa; @Chatrchyan:2012lia; @Chatrchyan:2014lfa]. For a stop mass of 450 GeV, the limits extend up to an LSP mass of around 220 GeV; thus, our model with $\mu=250{~\text{GeV}}$ survives. However, a left-handed stop implies a left-handed sbottom of similar mass. The sbottom searches are very effective for this sort of spectrum and place constraints on the sbottom up to a mass of $\sim700{~\text{GeV}}$ for an LSP mass of $250{~\text{GeV}}$ [@Aad:2013ija; @CMS:2014nia]. The sbottom (and stop) mass is raised above $700{~\text{GeV}}$ when the triplet effects are decoupled with $\mu_{\Sigma} > 10{~\text{TeV}}$. In the large region of parameter space where the sbottoms are $450{~\text{GeV}}$, in order to be viable, the LSP mass ($\mu$ in this case) must be raised to $\sim300{~\text{GeV}}$.
- $\mu_{\Sigma} < \mu$: In this region, all stops and sbottoms first decay to Higgsino plus $b/t$, with the Higgsino subsequently decaying to tripletino. The sbottom searches can again be useful, but one potential caveat is that the sbottom decays in our scenario are quite busy, containing extra objects from the Higgsino decay. These final states may be inefficient in sbottom searches such as [@CMS:2014nia], which explicitly veto events with leptons or with more than two jets. The extent to which this scenario can evade the sbottom searches without being collected by another search requires a dedicated analysis, though it is possible that a window near $\mu_{\Sigma} \sim \mu$ exists that is undetected by current stop or sbottom searches.
Summarizing, the light stops that are a consequence of this triplet extension are safe from current LHC bounds if the spectrum is sufficiently squeezed. For $m_{\tilde t_1} \ll m_{\tilde t_2}$ (light right-handed stop), the benchmark ($\mu = 250{~\text{GeV}}$) scenario is safe provided $\mu_{\Sigma} > 200{~\text{GeV}}$. For degenerate left and right-handed stops, the bounds are more stringent and are driven by sbottom searches. For the benchmark set of parameters to be safe, either the entire stop spectrum must be raised to $\gtrsim 700{~\text{GeV}}$ ($\mu_{\Sigma} > 10{~\text{TeV}}$) or the Higgsinos and tripletinos must be made more degenerate with the stops, $\mu_{\Sigma} \sim \mu \gtrsim 300{~\text{GeV}}$. Continued searches for stop and sbottom squarks will place tighter constraints on the model if no sparticle is found. These stop limits may be alleviated, for example by lowering $\lambda$ or raising $\mu$, though at the expense of increased fine tuning. Discussion and conclusion {#sec:conclusions} ========================= We have examined extensions of the MSSM by two $SU(2)_L$ triplets where only one triplet is permitted to couple to the Higgs doublets. While not generic, this setup is radiatively stable and has the property – first pointed out in the DiracNMSSM [@Lu:2013cta] using singlets – that large, $\gtrsim\,\text{few}\,{~\text{TeV}}$ soft masses for the uncoupled field generate tree level contributions to the Higgs mass without the price of increased fine tuning. Triplet extensions can have either $Y = 0$ or $Y = \pm 1$; we have studied the Higgs mass contributions, fine tuning, and $\mathcal T$-parameter constraints for both cases. Triplets with nonzero hypercharge are well-suited to this scenario as they must appear in pairs and can only have Dirac-type superpotential masses. For $Y = \pm 1$ scenarios, we find $m_h = 125\,{~\text{GeV}}$ can be achieved with fine tuning as small as one part in ten (according to the same fine tuning measure used in [@Lu:2013cta]). We find that the least tuned regions of parameter space coincide with regions where the $\mathcal T$-parameter constraint – usually a thorn in the side of triplet models – is not an issue. The smallness of the $\mathcal T$-parameter is a consequence of the $\tan\beta$ dependence of the triplet-Higgs interaction, aided by the fact that the uncoupled triplet soft mass can be very large ($\gtrsim {~\text{TeV}}$). The least tuned regions also have light stop spectra, either $m_{\tilde t_1} \sim300\,{~\text{GeV}}$ or $m_{\tilde t_1} \sim 450\,{~\text{GeV}}$ depending on whether only one stop is light or both. Such light stops are running out of hiding places at the LHC.
In order to remain undetected, the stops must be fairly degenerate with the LSP, $m_{\tilde t_1} - m_{LSP} \lesssim 100\,{~\text{GeV}}$, though the details of the bounds depend on the hierarchy of the triplet Dirac mass $\mu_{\Sigma}$ and the Higgsino mass $\mu$, as well as on the handedness of the lightest stop; scenarios with light right-handed stops are less constrained than those with light left-handed stops. In addition to light stops, the charged and neutral fermionic components of the triplets, the tripletinos, may be light. In the parameter space of interest for the purposes of raising the Higgs mass, these states are unconstrained by existing LHC searches. This stealthiness is due to the small splitting among the triplet states and because the tripletinos only couple to the Higgs and gauge bosons at tree level. Finally, for certain triplet parameters – for example $\mu_{\Sigma} \sim m_{\chi} \sim 2\,{~\text{TeV}}$ for the parameter set in Fig. \[fig:finetuningTB10ChangeRight\] – the $\mathcal T$-parameter contribution from the triplet sector may be within the reach of future precision electroweak studies. Acknowledgments {#acknowledgments .unnumbered} --------------- The work of AD was partially supported by the National Science Foundation under Grant No. PHY-1215979, and the work of AM was partially supported by the National Science Foundation under Grant No. PHY-1417118. Potential for the $Y=0$ triplets {#sec:appY0} ================================ In the following two appendices we list the effective potential, the expressions for the soft masses in terms of the model parameters (via the minimization conditions) and the change in the Higgs mass coming from the triplet sector. It must be emphasized that all of these are tree-level quantities that will receive loop corrections. For the model involving two $Y=0$ triplets, the triplet fields are given by $$\begin{aligned} \Sigma_{1} &= \begin{pmatrix} T^{0}/\sqrt{2} & -T_{2}^{+} \\ T_{1}^{-} & -T^{0}/\sqrt{2} \end{pmatrix} \text{ and}\\ \Sigma_{2} & = \begin{pmatrix} \chi^0/\sqrt{2} & -\chi^+_2 \\ \chi^-_1 & -\chi^0/\sqrt{2} \end{pmatrix}. \end{aligned}$$ The only change in the superpotential from the MSSM is $$W \supset \lambda H_u \cdot \Sigma_1 H_d.$$ Expanding the neutral scalar potential including the soft terms leads to $$\begin{aligned} V _{\text{neutral}} &= m^2_{H_u}|H_u^0 |^2 + m^2_{H_d}|H_d^0|^2 + m^2_{\chi}|\chi^0|^2 + m^2_{T}|T^0|^2 \notag \\ & +\left| \frac{\lambda}{\sqrt{2}} H_d^0 T^0 - \mu H_d^0 \right|^2 + \left| \frac{\lambda}{\sqrt{2}} H_u^0 T^0 - \mu H_u^0 \right|^2 \notag \\ &+ \left| \mu_{\Sigma} \chi^0 + \frac{\lambda}{\sqrt{2}} H_d^0 H_u^0 \right|^2 + \left|\mu_{\Sigma} T^0 \right|^2 + \frac{g^2 + g^{\prime 2}}{8} (|H^0_d|^2 - |H^0_u|^2)^{2} \notag \\ &+ \left(\mu_{\Sigma} B_{\Sigma} \chi^0T^0 + B_{\mu} \mu H_d^0 H_u^0 + \frac{A_{\lambda}\lambda}{\sqrt{2}} H_d^0 H_u^0 T^0 + \text{h.c.} \right).
\label{eqn:NeutralPotentialY0}\end{aligned}$$ The heavy triplet scalars are then integrated out, leading to an effective potential of $$\begin{aligned} V_{\text{eff}} &\supset \left(m^2_{H_u} + |\mu|^2\right)|H_u^{0}|^2 + \left(m^2_{H_d} + |\mu|^2\right)|H_d^{0}|^2 \notag \\ &+ \frac{m_Z^2}{4 v^2} (|H_d^0|^{2}-|H_u^0|^{2})^2 - \left(B_{\mu} \mu H_d^0 H_u^0 + \text{ h.c.}\right) \notag \\ &+ \frac{\left| \lambda H_d^0 H_u^0 \right|^2}{2} \left(1- \frac{\mu_{\Sigma}^2}{\mu_{\Sigma}^2 + m^2_{\chi}} \right) \notag \\ &- \frac{\lambda^2}{2 (\mu_{\Sigma}^2 + m^2_{T})} \left|A_{\lambda} H_d^0 H_u^0 -\mu \left( |H_u^0|^2 + |H_d^0|^2 \right) \right|^2 + (\text{higher order}) \label{eqn:VintoutY0} .\end{aligned}$$ Terms of order $O(D_{\chi}^{-2},D_{T}^{-2},D_{\chi}^{-1}D_{T}^{-1})$ and higher inverse powers have been neglected, where $D_{\chi,T}\equiv (\mu_{\Sigma}^{2}+m_{\chi,T}^{2})$. The conditions needed to achieve EWSB at the minimum of this potential are $$\begin{aligned} m^2_{H_u} &=& -|\mu|^2 + \frac{m^2_Z}{2} \cos(2\beta) + m^2_A \cos^2 \beta -\frac{\lambda^2v^2}{2} \cos^2 \beta {\nonumber \\ }& &+ \frac{v^2 \lambda^2}{2} \frac{- 4 |\mu|^2 - A_{\lambda}(\mu + \mu^*)(\cos(2\beta)-2)\cot\beta - 2 A_{\lambda}^2 \cos^2 \beta}{\mu_{\Sigma}^2 + m^2_{T}} {\nonumber \\ }&& - \mu_{\Sigma}^2 v^2 \lambda^2 \frac{\cos^2 \beta}{\mu_{\Sigma}^2 + m^2_\chi} \text{, and} \label{eqn:mincondHu} \\ m^2_{H_d} &=& -|\mu|^2 -\frac{m^2_Z}{2} \cos(2\beta) + m^2_A \sin^2 \beta - \frac{\lambda^2 v^2}{2} \sin^2\beta {\nonumber \\ }&& + \frac{v^2 \lambda^2}{2} \frac{-4 |\mu|^2 + A_{\lambda} (\mu + \mu^*)(2+\cos(2\beta))\tan\beta - A_{\lambda}^2 \sin^2 \beta} {\mu_{\Sigma}^2 + m^2_{T}} {\nonumber \\ }&& - \mu_{\Sigma}^2 v^2 \lambda^2 \frac{\sin^2\beta}{\mu_{\Sigma}^2 + m^2_{\chi}} \label{eqn:mincondHd}\end{aligned}$$ The corresponding shift in the MSSM physical Higgs mass in the decoupling limit $$\Delta m_{h}^{2}=\frac{v^2 \lambda^2}{2}\sin^2(2\beta) \frac{m^2_{\chi}}{\mu^2_{\Sigma}+m^2_{\chi}} -\frac{v^2\lambda^2}{2} \frac{\left|2 \mu^* -A_{\lambda} \sin(2\beta)\right|^2}{\mu^2_{\Sigma}+m^2_{T}}.$$ Potential for the $Y=\pm1$ triplets {#sec:appY1} =================================== Now we examine the model where the triplets have hypercharge $Y=\pm1$, which can then be expressed as $$\begin{aligned} \Sigma_{1} &= \left(\begin{array}{cc} T^{-}/\sqrt{2} & -T^{0} \\ T^{--} & -T^{-}/\sqrt{2} \end{array}\right) \text{ and} \\ \Sigma_{2}&= \left(\begin{array}{cc} \chi^{+}/\sqrt{2} & -\chi^{++} \\ \chi^{0} & -\chi^{+}/\sqrt{2} \end{array} \right). 
\end{aligned}$$ The superpotential is modified from the MSSM with $$W \supset \lambda H_u \cdot \Sigma_1 H_u.$$ The neutral potential is then given by $$\begin{aligned} V_{\text{neutral}} &= & m^2_{H_u} \left| H_u \right|^2 + m^2_{H_d} \left| H_d \right|^2 + m_{\chi}^2 {|\chi^0|^2} + m_{T}^2 {|T^0|^2} {\nonumber \\ }&&+ \left|2 \lambda H_u^0 T^0 + \mu H_d^0 \right|^2 +\left|\mu H_u^0\right|^2 +\left|\mu_{\Sigma} T^0\right|^2 + \left| \mu_{\Sigma} \chi^0 + \lambda H_u^0 H_u^0 \right|^2 {\nonumber \\ }&&+ \frac{g^2 + g^{\prime~2}}{8} \left( H_d^0 H_d^{0*} - H_u^0 H_u^{0*} + 2 T^0 T^{0*} - 2\chi^0 \chi^{0*} \right)^2 {\nonumber \\ }& &+ \left( - \lambda A_{\lambda} H_u^0 H_u^0 T^0 - \mu B_{\mu} H_d^0 H_u^0 - \mu_{\Sigma} B_{\Sigma} T^0 \chi^0 + \text{h.c.} \right)\end{aligned}$$ The heavy triplets are integrated out, leaving an effective potential of $$\begin{aligned} V_{\text{eff,neut}} &=& \left(m^2_{H_u} + \mu^2 \right) |H_u^0|^2 + \left(m^2_{H_d} + \mu^2 \right) |H_d^0|^2 {\nonumber \\ }&&+ \frac{m_Z^2}{4 v^2} \left( |H_d^0|^2 - |H_u^0|^2 \right)^2 -\left( \mu B_{\mu} H_d^0 H_u^0 +\text{h.c.}\right) {\nonumber \\ }&&+ \lambda^2 |H_u^0H_u^0|^2 \left(1 - \frac{2A_{\lambda}^2}{\mu_{\Sigma}^2 +m^2_{T}} - \frac{2\mu_{\Sigma}^2}{\mu_{\Sigma}^2 +m^2_{\chi}} \right){\nonumber \\ }&& -8 |H_u^0|^2 |H_d^0|^2 \lambda^2 \mu^2 \frac{1}{\mu_{\Sigma}^2+m^2_{T}}{\nonumber \\ }&& + \frac{4 \lambda^2 A_{\lambda}}{\mu^2_{\Sigma} + m^2_{T}} \left(\mu^* H_u^0 H_u^{0*} H_u^{0*} H_d^{0*} + \text{h.c.} \right) + \mathcal{O}(\frac{1}{D_\chi^2},\frac{1}{D_\chi D_T}, \frac{1}{D_T^2}). \label{eqn:VintoutY1}\end{aligned}$$ The minimization conditions are given by $$\begin{aligned} m^2_{H_u} &=&- |\mu|^2 +\frac{1}{2}m^2_Z \cos(2\beta) +m^2_A \cos^2(\beta) - 2 v^2 \lambda^2 \sin^2(\beta) \nonumber \\ &&+ 2 \sin^2(\beta) \frac{\mu^2_{\Sigma} v^2 \lambda^2 }{\mu^2_{\Sigma} + m^2_{\chi}} +2 v^2\lambda^2 \sin^2(\beta) \frac{A_{\lambda}^2 + 2 \mu^2 \cot^2(\beta) + 2 A_{\lambda} \mu \cot(\beta) }{\mu^2_{\Sigma}+m^2_{T}} \label{eqn:mincondHuY1}\\ m^2_{H_d} &=& -|\mu|^2 -\frac{1}{2}m^2_Z \cos(2\beta) - m^2_A \sin^2(\beta) + 4 \frac{|\mu|^2 v^2 \lambda^2 \sin^2(\beta)}{\mu^2_{\Sigma}+m^2_{T}}. \label{eqn:mincondHdY1}\end{aligned}$$ This leads to a shift in the Higgs mass in the decoupling limit of $$\Delta m_{h}^{2}=4 v^2 \lambda^2 \sin^4(\beta)\left( \dfrac{ m^2_{\chi}}{\mu_{\Sigma}^2+m^2_{\chi}} \right) -\dfrac{4 v^2 \lambda^2 \sin^2{(\beta)}}{\mu^2_{\Sigma} +m^2_{T}}\left|2\mu^* \cos {(\beta)} - A_{\lambda} \sin{(\beta)}\right|^2.$$ Finite threshold correction {#sec:AppFTC} =========================== The threshold correction arises when the heavy triplet fields are integrated out.
The one loop contribution is given by $$\begin{aligned} \delta m_{H_u^0}^2 &= \frac{8(\lambda^2 + \frac{\lambda^2}{2}) \mu_{\Sigma}^2 }{16\pi^2} \left(\frac{2}{\epsilon} - \gamma + 1 + \log 4\pi - \log(\mu_{\Sigma}^2) \right) \notag \\ & + \left(4 \lambda^2 \mu_{\Sigma}^2 + 2 \lambda^2 \mu_{\Sigma}^2 \right)\frac{1}{16\pi^2} \left( -\frac{2}{\epsilon} +\gamma -1 -\log4\pi + \log(m_{\chi}^2 + \mu_{\Sigma}^2) \right) \notag \\ &+ \lambda^2 (4 + 2) \left(\mu_{\Sigma}^2+m_{T}^2 \right) \frac{1}{16\pi^2} \left(-\frac{2}{\epsilon} +\gamma -1 -\log4\pi + \log(m_{T}^2 + \mu_{\Sigma}^2) \right) \notag \\ =& \frac{12 \lambda^2 \mu_{\Sigma}^2}{16 \pi^2} \left(\frac{1}{2} \log (m^2_{\chi} + \mu_{\Sigma}^2) +\frac{1}{2} \log (m^2_{T} + \mu_{\Sigma}^2) - \log(\mu_{\Sigma}^2) \right) \notag \\ &+ \frac{6 \lambda^2 m^2_{T}}{16\pi^2} \left(-\frac{2}{\epsilon} +\gamma -1 -\log4\pi + \log(m_{T}^2 + \mu_{\Sigma}^2) \right) \notag \\ =& \frac{6 \lambda^2 \mu_{\Sigma}^2}{16 \pi^2} \left(\log \frac{(m^2_{\chi} + \mu_{\Sigma}^2)}{\mu_{\Sigma}^2} + \log \frac{(m^2_{T} + \mu_{\Sigma}^2)}{\mu_{\Sigma}^2} \right) + \frac{6 \lambda^2 m^2_{T}}{16\pi^2} \left(-\frac{2}{\epsilon} +\gamma -1 -\log4\pi + \log(m_{T}^2 + \mu_{\Sigma}^2) \right).\end{aligned}$$ We are only interested in the finite piece. Neutralino and chargino mixing in $Y=\pm1$ {#sec:AppMixing} ========================================== The $Y=\pm1$ mixing matrix for the neutralinos in the basis $\psi^0=\left( \widetilde{B},\widetilde{W}^{0},\widetilde{H_{d}^{0}},\widetilde{H_{u}^{0}},\widetilde{T}^{0},\widetilde{\chi}^{0} \right)$ is given by $$\begin{aligned} {\mathcal{L}}_{\text{Neutralino Mass}} &=& -\frac{1}{2} (\psi^0)^T \mathbf{M}_{\tilde{N}} \psi^0 + \text{c.c.} \\ \mathbf{M}_{\tilde{N}} &=& \begin{pmatrix} M_{1} & 0 & -c_{\beta}s_{W}m_{Z} & s_{\beta}s_{W}m_{Z} & -\sqrt{2}g'v_{T} & \sqrt{2}g'v_{\chi} \\ 0 & M_{2} & c_{\beta}c_{W}m_{Z} & -s_{\beta}c_{W}m_{Z} & -\sqrt{2}gv_{T} & \sqrt{2}gv_{\chi} \\ -c_{\beta}s_{W}m_{Z} & c_{\beta}c_{W}m_{Z} & 0 & -\mu & 0 & 0 \\ s_{\beta}s_{W}m_{Z} & -s_{\beta}c_{W}m_{Z} & -\mu & -2v_{T}\lambda & -2v\lambda s_{\beta} & 0 \\ -\sqrt{2}g'v_{T} & -\sqrt{2}gv_{T} & 0 & -2v\lambda s_{\beta} & 0 & -\mu_{\Sigma} \\ \sqrt{2}g'v_{\chi} & \sqrt{2}gv_{\chi} & 0 & 0 & -\mu_{\Sigma} & 0 \end{pmatrix}, \nonumber \label{neutralinoMixY1}\end{aligned}$$ where $c_{\beta}$, $s_{\beta}$, $c_W$, and $s_W$ denote the cosine and sine of $\beta$ and of $\theta_W$, respectively. The triplets add one chargino.
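Before moving on to the chargino sector, a short numerical sketch of the neutralino mass matrix above may be useful: it builds $\mathbf{M}_{\tilde{N}}$ for an assumed parameter point (decoupled gauginos, $v\simeq174{~\text{GeV}}$, approximate gauge couplings, and tiny placeholder values for the effective triplet vevs $v_T$, $v_{\chi}$) and lists the absolute eigenvalues. All numerical inputs are assumptions of the sketch, not fitted values.

```python
import numpy as np

# Assumed inputs (GeV); placeholders for illustration only.
M1, M2 = 3000.0, 3000.0           # decoupled gaugino masses
mu, mu_sigma, lam = 250.0, 300.0, 0.25
tan_beta, v = 10.0, 174.0
vT, vchi = 0.1, 0.1               # tiny effective triplet vevs (placeholders)
g, gp = 0.65, 0.36                # approximate SU(2) and U(1) gauge couplings
mZ, sW = 91.19, np.sqrt(0.23)
cW = np.sqrt(1 - sW**2)
beta = np.arctan(tan_beta)
sb, cb = np.sin(beta), np.cos(beta)

# Neutralino mass matrix in the basis (B, W0, Hd0, Hu0, T0, chi0), as written above.
MN = np.array([
    [M1, 0, -cb*sW*mZ,  sb*sW*mZ, -np.sqrt(2)*gp*vT,  np.sqrt(2)*gp*vchi],
    [0, M2,  cb*cW*mZ, -sb*cW*mZ, -np.sqrt(2)*g*vT,   np.sqrt(2)*g*vchi],
    [-cb*sW*mZ,  cb*cW*mZ, 0, -mu, 0, 0],
    [ sb*sW*mZ, -sb*cW*mZ, -mu, -2*vT*lam, -2*v*lam*sb, 0],
    [-np.sqrt(2)*gp*vT, -np.sqrt(2)*g*vT, 0, -2*v*lam*sb, 0, -mu_sigma],
    [ np.sqrt(2)*gp*vchi, np.sqrt(2)*g*vchi, 0, 0, -mu_sigma, 0],
])

masses = np.sort(np.abs(np.linalg.eigvalsh(MN)))
print("Neutralino masses (GeV):", np.round(masses, 1))
# The four light eigenvalues cluster in the range set by |mu| and |mu_Sigma|
# (Higgsino- and tripletino-like states), while the gaugino-like states stay near 3 TeV.
```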
Using the basis $\psi^{\pm} = \left( \widetilde{W}^{+},\widetilde{H}_{u}^{+},\widetilde{\chi}^{+}, \widetilde{W}^-, \widetilde{H}_d^-, \widetilde{T}^- \right)$, the chargino mass matrix is $${\mathcal{L}}_{\text{Chargino Mass}}=-\frac{1}{2} (\psi^{\pm})^T \mathbf{M}_{\tilde{C}} \psi^{\pm},$$ where $$\mathbf{M}_{\tilde{C}}= \begin{pmatrix} \mathbf{0} & \mathbf{X}^T \\ \mathbf{X} & \mathbf{0} \end{pmatrix},$$ and $$\mathbf{X}= \begin{pmatrix} M_{2} & gvs_{\beta} & -\sqrt{2}gv_{\chi} \\ gvc_{\beta} & \mu & 0 \\ -\sqrt{2}gv_{T} & \sqrt{2}\lambda vs_{\beta} & \mu_{\Sigma} \end{pmatrix}.\label{eqn:charginoMixY1}$$ Finally, the doubly-charged fermion mass matrix is $${\mathcal{L}}_{\text{Doubly Charged}} = -\frac{1}{2}\begin{pmatrix} \widetilde{\chi}^{++} & \widetilde{T}^{--} \end{pmatrix} \begin{pmatrix} 0 & -\mu_{\Sigma} \\ -\mu_{\Sigma} & 0 \end{pmatrix} \begin{pmatrix} \widetilde{\chi}^{++} \\ \widetilde{T}^{--} \end{pmatrix}.$$

[^1]: E-mail: calvara1@nd.edu

[^2]: E-mail: adelgad2@nd.edu

[^3]: E-mail: amarti41@nd.edu

[^4]: E-mail: bostdiek@nd.edu

[^5]: Triplet extensions which preserve custodial symmetry, such as the Supersymmetric Custodial Triplet Model, allow for large triplet vevs (and light scalars) without tension from electroweak precision observables [@Cort:2013foa; @Garcia-Pepin:2014yfa; @Delgado:2015aha].

[^6]: One could also use a spurion analysis of an extra broken symmetry which would suppress the unwanted couplings [@Lu:2013cta].

[^7]: These are the only possibilities that simultaneously permit a Dirac mass term and supply extra neutral scalars to raise $m_{h}^{2}$.

[^8]: If the $U$ parameter is fixed to $U=0$, the best fit is $\mathcal{T}=0.10\pm0.07$.
--- bibliography: - 'SSRN\_main.bib' --- Introduction {#sec:intro} ============ Volunteers in the U.S. provide around $8$ billion hours of free labor annually. However, roughly $30\%$ of volunteers become disengaged the following year, representing a loss of approximately $\$70$ billion in economic value as well as a significant challenge for the sustainability of organizations relying on volunteerism [@nationalservice2015; @independentsector2018]. Lack of retention partially stems from overutilization as well as the mismatch between a volunteer’s preferences and the opportunities presented to her [@locke2003hold; @brudney2009ain]. The emergence of online volunteer crowdsourcing platforms presents a unique opportunity to design data-driven volunteer management tools that cater to volunteers’ [heterogeneous]{} preferences. In the present work, we move toward this goal by taking [an algorithmic]{} approach to designing nudging mechanisms commonly used to encourage volunteers to perform tasks. This work is motivated by our collaboration with a nonprofit platform, called Food Rescue U.S. (FRUS), that [recovers food from local businesses and donates it to nonprofit agencies by crowdsourcing the transportation to volunteers]{}. In the following, we provide background on FRUS and highlight the challenge it faces when making volunteer nudging decisions. Further, we offer insights into volunteer behavior by analyzing FRUS data from different locations. Then, we list a summary of our contributions. [**FRUS: A Crowdsourcing Platform for Food Recovery:**]{} FRUS is a leading online platform that simultaneously addresses the societal problems of food waste and hunger. Over 60 million tons of food go to waste in the U.S. each year, while 37 million people—including 6 million children—live in food-insecure households. This mismatch is driven in part by the cost of last-mile transportation required to recover perishable donated food from local restaurants and grocery stores. FRUS has empowered donors by connecting them to local agencies and enabling free delivery through its dedicated volunteer base. Currently, it operates in tens of locations across different states, and so far it has recovered over 50 million pounds of food. On FRUS, a volunteering task—which is referred to as a [*rescue*]{}—involves transporting a prearranged, perishable food donation from a donor to a local agency. Scheduled donations are often recurring and they are posted on the FRUS app in advance. While around $78\%$ of rescues are claimed organically by volunteers before the day of the rescue, around $22\%$ remain unclaimed on the last day.[^1] In that case, to encourage volunteers to claim the rescue, FRUS [notifies]{} a subset of volunteers with the hope that at least one of them responds positively. However, based on our interviews with the platform’s local managers, FRUS faces a challenge when deciding whom to notify: on the one hand, it aims to minimize the probability of a missed rescue—which is achievable by notifying more volunteers.[^2] On the other hand, it wants to avoid excessive notifications because that may reduce volunteer engagement.[^3] [Understanding volunteer behavior can help resolve the aforementioned trade-off]{}: if volunteers have preferences for certain rescues, then FRUS should mainly notify them for those tasks. Our analysis of two years of data indeed indicates that volunteer preferences are fairly consistent. 
To highlight this, in Figure \[fig:pcatotal\] we visualize the first three principal components for characteristics of rescues completed by the most active volunteers in two FRUS locations. Each color represents a different volunteer and the size of each circle is proportional to the frequency with which the volunteer completes a rescue of that type. For instance, more than 90% of the rescues completed by the red volunteer in Location (a), as shown in Figure \[fig:PCA1\], are clustered within a cube whose volume is less than one tenth of the PCA component range. As evident from these plots, volunteers tend to claim rescues that have similar characteristics, reflecting their geographical and time preferences.[^4] Our interviews and empirical findings raise a key question that motivates our work: facing such volunteer behavior, how should a volunteer-based online platform, such as FRUS, design an effective [notification system]{} for time-sensitive tasks? [0.48]{} ![Figure \[fig:PCA1\] shows the first three principal components (PCs) for characteristics of rescues completed by the five most active volunteers in Location (a). Each color represents a different volunteer and the size of each circle is proportional to the frequency with which the volunteer completes a rescue with those PCs. Figure \[fig:PCA2\] shows the same plot for Location (b).[]{data-label="fig:pcatotal"}](images/reg1_pca.jpeg "fig:"){width="\textwidth"} [0.48]{} [**Summary of Contributions:**]{} Motivated by our collaboration with FRUS, we (1) introduce the online volunteer notification problem which captures key features of volunteer labor consistent with the literature, (2) develop two online randomized policies that achieve constant-factor guarantees for the online volunteer notification problem, (3) establish upper bounds on the performance of any online policy, and (4) demonstrate the effectiveness of our policies by testing them on FRUS’s data from various locations across the U.S. [**Modeling the Platform’s Notification Problem:**]{} We introduce the [*online volunteer notification problem*]{} to model a platform’s notification decisions when utilizing volunteers to complete time-sensitive tasks. There are three main considerations that the platform should take into account: (1) volunteers’ response to a notification is uncertain, (2) the platform cannot expect volunteers to respond promptly, and (3) if notified excessively, volunteers may suffer from notification fatigue. To include all these considerations in our model, we assume that when each task arrives, the platform simultaneously notifies a [*subset*]{} of volunteers in the hope that at least one responds positively. To model a volunteer’s adverse reaction toward excessive notifications, we assume that a volunteer can be in one of two possible states: [*active*]{} or [*inactive*]{}. In the former state, the volunteer pays attention to the platform’s notifications [and responds positively with her task-specific match probability, whereas in the latter state she ignores all notifications.]{} Upon [notification]{}, an active volunteer will transition to the inactive state for a random inter-activity period. [Because these platforms usually require the recurring completion of similar tasks, they can use historical data to predict their future last-minute needs.]{} For instance, FRUS usually receives donations from the same source on a weekly basis. 
We model this by assuming that tasks belong to a given set of types and they arrive according to a (time-varying) distribution. The platform makes online decisions aiming to maximize the number of completed tasks, knowing the arrival rates, match probabilities, and the inter-activity time distribution, but without observing the state of each volunteer. [**Developing Online Policies:**]{} We develop two randomized policies that are based on ex ante fractional solutions that can be computed in polynomial time. In order to assess the performance of our policies, we use a [linear program benchmark]{} whose optimal value serves as an upper bound on the value of a clairvoyant solution which knows the sequence of arrivals a priori as well as the state of volunteers at each time (see Program (LP), Proposition \[prop:LP\] and Definition \[def:compratio\]). We remark that the platform’s objective—[maximizing the number of completed tasks]{}—[*jointly*]{} depends on the response of all volunteers and exhibits diminishing returns. For example, if the platform notifies two active volunteers $v$ and $u$ about a task ${s}$, then the probability of completion would be $[1 - (1- p_{v,{s}}) (1 - p_{u,{s}})]$ where $p_{v,{s}}$ and $p_{u,{s}}$ are the match probabilities of the pairs $(v,{s})$ and $(u,{s})$, respectively. [This]{} objective function presents two challenges: (1) an ex ante solution based on upper bounding such an objective function by a piecewise linear one [can be]{} ineffective in practice, and (2) jointly analyzing volunteers’ contributions for an online policy while keeping track of the joint distribution of their states (active or inactive) is prohibitively difficult. We address the former challenge by computing ex ante solutions that “better” approximate the true objective function as opposed to only relying on the LP solution (see Programs and and Proposition \[prop:fxstar\]). We overcome the latter one by assuming an artificial priority among volunteers which allows us to decouple their contributions (see Definition \[def:priority\] and Lemma \[lem:falg\]). [Attempting to follow the fractional ex ante solution can result in poor performance since volunteers can become inactive at inopportune times (see Appendix \[ex:followingxstar\]).]{} Therefore, in the design of our policies, we modify the ex ante solution to [account for inactivity]{} while guaranteeing a constant-factor competitive ratio. Our first policy, the [*Scaled-Down Notification (SDN) Policy*]{}, relies on computing a priori the probability that a volunteer is active when following this policy. Equipped with these probabilities, the SDN policy notifies each volunteer such that the [*joint*]{} probability that a volunteer is active and notified is proportional to the ex ante solution (see Algorithm \[alg:one\] and the preceding discussion). On the other hand, our second policy, the [*Sparse Notification (SN) Policy*]{}, relies on [solving a sequence of Dynamic Programs (DPs)—one for each volunteer—to resolve the trade-off between notifying a volunteer now and saving her for future tasks. We solve the DPs in order of volunteers’ artificial priorities, and each subsequent DP is formulated based on the previous solutions]{} (see Algorithm \[alg:two\] and the preceding discussion).
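As a quick illustration of the joint completion probability and its diminishing returns discussed above, the short sketch below evaluates $1-\prod_v (1-p_{v,{s}})$ for nested sets of notified (active) volunteers; the match probabilities are made-up numbers rather than FRUS estimates.

```python
import numpy as np

def completion_prob(match_probs):
    """Probability that at least one notified *active* volunteer responds:
    1 - prod_v (1 - p_v). Monotone and submodular in the notified set."""
    return 1.0 - np.prod(1.0 - np.asarray(match_probs))

# Hypothetical match probabilities for a single task (illustrative only).
p = [0.5, 0.4, 0.3, 0.2]
for k in range(1, len(p) + 1):
    prev = completion_prob(p[:k - 1]) if k > 1 else 0.0
    cur = completion_prob(p[:k])
    print(f"notify top {k}: P(complete) = {cur:.3f} "
          f"(marginal gain {cur - prev:.3f})")
```

Each additional volunteer raises the completion probability, but by less than the previous one; this is exactly the diminishing-returns structure that a piecewise-linear upper bound on the objective misses.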
Our policies are parameterized by the minimum discrete hazard rate (MDHR) of the inter-activity time distribution, which serves as a sufficient condition for the level of “activeness” of volunteers (see Definition \[def:hazard\] and the following discussion). We analyze the competitive ratios of both policies as functions of the MDHR. Interestingly, both policies achieve the same competitive ratio (see Theorems \[thm:alg1\] and \[thm:alg2\]). However, the SN policy demonstrates significantly better performance in practice (as shown and discussed in Section \[sec:data\]). The analysis of both policies relies on [decomposing the problem]{} into individual contributions based on our (artificial) priority scheme. Further, the analysis of SDN relies on proving that the probability of being active can be computed in advance and in polynomial time (see Section \[subsec:alg1\] and Appendix \[proof:beta\]). The analysis of SN crucially uses the dual-fitting framework of [@alaei2012online] and it relies on formulating a linear program along with its dual to place a lower bound on the optimal value of each volunteer’s DP (see Section \[subsec:alg2\] and Appendix \[proof:factorrevealingLP\]). [**Upper Bound on Online Policies:**]{} In order to gain insight into the limitation of online policies when compared to our benchmark, we develop an upper bound on the achievable competitive ratio of any online policy. [Like our policies, the upper bound is]{} parameterized by the MDHR (see Theorem \[thm:hardness\]). [As a consequence,]{} the gap between the achievable upper bound and our lower bound (attained through our policies) depends on the MDHR (see Figure \[fig:hardness\]). When it is small but positive, the gap is fairly small; however, the gap grows as the MDHR increases. Our upper bound relies on analyzing two instances, one of which provides a relatively tight upper bound when the MDHR is small. [**Testing on FRUS Data:**]{} In order to illustrate the effectiveness of our modeling approach and our policies in practice, we evaluate the performance of our policies by testing them on FRUS’s data from different locations. In Section 6, we describe how we estimate model primitives and construct problem instances. Then we numerically show the superior performance of our policies when compared to strategies that resemble the current practice at different locations. The rest of the paper is organized as follows. In Section \[sec:lit\], we review the related literature. In Section \[sec:model\], we formally introduce the online volunteer notification problem as well as the benchmark and the measure of competitive ratio. Section \[sec:algos\] is the main algorithmic section of the paper and is devoted to describing and analyzing our two online policies. In Section \[sec:hardness\], we present our upper bound on the achievable competitive ratio of any online policy. In Section \[sec:data\], we revisit the FRUS application and demonstrate the effectiveness of our policies by testing them on the platform’s data from various locations. Section \[sec:conclude\] concludes the paper. For the sake of brevity, we only include proof ideas in the main text. A detailed proof of each statement is provided [in the referenced appendix.]{}
Related Work {#sec:lit} ============ Our work relates to and contributes to several streams of literature. [**Volunteer Operations and Staffing**]{}: Due to the differences between volunteer and traditional labor as highlighted in [@sampson2006optimization], managing a volunteer workforce provides unique challenges and opportunities that have been studied in the literature using various methodologies [@lacetera2014rewarding; @ata2016dynamic; @sonmez2016improving; @dickerson2019blood; @urrea2019volunteer]. One key operational challenge is the uncertainty in both volunteer labor supply and demand. Using an elegant queuing model, [@ata2016dynamic] study the problem of volunteer staffing with an application to gleaning organizations. Our approach to modeling volunteer behavior (specifically, assuming that notifying an active volunteer triggers a random period of inactivity) bears [some resemblance to]{} the approach taken in [@ata2016dynamic]. In a novel recent work, [@dickerson2019blood] studies the problem of matching blood donors to donation centers, assuming that donors have preferences (over centers) and constraints on the frequency of receiving notifications. Using a stochastic matching policy, they demonstrate strong numerical performance relative to various benchmarks. [There are some similarities between our modeling approach and the approach used in [@dickerson2019blood], but we highlight [three]{} key differences.]{} (1) While their work focuses on the numerical evaluation of policies, we theoretically analyze the performance of our policies and provide an upper bound on the performance achievable by any online policy, as stated in Theorems \[thm:alg1\], \[thm:alg2\], and \[thm:hardness\]. (2) We model volunteers’ adverse reactions to excessive notifications in a general form by considering an arbitrary inter-activity time distribution. (3) We parameterize our achievable upper and lower bounds by the minimum discrete hazard rate of that distribution. [**Crowdsourcing Platforms**]{}: Reflecting the growth of online technologies, there is a burgeoning literature on the operations of crowdsourcing platforms (see e.g. [@karger2014budget; @hu2015product; @alaei2016dynamic; @papanastasiou2018crowdsourcing]). Our work adds to the growing collection of papers that focus specifically on nonprofit crowdsourcing platforms, with applications as varied as educational crowdfunding [@song2018matching], disaster response [@han2019harnessing], and smallholder supply chains [@de2018sustaining; @de2019crowdsourcing]. [Nonprofits often use crowdsourcing in the absence of monetary incentives; in such settings, successful crowdsourcing relies on efficient utilization and engagement of participants.]{} We contribute to this literature by designing online policies for effectively notifying volunteers while avoiding overutilization. [**Online Matching and Prophet Inequalities**]{}: Abstracting away from the motivating application, our work is related to the stream of papers on online stochastic matching, prophet matching inequalities, and online allocation of reusable resources. Given the scope of this literature, we highlight only recent advances and kindly refer the interested reader to [@mehta2013online] for an informative survey.
A standard approach is to design online policies based on an offline solution (see e.g. [@feldman2009online; @manshadi2012online; @jaillet2014online; @wang2018online; @stein2019advance]) and to compare the performance of these policies to a benchmark such as the clairvoyant solution described in [@golrezaei2014real]. Our work builds on this approach by applying techniques from prophet matching inequalities and the magician’s problem [@alaei2012online; @alaei2014bayesian]. Most similar to our work are [@dickerson2018allocation] and [@Rad2019], which both consider settings with [unit-capacity]{} reusable resources (see also [@gong2019online] and [@rusmevichientong2020dynamic] which mainly focus on large-capacity settings). The former designs an adaptive policy to address an online stochastic matching problem, while the latter considers an online assortment optimization problem. We highlight three key differences between our work and these papers. (1) In our work, the platform’s objective function is non-linear. Despite that, we only consider offline solutions that can be computed in polynomial time (as opposed to relying on an oracle). [(2) Volunteers—which represent the resources in our setting—can]{} become unavailable without being matched (i.e., just through notification). (3) We develop parameterized lower and upper bounds based on the minimum discrete hazard rate of the usage duration. Such an approach enables us to gain insight into the impact of the characteristics of the usage duration distribution on the achievable bounds. Model {#sec:model} ===== In this section, we formally introduce the *online volunteer notification problem* that a volunteer-based crowdsourcing platform faces when deciding whom to notify for a task. [As part of]{} the problem definition, we highlight the platform’s objective as well as the trade-off it faces due to the volunteers’ adverse reactions to excessive notifications and the uncertainty in future tasks. Further, we define the measure of competitive ratio and establish a benchmark against which we compare the performance of any online policy. The online volunteer notification problem consists of a set of volunteers, denoted by ${\mathcal{V}}$, and a set of task types, denoted by ${\mathcal{S}}$.[^5] Volunteers (resp. tasks) are indexed from $1$ to $|{\mathcal{V}}| = V$ (resp. $|{\mathcal{S}}| = {S}$). Over $T$ time steps, the platform solicits volunteers to complete a sequence of tasks. In particular, in each time step $t$, a task of type ${s}$ arrives with known probability $\lambda_{{s},t}$. Without loss of generality, we assume at most one task arrives in each time step. Said differently, we assume $\sum_{{s}= 1}^{{S}} \lambda_{{s},t} \leq 1$ and with probability $1- \sum_{{s}= 1}^{{S}} \lambda_{{s},t} := \lambda_{0,t}$, no task arrives. [Whenever a task arrives, the platform can notify volunteers. However, ]{} excessively notifying a volunteer may lead her to suffer from notification fatigue. To model this behavior in a general form, we assume that a volunteer can be in two possible states: [*active*]{} or [*inactive*]{}. In the former state, the volunteer pays attention to the platform’s notifications, whereas in the latter state, she is inattentive. Initially, each volunteer is active. However, upon being notified she transitions to the inactive state and will only become active again in $Z$ periods, where $Z$ is independently drawn from a known inter-activity time distribution denoted by $g(\cdot)$. 
Mathematically, ${\mathbb{P}\left(Z = \tau\right)} = g(\tau)$. To capture the minimum rate at which volunteers transition from inactive to active, we define the minimum discrete hazard rate of the inter-activity time distribution as follows: \[def:hazard\] For a probability distribution $g(\cdot)$, the *minimum discrete hazard rate* (MDHR) is given by $q = \min_{\tau \in \mathbb{N}} \frac{g(\tau)}{1-G(\tau-1)}$, [where $G(\cdot)$ denotes the corresponding CDF.]{}[^6] Note that a large value of $q$ is a sufficient condition to ensure that volunteers’ activity level is high. For example, if $g(\cdot)$ is a geometric distribution, $q$ is the same as its success probability. [However, a small value of $q$ does not imply inactive volunteers:]{} if $g(2) = 1$, i.e., if the inter-activity times are deterministic and equal to 2 periods, then $q=0$ but volunteers are quite active. Before proceeding, we point out that similar modeling assumptions have been made in previous work. In particular, [@ata2016dynamic] models volunteer staffing for gleaning and assumes that once a volunteer is utilized, she will go into a random repose period. Similarly, [@dickerson2019blood] focuses on blood donation and puts a constraint on the frequency with which a volunteer can be notified, which is equivalent to assuming a deterministic inter-activity time. The latter strategy is also practiced in [many]{} FRUS locations. When a donation arrives, the platform observes the donation type and must immediately and irrevocably notify a subset of volunteers.[^7] If an active volunteer $v$ is notified about a task ${s}$, she will respond with match probability $p_{v,{s}}$, independently from all other volunteers. Thus the arriving task is completed if at least one notified volunteer responds. If task ${s}$ arrives at time $t$ and if the subset of volunteers that are both notified and active is given by ${\mathcal{U}}$, then the task will be completed with probability $1 - \prod_{v \in {\mathcal{U}}} (1-p_{v,{s}})$. We highlight that this probability is monotone and submodular with respect to the set ${\mathcal{U}}$. In Section \[sec:data\], we describe how $p_{v,{s}}$ can be estimated accurately in the FRUS setting by using historical data. As mentioned earlier, all volunteers are initially active. The platform knows the arrival rates $\lambda_{{s}, t}$, the match probabilities $p_{v, {s}}$, and the inter-activity time distribution $g(\cdot)$, but it does not observe volunteers’ states. For any instance [${\mathcal{I}}$]{} of the online volunteer notification problem [where]{} ${\mathcal{I}} = \big(\{\lambda_{{s},t}:{s}\in [{S}], t \in [T]\}, \{p_{v,{s}}:v \in [V], {s}\in [{S}]\}, g \big)$,[^8] the platform’s goal is to employ an online policy that maximizes the expected number of completed tasks. In order to evaluate an online policy, we compare its performance to that of a clairvoyant solution that knows the entire sequence of arrivals in advance as well as volunteers’ states in each period. However, [the clairvoyant solution]{} does not know *before* notifying a volunteer how long her period of inactivity will be. Two observations enable us to upper bound the clairvoyant solution with a polynomially-solvable program.
First, note that if the clairvoyant solution notifies a subset of volunteers ${\mathcal{U}}$ about task ${s}$ at time $t$, the probability of completing ${s}$ is $$\begin{aligned} 1 - \prod_{v \in {\mathcal{U}}}(1-p_{v,{s}_t}) \leq \min\Big\{ \sum_{v \in {\mathcal{U}}}p_{v,{s}_t} , 1\Big\}.\end{aligned}$$ In words, we can upper bound the success probability of a subset ${\mathcal{U}}$ with a piecewise-linear function that is the minimum of the expected total number of volunteer responses and $1$. Second, recall that the clairvoyant solution only notifies active volunteers and does not know how long those notified volunteers will remain inactive. As a consequence, we can upper bound the clairvoyant solution via the following program which we denote by (LP): $$\begin{aligned} \mathbf{LP}_{\mathcal{I}} = \text{max}_{\mathbf{x}} \quad \quad \quad & \sum_{t=1}^T \sum_{{s}=1}^{{S}} \lambda_{{s},t} \min\Big\{ \sum_{v=1}^V x_{v,{s},t}p_{v,{s}}, 1\Big\}& \tag{LP} \label{LP} \\ \text{s.t.} \ \ \ \qquad \quad &0\leq x_{v,{s},t}\leq 1 &\forall v, {s}, t \label{eq:lpcon1} \\ &1 \geq \sum_{\tau =1}^t \sum_{{s}=1}^{{S}} \lambda_{{s},\tau} x_{v,{s},\tau} (1-G(t-\tau)) &\forall v, t \label{eq:lpcon2}\end{aligned}$$ ------------------------------------------------------------------------ The decision variables $x_{v,{s},t}$ represent the probability of notifying volunteer $v$ when a task of type ${s}$ arrives at time $t$. Constraint ensures that $x_{v,{s},t}$ is a valid probability. Constraint places limits on the frequency with which volunteers can be notified according to the inter-activity time distribution. In particular, the clairvoyant solution will only notify an active volunteer who will then become inactive for a random number of periods. Thus, in expectation the clairvoyant solution must meet constraint . For ease of reference, in the following, we define the set of all feasible solutions to (LP). Such a definition proves helpful in the rest of the paper. \[def:P\] For any $\mathbf{x} \in \mathbb{R}^{V \times {S}\times T}$, $\mathbf{x} \in {\mathcal{P}}$ if [and only if]{} it satisfies constraints and . The following proposition, which we prove in Appendix \[proof:LP\], establishes the relationship between the clairvoyant solution and $\mathbf{LP}_{\mathcal{I}}$: \[prop:LP\] [For any instance ${\mathcal{I}}$ of the online volunteer notification problem, $\mathbf{LP}_{\mathcal{I}}$ is an upper bound on its clairvoyant solution]{}. In light of Proposition \[prop:LP\], we use $\mathbf{LP}_{\mathcal{I}}$ as a benchmark against which we compare the performance of any policy. Consequently, we define the competitive ratio of an online policy as follows: \[def:compratio\] An online policy is $c$-competitive for the online volunteer notification problem if for any instance ${\mathcal{I}}$, we have: $\mathbf{POL}_{{\mathcal{I}}} \geq c \mathbf{LP}_{{\mathcal{I}}}$, where $\mathbf{POL}_{{\mathcal{I}}}$ represents the expected number of completed tasks by the online policy for instance ${{\mathcal{I}}}$. We will use the competitive ratio as a way to quantify the performance of an online policy. For each of our two policies (presented in the following section), the competitive ratio is parameterized by the MDHR, $q$, and it improves as $q$ increases. Online Policies {#sec:algos} =============== In this section, we present and analyze two policies for the online volunteer notification problem. Both policies are randomized and rely on a fractional solution we compute ex ante using the instance primitives.
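For concreteness, one possible way to assemble and solve the benchmark (LP) above (and thereby obtain the candidate $\mathbf{x^*_{LP}}$ discussed next) is sketched below. The $\min\{\cdot,1\}$ terms in the objective are linearized with auxiliary variables $y_{{s},t} \leq 1$, $y_{{s},t} \leq \sum_v x_{v,{s},t} p_{v,{s}}$; `scipy.optimize.linprog` is just one solver choice, and the tiny instance at the bottom is invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def survival(g, d):
    """P(Z > d) = 1 - G(d) for a distribution g over {1, 2, ...}."""
    return 1.0 - sum(g.get(k, 0.0) for k in range(1, d + 1))

def solve_lp_benchmark(lam, p, g):
    """Solve the (LP) benchmark; returns (x_LP with shape (V, S, T), LP value).

    lam: (S, T) arrival probabilities, p: (V, S) match probabilities,
    g: dict mapping inter-activity duration -> probability.
    """
    V, S = p.shape
    T = lam.shape[1]
    n_x, n_y = V * S * T, S * T
    x_idx = lambda v, s, t: v * S * T + s * T + t     # position of x[v,s,t]
    y_idx = lambda s, t: n_x + s * T + t              # position of y[s,t]

    c = np.zeros(n_x + n_y)
    for s in range(S):
        for t in range(T):
            c[y_idx(s, t)] = -lam[s, t]               # maximize sum lam * y

    A, b = [], []
    # y[s,t] <= sum_v p[v,s] * x[v,s,t]
    for s in range(S):
        for t in range(T):
            row = np.zeros(n_x + n_y)
            row[y_idx(s, t)] = 1.0
            for v in range(V):
                row[x_idx(v, s, t)] = -p[v, s]
            A.append(row); b.append(0.0)
    # sum_{tau<=t} sum_s lam[s,tau] * x[v,s,tau] * (1 - G(t - tau)) <= 1
    for v in range(V):
        for t in range(T):
            row = np.zeros(n_x + n_y)
            for tau in range(t + 1):
                for s in range(S):
                    row[x_idx(v, s, tau)] = lam[s, tau] * survival(g, t - tau)
            A.append(row); b.append(1.0)

    bounds = [(0, 1)] * (n_x + n_y)                   # also enforces y <= 1
    res = linprog(c, A_ub=np.array(A), b_ub=b, bounds=bounds, method="highs")
    return res.x[:n_x].reshape(V, S, T), -res.fun

# Tiny made-up instance: 2 volunteers, 1 task type, 3 periods, g(2) = 1.
lam = np.array([[1.0, 0.5, 1.0]])
p = np.array([[0.6], [0.3]])
x_lp, lp_val = solve_lp_benchmark(lam, p, {2: 1.0})
print("LP benchmark value:", round(lp_val, 3))
```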
Thus, we begin this section by introducing the ex ante solution in Section \[subsec:offline\]. We then proceed to describe our algorithms and analyze their competitive ratios in Sections \[subsec:alg1\] and \[subsec:alg2\]. Ex Ante Solution {#subsec:offline} ---------------- As stated in Section \[sec:intro\], both of our online policies rely on an ex ante solution which we denote by $\mathbf{x^*} \in [0,1]^{V \times {S}\times T}$. Given our benchmark, we focus our attention on solutions that are feasible in (LP), i.e., $\mathbf{x^*} \in {\mathcal{P}}$ (see Definition \[def:P\]). Clearly, $\mathbf{x^*_{LP}}$—the solution to (LP) in Section \[sec:model\]—is a potential ex ante solution. However, in practice, such a solution [can]{} prove ineffective because it does not take into account the diminishing returns of notifying an additional volunteer about a task. As a result, it may ignore some tasks while notifying an excessive number of volunteers about others [(e.g., see Appendix \[ex:xstarcandidates\])]{}. Given any $\mathbf{x} \in {\mathcal{P}}$, for a moment, suppose volunteers are always active. Then if we notify each volunteer independently according to $\mathbf{x}$, the expected number of completed tasks would be:[^9] $$\begin{aligned} \label{eq:fdef1} f(\mathbf{x}):= \sum_{t=1}^T \sum_{{s}=1}^{{S}} \lambda_{{s},t} \Big(1 - \prod_{v=1}^{V} (1- x_{v,{s},t}p_{v,{s}}) \Big).\end{aligned}$$ Because $\mathbf{x^*_{LP}}$ is the optimal solution of a piecewise-linear objective, it ignores the submodularity in $f(\mathbf{x})$.[^10] In light of this intuition, we introduce two other candidates that can be computed in polynomial time. First, we aim to find the feasible point that maximizes $f(\cdot)$. We denote this optimization problem by (AA) which stands for [*Always Active*]{}. Even though \[MRG\] is $NP$-hard [@bian2017guaranteed], simple polynomial-time algorithms such as the variant of the Frank-Wolfe algorithm described below (proposed in [@bian2017guaranteed]) are known to work well in practice. The algorithm iteratively maximizes a linearization of $f(\mathbf{x})$ and returns a convex combination of feasible solutions, which therefore must be feasible. We denote the output of this algorithm by $\mathbf{x^*_{AA}}$ and use it as another candidate for the ex ante solution. $$\max_{\mathbf{x} \in \mathcal{P}} f(\mathbf{x}) \label{MRG} \tag{AA}$$ 1. Set $\mathbf{x}^0 = \mathbf{0}$. 2. **For** $i$ from $1$ to $n$: 1. Solve $\mathbf{y}^i =\text{argmax}_{\mathbf{x} \in {\mathcal{P}}} \langle \mathbf{x}, \nabla f(\mathbf{x}^{i-1}) \rangle$ 2. Set $\mathbf{x}^i = \mathbf{x}^{i-1} +\frac{1}{n} \mathbf{y}^i$ 3. Return $\mathbf{x}^n$ Note that the expected number of completed tasks, as defined in , jointly depends on the contributions of all volunteers. This property makes optimizing such an objective challenging. Further, when assessing any online policy, jointly analyzing volunteers’ contributions while keeping track of the joint distribution of their states (active or inactive) is prohibitively difficult.[^11] We overcome this challenge by defining the following artificial priority scheme among volunteers which enables us to “decouple” the contributions of volunteers and find our last candidate for the ex ante solution. \[def:priority\] Under the index-based priority scheme, when multiple volunteers respond to a notification, the one with the smallest index completes the task.
[^12] Following the index-based priority scheme allows us to define individual contributions for each volunteer as shown in the following lemma (proven in Appendix \[proof:falg\]). \[lem:falg\] For any $\mathbf{x} \in [0,1]^{V \times {S}\times T}$, $f(\mathbf{x}) = \sum_{v = 1}^{V} f_v(\mathbf{x})$ where $f(\cdot)$ is defined in and $$\begin{aligned} \label{eq:decouple} f_v(\mathbf{x}) := \sum_{t=1}^T \sum_{{s}=1}^{{S}} \lambda_{{s},t} \Big(\prod_{u < v}(1-p_{u, {s}} x_{u, {s}, t})\Big) p_{v, {s}} x_{v, {s}, t}.\end{aligned}$$ For any $v \in [V]$, the term $\left(\prod_{u < v}(1-p_{u, {s}} x_{u, {s}, t})\right) p_{v, {s}} x_{v, {s}, t}$ in represents the probability that under the index-based priority scheme, volunteer $v$ is the lowest-indexed volunteer to respond positively to a notification about task ${s}$ at time $t$. Further, this term only depends on the fractional solution of volunteers with lower index than $v$. In addition, if we treat $x_{u, {s}, t}$ as fixed for $1 \leq u <v$, then $\left(\prod_{u < v}(1-p_{u, {s}} x_{u, {s}, t})\right) p_{v, {s}} x_{v, {s}, t}$ is linear in $x_{v, {s}, t}$. In light of these observations, we define our last candidate as the solution of a sequence of linear programs in which volunteers maximize their individual contributions in the order of their priority. This is summarized in the program . **For** $v$ from $1$ to $V$: $$\begin{aligned} \max_{\{x_{v,{s},t}:{s}\in [{S}], t \in [T]\}} \ \ \quad \quad & \sum_{t=1}^T \sum_{{s}=1}^{{S}} \lambda_{{s},t} \left(\prod_{u < v}(1-p_{u, {s}} x^{SQ}_{u, {s}, t})\right) p_{v, {s}} x_{v, {s}, t}& \tag{SQ-$v$} \label{SQ} \\ \text{subject to} \qquad \ \ 0 \leq &x_{v,{s},t} \leq 1 &\forall {s}, t \nonumber \\ 1 \geq & \sum_{\tau =1}^t \sum_{{s}=1}^{{S}} \lambda_{{s},\tau} x_{v,{s},\tau} (1-G(t-\tau)) &\forall t \nonumber \end{aligned}$$ For a given volunteer $v$, the program uses the solutions from previous iterations, i.e., $x^{SQ}_{u,{s},t}$ for $u < v$. As a result, this solution takes into account the diminishing returns from notifying multiple volunteers. We denote the solution to these $V$ sequential LPs as $\mathbf{x^*_{SQ}}$. Finally, we remark that the above decoupling idea proves helpful in both designing and analyzing our online policies. Having three candidates, we define $$\mathbf{x^*}:= \text{argmax}_{\mathbf{x} \in \{\mathbf{x^*_{LP}}, \mathbf{x^*_{AA}}, \mathbf{x^*_{SQ}} \}} f(\mathbf{x}) \label{eq:xstar}$$ The following proposition establishes a lower bound on $f(\mathbf{x^*})$ based on the benchmark $\mathbf{LP}$. \[prop:fxstar\] For $\mathbf{x^*}$ defined in , $$f(\mathbf{x^*}) \geq (1-\frac{1}{e})\mathbf{LP}.$$ The above worst case ratio is achieved by the ratio of $f(\mathbf{x^*_{LP}})$ to $\mathbf{LP}$, and it is tight. [However, we stress that $\mathbf{x^*_{AA}}$ and $\mathbf{x^*_{SQ}}$ can provide significant improvements. [A simple example illustrating this point can be found in Appendix \[ex:xstarcandidates\], while a full proof of Proposition \[prop:fxstar\] can be found in Appendix \[proof:fxstar\].]{} When testing our policies on FRUS data (as detailed in Section \[sec:data\]), we find that using $\mathbf{x^*}$ instead of $\mathbf{x^*_{LP}}$ results in an average improvement of 5%, up to a maximum of 23%.]{} We conclude this section by noting that an online policy which directly follows $\mathbf{x^*}$ (i.e., a policy that at time $t$, upon arrival of ${s}$, notifies volunteer $v$ independently with probability $x^*_{v,{s},t}$) does not achieve a good competitive ratio.
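To make the decoupling of Lemma \[lem:falg\] concrete, the sketch below evaluates $f$ and the per-volunteer contributions $f_v$ on a random fractional solution and checks that they sum to the same value; the dimensions and probabilities are arbitrary test values, not an actual FRUS instance.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, lam, p):
    """Expected number of completed tasks if volunteers were always active:
    sum_{t,s} lam[s,t] * (1 - prod_v (1 - x[v,s,t] * p[v,s]))."""
    V, S, T = x.shape
    val = 0.0
    for s in range(S):
        for t in range(T):
            val += lam[s, t] * (1.0 - np.prod(1.0 - x[:, s, t] * p[:, s]))
    return val

def f_v(v, x, lam, p):
    """Contribution of volunteer v under the index-based priority scheme:
    she responds and no lower-indexed volunteer does."""
    V, S, T = x.shape
    val = 0.0
    for s in range(S):
        for t in range(T):
            lower = np.prod(1.0 - x[:v, s, t] * p[:v, s])  # no lower-index response
            val += lam[s, t] * lower * x[v, s, t] * p[v, s]
    return val

V, S, T = 4, 3, 6
lam = rng.uniform(0, 1.0 / S, size=(S, T))   # ensures sum_s lam[s,t] <= 1
p = rng.uniform(0, 1, size=(V, S))
x = rng.uniform(0, 1, size=(V, S, T))        # any fractional solution

total = sum(f_v(v, x, lam, p) for v in range(V))
print(round(f(x, lam, p), 6), round(total, 6))  # the two values agree
```

The same routine for $f_v$ is, in effect, what the sequential program (SQ-$v$) maximizes over $x_{v,\cdot,\cdot}$ while holding the lower-indexed solutions fixed.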
This stems from the fact that $\mathbf{x^*}$ “respects” the inactivity period of volunteers only in expectation. Consequently, it is possible that volunteers are inactive when high-value tasks arrive (e.g. tasks where the match probability is close to $1$) because they were notified earlier (according to $\mathbf{x^*}$) for low-value tasks. We present an illustrative example in Appendix \[ex:followingxstar\]. Therefore, we develop two policies based on two different modifications of the ex ante solution: (1) properly scaling it down and (2) sparsifying it. The former guides our first policy which we call the [*scaled-down notification*]{} policy, whereas the latter guides our second policy, referred to as the [*sparse notification*]{} policy. These policies are described and analyzed in the next two sections, respectively. Scaled-Down Notification Policy {#subsec:alg1} ------------------------------- In this section, we present our scaled-down notification (SDN) policy which is a non-adaptive randomized policy that independently notifies volunteers according to a predetermined set of probabilities based on $\mathbf{x^*}$.[^13] The policy relies on the following ideas: (1) [Fixing a policy, suppose we can compute the *ex ante* probability that any volunteer $v$ is active at time $t$ when following that policy.]{} Let us denote such an ex ante probability by $\beta_{v,t}$. Then if ${s}$ arrives at time $t$, we notify $v$ with probability ${c}x^*_{v,{s},t}/\beta_{v,t}$ [where ${c}\in [0,\beta_{v,t}/x^*_{v,{s},t}]$.]{} As a result, she will be active *and* notified with probability ${c}x^*_{v,{s},t}$. (2) If she was the only notified volunteer, then her probability of completing this task would be simply ${c}x^*_{v,{s},t}p_{v,{s}}$. Even though this is not the case, using the index-based priority scheme and the contribution decoupling idea in Lemma \[lem:falg\], we can show her contribution will be proportional to ${c}x^*_{v,{s},t}p_{v,{s}}$. (3) Consequently, we would like to set ${c}$ as large as possible. However, ${c}$ cannot be larger than $\frac{\beta_{v,t}}{x^*_{v, {s}, t}}$ since notification probabilities cannot exceed $1$. Thus in the design of the policy, we find the largest feasible ${c}$, which we prove to be $1/(2-q)$ where $q$ is the MDHR of the inter-activity time distribution (see Definition \[def:hazard\]). The formal definition of our policy is presented in Algorithm \[alg:one\]. In the rest of this section, we analyze the competitive ratio of the SDN policy. Our main result is the following theorem: **Offline Phase**: 1. Compute $\mathbf{x^*}$ according to . 2. Set $\beta_{v,1} = 1$ and $\beta_{v, t} = 1 - \sum_{t'=1}^{t-1} \sum_{{s}=1}^{{S}} \lambda_{{s},t'} \frac{x^*_{v,{s},t'}}{2-q}(1-G(t - t'))$ for all $v \in [V], t \in [T] \setminus [1]$ **Online Phase**: 1. **For** $t \in [T]$: 1. If task ${s}$ arrives in time $t$, then **for** $v$ in $V$: 1. Notify $v$ with probability $\frac{x^*_{v,{s},t}}{(2-q)\beta_{v,t}}$ \[thm:alg1\] Suppose the MDHR of the inter-activity time distribution is $q$. Then the scaled-down notification policy, defined in Algorithm \[alg:one\], is $\frac{1}{2-q}(1-\frac{1}{e})$-competitive. We remark that Theorem \[thm:alg1\] implies that the competitive ratio of our policy improves as $q$ increases. However, a larger value of $q$ does not imply that the probability of notification is uniformly larger. 
If $q$ increases, the ex ante solution as well as the ex ante probability of being active will also change, both of which affect the notification probability. The proof of Theorem \[thm:alg1\] builds on the ideas described above and consists of several steps. First, in the following lemma, we prove that for any $v \in [V]$ and $t \in [T]$, $\beta_{v,t}$ defined in Algorithm \[alg:one\] is indeed the probability that $v$ is active at time $t$ and $\beta_{v,t}$ is at least $\frac{1}{2-q}$.[^14] \[lem:beta\] For the SDN policy defined in Algorithm \[alg:one\], let $\mathcal{E}_{v,t}$ represent the event that volunteer $v$ is active in period $t$. Then for all $v \in [V]$ and all $t \in [T]$, ${\mathbb{P}\left(\mathcal{E}_{v,t}\right)} = \beta_{v,t}$. Further, $\beta_{v,t}\geq \frac{1}{2-q}$. We prove this lemma via total induction, relying critically on the fact that $\mathbf{x^*} \in \mathcal{P}$ (see Definition \[def:P\]). The full proof can be found in Appendix \[proof:beta\]. Next, utilizing the index-based priority scheme (in Definition \[def:priority\]) and the contribution decoupling idea [(in Lemma \[lem:falg\])]{}, we lower bound the contribution of each volunteer according to their priority in the following lemma: \[lem:contributionsalg1\] Under the index-based priority scheme (in Definition \[def:priority\]) and the SDN policy, for any $\mathbf{x} \in {\mathcal{P}}$, the contribution of volunteer $v \in [V]$, i.e., the expected number of tasks she completes, is at least $\frac{1}{2-q} f_v(\mathbf{x})$, with $f_v(\cdot)$ defined in . To prove Lemma \[lem:contributionsalg1\], we first show that a volunteer $v \in [V]$ responds to a notification about task ${s}\in [{S}]$ in period $t \in [T]$ with probability $\frac{x^*_{v,{s},t}}{2-q}$. We then place an upper bound on the probability that a volunteer with a smaller index also responds to a notification about that same task. A full proof can be found in Appendix \[proof:contributionsalg1\]. The last steps in the proof [of Theorem \[thm:alg1\]]{} are to compare the aggregate contribution of volunteers with the benchmark utilizing Lemma \[lem:falg\] and Proposition \[prop:fxstar\]. The detailed proof of Theorem \[thm:alg1\] is presented in Appendix \[proof:thmalg1\]. Sparse Notification Policy {#subsec:alg2} -------------------------- In this section, we present our second policy, the sparse notification (SN) policy, which relies on a different modification of the ex ante solution. Before describing the policy, we briefly discuss our motivation for designing a second policy. Though simple and intuitive, the SDN policy only relies on the ex ante solution to resolve the trade-off between the immediate reward of notifying a volunteer and saving her for a future arrival. To see this, note that even in the last period $T$, the SDN policy [follows a scaled-down version of $\mathbf{x^*}$]{}. [To more accurately resolve this trade-off, in designing the SN policy, we utilize]{} the ex ante solution and the index-based priority scheme (see Definition \[def:priority\]) [to]{} formulate a sequence of one-dimensional DPs whose optimal value will serve as a lower bound on the contribution of each volunteer according to her priority (as shown in Lemma \[lem:contributionsalg2\]). The solution of these DPs is a sparsified version of the ex ante solution $\mathbf{x^*}$. Namely, let us denote $\mathbf{{\tilde{x}}}$ as the solution of the sequence of DPs. For any $v$, ${s}$, and $t$, $\tilde{x}_{v,{s}, t}$ is either $0$ or $x^*_{v,{s}, t}$.
Equipped with $\mathbf{{\tilde{x}}}$, [which we compute in advance,]{} the SN policy simply follows $\mathbf{{\tilde{x}}}$ in the online phase. Our DP formulation and its analysis follow the framework developed in [@alaei2012online] and [@alaei2014bayesian], which is also used in [@Rad2019]. Next we describe the DP formulation. Consider volunteer $v \in [V]$ and suppose we have already solved the first $(v-1)$ DPs. Thus we have $\{\tilde{x}_{u,{s},t}: u \in [v-1], {s}\in [{S}], t \in [T]\}$. Let us denote the value-to-go of the DP at time $t$ by $J_{v,t}$. Clearly $J_{v,T+1} =0$. We set $v$’s reward at time $t$ for task ${s}$ to be $$r_{v, {s}, t} := p_{v,{s}} \prod_{u =1}^{v -1}(1-{\tilde{x}}_{u,{s},t}p_{u,{s}})\footnote{We emphasize that this is not the actual reward, i.e., it is not the probability that volunteer $v$ completes task ${s}$ under the index-based priority scheme. However, it is a lower bound, as shown in the proof of Lemma \ref{lem:contributionsalg2}.} \label{eq:rewards}$$ The actions available when task ${s}$ arrives at time $t$ are to notify $v$ with probability $x^*_{v,{s},t}$ or to not notify $v$. Thus when deciding on the optimal action (which can be either $0$ or $x^*_{v,{s},t}$), we compare the (current and future) reward of notifying $v$ now to the reward of saving her for the next period. Formally, $$\begin{aligned} \label{eq:DP:sol} \tilde{x}_{v, {s}, t} = x^*_{v, {s}, t} {\mathbb{I}\Big(r_{v,{s},t} + \sum_{\tau = t+1}^T g(\tau - t){J}_{v, \tau} \geq {J}_{v,t+1}\Big)}\end{aligned}$$ The term in the indicator on the left hand side is the reward of notifying $v$ in the current period $t$, which consists of two parts: (1) the immediate reward we get from notifying $v$—which will make her inactive for $Z$ periods—and (2) the future reward once she becomes active again. The right hand side within the indicator simply represents the reward when $v$ is not notified [and remains active in period $t+1$]{}. Given , , [and $J_{v, T+1} = 0$]{}, we can iteratively compute $\{J_{v,t}; t \in [T] \}$ as follows: $$\begin{aligned} J_{v,t} = \sum_{{s}=1}^{{S}} \lambda_{{s}, t}\Big((1-{\tilde{x}}_{v, {s}, t}){J}_{v,t+1} + {\tilde{x}}_{v,{s},t}\Big( r_{v,{s},t} + \sum_{\tau = t+1}^{T} g(\tau - t){J}_{v, \tau}\Big) \Big) \label{eq:jv}\end{aligned}$$ The formal definition of our policy is presented in Algorithm \[alg:two\]. In the rest of this section, we analyze the competitive ratio of the SN policy. Our main result is the following theorem: **Offline Phase**: 1. Compute $\mathbf{x^*}$ according to 2. **For** all $v \in [V]$: 1. **For** all ${s}\in [{S}]$ and $t \in [T]$, [compute $r_{v,{s}, t}$ according to ]{} 2. Set ${J}_{v,T+1} = 0$ 3. **For** $t =T$ to $t = 1$ : 1. **For** all ${s}\in [{S}]$, [compute $\tilde{x}_{v, {s}, t}$ according to ]{} 2. [Compute $J_{v,t}$ according to ]{} **Online Phase**: 1. **For** $t$ from $1$ to $T$: 1. **If** task ${s}$ arrives in time $t$, then **for** $v$ in $V$: 1. Notify $v$ with probability ${\tilde{x}}_{v,{s}, t}$ \[thm:alg2\] Suppose the MDHR of the inter-activity time distribution is $q$. Then the sparse notification policy, defined in Algorithm \[alg:two\], is $\frac{1}{2-q}(1-\frac{1}{e})$-competitive. A few remarks are in order: (1) The competitive ratios of our two policies are identical, implying that in the worst case they guarantee the same performance. However, for practical instances, the SN policy performs significantly better (as shown by our test results in Section \[sec:data\]).
Intuitively, this is because the design of the SN policy explicitly aims to [optimally]{} resolve the trade-off between notifying a volunteer now or keeping her active for later [based on $\mathbf{x^*}$]{}. On the other hand, the design of the SDN policy only aims to proportionally follow $\mathbf{x^*}$. As a result, the SDN policy’s numerical performance is not substantially better than its worst-case guarantee. On the other hand, the SN policy can perform much better than its worst-case guarantee [(see Appendix \[ex:SNvsSDN\] for an illustrative example)]{}. (2) Similar to the SDN policy, the competitive ratio of the SN policy improves when $q$ increases. However, the design of the SN policy does not directly make use of $q$. The proof of Theorem \[thm:alg2\] consists of two main lemmas. First, in the following lemma, we lower bound the contribution of each volunteer $v$ by $J_{v,1}$: \[lem:contributionsalg2\] Under the index-based priority scheme (in Definition \[def:priority\]) and the SN policy, the contribution of volunteer $v \in [V]$, i.e., the expected number of tasks she completes, is at least $J_{v,1}$, where $J_{v,1}$ is defined in . The proof of this lemma follows from the DP formulation as well as the observation that for any $v \in [V]$, ${s}\in [{S}]$, and $t \in [T]$, the probability that a higher-priority volunteer completes the task is upper bounded by $1-\prod_{u=1}^{v -1}(1-p_{u,{s}}\tilde{x}_{u, {s}, t})$. A full proof can be found in Appendix \[proof:contributionsalg2\]. The second main step of the proof is to compare $J_{v,1}$ to the benchmark $\mathbf{LP}$. In order to do so, we follow the dual-fitting approach of [@alaei2012online]. In particular, given the inter-activity time distribution, we set up a linear program to find the “worst” possible combination of per-stage rewards that give rise to the minimum possible value of $J_{v,1}$. Finding the optimal solution to this LP proves to be difficult. Instead we find a feasible solution to its dual, which enables us to lower bound $J_{v,1}$. The LP and its dual are presented in Table \[table:lpdual\]. In the LP formulation, the first two sets of constraints follow from the DP definition. [Note that the value of $J_{v,1}$ will crucially depend on $\sum_{t=1}^T \sum_{{s}=1}^{{S}} \lambda_{{s},t} r_{v, {s}, t} x^*_{v,{s},t}$, e.g., if $r_{v,{s},t} = 0$ for all $v \in [V]$, ${s}\in [{S}]$, and $t \in [T]$, then $J_{v,1} = 0$. This motivates the final constraint, which provides a constant against which we can compare $J_{v,1}$.]{} The following lemma establishes a lower bound on $J_{v,1}$. \[lemma:factorrevealingLP\] Under the index-based priority scheme (see Definition \[def:priority\]), for any $\mathbf{x} \in {\mathcal{P}}$ and volunteer $v \in [V]$, we have $J_{v,1} \geq \frac{1}{2-q}f_v(\mathbf{x})$ where $f_v(\mathbf{x})$ is defined in . The proof of Lemma \[lemma:factorrevealingLP\] (presented in Appendix \[proof:factorrevealingLP\]) amounts to confirming that setting $\mu = \frac{1}{2-q}$ and defining all other dual variables such that the constraints hold with equality is a feasible solution to (Dual). Given Lemma \[lemma:factorrevealingLP\], we complete the proof of Theorem \[thm:alg2\] by applying Lemma \[lem:falg\] and Proposition \[prop:fxstar\]. The complete proof is presented in Appendix \[proof:thmalg2\]. 
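To summarize the two policies computationally, the following sketch implements their offline phases as described in Algorithms \[alg:one\] and \[alg:two\]: the $\beta_{v,t}$ recursion and scaled-down notification probabilities for SDN, and the per-volunteer DP that sparsifies $\mathbf{x^*}$ for SN. The instance and the stand-in ex ante solution are made up (and chosen so that $\sum_{{s}}\lambda_{{s},t}=1$ in every period, so the recursion for $J_{v,t}$ needs no separate no-arrival continuation term); in an actual implementation, $\mathbf{x^*}$ would come from the candidates of Section \[subsec:offline\].

```python
import numpy as np

# Small made-up instance (not FRUS data); x_star stands in for the ex ante
# solution and is assumed to be feasible in P.
V, S, T = 2, 2, 5
lam = np.full((S, T), 0.5)              # sum_s lam[s, t] = 1 for every t
p = np.array([[0.7, 0.2],
              [0.3, 0.6]])              # p[v, s]: match probabilities
g = {1: 0.5, 2: 0.5}                    # inter-activity distribution g(.)
x_star = np.full((V, S, T), 0.25)

def surv(d):
    """P(Z > d) = 1 - G(d)."""
    return 1.0 - sum(prob for dur, prob in g.items() if dur <= d)

def mdhr(g):
    """Minimum discrete hazard rate over the support of g (Definition def:hazard);
    beyond the support the ratio is 0/0 and is ignored."""
    haz, cum = [], 0.0
    for dur in range(1, max(g) + 1):
        haz.append(g.get(dur, 0.0) / (1.0 - cum))    # g(dur) / (1 - G(dur-1))
        cum += g.get(dur, 0.0)
    return min(haz)

q = mdhr(g)

# --- SDN offline phase (Algorithm alg:one) ----------------------------------
beta = np.ones((V, T))                  # beta[v, t] = P(v is active at t)
for v in range(V):
    for t in range(1, T):
        beta[v, t] = 1.0 - sum(
            lam[s, tp] * x_star[v, s, tp] / (2.0 - q) * surv(t - tp)
            for tp in range(t) for s in range(S))
sdn_prob = x_star / ((2.0 - q) * beta[:, None, :])   # notify prob. if s arrives at t

# --- SN offline phase (Algorithm alg:two) -----------------------------------
x_tilde = np.zeros((V, S, T))           # sparsified ex ante solution
J = np.zeros((V, T + 1))                # J[v, t] = value-to-go, J[v, T] = 0
for v in range(V):                      # volunteers in priority order
    for t in range(T - 1, -1, -1):
        # reward r: v responds and no lower-priority volunteer is planned to
        r = np.array([p[v, s] * np.prod(1.0 - x_tilde[:v, s, t] * p[:v, s])
                      for s in range(S)])
        future = sum(g.get(dt, 0.0) * J[v, t + dt] for dt in range(1, T - t + 1))
        for s in range(S):
            if r[s] + future >= J[v, t + 1]:          # notifying now is worthwhile
                x_tilde[v, s, t] = x_star[v, s, t]
        J[v, t] = sum(lam[s, t] * ((1.0 - x_tilde[v, s, t]) * J[v, t + 1]
                                   + x_tilde[v, s, t] * (r[s] + future))
                      for s in range(S))

print("MDHR q =", q)
print("SDN beta:", np.round(beta, 3))
print("SDN notification probabilities at t=0:", np.round(sdn_prob[:, :, 0], 3))
print("SN values J[v, 0]:", np.round(J[:, 0], 3))
```

In the online phase, both policies simply draw independent notification decisions from these precomputed probabilities whenever a task arrives, which is why all of the work above can be done before the horizon starts.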
Upper Bound on Competitive Ratio {#sec:hardness} ================================ In this section, we provide an upper bound on the competitive ratio of any online policy in the online volunteer notification problem. Like the lower bound achieved by our policies in Section \[sec:algos\], the upper bound is parameterized by the MDHR of the inter-activity time distribution, $q$. The main result of this section is the following theorem: \[thm:hardness\] Suppose the MDHR of the inter-activity time distribution is $q$ where $q \in [1/16,1] \cup \{1/n, n \in \mathbb{N} \} \cup \{0 \} $. Then no online algorithm can achieve a competitive ratio greater than $\kappa$, where for $q>0$ $$\label{eq:kappa} \kappa = \min \Big\{\frac{1}{2-q}, 1 + q - \frac{q(1-q)}{\log(\frac{1}{1-q})(1+q)}(1-e^{-1}) \Big \},$$ and for $q = 0$, we have $\kappa = \frac{1}{2}$.[^15] Figure \[fig:hardness\] provides a summary of our lower and upper bounds on the achievable competitive ratio for the online volunteer notification problem as a function of $q$. We make the following observations based on the [theorem and accompanying]{} plot: [(1) the upper bound applies to all policies, even those that cannot be computed in polynomial time,]{} (2) both the upper and lower bounds improve as $q$ increases, and (3) the competitive ratios of our online policies are fairly close to the upper bound when $q$ is small but positive. However, the gap grows for larger values of $q$. The proof of Theorem \[thm:hardness\] relies on analyzing the following two instances, each giving one of the terms in the definition of $\kappa$ as shown in . Instance ${\mathcal{I}}_1$ attains the minimum when $q \in \{0\} \cup [1/16,1]$ whereas Instance ${\mathcal{I}}_2$ attains it when $q \in \{1/n, n > 16, n \in \mathbb{N} \}$. Suppose $V = 1$, ${S}= 2$, $T=2$, and $g(1) = q$, where $q \in [0,1]$. The arrival probabilities are given by $\lambda_{1, 1} = 1$ and $\lambda_{2, 2} = \frac{\epsilon}{1-q}$, where $\epsilon \ll 1-q$. The volunteer match probabilities are given by $p_{1, 1} = \epsilon$ and $p_{1,2} = 1$. The left panel of Figure \[fig:exhardness\] visualizes Instance ${\mathcal{I}}_1$. The following lemma—which we prove in Appendix \[proof:exprophet\]—states that no online policy can complete more than a $\frac{1}{2-q}$ fraction of $\mathbf{LP}_{{\mathcal{I}}_1}$. \[lem:exprophet\] In instance ${\mathcal{I}}_1$, the expected number of completed tasks under any online policy is at most $\frac{1}{2-q} \mathbf{LP}_{{\mathcal{I}}_1}$. Before proceeding to the second instance, we make two remarks: (1) If $q = 0$, the above instance is equivalent to the [canonical]{} instance used in the prophet inequality to establish an upper bound of $1/2$ (see, e.g., [@hill1992survey]). (2) The term $(1-1/e)$ in the competitive ratio of both policies corresponds to the gap between $f(\mathbf{x^*})$ (defined in ) and the benchmark $\mathbf{LP}$, whereas the $\frac{1}{2-q}$ corresponds to the gap between the performance of our online policy and $f(\mathbf{x^*})$ due to the loss in the online phase. In Instance ${\mathcal{I}}_1$, there is only one volunteer and consequently $f(\mathbf{x^*}) = \mathbf{LP}_{{\mathcal{I}}_1}$. Therefore, Instance ${\mathcal{I}}_1$ shows that the lower bound achieved in the online phase of our policies is tight, as they both attain at least $\frac{1}{2-q}f(\mathbf{x^*})$.
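Before turning to the second instance, the gap shown in Figure \[fig:hardness\] can also be read off numerically: the sketch below evaluates the upper bound $\kappa(q)$ of Theorem \[thm:hardness\] against the $\frac{1}{2-q}(1-\frac{1}{e})$ guarantee of Theorems \[thm:alg1\] and \[thm:alg2\] at a few admissible values of $q$.

```python
import math

def lower_bound(q):
    """Competitive ratio guaranteed by the SDN and SN policies."""
    return (1.0 / (2.0 - q)) * (1.0 - 1.0 / math.e)

def upper_bound(q):
    """kappa(q) from Theorem thm:hardness; kappa = 1/2 for q = 0."""
    if q == 0.0:
        return 0.5
    second = 1.0 + q - (q * (1.0 - q)) * (1.0 - math.exp(-1.0)) / (
        math.log(1.0 / (1.0 - q)) * (1.0 + q))
    return min(1.0 / (2.0 - q), second)

for q in [0.0, 1.0 / 64, 1.0 / 16, 0.25, 0.5, 0.9]:
    print(f"q = {q:6.4f}: lower = {lower_bound(q):.3f}, "
          f"upper = {upper_bound(q):.3f}")
```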
The construction of our second instance is more delicate as it aims to find an instance for which both the loss in the offline phase (i.e., the gap between $f(\mathbf{x^*})$ and $\mathbf{LP}_{{\mathcal{I}}}$) and the loss in the online phase (i.e., the gap between the performance of the online policy and $f(\mathbf{x^*})$) are large. Suppose $V = \frac{1}{q} = n$, ${S}= 1$, $T=n^2+1$, and $g(\cdot)$ is the geometric distribution with parameter $q$, e.g. $g(\tau) = q(1-q)^{\tau-1}$. The arrival probabilities are given by $\lambda_{1, 1} = 1$ and $\lambda_{1, t} = q$ for $t \in [T] \setminus [1]$. The volunteers are homogeneous with $p_{v, 1} = q$ for all $v \in [V]$. The right panel of Figure \[fig:exhardness\] visualizes Instance ${\mathcal{I}}_2$. The following lemma—which is proven in Appendix \[proof:extightsmallq\]—states that no online policy can complete more than a $1 + q - \frac{q(1-q)}{\log(\frac{1}{1-q})(1+q)}(1-e^{-1})$ fraction of $LP_{{\mathcal{I}}_2}$. \[lem:extightsmallq\] In instance ${\mathcal{I}}_2$, the expected number of completed tasks under any online policy is at most $\Big[1 + q - \frac{q(1-q)}{\log(\frac{1}{1-q})(1+q)}(1-e^{-1})\Big] \mathbf{LP}_{{\mathcal{I}}_2}$. The proof of this lemma involves three steps: (1) placing a lower bound on $\mathbf{LP}_{{\mathcal{I}}_2}$ by finding a feasible solution, (2) establishing that always notifying every volunteer is the best online policy, and (3) assessing the performance of this policy relative to $\mathbf{LP}_{{\mathcal{I}}_2}$. A full proof can be found in Appendix \[proof:extightsmallq\]. ![*Left:* Visualization of instance ${\mathcal{I}}_1$. *Right:* Visualization of instance ${\mathcal{I}}_2$.[]{data-label="fig:exhardness"}](instances_12.pdf){width=".9\textwidth"} Evaluating Policy Performance on FRUS Data {#sec:data} ========================================== In this section, we use data from FRUS to evaluate the performance of the two online policies described in Section \[sec:algos\]. First, we briefly explain how we use data to determine the model primitives. Then we exhibit the superior performance of our policies compared to policies that resemble the strategies used at various FRUS locations. [**Estimating model primitives:**]{} As explained in Section \[sec:model\], in order to define an instance of the online volunteer notification problem, we must determine the match probabilities, i.e., $\{p_{v, {s}}: v \in [V], {s}\in [{S}]\}$; the arrival rates of tasks, i.e., $\{\lambda_{{s}, t}: {s}\in [{S}], t \in [T] \}$; and the inter-activity time distribution $g(\cdot)$. - [**Match probabilities:**]{} As evidenced in Figure \[fig:pcatotal\], volunteer preferences over tasks are heterogeneous and predictable. To come up with estimates $\{\hat{p}_{v,{s}}: v \in [V], {s}\in [{S}] \}$ for each FRUS location, we first create a feature vector for each task. We then build a $k$-Nearest Neighbors classification model, tuning the parameter $k$ using cross-validation. The AUCs of such classification models range between [0.89 and 0.95]{} across tested locations. - [**Arrival Rates:**]{} Recall that for FRUS, a task is a food rescue (donation) that remains available on the day of delivery. Most food rescues are repeated on a weekly cycle; therefore we define a type ${s}$ for each recurring rescue. 
Empirically, we observe a relationship between the last-minute availability of a rescue of type ${s}$ and its status over the past six weeks [(the correlation coefficient is between $0.4$ and $0.75$ across all tested locations)]{}. Therefore, we estimate $\hat{\lambda}_{{s}, t} \in [0,1]$ as the proportion of times in the past six weeks that a rescue of type ${s}$ was a last-minute availability. - [**Inter-activity time distribution:**]{} At FRUS, [many]{} site directors follow a policy of waiting at least a week before notifying the same volunteer about another last-minute food rescue. Consequently, we assume the inter-activity time is deterministic and equal to seven days, i.e., $g(7) = 1$. In the following, we compare the performance of our online policies to strategies that simulate the current practice at various FRUS locations using instances constructed with data from two different locations as described above. First, we compare our policies against ‘notify-1’ and ‘notify-3’ policies that, respectively, notify one and three volunteer(s) chosen uniformly at random among “eligible” volunteers. Note that here a volunteer is eligible if she has not been notified for at least 6 days. The top panels of Figure \[fig:numerics\] display the ratio between each policy and $\mathbf{LP}_{\mathcal{I}}$ across 50 simulations. We highlight that the SN policy significantly outperforms all other policies. Further note that the SN policy’s performance far exceeds its competitive ratio of $\frac{1}{2}(1-\frac{1}{e})$, as given in Theorem \[thm:alg2\], while the SDN policy performs only slightly above its competitive ratio.[^16] Next, we compare our policies against a ‘notify-all’ policy that sends a notification to all volunteers. [This policy]{} clearly does not respect the 7-day gap between two successive notifications. Therefore, here we assume that the inter-activity time distribution is geometric with an expected duration of 7 days. The bottom panels of Figure \[fig:numerics\] display the ratio between each policy and $\mathbf{LP}_{\mathcal{I}}$ across 50 simulations. Here, we also observe that the SN policy significantly outperforms all other policies as well as its worst-case guarantee. \[1.0\][ ![The top (resp. bottom) left plot shows the fraction of $\mathbf{LP}$ achieved in Location (a) using a deterministic (resp. geometric) inter-activity time distribution. The plots on the right do the same for Location (b).[]{data-label="fig:numerics"}](images/numerical_performance.pdf "fig:"){width=".9\textwidth"}]{} Conclusion {#sec:conclude} ========== In this paper, we take an algorithmic approach to a commonly faced challenge on volunteer-based crowdsourcing platforms: how to utilize volunteers for time-sensitive tasks at the “right” pace while [maximizing the number of completed]{} tasks. We introduce the online volunteer notification problem [to model]{} volunteer behavior as well as the trade-off that the platform faces in this online decision-making process. We develop two online policies that achieve constant-factor guarantees [parameterized by the MDHR of the volunteer inter-activity time distribution, which gives insight into the impact of volunteers’ activity level. The guarantees provided by our policies]{} are close to the upper bound we establish for the performance of any online policy. In this paper, we measure the performance of an online policy by comparing it to an LP-based benchmark which upper bounds a clairvoyant solution.
From a theoretical perspective, considering other benchmarks (perhaps less strong) is an interesting future direction. This work is motivated by our collaboration with FRUS, a leading volunteer-based food recovery platform, analysis of whose data confirms that, by and large, volunteers have persistent preferences. Leveraging historical data, we estimate the match probability between volunteer-task pairs as well as the arrival rate of tasks. This enables us to test our policies on FRUS data from different locations and illustrate their effectiveness compared to common practice. From an applied perspective, studying the robustness of our policies as well as developing decision tools that can be integrated with the FRUS app are immediate next steps that we plan to pursue. Finding other platforms that can benefit from our work is another direction for future work. Proofs for Section \[sec:model\] ================================ Proof of Proposition \[prop:LP\] {#proof:LP} -------------------------------- To show that $\mathbf{LP}$ is an upper bound on the clairvoyant solution, we will construct a feasible solution $\mathbf{x} \in \mathcal{P}$ based on the clairvoyant solution. We will then prove that the value of this solution is an upper bound on the value of the clairvoyant solution. Let us define the random realizations of inter-activity times as $\vec{Z} \in \vec{\mathcal{Z}} = \mathbb{N}^{[V] \times [T]}$, where $Z_{v,t}$ is the inter-activity time of volunteer $v$ if notified at time $t$. In addition, we denote the random arrival sequence as $\vec{{S}} \in \vec{\mathcal{{S}}} = [{S}]^{[T]}$, where ${S}_t$ is the arrival at time $t$. Finally, suppose we have an indicator variable $\omega_{v,t}(\vec{{s}}, \vec{z})$, which is equal to one if and only if the clairvoyant solution contacts volunteer $v$ at time $t$ when the arrival order is given by $\vec{{s}}$ and the inter-activity times are given by $\vec{z}$. Because the clairvoyant solution does not know $Z_{v,t}$ until after time $t$, $\omega_{v,t}(\vec{{s}}, \vec{z})$ cannot depend on $z_{v,t'}$ for $t' \geq t$. For any volunteer $v$, task $j$, and time $t$, we define $$\hat{x}_{v, j, t} = \sum_{\vec{{s}} \in \vec{\mathcal{{S}}}}\sum_{\vec{z} \in \vec{\mathcal{Z}}} {\mathbb{P}\left(\vec{{S}} = \vec{{s}}|{S}_t = j\right)} {\mathbb{P}\left(\vec{Z} = \vec{z}\right)} \omega_{v,t}(\vec{{s}}, \vec{z}).$$ To show that $\mathbf{\hat{x}} \in \mathcal{P}$ (see Definition \[def:P\]), we immediately note that $\hat{x}_{v,j,t} \in [0,1]$, since we are summing indicator variables over probability distributions. We now need to show that constraint $\eqref{eq:lpcon2}$ is met, namely that $1 \geq \sum_{t' =1}^t \sum_{j = 1}^{{S}} \lambda_{j,t'} \hat{x}_{v,j,t'} (1-G(t-t'))$. Note that for a given sequence of arrivals $\vec{{s}}$ and inter-activity times given by $\vec{z}$, we must have $$\begin{aligned} 1 &\geq \sum_{t'=1}^t \omega_{v,t'}(\vec{{s}},\vec{z}) {\mathbb{I}\Big(z_{v,t'} > t-t'\Big)}\end{aligned}$$ This is because both $\omega_{v,t'}(\vec{{s}},\vec{z})$ and ${\mathbb{I}\Big(z_{v,t'} > t-t'\Big)}$ are indicator variables, and if both equal $1$ at time $t'$, then the volunteer $v$ must be inactive until after time $t$. Since the clairvoyant solution only notifies active volunteers, if volunteer $v$ is inactive from $t'$ until after $t$, then $\omega_{v,t''}(\vec{{s}},\vec{z}) = 0$ for all $t'' \in [t'+1, t]$. Thus, the sum from $t'=1$ to $t'= t$ of the product of these indicator variables cannot exceed $1$.
We now take a weighted sum over all possible arrival sequences and inter-activity times: $$\begin{aligned} 1 \geq& \sum_{t'=1}^t \sum_{\vec{{s}} \in \vec{\mathcal{{S}}}} {\mathbb{P}\left(\vec{{S}} = \vec{{s}}\right)}\sum_{\vec{z} \in \vec{\mathcal{Z}}}{\mathbb{P}\left(\vec{Z} = \vec{z}\right)} \omega_{v,t'}(\vec{{s}},\vec{z}) {\mathbb{I}\Big(z_{v,t'} > t-t'\Big)} \\ =&\sum_{t'=1}^t \sum_{\vec{{s}} \in \vec{\mathcal{{S}}}} {\mathbb{P}\left(\vec{{S}} = \vec{{s}}\right)}\left(\sum_{\vec{z} \in \vec{\mathcal{Z}}}{\mathbb{P}\left(\vec{Z} = \vec{z}\right)} \omega_{v,t'}(\vec{{s}},\vec{z})\right) \left(\sum_{\vec{z} \in \vec{\mathcal{Z}}}{\mathbb{P}\left(\vec{Z} = \vec{z}\right)} {\mathbb{I}\Big(z_{v,t'} > t-t'\Big)} \right) \label{eq:lpproof1} \\ =&\sum_{t'=1}^t \sum_{\vec{{s}} \in \vec{\mathcal{{S}}}} {\mathbb{P}\left(\vec{{S}} = \vec{{s}}\right)}\left(\sum_{\vec{z} \in \vec{\mathcal{Z}}}{\mathbb{P}\left(\vec{Z} = \vec{z}\right)} \omega_{v,t'}(\vec{{s}},\vec{z})\right) (1-G(t-t')) \label{eq:lpproof2} \\ =&\sum_{t'=1}^t \sum_{j=1}^{{S}} \lambda_{j,t'} \sum_{\vec{{s}} \in \vec{\mathcal{{S}}}} {\mathbb{P}\left(\vec{{S}} = \vec{{s}} | {s}_{t'} = j\right)}\left(\sum_{\vec{z} \in \vec{\mathcal{Z}}}{\mathbb{P}\left(\vec{Z} = \vec{z}\right)} \omega_{v,t'}(\vec{{s}},\vec{z})\right) (1-G(t-t')) \label{eq:lpproof3} \\ =& \sum_{t'=1}^t \sum_{j=1}^{{S}} \lambda_{j,t'}\hat{x}_{v,j,t'}(1-G(t-t')) \label{eq:lpproof4}\end{aligned}$$ In line $\eqref{eq:lpproof1}$, we use the independence of $\omega_{v,t'}(\vec{{s}},\vec{z})$ and ${\mathbb{I}\Big(z_{v,t'} > t-t'\Big)}$ to rewrite the expected value of their product as the product of their expectations. We substitute in the expected value of ${\mathbb{I}\Big(z_{v,t'} > t-t'\Big)}$ in line $\eqref{eq:lpproof2}$. In line $\eqref{eq:lpproof3}$, we use the law of total probability to sum over all possible arriving tasks in time $t'$. We then substitute in the definition of $\mathbf{\hat{x}}$ in line $\eqref{eq:lpproof4}$. This proves that $\mathbf{\hat{x}} \in {\mathcal{P}}$.

It remains to be shown that $\sum_{t=1}^T \sum_{{s}=1}^{{S}} \lambda_{{s},t} \sum_{v =1}^{V} \min\{ \hat{x}_{v,{s},t}p_{v,{s}}, 1\}$ is at least the value of the clairvoyant solution. Let $\mathcal{C}_{{s}, t}$ be the event that task ${s}$ arrives at time $t$ and is completed when following the clairvoyant solution. We must have ${\mathbb{P}\left( \mathcal{C}_{{s}, t}\right)} \leq \lambda_{{s},t}$. In addition, since a volunteer must respond in order to complete a task, we must have $${\mathbb{P}\left(\mathcal{C}_{{s}, t}\right)} \leq \lambda_{{s},t}\sum_{\vec{{s}} \in \vec{\mathcal{{S}}}}\sum_{\vec{z} \in \vec{\mathcal{Z}}} {\mathbb{P}\left(\vec{{S}} = \vec{{s}}|{S}_t = {s}\right)} {\mathbb{P}\left(\vec{Z} = \vec{z}\right)} \sum_{v =1}^{V} \omega_{v,t}(\vec{{s}}, \vec{z})p_{v,{s}} = \lambda_{{s},t} \sum_{v =1}^{V} \hat{x}_{v, {s}, t}p_{v,{s}}.$$ Combining these two bounds and summing over all tasks and time periods, we see that the clairvoyant solution is at most $\sum_{t=1}^T \sum_{{s}=1}^{{S}} \lambda_{{s},t} \sum_{v =1}^{V} \min\{ \hat{x}_{v,{s},t}p_{v,{s}}, 1\}$. Since $\mathbf{\hat{x}} \in {\mathcal{P}}$ and achieves a weakly larger value than the clairvoyant solution, we have shown that $\mathbf{LP}$ is an upper bound on the clairvoyant solution.
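To make the benchmark concrete, here is a small self-contained sketch (our own illustration, not code from the paper) of how one might compute $\mathbf{LP}_{\mathcal{I}}$ on a toy instance: the $\min\{\cdot,1\}$ in the objective is linearized with auxiliary variables $y_{{s},t}$, and the constraint of Definition \[def:P\] is imposed for a geometric inter-activity time with parameter $q$ (so that $1-G(d)=(1-q)^d$). All instance data, variable names, and the scipy-based formulation below are our own assumptions for illustration.

```python
# A minimal sketch (not the authors' code) of the LP benchmark described above.
# Variables x[v,s,t] in [0,1]; auxiliary y[s,t] stands in for min{sum_v p[v,s]*x[v,s,t], 1}.
import numpy as np
from scipy.optimize import linprog

V, S, T = 2, 1, 3                      # volunteers, task types, horizon (toy sizes)
lam = np.array([[0.6, 0.3, 0.8]])      # lam[s, t]: arrival probabilities (made up)
p = np.array([[0.5], [0.7]])           # p[v, s]: match probabilities (made up)
q = 0.5                                # geometric inter-activity time => 1 - G(d) = (1-q)**d

def xi(v, s, t): return (v * S + s) * T + t        # flat index of x[v,s,t]
def yi(s, t):    return V * S * T + s * T + t      # flat index of y[s,t]
n_var = V * S * T + S * T

# Objective: maximize sum_{s,t} lam[s,t] * y[s,t]  (linprog minimizes, so negate).
c = np.zeros(n_var)
for s in range(S):
    for t in range(T):
        c[yi(s, t)] = -lam[s, t]

A_ub, b_ub = [], []
# (i) y[s,t] <= sum_v p[v,s] * x[v,s,t]; the cap at 1 comes from y's upper bound.
for s in range(S):
    for t in range(T):
        row = np.zeros(n_var)
        row[yi(s, t)] = 1.0
        for v in range(V):
            row[xi(v, s, t)] = -p[v, s]
        A_ub.append(row); b_ub.append(0.0)
# (ii) for every v, t: sum_{t'<=t} sum_s lam[s,t'] * x[v,s,t'] * (1-G(t-t')) <= 1.
for v in range(V):
    for t in range(T):
        row = np.zeros(n_var)
        for tp in range(t + 1):
            for s in range(S):
                row[xi(v, s, tp)] = lam[s, tp] * (1 - q) ** (t - tp)
        A_ub.append(row); b_ub.append(1.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0.0, 1.0)] * n_var, method="highs")
print("LP benchmark value:", -res.fun)
```

On such toy instances, the resulting value can be compared directly against the expected number of tasks completed by any simulated online policy, which is how the ratios reported in Figure \[fig:numerics\] are interpreted.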
Proofs for Section \[sec:algos\] ================================ Proof of Lemma \[lem:falg\] {#proof:falg} --------------------------- Let $$\hat{f}(\mathbf{x}) := \sum_{v=1}^V f_v(\mathbf{x}) = \sum_{v=1}^V \sum_{t=1}^T \sum_{{s}=1}^{{S}} \lambda_{{s},t} \left(\prod_{u < v}(1-p_{u, {s}} x_{u, {s}, t})\right) p_{v, {s}} x_{v, {s}, t}.$$ We prove by induction on $V$ that $f(\mathbf{x}) = \hat{f}(\mathbf{x})$, where $f(\mathbf{x})$ is defined in $\eqref{eq:fdef1}$. As a base case, suppose $V=1$. In this case, $f(\mathbf{x}) = \sum_{t=1}^T \sum_{{s}=1}^{{S}} \lambda_{{s},t} \left(1 - \prod_{v=1}^{V} (1- x_{v,{s},t}p_{v,{s}}) \right) = f_1(\mathbf{x})$ so $f(\mathbf{x})$ and $\hat{f}(\mathbf{x})$ are equivalent. Now suppose this holds for $V = k$. We will show $f(\mathbf{x}) = \hat{f}(\mathbf{x})$ when $V = k+1$. $$\begin{aligned} f(\mathbf{x}) =& \sum_{t=1}^T \sum_{{s}=1}^{{S}} \lambda_{{s},t} \left(1 - \prod_{v =1}^{k+1} (1- x_{v,{s},t}p_{v,{s}}) \right) \\ =& \sum_{t=1}^T \sum_{{s}=1}^{{S}} \lambda_{{s},t} \left(1 - (1-x_{k+1,{s},t}p_{k+1,{s}})\prod_{v =1}^{k} (1- x_{v,{s},t}p_{v,{s}}) \right) \\ =&\sum_{t=1}^T \sum_{{s}=1}^{{S}} \lambda_{{s},t} \left(1 - \prod_{v =1}^{k} (1- x_{v,{s},t}p_{v,{s}}) \right) \nonumber \\ &+ \sum_{t=1}^T \sum_{{s}=1}^{{S}} \lambda_{{s},t} x_{k+1,{s},t}p_{k+1,{s}}\prod_{v =1}^{k} (1- x_{v,{s},t}p_{v,{s}}) \\ =&\sum_{v =1}^k f_v(\mathbf{x}) + \sum_{t=1}^T \sum_{{s}=1}^{{S}} \lambda_{{s},t} x_{k+1,{s},t}p_{k+1,{s}}\prod_{v =1}^{k} (1- x_{v,{s},t}p_{v,{s}}) \label{eq:proof:falg1}\\ =&\sum_{v =1}^{k+1} f_v(\mathbf{x}) \\ =&\hat{f}(\mathbf{x})\end{aligned}$$ All steps are algebraic except for Line , which makes use of the inductive hypothesis. This completes the proof by induction that the two formulas are algebraically equivalent. Proof of Proposition \[prop:fxstar\] {#proof:fxstar} ------------------------------------ To prove this proposition, we first focus on a particular task ${s}$ at a particular time $t$ and prove that for any $\mathbf{x} \in {\mathcal{P}}$, $$1-\prod_{v=1}^{V}(1-x_{v,{s},t}p_{v,{s}}) \geq (1-\frac{1}{e})\min \{\sum_{v =1}^{V} x_{v,{s},t}p_{v,{s}}, 1\}.$$ To prove the above inequality, we find the minimum possible value of $1-\prod_{v=1}^{V}(1-x_{v,{s},t}p_{v,{s}})$ when $\min \{\sum_{v =1}^{V} x_{v,{s},t}p_{v,{s}}, 1\}$ is fixed and equal to $c \in [0,1]$. This is equivalent to solving the program: $$\begin{aligned} \text{maximize}_{\mathbf{x} \in [0,1]^{V}} \quad \ &\prod_{v=1}^{V}(1-x_{v,{s},t}p_{v,{s}}) \label{eq:fxstarproof} \\ \text{subject to} \quad \quad \quad c\geq& \sum_{v =1}^{V} x_{v,{s},t}p_{v,{s}} \nonumber\end{aligned}$$ \[claim:prooflemfxstar1\] For a fixed number of volunteers $V = n$, the value of is less than or equal to $(1-\frac{c}{n})^n$. [Proof:]{} First, we make a change of variables $y_{v,{s},t} = x_{v,{s},t}p_{v,{s}}$, where $y_{v,{s},t} \in [0, p_{v,{s}}]$. Relaxing this constraint to $y_{v,{s}, t} \in [0, 1]$ provides an upper bound on . We now prove by induction on $n$ that the solution to this relaxed problem is $y_{v,{s},t} = \frac{c}{n}$ for all $v$. In the base case with one volunteer, the objective becomes maximizing $y_{1,{s},t}$ subject to $y_{1,{s},t} \leq c$, which has a clear solution. We now assume this holds for $n=k$. 
If there are $k+1$ volunteers, we consider the problem $$\begin{aligned} \text{maximize}_{\mathbf{y} \in [0,1]^{k+1}} \quad \ &(1-y_{k+1,{s}, t})\left(\prod_{v \leq k}(1-y_{v,{s},t})\right) \nonumber \\ \text{subject to} \quad \quad \quad c\geq& y_{k+1,{s},t} + \sum_{v=1}^k y_{v,{s},t} \nonumber\end{aligned}$$ For any given $y_{k+1, {s}, t} \in [0,1]$, we can apply the inductive hypothesis to solve $y_{v,{s},t} = \frac{c-y_{k+1, {s}, t}}{k}$ for $v \in [k]$. This yields a single variable maximization problem with objective function $(1-y_{k+1,{s},t})(1-\frac{c-y_{k+1, {s}, t}}{k})^k$. Taking the derivative, this has first order condition of $$(1-\frac{c-y_{k+1, {s}, t}}{k})^{k-1}(1-y_{k+1,{s},t}-1+\frac{c-y_{k+1, {s}, t}}{k}) = 0.$$ One can verify that the solution $y_{k+1,{s},t} = \frac{c}{k+1}$ is the maximum. This implies that $y_{v,{s},t} = \frac{c-y_{k+1, {s}, t}}{k} = \frac{c}{k+1}$ for all $v \in [k+1]$, which completes the proof by induction. Plugging these values for $y_{v,{s},t}$ into the objective function, we get a value of $(1-\frac{c}{n})^n$, which completes the proof of Claim \[claim:prooflemfxstar1\]. Based on this claim, $1-\prod_{v=1}^{V}(1-x_{v,{s},t}p_{v,{s}})$ must be greater than $1-(1-\frac{c}{n})^n$ when $\sum_{v =1}^{V}x_{v,{s},t}p_{v,{s}} \geq c$. This means that the ratio between the two must be at least $\frac{1-(1-\frac{c}{n})^n}{c}$. For any $n \in \mathbb{N}$ and $c \in [0,1]$, the function $\frac{1-(1-\frac{c}{n})^n}{c}$ is greater than $1-\frac{1}{e}$. [Proof:]{} We first show that $n \log(1-\frac{c}{n})$ is increasing in $n$, which implies that the numerator is decreasing in $n$. The derivative of that expression with respect to $n$ is given by $\log(1-\frac{c}{n}) + \frac{c}{n-c} \geq \frac{-c}{n-c} + \frac{c}{n-c} = 0$, using the inequality $\log(1-x) \geq \frac{-x}{1-x}$. This means that the function is decreasing in $n$, regardless of $c$. Taking the limit as $n$ gets large, the ratio can be written as $\frac{1-e^{-c}}{c}$. The derivative of this expression with respect to $c$ is given by $\frac{(1+c)e^{-c} - 1}{c^2} \leq \frac{(1+c)(1-c) - 1}{c^2} \leq 0$. Thus, this expression is decreasing in $c$. It is minimized at $c=1$, where it attains a value of $1-\frac{1}{e}$. Proving this claim establishes that for any $\mathbf{x} \in {\mathcal{P}}$, any task ${s}\in [{S}]$, and any time $t \in [T]$, $1-\prod_{v=1}^{V}(1-x_{v,{s},t}p_{v,{s}}) \geq (1-\frac{1}{e}) \min\{\sum_{v =1}^{V} x_{v,{s},t}p_{v,{s}}, 1 \}$. If we apply this inequality to $\mathbf{x^*_{LP}}$ and take a weighted sum over all tasks and time periods, this completes the proof of the proposition, e.g. $f(\mathbf{x^*_{LP}}) \geq (1-\frac{1}{e})\mathbf{LP}$. Proof of Lemma \[lem:beta\] {#proof:beta} --------------------------- We begin by proving that $\beta_{v,t} \geq \frac{1}{2-q}$. Since we set $\beta_{v,1} =1$, this clearly holds for all $v \in [V]$ when $t=1$. Without loss of generality, this proof will now focus on a particular $v \in [V]$ and $t \in [T]\setminus [1]$. 
Starting from the definition, we have: $$\begin{aligned} \beta_{v,t} &= 1 - \sum_{t'=1}^{t-1} \sum_{{s}=1}^{{S}} \lambda_{{s},t'} \frac{x^*_{v,{s}, t'}}{2-q}(1-G(t - t')) \\ &=1-\frac{1}{2-q}\left(\sum_{t'=1}^{t-1} \sum_{{s}=1}^{{S}} \lambda_{{s},t'} x^*_{v,{s}, t'}(1-G(t - t'-1)-g(t-t'))\right) \\ &=1-\frac{1}{2-q}\left(\sum_{t'=1}^{t-1} \sum_{{s}=1}^{{S}} \lambda_{{s},t'} x^*_{v,{s}, t'}(1-G(t - t'-1))(1-\frac{g(t-t')}{1-G(t-t'-1)})\right) \\ &\geq 1-\frac{1}{2-q}\left(\sum_{t'=1}^{t-1} \sum_{{s}=1}^{{S}} \lambda_{{s},t'} x^*_{v,{s}, t'}(1-G(t - t'-1))(1-q)\right) \label{eq:betaline1} \\ &\geq 1-\frac{1-q}{2-q} \label{eq:betaline2} \\ &=\frac{1}{2-q}\end{aligned}$$ Line comes from applying the definition of the minimum hazard rate. Because $\mathbf{x^*} \in {\mathcal{P}}$ (see Definition \[def:P\]), in line we apply the bound given by constraint for $t-1$. This holds for any $v \in [V]$ and any $t \in [T]\setminus [1]$, which implies that $\beta_{v,t} \geq \frac{1}{2-q}$ for all $v \in [V]$ and $t \in [T]$. We now use total induction to prove that ${\mathbb{P}\left(\mathcal{E}_{v,t}\right)} = \beta_{v,t}$. For notation, we will use $\mathcal{E}^C$ to refer to the complement of event $\mathcal{E}$. At $t=1$, we have ${\mathbb{P}\left(\mathcal{E}_{v,t}\right)}=1$ and $\beta_{v,t} = 1$ by definition. Now we assume that $\beta_{v,t} = {\mathbb{P}\left(\mathcal{E}_{v,t}\right)}$ for all $t \in [k]$. A volunteer can only be inactive at time $t$ if she was notified in a prior period and does not become active again by time $t$. Since these events are disjoint, we can sum their probabilities to compute the probability that a volunteer is inactive at time $k+1$: $${\mathbb{P}\left(\mathcal{E}_{v,k+1}^C\right)} = \sum_{t'=1}^{k} \sum_{{s}=1}^{{S}} \lambda_{{s},t'} {\mathbb{P}\left(\mathcal{E}_{v,t'}\right)}\frac{x^*_{v,{s}, t'}}{(2-q)\beta_{v,t'}}(1-G(t-t'))$$ Because notifications are independent from each volunteer’s status, the notification probability conditional on a volunteer being active is still $\frac{x^*_{v,{s}, t'}}{(2-q)\beta_{v,t'}}$ when following Algorithm \[alg:one\]. Plugging in the inductive hypothesis that ${\mathbb{P}\left(\mathcal{E}_{v,t'}\right)} = \beta_{v,t'}$ for $t' \in [k]$, we see that these terms cancel, leaving us with $${\mathbb{P}\left(\mathcal{E}_{v,k+1}^C\right)} = \sum_{t'=1}^{k} \sum_{{s}=1}^{{S}} \lambda_{{s},t'} \frac{x^*_{v,{s}, t'}}{2-q}(1-G(t-t'))$$ Noting that this sum is definitionally equivalent to $1-\beta_{v, k+1}$ completes the proof. Proof of Lemma \[lem:contributionsalg1\] {#proof:contributionsalg1} ---------------------------------------- Without loss of generality, this proof will focus on a particular arrival ${s}\in [{S}]$ at a particular time $t \in [T]$. Under an index-based priority scheme, the probability that a volunteer $v \in [V]$ completes this task is equal to $${\mathbb{P}\left(v \text{ responds}\right)}{\mathbb{P}\left(\text{no volunteer with lower index responds}| v \text{ responds}\right)}$$ We start by numerically defining the first term. By Lemma \[lem:beta\], volunteer $v$ will be active at time $t$ with probability $\beta_{v,t}$, and according to Algorithm \[alg:one\], she will be notified with probability $\frac{x^*_{v, {s}, t}}{\beta_{v,t}(2-q)}$. If active, the volunteer will respond to a notification with probability $p_{v,{s}}$. Since these events are all independent, the volunteer will respond to a notification about task ${s}$ at time $t$ with probability $\frac{x^*_{v, {s}, t}}{(2-q)} p_{v,{s}}$. 
We now bound the second term. First note that volunteers only respond if they are active, notified, and match with the task. Thus, for any other volunteer $u$, $${\mathbb{P}\left(u \text{ does not respond }| v \text{ responds}\right)} \geq {\mathbb{P}\left(u \text{ not notified or does not match }| v \text{ responds}\right)}$$ Volunteer $u$ will be notified with probability $ \frac{x^*_{u, {s}, t}}{\beta_{u,t}(2-q)} \leq x^*_{u, {s}, t}$ and will match independently with probability $p_{u,{s}}$. Since these events are both independent of $v$ responding, $${\mathbb{P}\left(u \text{ does not respond}| v \text{ responds}\right)} \geq 1 - x^*_{u, {s}, t}p_{u,{s}}$$ Thus, the probability that no other volunteers respond *conditional on $v$ responding* is lower bounded by $${\mathbb{P}\left(\text{no volunteer with lower index responds}| v \text{ responds}\right)} \geq \prod_{u < v} (1-x^*_{u, {s}, t}p_{u,{s}})$$ We have shown that for any arrival ${s}\in [{S}]$ at time $t \in [T]$, volunteer $v \in [V]$ completes the task with probability greater than $\frac{1}{2-q}\left(\prod_{u < v} (1-x^*_{u, {s}, t}p_{u,{s}})\right)x^*_{v, {s}, t} p_{v,{s}}$. Using linearity of expectations, the expected number of tasks completed by $v$ is given by $$\frac{1}{2-q} \sum_{t=1}^T \sum_{{s}=1}^{S}\lambda_{{s},t} \left(\prod_{u < v} (1-x^*_{u, {s}, t}p_{u,{s}})\right)x^*_{v, {s}, t} p_{v,{s}} = f_v(\mathbf{x^*}).$$ Proof of Theorem \[thm:alg1\] {#proof:thmalg1} ----------------------------- Based on Lemma \[lem:contributionsalg1\], we know that each volunteer completes at least $\frac{1}{2-q}f_v(\mathbf{x^*})$ tasks in expectation. By linearity of expectations and Lemma \[lem:falg\], the expected total number of tasks completed by volunteers must be at least $\frac{1}{2-q}f(\mathbf{x^*})$. Since $f(\mathbf{x^*}) \geq (1-\frac{1}{e})\mathbf{LP}_{\mathcal{I}}$ (see Proposition \[prop:fxstar\]), it immediately follows that the SDN policy is $\frac{1}{2-q}(1-\frac{1}{e})$-competitive. Proof of Lemma \[lem:contributionsalg2\] {#proof:contributionsalg2} ---------------------------------------- The proof of Lemma \[lem:contributionsalg2\] consists of two parts. First, we prove that $r_{v, {s}, t}$ is a lower bound on the probability that a volunteer $v \in [V]$ completes a task ${s}\in [{S}]$ when it arrives at time $t \in [T]$, conditional on being notified and active under the index-based priority scheme. Then, we show that $J_{v,t}$ represents the value-to-go of volunteer $v$ when active at $t$ under the SN policy with rewards $\{r_{v,{s}, t}:{s}\in [{S}], t \in [T] \}$. Without loss of generality, we focus on a particular arrival ${s}$ at a particular time $t$. When notified and active, a volunteer $v \in [V]$ responds with probability $p_{v,{s}}$. Any other volunteer $u \in [v-1]$ is notified with probability ${\tilde{x}}_{u, {s}, t}$ under the SN policy. If active, she will respond with probability $p_{u,{s}}$. Since these are both independent from $v$’s response, the probability that $u$ responds conditional on $v$ responding must be less than ${\tilde{x}}_{u, {s}, t}p_{u, {s}}$. Therefore, the probability that $v$ completes the task when active and notified—which happens when she is the lowest indexed volunteer to respond—must exceed $p_{v,{s}}\prod_{u =1}^{v-1}(1-{\tilde{x}}_{u, {s}, t}p_{u, {s}})$. Noting that this is equivalent to the definition of $r_{v,{s}, t}$ completes the first part of the proof. 
We now show via total induction that $J_{v,t}$ represents the value-to-go of volunteer $v$ when active at $t$ under the SN policy with rewards $\{r_{v,{s}, t}:{s}\in [{S}], t \in [T] \}$. Clearly, this is true for $J_{v,T+1}$. Now suppose it is true for all $t \geq \tau+1$. We will show that this is true for $t=\tau$. From the definition in , $$\begin{aligned} J_{v,\tau} = \sum_{{s}=1}^{{S}} \lambda_{{s}, \tau}\left((1-{\tilde{x}}_{v, {s}, \tau}){J}_{v,\tau+1} + {\tilde{x}}_{v,{s},\tau}( r_{v,{s},\tau} + \sum_{t' = t+1}^{T} g(t' - t){J}_{v, t'}) \right) \end{aligned}$$ For any task ${s}\in [{S}]$, the SN policy notifies $v$ with probability ${\tilde{x}}_{v,{s},\tau}$. Assuming that $v$ is active and using the inductive hypothesis, the value-to-go if she is notified is given by $r_{v,{s},\tau} + \sum_{t' = t+1}^{T} g(t' - t){J}_{v, t'}$. If she is not notified, her value-to-go is $J_{v,\tau+1}$. This shows that $J_{v,\tau}$ represents the value-to-go of volunteer $v$ when active at $\tau$ under the SN policy with rewards $\{r_{v,{s}, t}:{s}\in [{S}], t \in [T] \}$, which completes the proof by induction. We note that $J_{v,t}$ as defined in is weakly increasing in each $r_{v,{s}, t}$. Since we have shown that the rewards are weakly greater than $r_{v,{s}, t}$, we have completed the proof of Lemma \[lem:contributionsalg2\]. Proof of Lemma \[lemma:factorrevealingLP\] {#proof:factorrevealingLP} ------------------------------------------ We prove this lemma with an LP described in Table \[table:lpdual\]. For ease of reference, we have copied the LP and its dual below. We aim to show that ${J}_{v,1} \geq \frac{1}{2-q} c$, where $c$ is an arbitrary constant imposed in the final constraint of (J-LP). The other constraints in our LP come from the definition of ${J}_{v,t}$: $$\begin{aligned} {J}_{v,t} &= \sum_{{s}=1}^{{S}} \lambda_{{s}, t} \max_{x_{v,{s},t} \in \{0, x^*_{v, {s}, t}\}} \{ (1-x_{v,{s}, t}){J}_{v,t+1} + x_{v,{s}, t}(r_{v,{s},t} + \sum_{\tau = t+1}^{T+1} g(\tau - t){J}_{v, \tau}) \} \\ &\geq \max\{{J}_{v,t+1}, \sum_{{s}=1}^{{S}} \lambda_{{s}, t}(1-x^*_{v,{s}, t}){J}_{v,t+1}+ x^*_{v,{s}, t}(r_{v,{s},t} + \sum_{\tau = t+1}^{T+1} g(\tau - t){J}_{v, \tau})\} \\ &=\max\{{J}_{v,t+1}, {J}_{v,t+1}+\sum_{{s}=1}^{{S}} \lambda_{{s}, t} x^*_{v,{s}, t}(r_{v,{s},t} -{J}_{v,t+1} + \sum_{\tau = t+1}^{T+1} g(\tau - t){J}_{v, \tau})\}\end{aligned}$$ Together there are $2T + 1$ constraints, which will become the dual variables identified by the labels above. This leads to the dual program in (Dual). A feasible solution to this dual problem is when $\mu = \frac{1}{2-q}$ and all constraints are tight, i.e. 
$\gamma_t = \mu$ for all $t$, $\alpha_1 = 1-\mu$, and for $t \geq 2$, $$\begin{aligned} \alpha_t &= \alpha_{t-1} - \gamma_{t-1}\sum_{{s}=1}^{{S}} \lambda_{{s},t}x^*_{v,{s},t} + \sum_{t' =1}^{t-1}\beta_{t'} \sum_{{s}=1}^{{S}} \lambda_{{s},t'} x^*_{v,{s},t'}g(t-t') \\ &=\alpha_{t-1} - \mu \left( \sum_{{s}=1}^{{S}} \lambda_{{s},t}x^*_{v,{s},t} - \sum_{t' =1}^{t-1} \sum_{{s}=1}^{{S}} \lambda_{{s},t'} x^*_{v,{s},t'}g(t-t') \right) \\ &= \alpha_1 - \mu \sum_{t' =1}^{t-1} \sum_{{s}=1}^{{S}} \lambda_{{s},t'} x^*_{v,{s},t'}(1-G(t-t')) \label{eq:flp1}\\ &=\alpha_1 - \mu \sum_{t' =1}^{t-1} \sum_{{s}=1}^{{S}} \lambda_{{s},t'} x^*_{v,{s},t'}(1-G(t-1-t'))(1-\frac{g(t-t')}{1-G(t-1-t')})\label{eq:flp2} \\ &\geq \alpha_1 - \mu \sum_{t' =1}^{t-1} \sum_{{s}=1}^{{S}} \lambda_{{s},t'} x^*_{v,{s},t'}(1-G(t-1-t'))(1-q) \label{eq:flp3}\\ &\geq \alpha_1 - (1-q)\mu \label{eq:flp4} \\ &= 1 - (2-q)\mu\end{aligned}$$ Line comes from recursively plugging in the definition for $\alpha_{t-1}$ and rearranging terms. Line comes from applying the definition of the minimum hazard rate. Line uses the fact that $\mathbf{x^*} \in {\mathcal{P}}$, which means it must satisfy at time $t-1$. Since $\mu = \frac{1}{2-q}$, we must have $\alpha_t \geq 0$, which means this solution is feasible in (Dual). Therefore, by weak duality, we have shown that the primal problem has a value of at least $\frac{1}{2-q} c \geq \frac{1}{2-q} \sum_{t=1}^T \sum_{{s}=1}^{{S}} \lambda_{{s},t} r_{v, {s}, t} x^*_{v,{s},t}$. We conclude by noting that $\left(\prod_{u \leq v}(1-p_{u, {s}} x_{u, {s}, t})\right) p_{v, {s}}$ is decreasing in $x_{u,{s},t}$. Since ${\tilde{x}}_{u, {s}, t} \leq x^*_{u,{s},t}$, this implies $r_{v,{s},t} \geq \left(\prod_{u \leq v}(1-p_{u, {s}} x^*_{u, {s}, t})\right) p_{v, {s}}$. Thus, $$J_{v,1} \geq \frac{1}{2-q} \sum_{t=1}^T \sum_{{s}=1}^{{S}} \lambda_{{s},t} \left(\prod_{u \leq v}(1-p_{u, {s}} x^*_{u, {s}, t})\right) p_{v, {s}} x^*_{v, {s}, t} = f_v(\mathbf{x^*}),$$ which completes the proof of Lemma \[lemma:factorrevealingLP\]. Proof of Theorem \[thm:alg2\] {#proof:thmalg2} ----------------------------- Based on Lemma \[lemma:factorrevealingLP\], we know that each volunteer completes at least $\frac{1}{2-q}f_v(\mathbf{x^*})$ tasks in expectation. By linearity of expectations and Lemma \[lem:falg\], the expected total number of tasks completed by volunteers must be at least $\frac{1}{2-q}f(\mathbf{x^*})$. Since $f(\mathbf{x^*}) \geq (1-\frac{1}{e})\mathbf{LP}_{\mathcal{I}}$ (see Proposition \[prop:fxstar\]), it immediately follows that the SN policy is $\frac{1}{2-q}(1-\frac{1}{e})$-competitive. Proofs for Section \[sec:hardness\] =================================== Proof of Lemma \[lem:exprophet\] {#proof:exprophet} -------------------------------- In instance ${\mathcal{I}}_1$, a feasible solution to (LP) is to always notify the volunteer when task $2$ arrives in period $2$ and to notify the volunteer in period $1$ as much as possible subject to constraint , e.g. $\hat{x}_{1, 2, 2} = 1$ and $\hat{x}_{1, 1, 1} = 1-\epsilon$. This strategy achieves $(1-\epsilon)\epsilon + \frac{\epsilon}{1-q} = \epsilon(\frac{2-q-(1-q)\epsilon}{1-q})$ completed tasks in expectation. Since it is clearly optimal to notify $v$ in period $2$, an online policy only has one choice to make: whether or not to notify $v$ in period $1$. If notified in period $1$, $v$ will complete $\epsilon + q\frac{\epsilon}{1-q} = \frac{\epsilon}{1-q}$ tasks in expectation. Otherwise, $v$ will complete $\frac{\epsilon}{1-q}$ tasks in expectation. 
Thus, no online policy can achieve a value greater than $\frac{\epsilon}{1-q}$. This represents a competitive ratio of no more than $\frac{1}{2-q-(1-q)\epsilon}$, which approaches $\frac{1}{2-q}$ as $\epsilon$ gets small.

Proof of Lemma \[lem:extightsmallq\] {#proof:extightsmallq}
------------------------------------

We prove Lemma \[lem:extightsmallq\] in three steps. First, we show $\mathbf{LP} \geq n$. Then, we establish that always notifying every volunteer is the best online policy. Finally, we assess the performance of this policy relative to $\mathbf{LP}$.

In ${\mathcal{I}}_2$, $\mathbf{LP} \geq n$

[Proof:]{} To prove this claim, we first show that the solution $\hat{x}_{v, {s}, t} = 1$ for all $v \in [n]$ and $t \in [T]$ is feasible. Clearly, it satisfies the constraint $\hat{x}_{v, {s}, t} \in [0,1]$. Now consider constraint $\eqref{eq:lpcon2}$ for an arbitrary $v \in [n]$ and $t \in [T]$: $$\begin{aligned} \sum_{\tau =1}^{t} \lambda_{1, \tau} \hat{x}_{v,1,\tau} (1-q)^{t-\tau} &= (1-q)^{t-1} + \sum_{\tau = 2}^t q(1-q)^{t-\tau} \\ &= (1-q)^{t-1} + q \sum_{\tau' = 0}^{t-2} (1-q)^{\tau'} \\ &= (1-q)^{t-1} + q \frac{1 - (1-q)^{t-1}}{q} \\ &= 1\end{aligned}$$ The first step comes from plugging in $\lambda_{1,1} = 1$ and $\lambda_{1,\tau} = q$ for $\tau \in [T]\setminus [1]$. Now that we have established the feasibility of $\mathbf{\hat{x}}$, we can calculate the value of (LP) at that solution, which is given by $$\sum_{t=1}^{T} \lambda_{1,t} \min \{\sum_{v =1}^{V} \hat{x}_{v,1,t} q,1\} = \sum_{t=1}^{T} \lambda_{1,t} = 1 + n \geq n.$$

Now that we have a lower bound on $\mathbf{LP}$, we turn our attention to placing an upper bound on any online policy. We do so with the following claim.

Notifying every active volunteer whenever task $1$ arrives achieves an expected value at least as high as that of any online policy.

[Proof:]{} We first note that the best online policy cannot do better in expectation than the best online policy that also knows the status of each volunteer $v \in [V]$ at each time $t \in [T]$, because designing a policy without using that additional information is always an option. Thus, it is sufficient to show that notifying every active volunteer whenever task $1$ arrives achieves an expected value at least as high as that of any online policy that knows each volunteer’s status.

We proceed via total induction, with a base case at time $T$. Suppose an arrival occurs at time $T$. If there are ${\alpha_1}$ active volunteers, notifying ${\alpha_2}$ of them achieves a value-to-go of $1-(1-q)^{{\alpha_2}}$. This is increasing in ${\alpha_2}$, which means that the optimal online policy is to contact all active volunteers. Let ${\hat{J}}_{\tau+1}$ represent the expected value-to-go when an arrival occurred in period $\tau$ given an online policy that notifies all active volunteers at $\tau$ and all future arrivals. Note that we can only make this representation because the inter-activity times are geometrically distributed, which implies that volunteers’ transitions from inactive to active are memoryless. Thus the expected payoff is the same regardless of the choices made before period $\tau$. By convention, we set ${\hat{J}}_{T+1} = 0$. Now we make the inductive hypothesis that contacting all active volunteers whenever task $1$ arrives is the best online policy for $t \in [T]\setminus [k]$. We will show that if an arrival occurs at time $k$, an optimal online policy is to contact all active volunteers.
The expected payoff of contacting ${\alpha_2}$ volunteers when ${\alpha_1}$ are active is given by $$\begin{aligned} {h}_{k, {\alpha_1}}({\alpha_2}) =& {\mathbb{P}\left(\text{task completed at time $k$}\right)} + {\mathbb{E}\left(\text{tasks completed from $k+1$ to $T$}\right)} \\ =& {\mathbb{P}\left(\text{task completed at time $k$}\right)} + \nonumber \\ &\sum_{\tau = k+1}^T {\mathbb{P}\left(\text{next arrival at $\tau$}\right)}{\mathbb{E}\left(\text{tasks completed from $k+1$ to $T$}|\text{next arrival at $\tau$}\right)} \label{eq:hk1} \\ =&1 - (1-q)^{{\alpha_2}} + \label{eq:hk2} \\ &\sum_{\tau = k+1}^T q(1-q)^{\tau - k - 1} \left(1 - (1-q)^{{\alpha_1}-{\alpha_2}}(1 - q (1 - (1-q)^{\tau - k}))^{n - {\alpha_1}+{\alpha_2}} \right) + \label{eq:hk3} \\ &\sum_{\tau = k+1}^T q(1-q)^{\tau - k - 1} {\hat{J}}_{\tau+1} \label{eq:hk4}\end{aligned}$$ In line $\eqref{eq:hk1}$, we use the law of total probability. Line $\eqref{eq:hk2}$ represents the probability of completing the task at time $k$. Each term in the summation in line $\eqref{eq:hk3}$ represents the probability that the next task arrives at $\tau$ and that task gets completed. To compute this probability, first note that at time $\tau$ there will be ${\alpha_1}-{\alpha_2}$ volunteers who are definitely active. The remaining volunteers will be independently active with probability $1 - (1-q)^{\tau - k}$. Thus, each of these remaining volunteers will respond to a notification with probability $q(1 - (1-q)^{\tau - k})$. Since the inductive hypothesis assumes that the online policy will contact every active volunteer at $\tau > k$, the probability of any volunteer completing the next task (conditional on an arrival at $\tau$) is given by $1 - (1-q)^{{\alpha_1}-{\alpha_2}}(1 - q (1 - (1-q)^{\tau - k}))^{n - {\alpha_1}+{\alpha_2}}$. In line $\eqref{eq:hk4}$, we add the remaining expected number of completed tasks from $\tau+1$ to $T$ after an arrival in period $\tau$, which does not depend on the choice of ${\alpha_2}$ due to the memorylessness of the transitions from inactive to active. We now define $\Delta_{k,{\alpha_1}}({\alpha_2}) = {h}_{k,{\alpha_1}}({\alpha_2}+1) - {h}_{k,{\alpha_1}}({\alpha_2})$ for $0 \leq {\alpha_2}\leq {\alpha_1}-1$. This is the incremental benefit of notifying one additional active volunteer.
We have $$\begin{aligned} \Delta_{k,{\alpha_1}}({\alpha_2}) =& (1-q)^{{\alpha_2}} - (1-q)^{{\alpha_2}+1} + \sum_{\tau = k+1}^T q(1-q)^{\tau - k - 1} \cdot \nonumber \\ &[(1-q)^{{\alpha_1}-{\alpha_2}}(1 - q (1 - (1-q)^{\tau - k}))^{n - {\alpha_1}+{\alpha_2}} \nonumber \\&- (1-q)^{{\alpha_1}-{\alpha_2}-1}(1 - q (1 - (1-q)^{\tau - k}))^{n - {\alpha_1}+{\alpha_2}+1} ] \label{longeq:1} \\ =& q(1-q)^{{\alpha_2}} + \sum_{\tau = k+1}^T q(1-q)^{\tau - k - 1}(1-\frac{1 - q (1 - (1-q)^{\tau - k})}{1-q}) \cdot \nonumber \\ &[(1-q)^{{\alpha_1}-{\alpha_2}}(1 - q (1 - (1-q)^{\tau - k}))^{n - {\alpha_1}+{\alpha_2}}] \label{longeq:2} \\ =& q(1-q)^{\alpha_2}- \sum_{\tau = k+1}^T q(1-q)^{\tau - k - 1}(q (1-q)^{\tau - k-1}) \cdot \nonumber \\ &[(1-q)^{{\alpha_1}-{\alpha_2}}(1 - q (1 - (1-q)^{\tau - k}))^{n - {\alpha_1}+{\alpha_2}}] \label{longeq:3} \\ \geq& q(1-q)^{{\alpha_1}-1} - \sum_{\tau = k+1}^T q^2(1-q)^{2(\tau - k - 1)}\cdot \nonumber \\ &[(1-q)(1 - q (1 - (1-q)^{\tau - k}))^{n-1}] \label{longeq:4} \\ \geq& q(1-q)^{n-1} - \sum_{\tau' = 1}^\infty q^2(1-q)^{2\tau' - 1}(1 - q (1 - (1-q)^{\tau'}))^{n-1}\label{longeq:5} \\ =& q(1-q)^{n-1} \left( 1 - \sum_{\tau' = 1}^\infty q(1-q)^{2\tau' - 1}(1 + q(1-q)^{\tau'-1})^{n-1} \right)\label{longeq:6} \\ \geq& q(1-q)^{n-1} \left( 1 - \sum_{\tau' = 1}^\infty q(1-q)^{2\tau' - 1}e^{(1-q)^{\tau'-1}} \right)\label{longeq:7} \end{aligned}$$ In line $\eqref{longeq:2}$ we factor like terms. In line $\eqref{longeq:3}$ we simplify the fraction. In line $\eqref{longeq:4}$ we combine powers in the top line and lower bound the expression by replacing ${\alpha_2}$ with ${\alpha_1}-1$, its maximum value. Note that this decreases the positive term and increases the negative term. In line $\eqref{longeq:5}$, we simplify the bounds of the summation and provide a lower bound (an upper bound on a negative term) by summing all the way to infinity. We also provide a lower bound by setting ${\alpha_1}= n$, which decreases the first term and otherwise has no impact. Numerically we can verify that $\eqref{longeq:5}$ is greater than 0 for all $n \geq 1$ (recall we define $q = \frac{1}{n}$). To prove this algebraically, we first factor out terms and simplify to get $\eqref{longeq:6}$. We then use the fact that $(1+\frac{r}{n})^{n-1} \leq e^r$ to get the bound in $\eqref{longeq:7}$. We can show term by term that the summation is decreasing in $q$. This implies that the inner term is increasing in $q$. We can show via an integral bound that the inner term approaches 0 as $q \rightarrow 0$. Putting these facts together implies that as $q$ increases, the inner term is weakly positive. This would complete an algebraic proof that $\Delta_{k,{\alpha_1}}({\alpha_2})$ is non-negative. Since $\Delta_{k,{\alpha_1}}({\alpha_2}) \geq 0$ for all ${\alpha_2}\leq {\alpha_1}-1 \leq n-1$, we have proved the inductive hypothesis that notifying all active volunteers in period $k$ is optimal. This completes the proof of the claim. We now provide an upper bound on the value of this optimal online policy. The value of notifying every active volunteer whenever task $1$ arrives is at most $1 + n\left(1 - \frac{q(1-q)}{\log(\frac{1}{1-q})(1+q)}(1-e^{-1}) \right)$.
[Proof:]{} If we notify every volunteer at every arrival, then in the first period, we achieve a payoff of $1-(1-q)^n \leq 1$. Now suppose every volunteer was most recently notified at time $\tau-1$. The expected reward for the remainder of the time horizon is given by ${\hat{J}}(\tau)$. By definition, the value of this policy is given by $1-(1-q)^n +{\hat{J}}(2) \leq 1 +{\hat{J}}(2)$. Note that ${\hat{J}}(T+1) = 0$, and we can recursively compute: $$\begin{aligned} {\hat{J}}(\tau) &= \sum_{t=\tau}^{T} q(1-q)^{t-\tau}\left( 1-(1-q(1-(1-q)^{t-\tau+1}))^n + {\hat{J}}(t+1) \right) \end{aligned}$$ We now proceed to place a bound on ${\hat{J}}(2)$. Let ${\zeta}$ be the expected probability of completing the next task unconditional on when it arrives, i.e. $${\zeta}:=\sum_{t=1}^\infty q(1-q)^{t-1}(1-(1-q(1-(1-q)^t))^n).$$ We will prove by total induction that ${\hat{J}}(\tau) \leq q(T+1-\tau){\zeta}$, i.e. the expected number of remaining arrivals $q(T+1-\tau)$, times the (unconditional) expected probability of a completed task. Clearly this is true for $\tau = T+1$. We now assume this is true for $\tau = k+1$ and try to show it for $\tau = k$. We start by bounding $\sum_{t=k}^{T} q(1-q)^{t-k}\left( 1-(1-q(1-(1-q)^{t-k+1}))^n\right)$. To do so, we claim $$\frac{\sum_{t=k}^{T} q(1-q)^{t-k}\left( 1-(1-q(1-(1-q)^{t-k+1}))^n\right)}{\sum_{t=k}^{T} q(1-q)^{t-k}} \leq$$$$\frac{\sum_{t=T+1}^{\infty} q(1-q)^{t-k}\left( 1-(1-q(1-(1-q)^{t-k+1}))^n\right)}{\sum_{t=T+1}^{\infty} q(1-q)^{t-k}}$$ Both sides of the inequality represent the expected value of a random variable, and all possible values of the random variable on the right side of the inequality are larger than every possible value of the random variable on the left side of the inequality because $1-(1-q(1-(1-q)^{t}))^n$ is increasing in $t$. Now we use the algebraic fact that if $\frac{a}{c} \leq \frac{b}{d}$, then $\frac{a}{c} \leq \frac{a+b}{c+d}$ to yield $$\frac{\sum_{t=k}^{T} q(1-q)^{t-k}\left( 1-(1-q(1-(1-q)^{t-k+1}))^n\right)}{\sum_{t=k}^{T} q(1-q)^{t-k}} \leq$$$$\frac{\sum_{t=k}^{\infty} q(1-q)^{t-k}\left( 1-(1-q(1-(1-q)^{t-k+1}))^n\right)}{\sum_{t=k}^{\infty} q(1-q)^{t-k}} = \frac{{\zeta}}{\sum_{t=k}^{\infty} q(1-q)^{t-k}} = {\zeta}$$ This gives us the bound $$\begin{aligned} \sum_{t=k}^{T} q(1-q)^{t-k}\left( 1-(1-q(1-(1-q)^{t-k+1}))^n\right) &\leq {\zeta}\sum_{t=k}^{T}q(1-q)^{t-k} \nonumber \\ &= {\zeta}(1-(1-q)^{T-k+1}) \label{ex5bound1}\end{aligned}$$ We now bound $\sum_{t=k}^{T} q(1-q)^{t-k} {\hat{J}}(t+1)$. Using the inductive hypothesis, we have $$\begin{aligned} \sum_{t=k}^{T} q(1-q)^{t-k}{\hat{J}}(t+1) &\leq q\sum_{t=k}^{T} q(1-q)^{t-k}(T-t){\zeta}\nonumber \\ &= q\sum_{t'=0}^{T-k} q(1-q)^{t'}(T-k-t'){\zeta}\nonumber \\ &= q\left(\sum_{t'=0}^{T-k} q(1-q)^{t'}(T-k){\zeta}-\sum_{t'=0}^{T-k} q(1-q)^{t'}t'{\zeta}\right) \nonumber \\ &= q(T-k)(1-(1-q)^{T-k+1}){\zeta}\nonumber \\&- (1-q)(1-(1-q)^{T-k}-(T-k)q(1-q)^{T-k}){\zeta}\nonumber \\ &=q(T-k+1){\zeta}- (1 - (1-q)^{T-k+1}){\zeta}\label{ex5bound2}\end{aligned}$$ Combining and gives us, as desired, ${\hat{J}}(k) \leq q(T-k+1)Z$, which completes the proof by induction. 
All that remains is to bound ${\zeta}$, which we do below: $$\begin{aligned} {\zeta}& = \sum_{t=1}^\infty q(1-q)^{t-1}(1-(1-q(1-(1-q)^t))^n) \\ &\leq 1 - \int_{t=1}^\infty q(1-q)^{t-1}(1-q(1-(1-q)^t))^n dt \\&= 1 + \frac{q}{\log(1-q)(1-q)} \int_{u=q}^1 (1-qu)^n du \label{ex5int2} \\&= 1 + \frac{q}{\log(1-q)(1-q)(1+q)} \left((1-q^2)^{n+1} - (1-q)^{n+1}\right) \\&= 1 + \frac{q(1-q)}{\log(1-q)(1+q)} (1-q)^{n-1}\left((1+q)^{n+1} - 1\right) \\ &= 1 - \frac{q(1-q)}{\log(\frac{1}{1-q})(1+q)} (1-q)^{n-1}\left((1+q)^{n+1} - 1\right) \\&\leq 1 - \frac{q(1-q)}{\log(\frac{1}{1-q})(1+q)}(1-e^{-1}) \label{ex5expbounds}\end{aligned}$$ Line comes from a substitution of $u = 1-(1-q)^t$. Line comes from first recalling that $q = \frac{1}{n}$ by definition, and $(1-\frac{1}{n})^{n-1} \geq \frac{1}{e}$ while $(1+\frac{1}{n})^{n+1} \geq e$ for all $n \geq 1$. This implies that $J(2) \leq q(T-1)(1 - \frac{q(1-q)}{\log(\frac{1}{1-q})(1+q)}(1-e^{-1}))$. Since $q(T-1) = n$ by definition, and since the total expected number of successful tasks is bounded by $1+J(2)$, we have proven the claim that the value of notifying every active volunteer whenever task $1$ arrives is at most $1 + n\left(1 - \frac{q(1-q)}{\log(\frac{1}{1-q})(1+q)}(1-e^{-1}) \right)$. Putting all three claims together (e.g. taking the upper bound on the expected number of completed tasks by the best online policy and dividing by a lower bound on $\mathbf{LP}$), we have shown that in ${\mathcal{I}}_2$, no online algorithm can achieve a competitive ratio of more than $1+q - \frac{q(1-q)}{\log(\frac{1}{1-q})(1+q)}(1-e^{-1})$. Additional Examples =================== Comparing Ex Ante Candidate Solutions {#ex:xstarcandidates} ------------------------------------- In the following instance, we present evidence that $f(\mathbf{x^*_{LP}})$ can be significantly less than $f(\mathbf{x^*})$, where $f(\cdot)$ is defined in , $\mathbf{x^*_{LP}}$ is the solution to (LP), and $\mathbf{x^*}$ is defined in . [**Instance ${\mathcal{I}}_3$:**]{} Suppose $V = 2$, ${S}= 2$, $T=2$, and $g(1) = q$, where $q \in [0,1]$. The arrival probabilities are given by $\lambda_{1, 1} = 1$ and $\lambda_{2, 2} = 1$. The volunteer match probabilities are given by $p_{1, 1} = p_{2,1} = 0.5$, $p_{1,2} = 0$, and $p_{2,2} = 0.5-\epsilon$, where $\epsilon << 1$. The left panel of Figure \[fig:exappendix\] visualizes Instance ${\mathcal{I}}_3$. The solution to (LP) for instance ${\mathcal{I}}_3$ is $x_{1,1,1} = x_{2,1,1} = 1$ and $x_{2, 2, 2} = q$. Thus, $f(\mathbf{x}^*_{LP}) = 0.75 + 0.5q - \epsilon q$. Alternatively, $\mathbf{x^*_{AA}}$, which is result of the Frank-Wolfe variant described in Section \[subsec:offline\], has a solution which depends on the step size. As one example, with a step size of $0.5$, the solution is $x_{1,1,1} = 0.5$, $x_{2,1,1} = 1$, and $x_{2, 2, 2} = 0.5(1+q)$ if $q < 0.5$, which achieves a value of $f(\mathbf{x^*_{AA}}) = 0.875 + 0.25q - \epsilon(0.5)(1+q)$. If $q \geq 0.5$, $\mathbf{x^*_{AA}} = \mathbf{x^*_{LP}}$. The solution to $(SQ$-$1)$ is $x_{1,1,1} = 1$. If $q < 0.5$, the solution to $(SQ$-$2)$ is $x_{2,1,1} = 0$ and $x_{2,2,2}=1$ and $f(\mathbf{x^*_{SQ}}) = 1 - \epsilon$. Otherwise, the solution to $(SQ$-$2)$ is $x_{2,1,1} = 1$ and $x_{2,2,2} = q$ and $\mathbf{x^*_{SQ}} = \mathbf{x^*_{LP}}$. If $1 >> q >> \epsilon$, $f(\mathbf{x^*}) \approx 1$, which represents a 33% improvement over $f(\mathbf{x^*_{LP}}) \approx 0.75$. ![*Left:* Visualization of instance ${\mathcal{I}}_3$. 
*Right:* Visualization of instance ${\mathcal{I}}_4$.[]{data-label="fig:exappendix"}](instances_34.pdf){width=".95\textwidth"} Directly Following Ex Ante Solution {#ex:followingxstar} ----------------------------------- In the following instance, we present evidence that following an unadjusted ex ante solution can result in poor performance. [**Instance ${\mathcal{I}}_4$:**]{} Suppose $V = 1$, ${S}= 2$, $T=2$, and $g(1) = q$, where $q \in [0,1]$. The arrival probabilities are given by $\lambda_{1, 1} = 1$ and $\lambda_{2, 2} = q$. The volunteer match probabilities are given by $p_{1, 1} = \epsilon$ and $p_{1,2} = 1$, where $\epsilon << 1$. The right panel of Figure \[fig:exappendix\] visualizes Instance ${\mathcal{I}}_4$. The ex ante solution to instance ${\mathcal{I}}_4$ is $x^*_{1,1,1} = 1$ and $x^*_{1, 2, 2} = 1$, since this solution is feasible in (LP) and achieves the highest possible value of any feasible solution as a consequence of monotonicity. Therefore, $\mathbf{LP}_{{\mathcal{I}}_4} = \epsilon + q$. However, an online policy of following $\mathbf{x^*}$, e.g. always notifying volunteer $1$ when there is an arrival, achieves a payoff of $\epsilon + q^2$. For $q>>\epsilon$, this policy achieves a competitive ratio of $q$. Gap in Numerical Performance Between Policies {#ex:SNvsSDN} --------------------------------------------- In instance ${\mathcal{I}}_4$ from Appendix \[ex:followingxstar\] (which is visualized in the right panel of Figure \[fig:exappendix\]), the SDN policy will notify volunteer $1$ at time $1$ with probability $\frac{1}{2-q}$. When donor $2$ arrives at time $2$, the SDN policy will notify volunteer $1$ with probability $\frac{1}{(2-q)\beta_{1,2}}$. Thus, the SDN policy achieves a value of $\frac{\epsilon}{2-q} + q\beta_{1,2}\frac{1}{(2-q)\beta_{1,2}} = \frac{\epsilon+q}{2-q}$. We remark that this is exactly equal to $\frac{1}{2-q}f(\mathbf{x^*})$, which is exactly the guarantee of the SDN policy. Meanwhile, the SN policy solves a DP starting from $J_{1,3} = 0$. Working backwards, the DP solution for period $2$ is ${\tilde{x}}_{1,2,2} = 1$ and $J_{1,2} = q$. To evaluate the DP solution for period 1, we note that $r_{1,1,1} + q^2 = \epsilon + q^2 \leq {J}_{1,2}$, assuming $q<1$ and $\epsilon$ is chosen to be sufficiently small. Thus, ${\tilde{x}}_{1,1,1} = 0$, so the SN policy achieves a value of $q$. For $1 >> q >> \epsilon$, the SN policy performs roughly twice as well as the SDN policy. [^1]: Here, by organic, we mean volunteers sign up for those rescues without the platform’s involvement. [^2]: According to our analysis of FRUS data, a missed rescue increases the probability of donor dropout by a factor of approximately 2.5. [^3]: We remark that FRUS’s current practice in [many]{} locations is to notify a volunteer at most once a week. Further, we note that FRUS is hesitant to demand prompt responses from volunteers, which renders the option of sequentially notifying volunteers impractical. [^4]: Characteristics or features of a rescue includes its origin-destination location, day, time, etc. [^5]: For FRUS, a task represents a scheduled rescue (food donation) which has not been claimed in advance. [^6]: By convention, if the fraction is $\frac{0}{0}$, we define it to be equal to 1. [^7]: As explained in the introduction, the platform cannot expect a prompt response from volunteers and therefore sequential notification is impractical. [^8]: For ease of notation, for any $a \in \mathbb{N}$, we use $[a]$ to refer to the set $\{1, 2, \dots, a\}$. 
[^9]: Since a task can only be completed if one arrives, we limit all sums to task types indexed from $1$ to ${S}$. [^10]: We remark that we design our online policies such that they achieve a constant factor of $f(x)$ [as defined in .]{} [^11]: We remind that $f(\cdot)$ is the objective in the hypothetical case where all volunteers are always active. [^12]: Note that this priority scheme is without loss of generality, since in the online volunteer notification problem, *all* volunteers who respond to a notification become inactive for a duration drawn from an identical distribution. [^13]: Some of the ideas used in our SDN policy are similar to the adaptive algorithm of [@dickerson2018allocation]. [^14]: We also highlight that computing $\beta_{v,t}$ for all $v$ and $t$ can be done in polynomial time. [^15]: We remark that the condition imposed on $q$ when $0 < q < 1/16$ is added for ease of presentation of the theorem statement as well as its proof. Relaxing the aforementioned condition amounts to modifying the second term in $\kappa$ by rounding any $q$ up to the closest $\hat{q} \in \{1/n, n \in \mathbb{N} \}$ and slightly modifying the instance in the proof. We omit these details for the sake of brevity. [^16]: [Part of why our policies outperform their competitive ratio is that in the FRUS locations studied, using $\mathbf{x^*}$ as an ex ante solution improves on using $\mathbf{x^*_{LP}}$ by an average of $5\%$, up to a maximum of $23\%$.]{}
{ "pile_set_name": "ArXiv" }
---
bibliography:
- 'Biblio2.bib'
---

The inverse Galois problem over skew fields of rational fractions with central indeterminate

Bruno DESCHAMPS and François LEGRAND

[ In this article, we show that, if $H$ denotes a skew field of finite dimension over its center $k$ and if $k$ contains an ample field, then the classical inverse Galois problem has a positive answer over the skew field $H(t)$ of rational fractions with central indeterminate.]{}

The inverse problem of Galois theory over a field $K$ is stated as follows:

[**$\hbox{\bf PIG}_K$ (inverse Galois problem over $K$)**]{} : [*is every finite group $G$ the Galois group of some Galois extension $L/K$?*]{}

When $K=k(t)$ is a field of rational functions over a commutative field $k$ and one considers a finite commutative extension $H/k$, the most natural way to relate the inverse Galois problem over $k(t)$ to the one over $H(t)$ is to extend scalars: one takes a group $G$ and a Galois extension $L/k(t)$ with group $G$, and one considers the tensor product $H\otimes_k L$. As soon as this product is a field, the extension $(H\otimes_k L)/H(t)$ is Galois with group $G$. The necessary and sufficient condition for $H\otimes_k L$ to be a field is that the extensions $H/k$ and $L/k$ be linearly disjoint over $k$. A sufficient way to ensure that this is the case is to require that $L/k$ be linearly disjoint over $k$ from the extension $\overline{k}/k$. This remark leads to a more elaborate version of the inverse Galois problem:

[**$\hbox{\bf PIGR}_k$ (regular inverse Galois problem)**]{} : [*is every finite group the Galois group of some Galois extension $L/k(t)$ that is regular over $k$ (i.e., the extensions $L/k$ and $\overline{k}/k$ are linearly disjoint over $k$)?*]{}

It then follows that, for every finite extension $H/k$ of commutative fields, we have $$\hbox{\rm PIGR}_k\Longrightarrow \hbox{\rm PIGR}_H \Longrightarrow \hbox{\rm PIG}_{H(t)}$$ If one is only interested in $\hbox{\rm PIG}_{H(t)}$, the regularity condition on $L/k(t)$ is in fact not necessary, since what matters is the linear disjointness of the extensions. Since the extension $H/k$ is assumed to be finite, linear disjointness can be interpreted as follows: one takes a $k$-basis $(e_1,\dots ,e_n)$ of $H$ and considers the form $$P_H (x_1,\dots ,x_n)=N_{H/k}(x_1e_1+\cdots +x_ne_n)\in k[x_1,\dots ,x_n]$$ where $N_{H/k}$ denotes the norm of the extension $H/k$. The extensions $H/k$ and $L/k$ are then linearly disjoint over $k$ if and only if the form $P_H$ has only the trivial zero over $L$.
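As a quick illustration of this criterion (an example of ours, not taken from the original text): for $k={\mathbb Q}$ and $H={\mathbb Q}(i)$ with the $k$-basis $(1,i)$, one gets $$P_H(x_1,x_2)=N_{H/k}(x_1+x_2 i)=x_1^2+x_2^2,$$ and, for a field $L$ containing ${\mathbb Q}$, this form has a nontrivial zero over $L$ exactly when $-1$ is a square in $L$, that is, when $L$ contains a copy of ${\mathbb Q}(i)$ — precisely the situation in which ${\mathbb Q}(i)/{\mathbb Q}$ and $L/{\mathbb Q}$ fail to be linearly disjoint and ${\mathbb Q}(i)\otimes_{\mathbb Q} L$ fails to be a field.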
One can therefore consider yet another variant of the inverse Galois problem:

[**$\hbox{\bf PIGCP}_k$ (inverse Galois problem with polynomial constraint)**]{} : [*is it true that, for every form $F$ with coefficients in $k$ having only the trivial zero over $k$ and every finite group $G$, there exists a Galois extension $L/k(t)$ with group $G$ such that $F$ has only the trivial zero over $L$?*]{}

Taking for $F$ the form $P_H$ described above, one then sees that $$\hbox{\rm PIGCP}_k\Longrightarrow \hbox{\rm PIG}_{H(t)}\ \hbox{\rm for every finite extension $H/k$}$$ Traditionally, arithmeticians consider the inverse Galois problem(s) for commutative fields. Nevertheless, Galois theory has a non-commutative version, and it is therefore possible to consider the inverse Galois problem for skew fields. The most general definition of a Galois extension is the one given by Artin: an extension $L/k$ is called Galois if the field of invariants of $L$ under the action of $\hbox{\rm Aut}(L/k)$ equals $k$. It is with this definition that the Galois theory of skew fields is understood.

In this article, we are interested in certain skew fields of rational fractions. Recall that, if $H$ is a field (commutative or not), one defines [*the twisted polynomial ring $H[t]$ with central indeterminate and coefficients in $H$*]{} as the $H$-vector space with basis $\{t^n\}_{n\geq 0}$, equipped with the product satisfying $at=ta$ for all $a\in H$. In terms of Ore's construction (see [@Ore33]), it is the twisted polynomial ring $K[t,\alpha,\delta]$ obtained by taking for $\delta$ the zero derivation and for $\alpha$ the identity endomorphism. This ring is an Ore ring and, in particular, it has a unique field of fractions, denoted $H(t)$ and called [*the skew field of rational fractions with central indeterminate and coefficients in $H$*]{}. As the field of fractions of an Ore ring, the field $H(t)$ enjoys a very strong property: all of its elements can be written in the form $p(t)q(t)^{-1}$ with $p(t), \, q(t)\in H[t]$. When $H$ is commutative, the field $H(t)$ is the classical field of rational fractions with coefficients in $H$. The central result of this article is the following:

[**Main Theorem.—**]{} [*Let $k$ be a commutative field and $H$ a field of finite dimension over its center $k$. If $\hbox{\rm PIGCP}_k$ has a positive answer, then so does $\hbox{\rm PIG}_{H(t)}$.*]{}

That is, the implication obtained above in the commutative case remains true if one takes for $H$ a skew field of finite dimension over its center $k$. In the second part of this article, we show that there is an important family of commutative fields for which $\hbox{\rm PIGCP}$ has a positive answer: that of ample fields. Recall that a commutative field $k$ is called [*ample*]{} if every smooth, geometrically irreducible curve defined over $k$ has infinitely many $k$-rational points as soon as it has at least one. Among these fields, one finds separably closed fields, pseudo algebraically closed fields, complete valued fields (${\mathbb R}$, ${\mathbb Q}_p$, $\kappa((x))$, etc.), the fields ${\mathbb Q}^{\hbox{\rm \scriptsize tp}}$ of totally $p$-adic algebraic numbers, the field ${\mathbb Q}^{\hbox{\rm \scriptsize tr}}$ of totally real algebraic numbers, etc.
We refer to the survey articles [@DD97b] and [@BSF13] for more details on this notion. In fact, we show more precisely that, if $k$ is a commutative field containing an ample field, then $\hbox{\rm PIG}_{H(t)}$ has a positive answer for every field $H$ of finite dimension over its center $k$ (Theorem 7). A first application of these results, probably the simplest one, is obtained by taking $k={\mathbb R}$: [*every finite group is a Galois group over ${\mathbb H}(t)$*]{}, where ${\mathbb H}$ denotes the field of Hamilton quaternions. But Theorem 7 also applies to fields with large Brauer groups (e.g. ${\mathbb Q}^{\hbox{\rm \scriptsize tr}}$, $\overline{k_0}(x_1,\dots ,x_n)$), which shows how wide the scope of these results is, as far as the choice of the skew field $H$ is concerned.

The first part of this text is devoted to the Galois-theoretic study of the extension of scalars in the case of skew fields. It is the central tool of the article and ends with the proof of the main theorem.

[**1.— Extension of scalars.**]{}

In the non-commutative case, the presence of inner automorphisms perturbs things a little. We recall here a fundamental result of Galois theory which measures this perturbation: if $L$ is a field with center $C$ and if $L/K$ is a Galois extension with group $G$ such that one of the dimensions of $L$ as a right or left $K$-vector space is finite, then

1/ The right and left dimensions of $L$ as a $K$-vector space are equal (so one may speak of the degree $[L:K]$ of the extension without worrying about the side on which the $K$-vector space $L$ is considered).

2/ The set $A=\{a\in L^*/\ I(a)\in G\}\cup \{0\}$ is a field (here $I(a)$ denotes the inner conjugation automorphism associated with the element $a\in L^*$). The set $A$ is in fact the centralizer of the field $K$ in $L$.

3/ If one considers the subgroup $G_0=\{I(a)/\ a\in A\}$ of $G$ consisting of the inner automorphisms, then $$[L:K]=[G:G_0][A:C]$$

One thus sees that there may be situations where $[L:K]$ is finite while $G$ is infinite (this is for instance the case when one considers a field $L$ of finite dimension over its center $K$); one can even find examples where $G$ is finite but $|G|>[L:K]$ (see [@Des18]). Nevertheless, a consequence of property 3/ is that $[L:K]=|G|$ as soon as $G_0=1$ (or, equivalently, as soon as the centralizer $A$ of $K$ in $L$ equals the center $C=Z(L)$). In this situation, one says that the extension $L/K$ is outer. A complete account of Galois theory in the non-commutative case, with detailed proofs of the properties we have just stated, can be found in [@Coh95 §3.3].

For the study of the extension of scalars, we will need the following two technical lemmas:

[**Lemma A.—**]{} [*Let $A$ and $B$ be two $k$-algebras, $C$ and $D$ subalgebras of $A$ and $B$ respectively, and $\widetilde{C}$ and $\widetilde{D}$ the centralizers of $C$ and $D$ in $A$ and $B$. In the algebra $A\otimes_k B$, the centralizer $\widetilde{C\otimes_k D}$ of the subalgebra $C\otimes_k D$ equals $\widetilde{C}\otimes_k\widetilde{D}$. In particular, $Z(A\otimes_k B)=Z(A)\otimes_k Z(B)$.*]{}

[**Proof :**]{} It is already clear that $\widetilde{C}\otimes_k\widetilde{D}\subset \widetilde{C\otimes_k D}$. Consider an element $\sum_i x_i\otimes y_i \in A\otimes_k B$, where we have taken care to choose the family $\{y_i\}_i$ to be $k$-linearly independent.
If this element commutes with all the elements of $C\otimes_k D$, then it commutes in particular with the $z\otimes 1$, $z\in C$. Thus, for every $z\in C$ we have $$\sum_i(zx_i-x_iz)\otimes y_i=0$$ Since the $y_i$ are assumed to be $k$-linearly independent, we deduce that $zx_i-x_iz=0$, that is, $x_i\in \widetilde{C}$ for every $i$. Hence $\widetilde{C\otimes_k D}\subset \widetilde{C}\otimes_k B$. An analogous argument shows that $\widetilde{C\otimes_k D}\subset A\otimes_k \widetilde{D}$. Choosing a basis of $A$ (resp. $B$) containing a basis of $\widetilde{C}$ (resp. $\widetilde{D}$), one sees that $(\widetilde{C}\otimes_k B)\cap(A\otimes_k \widetilde{D})=\widetilde{C}\otimes_k\widetilde{D}$, and therefore that $\widetilde{C\otimes_k D}\subset\widetilde{C}\otimes_k\widetilde{D}$.

The center of an algebra is nothing but its centralizer in itself. The relation $Z(A\otimes_k B)=Z(A)\otimes_k Z(B)$ then follows from the choice $C=A$ and $D=B$ in the above. [ $\Box$]{}

[**Lemma B.—**]{} [*Let $k$ be a commutative field, ${\mathscr A}$ a central simple $k$-algebra and $A$ a $k$-algebra. If $A$ is simple, then so is the $k$-algebra ${\mathscr A}\otimes_k A$.*]{}

[**Proof :**]{} We first prove the lemma when ${\mathscr A}=H$ is a field. Let $\{e_i\}_{i\in I}$ be a $k$-basis of $A$. If one views $H\otimes_k A$ as a left $H$-vector space ($x\in H$ acting on $H\otimes_k A$ by multiplication by $x\otimes 1$), the family $\{1\otimes e_i\}_{i\in I}$ is then an $H$-basis of $H\otimes_k A$. If $J$ is a two-sided ideal of $H\otimes_k A$, then $J$ is an $H$-subspace of $H\otimes_k A$, so that $(H\otimes_k A)/J$ is also a left $H$-vector space. Hence, there exists a subset $I_0\subset I$ such that the images modulo $J$ of $\{1\otimes e_i\}_{i\in I_0}$ form an $H$-basis of $(H\otimes_k A)/J$. For every index $j\notin I_0$, there exists a unique family $\{x_{i,j}\}_{i\in I_0}$ of elements of $H$ such that $$1\otimes e_j=\sum_{i\in I_0} x_{i,j}\otimes e_i\ \hbox{\rm mod}\ (J)$$ We then set $$\varepsilon_j=1\otimes e_j-\sum_{i\in I_0} x_{i,j}\otimes e_i$$ so that the family $\{\varepsilon_j\}_{j\notin I_0}$ forms an $H$-basis of $J$. Consider $x\in H$; for every $j\notin I_0$, we have $$(x\otimes 1)\varepsilon_j(x^{-1}\otimes 1)=1\otimes e_j-\sum_{i\in I_0} xx_{i,j}x^{-1}\otimes e_i\in J$$ and hence $(x\otimes 1)\varepsilon_j(x^{-1}\otimes 1)-\varepsilon_j=\sum_{i\in I_0} (x_{i,j}-xx_{i,j}x^{-1})\otimes e_i\in J$. We deduce that $x_{i,j}=xx_{i,j}x^{-1}$, and this for every $x\in H$. Thus, $x_{i,j}\in Z(H)=k$, and the ideal $J$, which is generated by the $\varepsilon_j$, is therefore generated by the two-sided ideal $J_0$ of $k\otimes_k A=A$ generated by the tensors $x_{i,j}\otimes e_i$. Since $A$ is assumed to be simple, we deduce that $J_0=\{0\}$ or $k\otimes_k A$, and hence $J=\{0\}$ or $H\otimes_k A$. This proves that the algebra $H\otimes_k A$ is simple.

Let us now turn to the general case: since ${\mathscr A}$ is a central simple $k$-algebra, it is a matrix algebra, so there exist a field $H$ with center $k$ and an integer $n\geq 1$ such that ${\mathscr A}\simeq{\mathscr M}_n(H)$, where ${\mathscr M}_n(H)$ denotes the $H$-algebra of $n\times n$ matrices with coefficients in $H$.
We then have $${\mathscr A}\otimes_k A\simeq{\mathscr M}_n(H)\otimes_k A\simeq{\mathscr M}_n(k)\otimes_k H\otimes_k A\simeq{\mathscr M}_n(H\otimes_k A)$$ By what precedes, the $k$-algebra $H\otimes_k A$ is simple, and hence so is ${\mathscr M}_n(H\otimes_k A)$. Indeed, one has the following general property: [*if $A$ is a simple $k$-algebra then so is ${\mathscr M}_n(A)$*]{}. This last property can be obtained as follows: for $a\in A$ and $i,j\in \{1,\cdots ,n\}$, denote by $\Gamma_{i,j}(a)$ the matrix all of whose coefficients are zero except the one in row $i$ and column $j$, which equals $a$. Consider a nonzero two-sided ideal $J$ of ${\mathscr M}_n(A)$. For every matrix $M=(m_{i,j})_{i,j}\in J$, every nonzero coefficient $a=m_{i_0,j_0}$ of $M$ and every pair of indices $(i,j)$, we have $$\Gamma_{i,j}(a)=\Gamma_{i,i_0}(1)M\Gamma_{j_0,j}(1)\in J$$ We thus see that $J$ is the $k$-vector space spanned by the matrices $\Gamma_{i,j}(a)$, where $a$ runs over the set of nonzero coefficients of the matrices of $J$. One easily checks that the set $J_0$ of these coefficients, with $0$ added, is a nonzero two-sided ideal of $A$. Since $A$ is simple, $J_0=A$ and hence $J={\mathscr M}_n(A)$. [ $\Box$]{} We now fix an infinite commutative field $k$ and a central simple $k$-algebra ${\mathscr A}$. We write $n^2=[{\mathscr A}:k]$ (the dimension of a central simple algebra over its center is always a perfect square) and we fix, once and for all, a $k$-basis $(e_1,\cdots ,e_{n^2})$ of ${\mathscr A}$ satisfying $e_1=1$, so that the field $k$ is identified inside ${\mathscr A}$ with $k.e_1$. We denote by $\left\{\lambda^{(i,j)}\right\}_{(i,j)}$ the structure constants of the $k$-algebra ${\mathscr A}$, that is, for every pair of indices $(i,j)$, $\lambda^{(i,j)}=(\lambda^{(i,j)}_1,\cdots ,\lambda^{(i,j)}_{n^2})\in k^{n^2}$ denotes the unique vector such that $$e_ie_j=\sum_{h=1}^{n^2} \lambda^{(i,j)}_h e_h$$ If $K/k$ is an extension of commutative fields, the algebra obtained from ${\mathscr A}$ by [*extension of scalars to $K$*]{} is by definition the tensor product $k$-algebra $\Omega={\mathscr A}\otimes_k K$. One can embed $K$ into $\Omega$ by identifying it with $k\otimes_k K$. It is then clear that $K$ is contained in the center $Z(\Omega)$ of $\Omega$, so that the algebra $\Omega$ may be regarded as a $K$-algebra. Writing $\tilde{e_i}=e_i\otimes 1$ for every $i=1,\cdots ,n^2$, we see that $(\tilde{e_1},\cdots ,\tilde{e_{n^2}})$ is a $K$-basis of $\Omega$ satisfying $\tilde{e_1}=1\otimes 1=1$. It is then clear that the structure constants of the $K$-algebra $\Omega$ are the same as those of the $k$-algebra ${\mathscr A}$. We denote by ${\cal F}_{{\mathscr A}}(x_1,\cdots ,x_{n^2})\in k[x_1,\cdots ,x_{n^2}]$ the form associated with the reduced norm $\hbox{\rm Nrd}_{{\mathscr A}/k}$ relative to the choice of the basis $(e_1,\cdots ,e_{n^2})$. Recall that the reduced norm is defined as follows: one first chooses a splitting field $D$ of the $k$-algebra ${\mathscr A}$. By definition, there exists an isomorphism $\varphi : {\mathscr A}\otimes_k D\longrightarrow {\mathscr M}_n(D)$.
The reduced norm $\hbox{\rm Nrd}_{{\mathscr A}/k}$ is then the composition of the maps: $$\xymatrix{\hbox{\rm Nrd}_{{\mathscr A}/k}:{\mathscr A}\ar[r]^-{a\mapsto a\otimes 1}&{\mathscr A}\otimes_k D\ar[r]^-\varphi&{\mathscr M}_n(D)\ar[r]^-{\hbox{\rm \footnotesize det}}&D}$$ This map depends neither on the splitting field $D$ nor on the isomorphism $\varphi$, and it actually takes its values in $k$ (all these properties of central simple algebras are presented in [@Bou12]). The associated form ${\cal F}_{\mathscr A}$ is by definition the unique form such that, for every $(x_1,\cdots ,x_{n^2})\in k^{n^2}$, $${\cal F}_{\mathscr A}(x_1,\cdots ,x_{n^2})=\hbox{\rm Nrd}_{{\mathscr A}/k}(x_1e_1+\cdots+x_{n^2}e_{n^2})$$ [**Example:**]{} For $k={\mathbb R}$ and ${\mathscr A}={\mathbb H}$ the field of Hamilton quaternions, the reduced norm of an element $x=a+bi+cj+dk$ (where $i^2=j^2=k^2=ijk=-1$) is the usual quaternion norm $q(x)=a^2+b^2+c^2+d^2$, so that, for the choice of the basis $\{1,i,j,k\}$ of $\mathbb H$, we have $${\cal F}_{\mathbb H}(x_1,x_2,x_3,x_4)=x_1^2+x_2^2+x_3^2+x_4^2$$ [**Proposition 1.—**]{} [*With the above notation: a) $Z(\Omega)=K$ and $\Omega$ is a central simple $K$-algebra of the same dimension as the central simple $k$-algebra ${\mathscr A}$. b) The forms ${\cal F}_{\mathscr A}$ and ${\cal F}_{\Omega}$ are equal.*]{} [**Proof:**]{} a) Thanks to Lemma A, we may write $$Z(\Omega)=Z({\mathscr A}\otimes_k K)=Z({\mathscr A})\otimes_k Z(K)=k\otimes_k K=K$$ Now, since the center of ${\mathscr A}$ is $k$, and since ${\mathscr A}$ and $K$ are simple $k$-algebras, Lemma B ensures that the algebra ${\mathscr A}\otimes_k K$ is simple. It is clear that $[{\mathscr A}:k]=[{\mathscr A}\otimes_k K:K]$, and since $Z(\Omega)=K$, we conclude that $\Omega$ is indeed a central simple $K$-algebra of the same dimension as the central simple $k$-algebra ${\mathscr A}$. b) Consider the algebraic closure $\overline{K}$ of $K$. There exists a $k$-isomorphism $$\varphi :\Omega\otimes_K \overline{K}=\left({\mathscr A}\otimes_k K\right)\otimes_K\overline{K}\longrightarrow {\mathscr A}\otimes_k \overline{K}$$ which can be chosen with the property that $\varphi((a\otimes 1)\otimes 1)=a\otimes 1$. The field $\overline{K}$ is a splitting field of the $k$-algebra ${\mathscr A}$ and of the $K$-algebra $\Omega$, so that there exists an isomorphism $\psi: {\mathscr A}\otimes_k \overline{K}\longrightarrow {\mathscr M}_n(\overline{K})$. The diagram $$\xymatrix{\relax {\mathscr A} \ar[r]^-{f}_-{a\mapsto a\otimes 1}\ar[d]^-{\theta}_-{a\mapsto a\otimes 1}&{\mathscr A}\otimes_k \overline{K}\ar[r]^-{\psi}&{\mathscr M}_n(\overline{K})\ar[r]^-{\hbox{\rm \footnotesize det}}&\overline{K}\\ {\mathscr A}\otimes_k K\ar[r]^-{g}_-{x\mapsto x\otimes 1}&({\mathscr A}\otimes_k K)\otimes_K \overline{K}\ar[u]^-{\varphi}\ar[ru]_-{\psi\circ \varphi}}$$ is then commutative.
Since $\hbox{\rm Nrd}_{{\mathscr A}/k}=\hbox{\rm \small det}\circ \psi\circ f$ and $\hbox{\rm Nrd}_{\Omega/K}=\hbox{\rm \small det}\circ (\psi\circ \varphi) \circ g$, we deduce that $$\hbox{\rm Nrd}_{{\mathscr A}/k}=\hbox{\rm Nrd}_{\Omega/K}\circ \theta$$ For every $(x_1,\cdots ,x_{n^2})\in k^{n^2}$ we have $\theta(x_1e_1+\cdots +x_{n^2}e_{n^2})=x_1\tilde{e_1}+\cdots +x_{n^2}\tilde{e_{n^2}}$ and therefore $${\cal F}_{\mathscr A}(x_1,\cdots ,x_{n^2})={\cal F}_{\Omega}(x_1,\cdots ,x_{n^2})$$ Since the field $k$ is infinite, we conclude by Zariski density that ${\cal F}_{\mathscr A}={\cal F}_{\Omega}$. [ $\Box$]{} [**Corollary 2.—**]{} [*With the above notation, if ${\mathscr A}=H$ is a field, then we have the equivalence $$\Omega\ \hbox{\rm is a field}\Longleftrightarrow \hbox{\rm the form ${\cal F}_{H}$ has only the trivial zero on $K$}$$*]{} [**Proof:**]{} Since the center of $H$ is $k$ and since $H$ and $K$ are simple $k$-algebras, the algebra $\Omega=H\otimes_k K$ is simple. Thus $\Omega$ is a central simple $K$-algebra, by Proposition 1.a. An element $x\in \Omega$ is invertible if and only if $\hbox{\rm Nrd}_{\Omega/K}(x)\ne 0$. Since ${\cal F}_{\Omega}={\cal F}_{H}$, the corollary follows. [ $\Box$]{} [**Example:**]{} If $H={\mathbb H}$, then ${\mathbb H}\otimes_{\mathbb R}K$ is a field if and only if $K$ has level[^1] $\nu(K)\geq 4$. Indeed, saying that the form ${\cal F}_{\mathbb H}(x_1,x_2,x_3,x_4)=x_1^2+x_2^2+x_3^2+x_4^2$ has a non-trivial zero on $K$ amounts to saying that $-1$ is a sum of three squares in $K$. In particular, we see that $\Omega$ is a field as soon as $K$ is orderable. [**Corollary 3.—**]{} [*The algebra $H\otimes_k k(t)$ is a field, isomorphic to the field $H(t)$ of twisted rational fractions with central indeterminate.*]{} [**Proof:**]{} Suppose that the form ${\cal F}_{H}$ has a non-trivial zero $(r_1(t),\cdots ,r_{n^2}(t))\in k(t)^{n^2}$. After clearing denominators and factoring out the power of $t$ corresponding to the smallest of the valuations, we may assume that $(r_1(t),\cdots ,r_{n^2}(t))\in k[t]^{n^2}$ and that at least one of the $r_i$ has valuation zero. Setting $t=0$, we see that $(r_1(0),\cdots ,r_{n^2}(0))\in k^{n^2}$ is a non-trivial zero of ${\cal F}_{H}$, which is impossible. It follows that $H\otimes_k k(t)$ is indeed a field. By the universal property of the tensor product, the map $$(a,r(t))\in H\times k(t)\longmapsto a.r(t)\in H(t)$$ defines a morphism of $k$-vector spaces $\psi:H\otimes_k k(t)\longrightarrow H(t)$. By definition, in $H(t)$ we have $a.r(t)=r(t).a$, so $\psi$ is a morphism of $k$-algebras, and it is then injective since $H\otimes_k k(t)$ is a field. It is clear that $\psi$ identifies the rings $H\otimes_k k[t]$ and $H[t]$. As recalled in the introduction, the elements of $H(t)$ can be written in the form $PQ^{-1}$ with $P,Q\in H[t]$. Since $H\otimes_k k(t)$ is a field, we finally conclude that $\psi$ is also surjective. [ $\Box$]{} [**Lemma 4.—**]{} [*Let $H$ be a field of finite dimension over its center $k$ and let $M/L/k$ be a tower of commutative extensions such that the form ${\cal F}_{H}$ has only the trivial zero on $M$. a) The algebras $H\otimes_k L$ and $H\otimes_k M$ are fields and we have an extension $(H\otimes_k M)/(H\otimes_k L)$.
b) The centralizer of $H\otimes_k L$ in $H\otimes_k M$ equals $M=k\otimes_k M=Z(H\otimes_k M)$.*]{} [**Proof:**]{} a) This is an immediate consequence of what precedes. b) Lemma A allows us to write $$\widetilde{H\otimes_k L}=\widetilde{H}\otimes_k\widetilde{L}=k\otimes_k M=Z(H)\otimes_k Z(M)=Z(H\otimes_k M)$$ [ $\Box$]{} [**Proposition 5.—**]{} [*Let $H$ be a field of finite dimension over its center $k$ and let $L/k(t)$ be a finite Galois extension with group $G$ such that the form ${\cal F}_{H}$ has only the trivial zero on $L$. The algebra $\Omega=H\otimes_k L$ is a field and the extension $\Omega/H(t)$ is Galois with group $G$.*]{} [**Proof:**]{} By Lemma 4, the algebra $\Omega$ is indeed a field, and since $L$ is an extension of $k(t)$, the field $\Omega=H\otimes_k L$ is an extension of the field $H\otimes_k k(t)$ ($\simeq H(t)$, by Corollary 3). Proposition 1 shows that $[\Omega :L]=[H(t):k(t)]=[H:k]$. Write $\Gamma={\rm Aut}(\Omega/H(t))$. $$\xymatrix{&&\Omega&\\ &H(t)\ar@{-}[ur]^{\Gamma}&&L\ar@{-}[ul]_{n^2}\\ H\ar@{-}[ur]&&k(t)\ar@{-}[ur]_G\ar@{-}[ul]_{n^2}&\\ &k\ar@{-}[ul]_{n^2}\ar@{-}[ur]&&\\}$$ For $\sigma\in {\rm Gal}(L/k(t))$, consider the map $\tilde{\sigma}:H\otimes_k L\longrightarrow H\otimes_k L$ defined on elementary tensors $x\otimes t$ by $$\tilde{\sigma}(x\otimes t)=x\otimes \sigma(t)$$ Since $\sigma$ is a $k(t)$-automorphism of $L$, we see that $\tilde{\sigma}\in \Gamma$. The map $\sigma\longmapsto \tilde{\sigma}$ is clearly a group morphism and, since $L$ is identified with $k\otimes_k L$ inside $H\otimes_k L$, we see that $\tilde{\sigma}=\hbox{\rm Id}$ only if $\sigma=\hbox{\rm Id}$. Thus $G$ embeds into $\Gamma$. It is clear that the subfield of elements of $\Omega$ invariant under the action of the image of $G$ in $\Gamma$ equals $H\otimes_k k(t)=H(t)$. It follows that the extension $\Omega/H(t)$ is indeed Galois. It remains to show that $\Gamma$ is not a bigger group than the image of $G$. Since $[\Omega :L]=[H(t):k(t)]$, the multiplicativity of degrees in towers (here right or left degrees indifferently, since the extension $\Omega/H(t)$ is Galois) gives $[\Omega:H(t)]=[L:k(t)]$. Lemma 4 ensures that the centralizer of $H(t)$ in $\Omega$ equals $L=k\otimes_k L=Z(\Omega)$. It follows that the extension $\Omega/H(t)$ is outer and, by the general properties of the Galois theory of skew fields recalled at the beginning of this section, we therefore have $$|\Gamma|=[\Omega:H(t)]=[L:k(t)]=|G|$$ Since $G$ embeds into $\Gamma$, we finally conclude that $\Gamma=G$. [ $\Box$]{} The main theorem stated in the introduction follows immediately from Proposition 5. [**2.— Applications.**]{} To begin this part, let us remark that, if $k$ is a commutative field and if $F$ is a form with coefficients in $k$ which has only the trivial zero on $k$, then the same holds over the field of Laurent series $k((t))$. Indeed, if $F$ has a non-trivial zero over $k((t))$, then, after clearing denominators and factoring out the power of $t$ equal to the minimum of the valuations of the coordinates of this zero, we may assume that $F$ has a non-trivial zero $(z_1(t),\dots ,z_{n}(t))\in k[[t]]^{n}$ at least one of whose coordinates is a series of valuation zero. Setting $t=0$, we then see that $(z_1(0),\dots ,z_{n}(0))\in k^{n}$ is a non-trivial zero of $F$.
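To illustrate this remark with a worked special case (added here for the reader; it is not part of the original argument), take $k={\mathbb R}$ and $F={\cal F}_{\mathbb H}(x_1,x_2,x_3,x_4)=x_1^2+x_2^2+x_3^2+x_4^2$, as in the example following Proposition 1. This form has only the trivial zero over ${\mathbb R}$, hence also over ${\mathbb R}((t))$: a non-trivial zero $(z_1(t),\dots ,z_4(t))\in {\mathbb R}[[t]]^{4}$ with at least one coordinate of valuation zero would give, at $t=0$, a non-trivial real solution of $$x_1^2+x_2^2+x_3^2+x_4^2=0,$$ which is impossible. By Corollary 2, ${\mathbb H}\otimes_{\mathbb R} L$ is therefore a field for every intermediate field ${\mathbb R}(t)\subseteq L\subseteq {\mathbb R}((t))$.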
This remark shows that, in order to find a Galois extension $L/k(t)$ with given group and such that a given form has only the trivial zero on $L$, it suffices that $L$ embed into $k((t))$. This invites us to consider the following problem: [**$\hbox{\bf PIGFR}_k$ (strongly regular inverse Galois problem[^2])**]{}: [*is every finite group $G$ the Galois group of a Galois extension $L/k(t)$ such that $L$ embeds into the field $k((t))$?*]{} What we have just explained shows that the $\hbox{\rm PIGFR}$ is a stronger problem than the $\hbox{\rm PIGCP}$ for a given field. It is conjectured that the $\hbox{\rm PIGR}$ admits a positive answer for every field, but no one has yet ventured to conjecture that the same holds for the $\hbox{\rm PIGFR}$. Let us mention, however, that the literature contains, at present, no example of a commutative field which does not satisfy the PIGFR. Just like the PIGR, the PIGFR is stable under extensions: [**Proposition 6.—**]{} [*If $K/k$ is an extension of commutative fields, then $$\hbox{\rm PIGFR}_k\Longrightarrow \hbox{\rm PIGFR}_K$$*]{} [**Proof:**]{} Let $L/k(t)$ be a Galois extension with group $G$ such that $L$ embeds into $k((t))$. By the inclusion $L \subseteq k((t))$, we have $L \cap \overline{k} = k$. The extension $LK/K(t)$ is then Galois with group $G$ and $LK \subseteq K((t))$. [ $\Box$]{} The interest of considering the PIGFR here is that several important works have been devoted to it, in connection with ample fields. It has indeed been shown that, if $k$ is an ample field, then the $\hbox{\rm PIGFR}_k$ admits a positive answer (see [@CT00 Theorem 1] for the case of characteristic zero, and [@MB01 théorème 1.1] and [@HJ07 Theorem 3.2] for the general case). By Proposition 6, we deduce that the $\hbox{\rm PIGFR}_k$ admits a positive answer as soon as $k$ contains an ample field. This finally allows us to prove the [**Theorem 7.—**]{} [*If $H$ is a field of finite dimension over its center $k$ and if $k$ contains an ample field, then every finite group is the Galois group of a Galois extension of $H(t)$.*]{} The following diagram of implications summarizes the results established in this text: $$\xymatrix{\relax & &\hbox{\rm PIGCP}_k\ar@{=>}[r]\ar@{=>}[ddr]&\ \ \txt{$\hbox{\rm PIG}_{H(t)}$ for $H$ of finite\\ dimension over its center $k$}\\ \txt{$k$ contains an\\ample field}\ \ \ar@{=>}[r] &\hbox{\rm PIGFR}_k\ar@{=>}[rd]\ar@{=>}[ru]&&\\ &&\hbox{\rm PIGR}_k\ar@{=>}[r]&\ \ \txt{$\hbox{\rm PIG}_{H(t)}$ for $H$ commutative\\ of finite dimension over $k$}\\}$$ [**Remarks:**]{} 1/ Let $G$ be a finite group, $k$ a commutative field and $H$ a field of finite dimension over its center $k$. In each of the following cases, $G$ is the Galois group of a Galois extension of $H(t)$: $\bullet$ $G$ is abelian, $\bullet$ $G=S_n$ ($n \geq 3$) and $k$ is infinite, $\bullet$ $G=A_n$ ($n \geq 4$) and $k$ has characteristic zero. Indeed, by an argument already used above, it suffices, in each case, to find a subfield $k_0$ of $k$ and a Galois extension $L/k_0(t)$ with group $G$ such that $L$ embeds into the field $k_0((t))$.
If $G$ is abelian, this property holds over the field $k$ itself, as a classical consequence of the [*[twisting lemma]{}*]{} (see [@Deb99a] and [@Deb09 Proposition 3.2.4]) and of the existence of a Galois extension $L/k(t)$ with group $G$ such that the ideal $\langle t \rangle$ of $\overline{k}[t]$ does not ramify in the extension $L\overline{k}/\overline{k}(t)$ (see [@Deb99e]). Suppose now that $G=S_n$ ($n \geq 3)$ and that $k$ is infinite. In this case, $k$ admits a subfield $k_0$ which is either ample or hilbertian[^3]. Indeed, if $k$ has characteristic zero, one can take $k_0=\mathbb{Q}$. If $k$ has characteristic $p>0$, then $k$ is either an infinite algebraic extension of $\mathbb{F}_p$, in which case it is in particular ample (see [@FJ08 Corollary 11.2.4]), or an extension of the field of rational fractions ${\mathbb F}_p(x)$, which is hilbertian. The ample case having already been treated in a more general way, we may assume that $k_0$ is hilbertian. We then choose a monic separable polynomial $P_0(x) \in k_0[x]$ of degree $n$ all of whose roots lie in $k_0$, and a monic polynomial $P_1(x) \in k_0[x]$ of degree $n$ with Galois group $S_n$ over $k_0$ (see for instance [@FJ08 Corollary 16.2.7]). By polynomial interpolation, there exists a monic polynomial $P(t,x) \in k_0[t][x]$ of degree $n$ such that $P(0,x)=P_0(x)$ and $P(1,x) = P_1(x)$. Let $L$ denote the splitting field of $P(t,x)$ over $k_0(t)$. Since $P(0,x)$ is separable and has all its roots in $k_0$, the field $L$ embeds into $k_0((t))$. Finally, since $P(1,x)$ has Galois group $S_n$ over $k_0$, the Galois group of $L/k_0(t)$ is also $S_n$. Suppose finally that $G=A_n$ ($n \geq 4$) and that $k$ has characteristic zero. Fix a monic separable polynomial $P(x)\in k[x]$ of degree $n$ all of whose roots lie in $k$. By [@KM01 Theorem 3], there exists a monic polynomial $P(t,x)\in k[t][x]$ such that, if $L$ denotes the splitting field of $P(t,x)$ over $k(t)$, then ${\rm{Gal}}(L/k(t))=A_n$ and the splitting field of $P(0,x)$ over $k$ equals that of $P(x)$, that is, $k$. Since $P(x)$ is separable, this implies that $L$ embeds into $k((t))$. 2/ Since there is a Galois theory for skew fields, the PIG is perfectly legitimate for these fields. However, the regular problem and all the other variants introduced in this text do not really make sense for them. One must keep in mind that the notions of algebraic element and of algebraic closure have no immediate meaning for a skew field. This point makes the approach to the non-commutative case of Galois theory much more delicate. It is thus delicate to speak of the compositum of two extensions of a given field without being able to embed these fields into a given field. When this is possible, certain pathologies may occur. In [@Des01b], it is shown, for instance, that the finite extensions of an algebraically closed field $\overline{k}$ of characteristic $0$ are all of degree $2$ and that there are infinitely many of them, pairwise non-isomorphic. One sees that, if two such extensions are embedded into a field, then their compositum is necessarily of infinite degree over $\overline{k}$.
[**Bruno Deschamps**]{}\ [Laboratoire de Mathématiques Nicolas Oresme, CNRS UMR 6139]{}\ Université de Caen - Normandie\ BP 5186, 14032 Caen Cedex - France —————————— [Département de Mathématiques — Le Mans Université]{}\ Avenue Olivier Messiaen, 72085 Le Mans cedex 9 - France\ E-mail : Bruno.Deschamps@univ-lemans.fr [**François Legrand**]{}\ [Institut für Algebra, Fachrichtung Mathematik]{}\ TU Dresden, 01062 Dresden, Germany\ E-mail : francois.legrand@tu-dresden.de [^1]: Recall that the level $\nu(K)$ of a commutative field $K$ is $+\infty$ if $-1$ is not a sum of squares in $K$ (Artin-Schreier theory shows that this property is equivalent to $K$ being orderable, see [@Rib72]), and, otherwise, the smallest integer $n$ such that $-1$ is a sum of $n$ squares in $K$. [^2]: The terminology comes from the fact that, if the extension $L/k(t)$ is such that $L$ embeds into $k((t))$, then it is necessarily regular. [^3]: that is, Hilbert's irreducibility theorem holds over the field $k_0$. We refer for instance to [@FJ08] for more details on hilbertian fields.
{ "pile_set_name": "ArXiv" }
--- abstract: 'Let $\kappa$ be any regular cardinal. Assuming the existence of a huge cardinal above $\kappa$, we prove the consistency of $\binom{\kappa^{++}}{\kappa^+}\rightarrow \binom{\tau}{\kappa^+}^{1,1}_\kappa$ for every ordinal $\tau<\kappa^{++}$. Likewise, we prove that $\binom{\aleph_2}{\aleph_1}\rightarrow_{\mathcal{A}} \binom{\aleph_2}{\aleph_1}^{1,1}_2$ is consistent when $\mathcal{A}$ is strongly closed under countable intersections.' address: 'Institute of Mathematics, The Hebrew University of Jerusalem, Jerusalem 91904, Israel' author: - Shimon Garti bibliography: - 'arlist.bib' title: Amenable colorings --- \[\] Introduction ============ The strong polarized partition relation $\binom{\lambda}{\kappa}\rightarrow \binom{\lambda}{\kappa}^{1,1}_\theta$ means that for every coloring $c:\lambda\times\kappa\rightarrow\theta$ there exists a monochromatic product $A\times B$ so that $|A|=\lambda$ and $|B|=\kappa$. Two major problems stand in the front. For any given infinite cardinal $\kappa$ we ask about the pair $(\lambda, \kappa)$ with respect to $\lambda=\kappa^+$ and $\lambda=2^\kappa$. Outside the interval $[\kappa^+,2^\kappa]$ the question becomes uninteresting. Let us try to explain why. Firstly, we may always assume that $\kappa\leq\lambda$, since the notation of the strong polarized relation is symmetric. A simple coloring shows that $\binom{\kappa}{\kappa}\nrightarrow \binom{\kappa}{\kappa}^{1,1}_\theta$ for every $\kappa$ (actually, a stronger negation can be proved), so our investigation begins with $\lambda\geq\kappa^+$. The interval $[\kappa^+,2^\kappa]$ exhibits non-trivial demeanor, as positive and negative statements can be proved both for specific $\lambda\in[\kappa^+,2^\kappa]$ and for the behavior of the entire interval. If $\lambda={{\rm cf}}(\lambda)>2^\kappa$ then $\binom{\lambda}{\kappa}\rightarrow \binom{\lambda}{\kappa}^{1,1}_\theta$ follows from the fact that $|\mathcal{P}(\kappa)|=2^\kappa$, so the right-hand component of every coloring will be the same for $\lambda$-many ordinals. If $\lambda>2^\kappa$ is a singular cardinal then the behavior of $\lambda$ with respect to the relation $\binom{\lambda}{\kappa}\rightarrow \binom{\lambda}{\kappa}^{1,1}_\theta$ is determined by the behavior of ${{\rm cf}}(\lambda)$ with respect to the relation $\binom{{{\rm cf}}(\lambda)}{\kappa}\rightarrow \binom{{{\rm cf}}(\lambda)}{\kappa}^{1,1}_\theta$. Hence a knowledge of the pertinent relations for $\lambda\in[\kappa^+,2^\kappa]$ gives a full knowledge for every $\lambda$. As a reference to the facts mentioned in this paragraph we suggest Chapter 4 in [@williams] (in particular, Theorem 4.14 and Lemma 4.2.7). In this paper we focus on the pair $(\kappa^+,\kappa)$. A negative consistency relation can be forced for every $\kappa$ since $2^\kappa=\kappa^+$ implies $\binom{\kappa^+}{\kappa}\nrightarrow \binom{\kappa^+}{\kappa}^{1,1}_2$. For a positive consistency relation it seems natural to classify infinite cardinals into three categories. If $\kappa$ is a large cardinal (including the case $\kappa=\aleph_0$) then one can force $\binom{\kappa^+}{\kappa}\rightarrow \binom{\kappa^+}{\kappa}^{1,1}_2$ by increasing the splitting number $\mathfrak{s}_\kappa$. Assuming that $\kappa={{\rm cf}}(\kappa)$ it is known that $\mathfrak{s}_\kappa\geq\kappa$ iff $\kappa$ is strongly inaccessible, and $\mathfrak{s}_\kappa>\kappa$ iff $\kappa$ is weakly compact. 
So if one wishes to force $\binom{\kappa^+}{\kappa}\rightarrow \binom{\kappa^+}{\kappa}^{1,1}_2$ by increasing the splitting number, at least weak compactness must be assumed. Moreover, one needs $\mathfrak{s}_\kappa>\kappa^+$, and it is unknown if this setting is possible for mild large cardinals. It has been done for every supercompact cardinal (see [@MR3000439] and [@MR3201820]), and recently also when $\kappa$ is measurable with large enough Mitchell order (see [@MR3436372]). Of course, perhaps $\binom{\kappa^+}{\kappa}\rightarrow \binom{\kappa^+}{\kappa}^{1,1}_2$ can be forced without increasing $\mathfrak{s}_\kappa$, so it is still open for small large cardinals whether this strong relation is forceable (see [@1012], Question 4.4). The second category is singular cardinals. If $\kappa$ is a singular cardinal then we have a comprehensive answer, as $\binom{\kappa^+}{\kappa}\rightarrow \binom{\kappa^+}{\kappa}^{1,1}_2$ can be forced at every singular cardinal (see [@MR2987137] and [@1012]). The third category is successor cardinals. It is unknown whether $\binom{\kappa^+}{\kappa}\rightarrow \binom{\kappa^+}{\kappa}^{1,1}_2$ can be forced on such cardinals. In order to deal with this case we consider two different directions. The first one is based on the concept of amenable colorings, and the second is related to the concept of almost strong relations. Let us explain, briefly, the main idea of these concepts. Given a collection $\mathcal{A}\subseteq[\kappa]^\kappa$ we focus on a coloring $c:\lambda\times\kappa\rightarrow\theta$ such that every fiber $\{\gamma\}\times\kappa$ has a monochromatic subset of the form $\{\gamma\}\times A_\gamma$ for some $A_\gamma\in\mathcal{A}$. Notice that the usual polarized relation is just the special case of $\mathcal{A}=[\kappa]^\kappa$. In the next section we shall focus on the pair $(\aleph_2,\aleph_1)$ for which we shall prove the consistency of $\binom{\aleph_2}{\aleph_1}\rightarrow_{\mathcal{A}} \binom{\aleph_2}{\aleph_1}^{1,1}_2$ with respect to a suitable $\mathcal{A}$. The precise definitions and required properties are given at the beginning of this section, but the main point is that amenability may give consistency results even with full monochromatic products. In the last section we concentrate on the common polarized relation, but our monochromatic product is just *almost strong*. For colorings defined on $\lambda\times\kappa$ it means that the left-hand component can be of order type $\tau$ for every $\tau<\lambda$. Again, the precise definition will be given at the beginning of the last section, but the theorem reads as follows: The relation $\binom{\kappa^{++}}{\kappa^+}\rightarrow \binom{\tau}{\kappa^+}^{1,1}_\kappa$ for every ordinal $\tau<\kappa^{++}$ can be forced at every regular cardinal $\kappa$ (by assuming the presence of a huge cardinal above $\kappa$ in the ground model). We use standard notation. If $A,B\subseteq\kappa$ then $A\subseteq^*B$ iff $|A\setminus B|<\kappa$. If $\kappa={{\rm cf}}(\kappa)<\lambda$ then $S_\kappa^\lambda=\{\alpha<\lambda:{{\rm cf}}(\alpha)=\kappa\}$. Notice that $S^\lambda_\kappa$ is a stationary subset of $\lambda$. We use the Jerusalem forcing notation, i.e. $p\leq q$ means that the condition $q$ is stronger than $p$. A forcing notion $\mathbb{P}$ is $\kappa$-centered iff $\mathbb{P}$ can be decomposed into $\kappa$-many subsets, each of which consists of pairwise compatible conditions.
If $\mathcal{A}\subseteq[\kappa]^\kappa$ then $\mathcal{A}$ is strongly closed under intersections iff the cardinality of $a\cap b$ is $\kappa$ for every $a,b\in\mathcal{A}$. Similarly define the notion of $\mathcal{A}$ being strongly closed under countable intersections, and so on. Several generalizations of Martin’s Axiom for $\aleph_1$ are known in the literature. We shall make use of Shelah’s version (but the variants of Baumgartner and Laver can serve as well): \[sssshelah\] Martin’s Axiom for $\aleph_1$. One can force $2^{\aleph_0}=\aleph_1\wedge 2^{\aleph_1}>\aleph_2$, and if $\mathbb{P}$ is a forcing notion of size less than $2^{\aleph_1}$ satisfying the following three requirements: 1. Each pair of compatible conditions has a least upper bound in $\mathbb{P}$. 2. Every countable increasing sequence of conditions has a least upper bound in $\mathbb{P}$. 3. If $\{p_i:i<\aleph_2\}\subseteq\mathbb{P}$ then there is a club $C\subseteq\aleph_2$ and a regressive function $f:\aleph_2\rightarrow\aleph_2$ so that for $\alpha,\beta\in C\cap S^{\aleph_2}_{\aleph_1}$ if $f(\alpha)=f(\beta)$ then $p_\alpha\parallel p_\beta$. then there is a generic filter $G\subseteq\mathbb{P}$ which intersects any given collection of $\kappa$-many dense subsets, when $\kappa<2^{\aleph_1}$. We shall refer to the above statement as the generalized Martin’s axiom. The proof of the theorem appears in [@MR0505492]. We indicate that if $\kappa$ satisfies $\alpha<\kappa\Rightarrow\alpha^{\aleph_0}<\kappa$ then the assumption $|\mathbb{P}|<2^{\aleph_1}$ can be omitted (as shown in the above mentioned paper). Observe also that if $\mathbb{P}$ is $\aleph_1$-centered then requirement $(c)$ follows. A cardinal $\kappa$ is huge iff there exists an elementary embedding $\jmath:{\rm V}\rightarrow M$ so that $\kappa={\rm crit}(\jmath)$ and ${}^{\jmath(\kappa)}M\subseteq M$. An ideal $\mathcal{I}$ is $(\mu,\mu,\theta)$-saturated iff for every collection $\mathcal{A}=\{A_\alpha: \alpha<\mu\}\subseteq\mathcal{I}^+$ there exists a sub-collection $\mathcal{B}\in[\mathcal{A}]^\mu$ such that $\mathcal{C}\in[\mathcal{B}]^\theta \Rightarrow \bigcap\limits_{\alpha\in\mathcal{C}}A_\alpha\in\mathcal{I}^+$. The following theorem belongs to Laver, [@MR673792]: \[lavthm\] Assume there exists a huge cardinal, and $\theta$ is a regular cardinal below this huge cardinal. Then it is consistent that there is a $\theta^+$-complete and even normal ideal $\mathcal{I}$ over $\theta^+$ which is $(\theta^{++},\theta^{++},\theta)$-saturated. The existence of such an ideal can be forced also with $2^\theta=\theta^+$, and it preserves cardinalities and cofinalities in the interval $[\aleph_1,\theta]$. The idea behind the proof of the theorem is captured in the words of Prince Humperdinck: “Someone has beaten a giant" ([@pbride], p. 191). By collapsing a huge cardinal one can preserve some of its qualities, resulting in the existence of a sufficiently saturated ideal. By and large, good combinatorial theorems hold over large cardinals, since the existence of a complete ultrafilter gives large monochromatic sets. However, a saturated ideal can play the rôle of an ultrafilter under suitable circumstances. I wish to thank the referee of the paper for an extraordinary work, including both mathematical corrections and meaningful improvements of the presentation. This includes an elegant argument which simplified the proof of Theorem \[galthm\]. I also thank Yair Hayut for his help. 
Amenability =========== We begin with the concept of amenability: \[aaaa\] Amenable coloring. Let $c:\lambda\times\kappa\rightarrow\theta$ be a coloring, and assume $\mathcal{A}\subseteq\mathcal{P}(\kappa)$. We say that $c$ is $\mathcal{A}$-amenable if for every $\gamma<\lambda$ there are $i_\gamma<\theta$ and $A_\gamma\in\mathcal{A}$ so that $\delta\in A_\gamma \Rightarrow c(\gamma,\delta)=i_\gamma$. With the above definition we introduce the following notation: \[aanotation\] $\rightarrow_{\mathcal{A}}$. We say that $\binom{\lambda}{\kappa}\rightarrow_{\mathcal{A}} \binom{\lambda}{\kappa}^{1,1}_\theta$ holds iff for every $c:\lambda\times\kappa \rightarrow\theta$ which is $\mathcal{A}$-amenable there are $A\in[\lambda]^\lambda, B\in[\kappa]^\kappa$ and a color $\iota<\theta$ so that $c\upharpoonright(A\times B)=\{\iota\}$. The main theorem of this section establishes a positive consistency result of the strong relation for suitable amenability. In order to motivate the positive direction, we introduce the following: \[ggggch\] Negative relations and GCH. Assume $2^{\aleph_1}=\aleph_2$. There exists a collection $\mathcal{A}=\{C_\alpha:\alpha<\omega_2\}$ of club subsets of $\omega_1$ for which $\binom{\aleph_2}{\aleph_1}\nrightarrow_{\mathcal{A}} \binom{\aleph_2}{\aleph_1}^{1,1}_2$. *Proof*. We commence with a general assertion which does not depend on the assumption $2^{\aleph_1}=\aleph_2$. We claim that if $\{A_\beta:\beta\in\omega_1\}$ is any collection of unbounded subsets of $\aleph_1$ then there exists a club $C\subseteq\omega_1$ such that: 1. $\forall\beta<\omega_1, A_\beta\nsubseteq C$. 2. $\forall\beta<\omega_1, A_\beta\nsubseteq \aleph_1\setminus C$. We construct $C$ by induction on $\varepsilon<\omega_1$. At the stage $\varepsilon=0$ we choose $a_0,b_0\in A_0$ so that $a_0<b_0$. At the stage $\varepsilon+1$ we choose $a_{\varepsilon+1},b_{\varepsilon+1}\in A_{\varepsilon+1}$ such that $b_\varepsilon<a_{\varepsilon+1}< b_{\varepsilon+1}$. If $\varepsilon$ is a limit ordinal then we let $\gamma_\varepsilon=\bigcup\limits_{\delta<\varepsilon}b_\delta$ and we choose $a_\varepsilon,b_\varepsilon\in A_\varepsilon$ such that $\gamma_\varepsilon< a_\varepsilon<b_\varepsilon$. Finally, define $C$ as the closure of $\{b_\varepsilon:\varepsilon<\omega_1\}$ in the order topology. We first show that $\forall\beta<\omega_1, A_\beta\nsubseteq C$. Indeed, given any $\beta\in\omega_1$ we claim that $a_\beta\notin C$. For $\beta=0$, the first element of $C$ is $b_0$ and $a_0<b_0$, so $a_0\notin C$ and hence $A_0\nsubseteq C$. If $\beta=\eta+1$ then $b_\eta<a_\beta<b_\beta$ and by the construction of $C$ we can see that $C\cap(b_\eta,b_\beta)=\emptyset$ so $a_\beta\notin C$. Since $a_\beta\in A_\beta$ we infer that $A_\beta\nsubseteq C$. Similarly, if $\beta$ is a limit ordinal then $\gamma_\beta=\bigcup\limits_{\delta<\beta}b_\delta <a_\beta<b_\beta$, and $C\cap(\gamma_\beta,b_\beta)=\emptyset$ by the construction. It follows, again, that $a_\beta\notin C$ and hence $A_\beta\nsubseteq C$. Next we show that $\forall\beta<\omega_1, A_\beta\nsubseteq \aleph_1\setminus C$. Indeed, for every $\beta\in\omega_1$ we have an element $b_\beta\in A_\beta$ which belongs to $C$ by its definition, so $b_\beta\notin\aleph_1\setminus C$ and hence $A_\beta\nsubseteq \aleph_1\setminus C$. Let $\{A_\beta:\beta\in\omega_2\}$ enumerate all the members of $[\aleph_1]^{\aleph_1}$. Here we use the assumption $2^{\aleph_1}=\aleph_2$. 
By induction on $\alpha\in\omega_2$ we choose a club $C_\alpha\subseteq\omega_1$ such that $\forall\beta<\alpha, A_\beta\nsubseteq C_\alpha \wedge A_\beta\nsubseteq \aleph_1\setminus C_\alpha$. This can be done since $\{A_\beta:\beta<\alpha\}$ is a collection of $\aleph_1$ many sets. Let $\mathcal{A}$ be $\{C_\alpha:\alpha\in\omega_2\}$. We define a coloring $c:\aleph_2\times\aleph_1\rightarrow 2$ by $c(\alpha,\beta)=0 \Leftrightarrow \beta\in C_\alpha$. Clearly, $c$ is $\mathcal{A}$-amenable. We claim that the negative relation $\binom{\aleph_2}{\aleph_1}\nrightarrow_{\mathcal{A}} \binom{\aleph_2}{\aleph_1}^{1,1}_2$ is exemplified by $c$. Indeed, if $I\in[\aleph_2]^{\aleph_2}$ and $J\in[\aleph_1]^{\aleph_1}$ then $J=A_\beta$ for some $\beta<\omega_2$. Pick up any ordinal $\alpha\in I$ so that $\beta<\alpha$. Inasmuch as $J=A_\beta\nsubseteq C_\alpha \wedge J\nsubseteq \aleph_1\setminus C_\alpha$ we conclude that $c\upharpoonright(I\times J)$ is not monochromatic. But $I,J$ were arbitrary, so we are done. \[rrr\] We make the following comments: 1. The above claim works equally well for every infinite cardinal $\kappa$ with respect to $\kappa^+$ and $\kappa^{++}$. The pertinent assumption would be $2^{\kappa^+}=\kappa^{++}$. 2. The choice of club sets is just one example, and the method seems flexible enough to allow more instances of amenability. 3. The construction is taken from [@temp], with little modifications. A stronger theorem is proved there under the PFA, namely there exists a collection of $\omega_2$-many club subsets of $\omega_1$ such that the intersection of any sub-collection of size $\aleph_2$ of them is finite. This might give stronger negative relations in our context. 4. If $\mathcal{A}\subseteq[\kappa]^\kappa$ and $|\mathcal{A}|\leq\kappa$ then $\binom{\kappa^+}{\kappa}\rightarrow_{\mathcal{A}} \binom{\kappa^+}{\kappa}^{1,1}_2$ is virtually true, so we always concentrate on large enough families of $[\kappa]^\kappa$ with respect to amenable colorings. The opposite direction is the content of the following: \[mt\] Positive relation for $\aleph_2$. It is consistent that, for every $\mathcal{A}\subseteq[\omega_1]^{\omega_1}$ which is strongly closed under countable intersections, $\binom{\aleph_2}{\aleph_1}\rightarrow_{\mathcal{A}} \binom{\aleph_2}{\aleph_1}^{1,1}_2$ holds. *Proof*. We begin by forcing the generalized Martin’s axiom (Theorem \[sssshelah\]), so $2^{\aleph_0}=\aleph_1$ and $2^{\aleph_1}>\aleph_2$. Suppose $\mathcal{A}\subseteq[\omega_1]^{\omega_1}$ is strongly closed under countable intersections, and let $c:\aleph_2\times\aleph_1\rightarrow 2$ be any $\mathcal{A}$-amenable coloring. For every $\alpha<\aleph_2$ set $A_\alpha=\{\beta\in\omega_1: c(\alpha,\beta)=0\}$. By $\mathcal{A}$-amenability there is some $B_\alpha\in\mathcal{A}$ so that $(B_\alpha\subseteq A_\alpha)\vee(B_\alpha\subseteq \omega_1\setminus A_\alpha)$. As all we need is just $\aleph_2$-many sets from $\mathcal{A}$, we may assume without loss of generality that $B_\alpha\subseteq A_\alpha$ for every $\alpha<\aleph_2$. We define a forcing notion $\mathbb{P}$. A condition $(A,s)\in\mathbb{P}$ consists of $A\in[\omega_2]^{\aleph_0}$ and $s\in[\omega_1]^{\aleph_0}$. For the order, we say that $(A,s)\leq_{\mathbb{P}}(B,t)$ iff $A\subseteq B, s\subseteq t$ and $\alpha\in A\Rightarrow t\setminus s\subseteq B_\alpha$. 
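As a small verification, added here and not spelled out in the original proof (the notation is the one just introduced): the relation $\leq_{\mathbb{P}}$ is transitive. Indeed, if $(A,s)\leq_{\mathbb{P}}(B,t)$ and $(B,t)\leq_{\mathbb{P}}(C,u)$, then $A\subseteq C$, $s\subseteq u$ and, for every $\alpha\in A\subseteq B$, $$u\setminus s=(u\setminus t)\cup(t\setminus s)\subseteq B_\alpha,$$ since $u\setminus t\subseteq B_\alpha$ (as $\alpha\in B$) and $t\setminus s\subseteq B_\alpha$ (as $\alpha\in A$). Hence $(A,s)\leq_{\mathbb{P}}(C,u)$.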
Notice that the requirements of Theorem \[sssshelah\] are met (in particular, $\mathbb{P}$ is $\aleph_1$-centered as each pair of conditions $(A,s),(B,s)$ is compatible and $\aleph_1^{\aleph_0}=\aleph_1$). For every $\alpha<\omega_2$ let $D_\alpha=\{(A,s):\alpha\in A\}$. If $(A,s)\notin D_\alpha$ then $(A\cup\{\alpha\},s)\in D_\alpha$, and by the order definition we have $(A,s)\leq_{\mathbb{P}}(A\cup\{\alpha\},s)$ so $D_\alpha$ is dense. For every $\beta<\omega_1$ let $E_\beta=\{(A,s):s\nsubseteq\beta\}$. If $(A,s)\notin E_\beta$ then we let $x=\bigcap\{A_\gamma:\gamma\in A\}$, and recall that $A_\gamma$ contains a member of $\mathcal{A}$. Since $\mathcal{A}$ is closed under countable intersections, moreover, the intersection is uncountable, there is an ordinal $\delta>\beta$ so that $\delta\in x$. Consequently, $(A,s)\leq_{\mathbb{P}}(A,s\cup\{\delta\})$ and we infer that $E_\beta$ is dense. By Theorem \[sssshelah\] there exists a generic set $G\subseteq\mathbb{P}$ so that $G\cap D_\alpha\neq\emptyset$ for every $\alpha<\omega_2$ and $G\cap E_\beta\neq\emptyset$ for each $\beta<\omega_1$. Set: $$H=\bigcup\{s:\exists A,(A,s)\in G\}.$$ For every $\alpha\in\omega_2$ choose $(A_\alpha,s_\alpha)\in G$ such that $\alpha\in A_\alpha$. This can be done since $G\cap D_\alpha\neq\emptyset$. Recall that $2^{\aleph_0}=\aleph_1$, so for some $I\in[\omega_2]^{\omega_2}$ and a fixed $t\in[\omega_1]^{\aleph_0}$ we have $\alpha\in I\Rightarrow s_\alpha=t$. Set $J=H\setminus t$ and observe that the cardinality of $J$ is $\aleph_1$. By the construction, $c\upharpoonright(I\times J)=\{0\}$, so we are done. Almost strong relations ======================= In the former section we focused on colorings which are amenable with respect to some $\mathcal{A}$. We may ask what happens if $\mathcal{A}=[\omega_1]^{\omega_1}$, i.e. the usual polarized relation with no limitation on the colorings. It has been proved by Laver, [@MR673792], under the assumption that there is a huge cardinal, that the relation $\binom{\aleph_2}{\aleph_1}\rightarrow \binom{\aleph_1}{\aleph_1}^{1,1}_{\aleph_0}$ is consistent. Laver indicates that Galvin announced that the stronger relation $\binom{\aleph_2}{\aleph_1}\rightarrow \binom{\tau}{\aleph_1}^{1,1}_{\aleph_0}$ for every $\tau<\omega_2$ can also be proved to be consistent from the same assumption. However, Galvin did not publish the proof. Many years later, Jones [@MR2275863] used an unpublished result of Woodin in order to show the consistency of $\binom{\aleph_2}{\aleph_1}\rightarrow \binom{\tau}{\aleph_1}^{1,1}_{\aleph_0}$ for every $\tau<\omega_2$. The result of Woodin gives a special ideal over $\aleph_1$. It requires an instance of the rank-into-rank axiom I1, and it is strongly connected to the specific case of $\aleph_1$. Here we prove a general result in the spirit of Laver’s proof, based only on the existence of a huge cardinal. Let us begin with the following: \[almdef\] Almost strong polarized relations. Assume $\kappa\leq\lambda$ are infinite cardinals, and $\tau<\lambda$ is an ordinal. The relation $\binom{\lambda}{\kappa}\rightarrow \binom{\tau}{\kappa}^{1,1}_{\theta}$ means that for every coloring $c:\lambda\times\kappa\rightarrow\theta$ one can find $A\subseteq\lambda$ such that ${\rm otp}(A)=\tau$ and $B\in[\kappa]^\kappa$ for which $c\upharpoonright (A\times B)$ is constant. 
The relation $\binom{\lambda}{\kappa}\rightarrow \binom{\lambda\ \tau}{\kappa\ \kappa}^{1,1}_2$ means that for every coloring $c:\lambda\times\kappa\rightarrow 2$ one can find either $A\in[\lambda]^\lambda, B\in[\kappa]^\kappa$ such that $c\upharpoonright (A\times B)=\{0\}$ or $A\subseteq\lambda, {\rm otp}(A)=\tau$ and $B\in[\kappa]^\kappa$ such that $c\upharpoonright (A\times B)=\{1\}$. The first relation is called the balanced almost strong polarized relation if it holds for every $\tau<\lambda$. The second relation (in the above definition) is the unbalanced version. The consistency of the balanced relation for successors of regular cardinals is the main theorem of this section. \[galthm\] Almost strong relations. Suppose $\theta={{\rm cf}}(\theta)$ and there exists a huge cardinal above $\theta$. Then one can force the relation $\binom{\theta^{++}}{\theta^+}\rightarrow \binom{\tau}{\theta^+}^{1,1}_{\theta}$ for every $\tau<\theta^{++}$, while preserving all cardinals and cofinalities in the interval $[\aleph_1,\theta]$. *Proof*. By the existence of a huge cardinal one can force an ideal $\mathcal{I}$ which is $\theta^+$-complete and $(\theta^{++},\theta^{++},\theta)$-saturated over $\theta^+$, as shown in [@MR673792]. Thus, we may assume that there is a $\theta^+$-complete $(\theta^{++},\theta^{++},\theta)$-saturated ideal and $2^\theta=\theta^+$. Fix an ordinal $\tau<\theta^{++}$ (without loss of generality, $\theta^+<\tau$). Suppose we are given a coloring $c:\theta^{++}\times\theta^+\rightarrow \theta$. For every $\alpha<\theta^{++}$ we choose $n(\alpha)\in\theta$ so that $x_\alpha=\{\beta\in\theta^+:c(\alpha,\beta)=n(\alpha)\}\in \mathcal{I}^+$. The existence of $x_\alpha$ follows from the completeness of the ideal. Let $x$ be $\{x_\alpha:\alpha<\theta^{++}\}$. In order to control the order type of the big component in the monochromatic product, we choose a chain $(M_\eta:\eta\leq\tau)$ of elementary submodels of $\mathcal{H}(\chi)$ for some large enough regular cardinal $\chi$, satisfying the following properties for every $\eta\leq\tau$: 1. $|M_\eta|=\theta^+, \theta^+\cup\{\theta^+\}\subseteq M_\eta$. 2. $\mathcal{I},c,\tau,x\in M_\eta$. 3. ${}^\theta M_\eta\subseteq M_\eta$. 4. If $\zeta<\eta\leq\tau$ then $M_\zeta\in M_\eta$. For every $\eta\leq\tau$ let $\sigma_\eta=\sup(M_\eta\cap\theta^{++})$. By the regularity of $\theta^{++}$ we may assume, without loss of generality, that $n(\alpha)=\iota$ for some fixed $\iota<\theta$ and every $\alpha<\theta^{++}$. This is true since we have a subset of $\theta^{++}$ of size $\theta^{++}$ for which $n(\alpha)=\iota$, and we can thin out the coloring only to this subset. A monochromatic product for the thinned-out coloring would be also monochromatic for the original coloring. We may also assume that $\bigcap\limits_{\alpha\in \mathcal{C}}x_\alpha\in\mathcal{I}^+$ for every $\mathcal{C}\subseteq \theta^{++}$ of size $\theta$. The saturation of $\mathcal{I}$ ensures that this holds for some collection of $\theta^{++}$-many sets, and we may assume that this collection is all the $x_\alpha$-s. Fix a bijection $h:\theta^+\rightarrow\tau$. Let $S_0$ be $S^{\theta^{++}}_{\theta^+}\setminus \sigma_\tau$, so $S_0$ is a stationary subset of $\theta^{++}$. For every $\delta\in S_0$ we shall try to define two sequences of ordinals: 1. $\beta^\delta_0<\cdots<\beta^\delta_\gamma<\cdots<\theta^+$ for every $\gamma<\theta^+$. 2. $\langle\alpha^\delta_{h(\gamma)}:\gamma<\theta^+\rangle$, a sequence of ordinals below $\delta$. 
The construction is done by induction on $\gamma$. Notice that the second sequence need not be increasing. For the first stage of $\gamma=0$ we choose $\beta^\delta_0=\min(x_\delta)$. Then we ask whether there exists an ordinal $\epsilon>\sigma_\tau,\epsilon<\delta$ for which $\beta^\delta_0\in x_\epsilon$. If the answer is yes then there exists $\epsilon\in M_{h(0)+1}\setminus M_{h(0)}$ such that $\epsilon<\delta$ and $\beta^\delta_0\in x_\epsilon$, by elementarity. So we choose any ordinal in $M_{h(0)+1}\setminus M_{h(0)}$ which satisfies these requirements, and this is $\alpha^\delta_0$. If the answer is no, then the process is terminated. Assume now that $\gamma>0$, and let $\beta^\delta_\gamma$ be $\min(\bigcap\limits_{\gamma'<\gamma}x_{\alpha^\delta_{h(\gamma')}}\cap x_\delta\setminus \{\beta^\delta_{\gamma'}:\gamma'<\gamma\})$. This ordinal is well defined as the intersection is an element of $\mathcal{I}^+$ and we drop at most $\theta$-many ordinals from it, so the minimum is taken over a non-empty set. Next we ask whether there exists an ordinal $\epsilon<\delta, \epsilon>\sigma_\tau$ so that $\{\beta^\delta_0,\ldots,\beta^\delta_\gamma\}\subseteq x_\epsilon$. If the answer is yes then there exists $\epsilon\in M_{h(\gamma)+1}\setminus M_{h(\gamma)}$ for which $\{\beta^\delta_0,\ldots,\beta^\delta_\gamma\}\subseteq x_\epsilon$ (here we use the fact that ${}^\theta M_\eta\subseteq M_\eta$ for each $\eta$, and the fact that $\{\beta^\delta_0,\ldots,\beta^\delta_\gamma\}$ is of size at most $\theta$), and we choose such an ordinal as $\alpha^\delta_\gamma$. Notice that $\alpha^\delta_{h(\gamma)}\neq\alpha^\delta_{h(\gamma')}$ for every $\gamma'<\gamma$. If the answer is no then the process is terminated and we try again at the next ordinal $\delta\in S_0$. The induction process might be terminated, indeed, before accomplishing $\theta^+$ steps. However, we claim that for some $\delta\in S_0$ the induction holds along all the steps. For proving it, assume that for every $\delta\in S_0$ there exists an ordinal $\gamma=g(\delta)$ such that we cannot choose the required ordinals at stage $\gamma$. As mentioned above, the problem arises only for the choice of $\alpha^\delta_{h(\gamma)}$. Since $g$ is a regressive function on $S_0$, there is an ordinal $\gamma<\theta^+$ and a stationary set $S_1\subseteq S_0$ such that $\delta\in S_1\Rightarrow g(\delta)=\gamma$. The cofinality of every $\delta\in S_1$ is $\theta^+$, and the cardinality of each sequence is at most $\theta$, so all sequences are bounded. Applying Fodor’s lemma once more, there exist a stationary set $S_2\subseteq S_1$ and an ordinal $\xi<\theta^{++}$ such that all the chosen sequences for $\delta\in S_2$ are bounded below $\xi$. Recall that $2^\theta=(\theta^+)^\theta=\theta^+$, so there are only $\theta^+$ many sequences of the form $(\beta^\delta_{\gamma'},\alpha^\delta_{h(\gamma')}: \gamma'<\gamma)$. We may choose, therefore, two elements $\delta_0,\delta_1\in S_2$ such that $\delta_0<\delta_1$ and they share the same sequence. But then $\delta_0$ gives a positive answer to the question that we ask at the stage of choosing $\alpha^{\delta_1}_{h(\gamma)}$, so the induction can go on for $\delta_1$, a contradiction. We conclude that for some $\delta\in S_0$ we could define the above two sequences for every $\gamma<\theta^+$. Define $A=\{\alpha^\delta_{h(\gamma)}:\gamma<\theta^+\}$ and $B=\{\beta^\delta_\gamma: \gamma<\theta^+\}$. 
By the construction we have ${\rm otp}(A,<)=\tau$ and $c\upharpoonright(A\times B)=\{\iota\}$, so we are done. \[rreferee\] The referee of the paper suggested a clever simplification to the construction of the sequences. We fix any $\delta\in S_0$, and we choose $\beta^\delta_0=\min(x_\delta)$. Now for every $\gamma<\theta^+$ we construct the sequences as follows. The inductive assumption is that $\langle\beta^\delta_{\gamma'}:\gamma'\leq\gamma\rangle$ and $\langle\alpha^\delta_{h(\gamma')}:\gamma'<\gamma\rangle$ were chosen. Let $B$ be $\{\beta^\delta_{\gamma'}:\gamma'\leq\gamma\}$. By the closure of each $M_\eta$ we have $B\in M_\eta$. Moreover, $M_\eta\models$ there are unboundedly many $\alpha<\theta^{++}$ for which $B\subseteq x_\alpha$. It follows that we can find some $\alpha\in M_{h(\gamma)+1}\setminus M_{h(\gamma)}$ and define it as $\alpha^\delta_{h(\gamma)}$. This means that we don’t have to use Fodor’s lemma at the end of the proof, and every $\delta\in S_0$ yields a monochromatic product. The above theorem gives almost strong relations, as the order type of the first component can be any ordinal $\tau$ below $\theta^{++}$. There is, however, a conceptual discrepancy between almost strong relations and full strong relations. As mentioned in the introduction, the assumption $2^{\theta^+}=\theta^{++}$ rules out the strong relation $\binom{\theta^{++}}{\theta^+}\rightarrow \binom{\theta^{++}}{\theta^+}^{1,1}_2$. This is not the case when dealing with almost strong relations. The claim below generalizes an observation of Foreman (see Theorem 8.16 in [@MR2768681]): \[fforemanclm\] The relation $\binom{\theta^{++}}{\theta^+}\rightarrow \binom{\tau}{\theta^+}^{1,1}_{\theta}$ for every $\tau<\theta^{++}$ is consistent with $2^{\theta^+}=\theta^{++}$. *Proof*. First we force $\binom{\theta^{++}}{\theta^+}\rightarrow \binom{\tau}{\theta^+}^{1,1}_{\theta}$ for every $\tau<\theta^{++}$. Now we proceed to the power set of $\theta^+$. Let $\mathbb{P}$ be Lévy$(\theta^{++},2^{\theta^+})$. Our claim is that the above relation still holds in the generic extension by the collapse. For proving this fact, let $\name{f}$ be a name for a function from $\theta^{++}\times\theta^+$ into $\theta$. Choose a condition $p$ in $\mathbb{P}$ which forces $\name{f}$ to be a function. We shall define an increasing sequence of conditions $\langle p_j:j<\theta^{++}\rangle$, and a function $g:\theta^{++}\times\theta^+\rightarrow\theta$ so that $g$ belongs to the ground model. We commence with $p_0=p$. Arriving at $j<\theta^{++}$ we choose $p_j$ so that $i<j\Rightarrow p_i\leq p_j$ and $\forall\alpha\leq j,\forall\beta<\theta^+, p_j\Vdash\name{f}(\alpha,\beta)=g(\alpha,\beta)$. This can be done because $p=p_0$ forces that $\name{f}$ is a function, hence if any condition $q$ extends $p$ and forces a value for $\name{f}(\alpha,\beta)$ then this value is unique. Now we use the completeness of our forcing (it is $\theta^{++}$-complete) in order to cover $\theta^+$-many pairs at each stage of the induction. Since the forcing relation is definable in ${\rm V}$ we conclude that $g\in{\rm V}$, hence we can choose $A,B$ so that ${\rm otp}(A)=\tau,|B|=\theta^+$ and $g\upharpoonright(A\times B)$ is constant. Choose an ordinal $j<\theta^{++}$ such that $A\subseteq j$. By the construction, $p_j\Vdash\name{f}(\alpha,\beta)=g(\alpha,\beta)$ for all $\alpha\leq j$ and $\beta<\theta^+$, so $p_j$ forces that $\name{f}\upharpoonright(A\times B)$ is constant.
However, $p\leq p_j$ and the choice of $p$ was arbitrary, so the empty condition forces that $\name{f}$ is constant on a product of the required size. What can be said about the strong polarized relation with respect to successor cardinals? Positive results in recent years demonstrated the importance of the splitting number for this issue. It turns out that the splitting number is relevant also for negative results. Suppose $B\in[\kappa]^\kappa$. We say that $S$ splits $B$ iff $|S\cap B|=|(\kappa-S)\cap B|=\kappa$. We say that $\mathcal{A}\subseteq[\kappa]^\kappa$ is a splitting family iff for every element $B\in[\kappa]^\kappa$ there exists some $S\in\mathcal{A}$ such that $S$ splits $B$. In the case of successor cardinals $\kappa=\theta^+$, there is always a splitting family over $\kappa$ of size $\kappa^+$. We need, however, an additional property: \[herdef\] Hereditary splitting family. Assume $\mathcal{A}\subseteq[\kappa]^\kappa$. We call $\mathcal{A}$ a hereditary splitting family iff $\mathcal{B}\subseteq\mathcal{A}$ is a splitting family whenever $|\mathcal{B}|=|\mathcal{A}|$. The following connects hereditary splitting with negative strong relations: \[mmt\] Assume $\kappa<\mu={{\rm cf}}(\mu)$. If there exists a hereditary splitting family in $[\kappa]^\kappa$ of size $\mu$ then $\binom{\mu}{\kappa}\nrightarrow \binom{\mu}{\kappa}^{1,1}_2$. Conversely, if $\binom{\mu}{\kappa}\nrightarrow \binom{\mu}{\kappa}^{1,1}_2$ and $\kappa={{\rm cf}}(\kappa)$ then there exists a hereditary splitting family in $[\kappa]^\kappa$ of size $\mu$. *Proof*. Let $\mathcal{A}=\{S_\alpha:\alpha<\mu\}$ be a hereditary splitting family, and define a coloring $c:\mu\times\kappa\rightarrow 2$ by $c(\alpha,\beta)=0$ iff $\beta\in S_\alpha$. We claim that $c$ exemplifies the negative relation $\binom{\mu}{\kappa}\nrightarrow \binom{\mu}{\kappa}^{1,1}_2$. Assume towards contradiction that $c\upharpoonright(A\times B)$ is constant for some $A\in[\mu]^\mu,B\in[\kappa]^\kappa$. If $c\upharpoonright(A\times B)=\{0\}$ then $B\subseteq S_\alpha$ for every $\alpha\in A$. Consequently, the sub-collection $\mathcal{B}=\{S_\alpha:\alpha\in A\}$ is not a splitting family in $[\kappa]^\kappa$, contradicting the hereditariness assumption. Similarly, if $c\upharpoonright(A\times B)=\{1\}$ then $B\subseteq\kappa-S_\alpha$ for every $\alpha\in A$ and the same $\mathcal{B}$ is non-splitting, a contradiction. For the opposite direction, let $c$ be a coloring which exemplifies the negative relation $\binom{\mu}{\kappa}\nrightarrow \binom{\mu}{\kappa}^{1,1}_2$. For every $\alpha<\mu$ let $S_\alpha$ be $\{\beta\in\kappa:c(\alpha,\beta)=0\}$. Set $\mathcal{A}=\{S_\alpha:\alpha<\mu\}$. We claim that $|\mathcal{A}|=\mu$. Indeed, without loss of generality $\alpha<\beta<\mu\Rightarrow S_\alpha\neq S_\beta$, since if some $S_\alpha$ appears $\mu$-many times then $\binom{\mu}{\kappa}\rightarrow \binom{\mu}{\kappa}^{1,1}_2$, so we may remove all the repetitions from $\mathcal{A}$ and still remain with a collection of size $\mu$. We claim that $\mathcal{A}$ is a hereditary splitting family. For proving this fact, assume $\mathcal{B}\subseteq\mathcal{A}$ and $|\mathcal{B}|=\mu$. Choose any $B\in[\kappa]^\kappa$ and let $A$ be $\{\alpha<\mu:S_\alpha\in\mathcal{B}\}$. Since $\binom{\mu}{\kappa}\nrightarrow \binom{\mu}{\kappa}^{1,1}_2$ as exemplified by $c$, we have $c\upharpoonright(A\times B)=\{0,1\}$. 
If $\mathcal{B}$ fails to split $B$ then $B\subseteq^* S_\alpha \vee B\subseteq^* \kappa-S_\alpha$ for every $S_\alpha\in\mathcal{B}$, so without loss of generality $B\subseteq^* S_\alpha$ for every $S_\alpha\in\mathcal{B}$. Recall that $\kappa<\mu$ are regular cardinals, so we can assume without loss of generality that $B\subseteq S_\alpha$ for every $S_\alpha\in\mathcal{B}$. This can be done by removing a fixed initial segment of $\kappa$ from $B$ over $\mu$-many elements of $\mathcal{B}$. Recall that $A=\{\alpha<\mu:S_\alpha\in\mathcal{B}\}$ and notice that $c\upharpoonright(A\times B)=\{0\}$, a contradiction. The above theorems invite further investigation, and we phrase several open problems. The strong relation $\binom{\mu}{\kappa}\rightarrow \binom{\mu}{\kappa}^{1,1}_\theta$ is balanced in the sense that the monochromatic product is of the same size for all colors. Likewise, the almost strong relation $\binom{\mu}{\kappa}\rightarrow \binom{\tau}{\kappa}^{1,1}_\theta$ for ordinals $\tau<\mu$ is balanced. One may wonder what happens at successor cardinals when dealing with the strongest unbalanced relation: \[q0\] Unbalanced relation for successor cardinals. Suppose $\kappa$ is a successor cardinal. Is it consistent that $\binom{\kappa^+}{\kappa}\rightarrow \binom{\kappa^+\ \tau}{\kappa\ \kappa}^{1,1}_2$ for every $\tau<\kappa^+$? The second problem is motivated by the amenability result. We employed the generalization of Martin’s axiom, for the case of $\aleph_2$. Higher generalizations are problematic. The following is natural: \[q1\] Amenable positive relations above $\aleph_2$. Is it possible to prove the consistency of $\binom{\mu^+}{\mu}\rightarrow_{\mathcal{A}} \binom{\mu^+}{\mu}^{1,1}_2$ when $\mu>\aleph_1$, under the assumption that $2^\mu=\mu^+$ implies that $\binom{\mu^+}{\mu}\nrightarrow_{\mathcal{A}} \binom{\mu^+}{\mu}^{1,1}_2$? Finally, the existence of the special ideal over $\theta^+$ can be proved when $\theta={{\rm cf}}(\theta)$. One may wonder what happens at singular cardinals: \[q2\] Almost strong relations and singular cardinals. Assume $\theta>{{\rm cf}}(\theta)$. Is it consistent that $\binom{\theta^{++}}{\theta^+}\rightarrow \binom{\tau}{\theta^+}^{1,1}_2$ for every $\tau<\theta^{++}$? A possible direction would be to begin with a supercompact cardinal $\theta$ and a huge cardinal above it. The forcing of Laver is $\theta$-directed-closed, so if $\theta$ is Laver-indestructible then it remains supercompact after the forcing of Theorem \[lavthm\]. Now we would like to add either a Prikry or a Magidor sequence to $\theta$. The problem is to keep the special saturation property of the ideal over $\theta^+$, or to replace it by a weaker property which will be preserved by Prikry and Magidor forcing.
{ "pile_set_name": "ArXiv" }
--- author: - 'Leonid Shilnikov, Andrey Shilnikov and Dmitry Turaev' title: Showcase of Blue Sky Catastrophes --- Introduction ============ In the pioneering works by A.A. Andronov and E.A. Leontovich [@AL1; @AL2] all main bifurcations of stable periodic orbits of dynamical systems in a plane had been studied: the emergence of a limit cycle from a weak focus, the saddle-node bifurcation through a merger of a stable limit cycle with an unstable one and their consecutive annihilation, the birth of a limit cycle from a separatrix loop to a saddle, as well as from a separatrix loop to a saddle-node equilibrium. Later, in the 50-60s these bifurcations were generalized for the multi-dimensional case, along with two additional bifurcations: period doubling and the birth of a two-dimensional torus. Apart from that, in [@lp1; @lp2] L. Shilnikov had studied the main bifurcations of saddle periodic orbits out of homoclinic loops to a saddle and discovered a novel bifurcation of homoclinic loops to a saddle-saddle[^1]. Nevertheless, an open problem still remained: could there be other types of codimension-one bifurcations of periodic orbits? Clearly, the emphasis was put on bifurcations of [*stable*]{} periodic orbits, as only they generate robust self-sustained periodic oscillations, the original paradigm of nonlinear dynamics. One can pose the problem as follows:\ [*In a one-parameter family $X_{\mu}$ of systems of differential equations, can both the period and the length of a structurally stable periodic orbit ${\cal L}_\mu$ tend to infinity as the parameter $\mu$ approaches some bifurcation value, say $\mu_0=0$?*]{}\ Here, structural stability means that none of the multipliers of the periodic orbit ${\cal L}_\mu$ crosses the unit circle, i.e. ${\cal L}_\mu$ does not bifurcate at $\mu\neq\mu_0$. Of particular interest is the case where ${\cal L}_\mu$ is stable, i.e. all the multipliers are strictly inside the unit circle. A similar formulation was given by J. Palis and Ch. Pugh [@PP] (notable Problem 37), however the structural stability requirement was missing there. Exemplary bifurcations of a periodic orbit whose period becomes arbitrarily large while the length remains finite as the bifurcation moment is approached are a homoclinic bifurcation of a saddle with a negative saddle value and that of a saddle-node [@lp0; @book2]. These were well-known at the time, so in [@PP] an additional condition was imposed, in order to ensure that the sought bifurcation is really of a new type: the periodic orbit ${\cal L}_\mu$ must stay away from any equilibrium states (this would immediately imply that the length of the orbit grows to infinity in proportion to the period). As R. Abraham put it, the periodic orbit must “disappear in the blue-sky” [@Ab]. In fact, a positive answer to “Problem 37” could be found in an earlier paper [@F]. In explicit form, a solution was proposed by V. Medvedev [@Me]. He constructed examples of flows on a torus and a Klein bottle with stable limit cycles whose lengths and periods tend to infinity as $\mu\to\mu_0$, while at $\mu=\mu_0$ both the periodic orbits disappear and new, structurally unstable saddle-node periodic orbits appear (at least two of them, if the flow is on a torus). The third example of [@Me] was a flow on a 3-dimensional torus whose all orbits are periodic and degenerate, and for the limit system the torus is foliated by two-dimensional invariant tori. 
Medvedev’s examples are not of codimension-1: this is obvious in the torus case, which requires at least two saddle-nodes, i.e. $X_{\mu_0}$ is of codimension 2 at least. In the case of the Klein bottle one may show [@book2; @AfS; @TSh3; @Li; @Il] that for a generic perturbation of the Medvedev family the periodic orbits existing at $\mu\neq\mu_0$ do not remain stable for all $\mu$, as they undergo an infinite sequence of forward and backward period-doubling bifurcations (this is a typical behavior of fixed points of a non-orientable diffeomorphism of a circle). A blue-sky catastrophe of codimension 1 was found only in 1995 by L. Shilnikov and D. Turaev [@TSh3; @TSh1; @TSh2; @ShT]. The solution was based on the study of bifurcations of a saddle-node periodic orbit whose entire unstable manifold is homoclinic to it. The study of this bifurcation was initiated by V. Afraimovich and L. Shilnikov [@AfS; @AfS1; @AfS2; @AfS3] for the case where the unstable manifold of the saddle-node is a torus or a Klein bottle (see Fig. \[fig1\]). As soon as the saddle-node disappears, the Klein bottle may persist, or it may break down to cause chaotic dynamics in the system [@AfS4; @NPT; @TSh; @Sync]. In these works, most of the attention was paid to the torus case, as its breakdown provides a geometrical model of the quasiperiodicity-toward-chaos transition encountered universally in Nonlinear Dynamics, including the onset of turbulence [@Sh00]. ![Two cases of the unstable manifold $W^u_L$ homoclinic to the saddle-node periodic orbit $L$: a 2D torus (A) or a Klein bottle (B).[]{data-label="fig1"}](fig1.jpg){width="80.00000%"} In the hunt for the blue sky catastrophe, other distinct configurations of the unstable manifold of the saddle-node were suggested in [@TSh1]. In particular, it was shown that in the phase space of dimension 3 and higher the homoclinic trajectories may spiral back onto the saddle-node orbit in the way shown in Fig. \[fig2\]. If we have a one-parameter family $X_\mu$ of systems of differential equations with a saddle-node periodic orbit at $\mu=\mu_0$ which possesses this special kind of homoclinic unstable manifold and satisfies certain additional conditions, then as the saddle-node disappears the inheriting attractor consists of a single stable periodic orbit ${\cal L}_\mu$ which undergoes no bifurcation as $\mu\to\mu_0$ while its length tends to infinity. Its topological limit, $M_0$, is the entire unstable manifold of the saddle-node periodic orbit. ![Original construction of the blue sky catastrophe from [@TSh1].[]{data-label="fig2"}](fig2.jpg){height="0.45\textheight"} The conditions found in [@TSh1] for the behavior of the homoclinic orbits ensuring the blue-sky catastrophe are open, i.e. a small perturbation of the one-parameter family $X_\mu$ does not destroy the construction. This implies that such a blue-sky catastrophe occurs any time a family of systems of differential equations crosses the corresponding codimension-1 surface in the Banach space of smooth dynamical systems. This surface constitutes a stability boundary for periodic orbits. This boundary is drastically new compared to those known since the 1930s–60s and has no analogues in planar systems. There are reasons to conjecture that this type of blue-sky catastrophe closes the list of main stability boundaries for periodic orbits (i.e. any new stability boundary will be of codimension higher than 1).
In addition, another version of the blue-sky catastrophe, leading to the birth of a uniformly hyperbolic strange attractor (the Smale-Williams solenoid [@Sm; @W]), was also discovered in [@TSh1; @TSh2]. This codimension-1 bifurcation of a saddle-node corresponds to yet another configuration of the homoclinic unstable manifold of the periodic orbit (the full classification is presented in [@book2]). Here, the structurally stable attractor existing all the way up to $\mu=\mu_0$ does not bifurcate, while the length of each and every (saddle) periodic orbit in it tends to infinity as $\mu\to\mu_0$. Initially we believed that the corresponding configuration of the unstable manifold would be too exotic for the blue-sky catastrophe to occur naturally in a plausible system. However, soon after, the first explicit example of the codimension-1 blue-sky catastrophe was proposed by N. Gavrilov and A. Shilnikov [@GSh], in the form of a family of 3D systems of differential equations with polynomial right-hand sides. A real breakthrough came when the blue-sky catastrophe turned out to be a typical phenomenon for slow-fast systems. Namely, in [@book2; @mmj] we described a number of very general scenarios leading to the blue-sky catastrophe in such systems with at least two fast variables; for systems with one fast variable the blue-sky catastrophe was found in [@GKR]. In this way, the blue-sky catastrophe has found numerous applications in mathematical neuroscience: it explains a smooth and reversible transition between tonic spiking and bursting in exact Hodgkin-Huxley type models of interneurons [@leech1; @leech2] and in mathematical models of square-wave bursters [@hr]. The great variability of the burst duration near the blue-sky catastrophe was shown to be the key mechanism ensuring the diversity of rhythmic patterns generated by small neuron complexes that control invertebrate locomotion [@DG1; @DG2; @DG3]. In fact, the term “blue sky catastrophe” should naturally be treated in a broader way. Namely, under this term we embrace a whole class of dynamical phenomena that are all due to the existence of a stable (or, more generally, structurally stable) periodic orbit, ${\cal L}_\mu$, depending continuously on the parameter $\mu$, such that both the length and the period of ${\cal L}_\mu$ tend to infinity as the bifurcation parameter value is reached. As far as the topological limit, $M_0$, of the orbit ${\cal L}_\mu$ is concerned, it may possess a rather degenerate structure which does not prohibit $M_0$ from containing equilibrium states. As such, the periodic regime ${\cal L}_\mu$ could emerge as a composite construction made transiently of several quasi-stationary states: nearly constant, periodic, quasiperiodic, and even chaotic fragments. As one of the motivations (which we do not pursue here) one may think of a slow-fast model where the fast 3D dynamics is driven by a periodic motion in the slow subsystem. Results ======= In this paper we focus on an infinitely degenerate case where $M_0$ consists of a saddle periodic orbit with a continuum of homoclinic trajectories. Namely, we consider a one-parameter family of sufficiently smooth systems of differential equations $X_\mu$ defined in $R^{n+1}$, $n\geq 2$, for which we need to make a number of assumptions as follows.\ [**(A)**]{} There exists a saddle periodic orbit $L$ (we assume the period equals $2\pi$) with the multipliers[^2] $\rho_1,\dots,\rho_n$.
Let the multipliers satisfy $$\label{rh1} \max_{i=2,\dots,n-1} |\rho_i|<|\rho_1|<\;1\;<|\rho_n|.$$ Once this property is fulfilled at $\mu=0$, it implies that the saddle periodic orbit $L=L_\mu$ exists for all small $\mu$ and smoothly depends on $\mu$. Condition (\[rh1\]) also holds for all small $\mu$. This condition implies that the stable manifold $W^s_\mu$ is $n$-dimensional[^3] and the unstable manifold $W^u_\mu$ is two-dimensional. If the unstable multiplier $\rho_n$ is positive (i.e. $\rho_n>1$), then the orbit $L_\mu$ divides $W^u_\mu$ into two halves, $W^+_\mu$ and $W^-_\mu$, so $W^u_\mu=L_\mu\cup W^+_\mu\cup W^-_\mu$. If $\rho_n$ is negative ($\rho_n<-1$), then $W^u_\mu$ is a Möbius strip, so $L_\mu$ does not divide $W^u_\mu$; in this case we denote $W^+_\mu=W^u_\mu\backslash L_\mu$. Concerning the stable manifold, condition (\[rh1\]) implies that in $W^s_\mu$ there exists (at $n\geq 3$) an $(n-1)$-dimensional strong-stable invariant manifold $W^{ss}_\mu$ whose tangent at the points of $L_\mu$ contains the eigen-directions corresponding to the multipliers $\rho_2,\dots,\rho_{n-1}$, and the orbits in $W^s_\mu\backslash W^{ss}_\mu$ tend to $L_\mu$ along the direction which corresponds to the leading multiplier $\rho_1$.\ [**(B)**]{} At $\mu=0$ we have $W^+_0\subset W^s_0\backslash W^{ss}_0$, i.e. we assume that [*all*]{} orbits from $W^+_0$ are homoclinic to $L$. Moreover, as $t\to +\infty$, they tend to $L$ along the leading direction.\ [**(C)**]{} We assume that the flow near $L$ contracts three-dimensional volumes, i.e. $$\label{contr} |\rho_1\rho_n| <1.$$ This condition is crucial, as the objects that we obtain by bifurcations of the homoclinic surface $W^+_0\cup L$ are meant to be attractors. Note that this condition is similar to the negativity of the saddle value condition from the theory of homoclinic loops to a saddle equilibrium [@AL1; @AL2; @lp0], see (\[sadl\]).\ [**(D)**]{} We assume that one can introduce linearizing coordinates near $L$. Namely, a small neighborhood $U$ of $L$ is a solid torus homeomorphic to $S^1\times R^n$, i.e. we can coordinatize it by an angular variable $\theta$ and by normal coordinates $u\in R^n$. Our assumption is that these coordinates are chosen so that the system in the small neighborhood of $L$ takes the form $$\label{lfr} \dot u=C(\theta,\mu) u, \qquad \dot \theta=1,$$ where $C$ is $2\pi$-periodic in $\theta$. The smooth linearization is not always possible, and our results can be obtained without this assumption. We, however, will avoid discussing the general case here, in order to make the construction more transparent. It is well-known that by a $4\pi$-periodic transformation of the coordinates $u$ system (\[lfr\]) can be brought to the time-independent form. Namely, we may write the system as follows $$\label{lcfr} \begin{array}{l} \dot x=-\lambda(\mu) x, \qquad \dot y=B(\mu) y,\\ \dot z=\gamma(\mu) z,\\ \dot \theta=1,\end{array}$$ where $x\in R^1$, $y\in R^{n-2}$, $z\in R^1$, and $\lambda=-\frac{1}{2\pi}\ln|\rho_1|>0$, $\gamma=\frac{1}{2\pi}\ln|\rho_n|>0$ and, if $n\geq 3$, $B(\mu)$ is an $(n-2)\times(n-2)$-matrix such that $$\label{nev} \|e^{Bt}\|=o(e^{-\lambda t}) \qquad (t\to+\infty).$$ Note also that condition [**(C)**]{} implies $$\label{sadl} \gamma-\lambda<0.$$ By (\[lcfr\]), the periodic orbit $L(\mu)$ is given by $x=0$, $y=0$, $z=0$, its local stable manifold is given by $z=0$, and the leading direction in the stable manifold is given by $y=0$; the local unstable manifold is given by $\{x=0,y=0\}$.
Recall that the $4\pi$-periodic transformation we used to bring system (\[lfr\]) to the autonomous form (\[lcfr\]) is, in fact, $2\pi$-periodic or $2\pi$-antiperiodic. Namely, the points $(\theta,x,z,y)$ and $(\theta+2\pi,\sigma(x,z,y))$ are equal (they represent the same point in the solid torus $U$), where $\sigma$ is an involution which changes signs of some of the coordinates $x,z,y_1,\dots,y_{n-2}$. More precisely, $\sigma$ changes the orientation of each of the directions which correspond to the real negative multipliers $\rho$. In particular, if all the multipliers $\rho$ are positive, then $\sigma$ is the identity, i.e. our coordinates are $2\pi$-periodic in this case.\ ![Poincaré map $T_1$ takes a cross-section $S_1$ transverse to the unstable manifold $W^u$ to a cross-section $S_0$ transverse to the stable manifold $W^s$.[]{data-label="fig3"}](fig4.jpg){width="80.00000%"} [**(E)**]{} Consider two cross-sections $S_0:\{x=d,\quad \|y\|\leq \varepsilon_1,\quad |z|\leq \varepsilon_1\}$ and $S_1:\{z=d,\quad \|y\|\leq\varepsilon_2,\quad |x|\leq\varepsilon_2\}$ for some small positive $d$ and $\varepsilon_{1,2}$. Denote the coordinates on $S_0$ as $(y_0,z_0,\theta_0)$ and the coordinates on $S_1$ as $(x_1,y_1,\theta_1)$. The set $S_0$ is divided by the stable manifold $W^s$ into two regions, $S_0^+:\{z_0>0\}$ and $S_0^-:\{z_0<0\}$. Since $W^+_0\subset W^s_0$ by assumption [**(B)**]{}, it follows that the orbits starting at $S_1$ define a smooth map $T_1:S_1\to S_0$ (see Fig. \[fig3\]) for all small $\mu$: $$\label{glom} \begin{array}{l} z_0 =f(x_1,y_1,\theta_1,\mu)\\ y_0 =g(x_1,y_1,\theta_1,\mu)\\ \theta_0 =m\theta_1 + h(\theta_1,\mu)+\tilde h(x_1,y_1,\theta_1,\mu), \end{array}$$ where $f,g,h,\tilde h$ are smooth functions $4\pi$-periodic in $\theta_1$, and the function $\tilde h$ vanishes at $(x_1=0,y_1=0)$. Condition $W^+_0\subset W^s_0$ reads as $$f(0,0,\theta_1,0)\equiv 0.$$ We assume that $$\label{qqfff} f(0,0,\theta_1,\mu)=\mu\alpha(\theta_1,\mu),$$ where $$\label{alpt} \alpha(\theta_1,\mu)>0$$ for all $\theta_1$, i.e. [*all the homoclinics are split simultaneously and in the same direction*]{}, and the intersection $W^+_\mu\cap S_0$ moves inside $S_0^+$ with a non-zero velocity as $\mu$ grows across zero. The coefficient $m$ in the last equation of (\[glom\]) is an integer. In order to see this, recall that two points $(\theta,x,z,y)$ and $(\hat\theta,\hat x,\hat z,\hat y)$ in $U$ are the same if and only if $\hat\theta=\theta+2\pi k, (\hat x,\hat z,\hat y)=\sigma^k (x,z,y)$ for an integer $k$. Thus, if we increase $\theta_1$ by $4\pi$ in the right-hand side of (\[glom\]), then the corresponding value of $\theta_0$ in the left-hand side may change only by an integer multiple of $2\pi$, i.e. $m$ must be an integer or a half-integer. Let us show that half-integer values of $m$ are forbidden by our assumption (\[alpt\]). Indeed, if the multiplier $\rho_n$ is positive, then the involution $\sigma$ keeps the corresponding variable $z$ constant. Thus, $(z=d,\theta=\theta_1, x=0, y=0)$ and $(z=d,\theta=\theta_1+2\pi, x=0, y=0)$ correspond, in this case, to the same point on $W^+_\mu\cap S_1$, hence their image by (\[glom\]) must give the same point on $S_0$, i.e. the corresponding values of $\theta_0$ must differ by an integer multiple of $2\pi$, which means that $m$ must be an integer. If $\rho_n<0$, then $\sigma$ changes the sign of $z$, i.e. if two values of $\theta_0$ which correspond to the same point on $S_0$ differ by $2\pi k$, the corresponding values of $z$ differ by a factor of $(-1)^k$.
Now, since the increase of $\theta_1$ by $4\pi$ leads to the increase of $\theta_0$ by $4\pi m$ in (\[glom\]), we find that $f(0,0,4\pi,\mu)=(-1)^{2m}f(0,0,0,\mu)$ in the case $\rho_n<0$. This implies that if $m$ is a half-integer, then $f(0,0,\theta)$ must have zeros for any $\mu$ and (\[alpt\]) cannot be satisfied. The number $m$ determines the shape of $W^+\cap S_0$. Namely, the equation of the curve $W^+_0\cap S_0$ is $$\theta_0 =m\theta_1 + h(\theta_1,0),\qquad y_0 =g(0,0,\theta_1,0), \qquad z_0=0,$$ so $|m|$ defines the homotopy type of this curve in $S_0\cap W^s_0$, and the sign of $m$ is responsible for the orientation. In the case $n=2$, i.e. when the system is defined in $R^3$, the only possible case is $m=1$. At $n=3$ (the system in $R^4$) the curve $W^+_0\cap S_0$ lies in the two-dimensional intersection of $W^s$ with $S_0$. This is either an annulus (if $\rho_1>0$), or a Möbius strip (if $\rho_1<0$). Since the smooth curve $W^+_0\cap S_0$ cannot have self-intersections, it follows that the only possible cases are $m=0,\pm1$ when $W^s\cap S_0$ is a two-dimensional annulus and $m=0,\pm1,\pm2$ when $W^s\cap S_0$ is a Möbius strip. For larger $n$ (the system in $R^5$ and higher) all integer values of $m$ are possible.\  \ Now we can formulate the main results of the paper.\ [**Theorem.**]{} Let conditions [**(A-E)**]{} hold. Consider a sufficiently small neighborhood $V$ of the homoclinic surface $\Gamma=W^+_0\cup L$.\ 1. If $m=0$ and, for all $\theta$, $$\label{bsky} |h'(\theta,0)-\frac{\alpha'(\theta,0)}{\gamma \alpha(\theta,0)}|<1,$$ then a single stable periodic orbit ${\cal L}_\mu$ is born as $\Gamma$ splits. The orbit ${\cal L}_\mu$ exists at all small $\mu>0$; its period and length tend to infinity as $\mu\to+0$. All orbits which stay in $V$ for all positive times and which do not lie in the stable manifold of the saddle orbit $L_\mu$ tend to ${\cal L}_\mu$.\ 2. If $|m|=1$ and, for all $\theta$, $$\label{tor} 1+m \left[h'(\theta,0)-\frac{\alpha'(\theta,0)}{\gamma \alpha(\theta,0)}\right]>0,$$ then a stable two-dimensional invariant torus (at $m=1$) or a Klein bottle (at $m=-1$) is born as $\Gamma$ splits. It exists at all small $\mu>0$ and attracts all the orbits which stay in $V$ and which do not lie in the stable manifold of $L_\mu$.\ 3. If $|m|\geq 2$ and, for all $\theta$, $$\label{hypat} |m+h'(\theta,0)-\frac{\alpha'(\theta,0)}{\gamma \alpha(\theta,0)}|>1,$$ then, for all small $\mu>0$, the system has a hyperbolic attractor (a Smale-Williams solenoid) which is an $\omega$-limit set for all orbits which stay in $V$ and which do not lie in the stable manifold of $L_\mu$. The flow on the attractor is topologically conjugate to a suspension over the inverse spectrum limit of a degree-$m$ expanding map of a circle. At $\mu=0$, the attractor degenerates into the homoclinic surface $\Gamma$.\ [*Proof.*]{} The solution of (\[lcfr\]) with the initial conditions $(x_0=d,y_0,z_0,\theta_0)\in S_0$ gives $$\begin{array}{l} x(t)=e^{-\lambda t} d, \qquad y(t)=e^{B t} y_0,\\ z(t)=e^{\gamma t} z_0,\\ \theta(t)=\theta_0+t.\end{array}$$ The flight time to $S_1$ is found from the condition $$d = e^{\gamma t} z_0,$$ which gives $\displaystyle t=-\frac{1}{\gamma}\ln\frac{z_0}{d}$. Thus the orbits in $U$ define the map $T_0: S_0^+\to S_1$: $$\begin{array}{l} x_1=d^{1-\nu} z_0^\nu, \qquad y_1=Q(z_0) y_0,\\ \theta_1=\theta_0-\frac{1}{\gamma}\ln\frac{z_0}{d}\end{array}$$ where $\nu=\lambda/\gamma>1$ and $\|Q(z_0)\|=o(z_0^\nu)$ (see (\[nev\]),(\[sadl\])).
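Before composing $T_0$ with the global map $T_1$, the formulas just obtained are easy to sanity-check numerically. The following Python snippet is an added aside, not part of the original proof; the values of $\lambda$, $\gamma$, $d$, $z_0$, $\theta_0$ are arbitrary illustrative choices satisfying $\nu=\lambda/\gamma>1$, and the $y$-block is omitted since it only contracts faster. It integrates the linear system (\[lcfr\]) from a point of $S_0^+$ and compares the hitting point on $S_1$ with the flight time and with the expressions for $x_1$ and $\theta_1$ above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (not from the paper) parameter values with gamma < lambda,
# so that nu = lambda/gamma > 1, in agreement with condition (C).
lam, gam, d = 1.3, 0.5, 0.1
nu = lam / gam

def rhs(t, u):
    x, z, theta = u                      # linearized flow (lcfr), y-block omitted
    return [-lam * x, gam * z, 1.0]

# start on S_0 = {x = d} with a small z_0 > 0
z0, theta0 = 1e-3, 0.7
t_flight = -np.log(z0 / d) / gam         # predicted flight time to S_1 = {z = d}

sol = solve_ivp(rhs, (0.0, t_flight), [d, z0, theta0], rtol=1e-10, atol=1e-12)
x1, z1, theta1 = sol.y[:, -1]

print("z at arrival  :", z1, "(should be ~ d =", d, ")")
print("x1 numerical  :", x1, " vs  d^(1-nu) z0^nu        =", d**(1 - nu) * z0**nu)
print("theta1 numeric:", theta1, " vs  theta0 - ln(z0/d)/gamma =", theta0 - np.log(z0 / d) / gam)
```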
By (\[glom\]), we may write the map $T=T_0T_1$ on $S_1$ as follows (we drop the index “$1$”): $$\begin{array}{l} \bar x=d^{1-\nu} (\mu\alpha(\theta,\mu)+O(x,y))^\nu, \qquad \bar y=Q(\mu\alpha+O(x,y)) g(x,y,\theta,\mu),\\ \bar\theta=m\theta+h(\theta,\mu)-\frac{1}{\gamma}\ln(\frac{\mu}{d}\alpha(\theta,\mu)+O(x,y))+O(x,y).\end{array}$$ For every orbit which stays in $V$, its consecutive intersections with the cross-section $S_1$ constitute an orbit of the diffeomorphism $T$. Since $\nu>1$, the map $T$ is contracting in $x$ and $y$, and it is easy to see that all the orbits eventually enter a neighborhood of $(x,y)=0$ of size $O(\mu^\nu)$. We therefore rescale the coordinates $x$ and $y$ as follows: $$x=d^{1-\nu}\mu^\nu X,\qquad y=\mu^\nu Y.$$ The map $T$ takes the form $$\label{mapt} \begin{array}{l} \bar X= \alpha(\theta,0)^\nu +o(1), \qquad \bar Y=o(1),\\ \bar\theta=\omega(\mu)+m\theta +h(\theta,0)-\frac{1}{\gamma}\ln\alpha(\theta,0)+o(1), \end{array}$$ where $o(1)$ stands for terms which tend to zero as $\mu\to+0$, along with their first derivatives, and $\omega(\mu)=\frac{1}{\gamma}\ln(\mu/d)\to\infty$ as $\mu\to+0$. Recall that $\alpha>0$ for all $\theta$ and that $\alpha$ and $h$ are periodic in $\theta$. It is immediately seen from (\[mapt\]) that all orbits eventually enter an invariant solid torus $\{|x-\alpha(\theta,0)^\nu|< K_\mu,\;\|y\|<K_\mu\}$ for appropriately chosen $K_\mu$, $K_\mu\to 0$ as $\mu\to +0$ (see Fig. \[fig4\]). Thus, there is an attractor in $V$ for all small positive $\mu$, and it merges into $\Gamma$ as $\mu\to+0$. Our theorem claims that the structure of the attractor depends on the value of $m$, so we now consider different cases separately. ![Case $m=0$: the image of the solid torus is contractible to a point; case $m = 1$: contraction transverse to the longitude; case $m = 2$: the solid-torus is squeezed, doubly stretched and twisted within the original and so on, producing the solenoid in the limit.[]{data-label="fig4"}](fig5.jpg){width="80.00000%"} If $m=0$ and (\[bsky\]) holds, then map (\[mapt\]) is, obviously, contracting at small $\mu$, hence it has a single stable fixed point. This fixed point corresponds to the sought periodic orbit $A_\mu$. Its period tends to infinity as $\mu\to+0$: the orbit intersects both the cross-sections $S_0$ and $S_1$, and the flight time from $S_0$ to $S_1$ is of order $\frac{1}{\gamma}|\ln\mu|$. The length of the orbit also tends to infinity, since the phase velocity never vanishes in $V$. In the case $m=\pm 1$ we prove the theorem by referring to the “annulus principle” of [@AfS3]. Namely, consider a map $$\bar r=p(r,\theta),\qquad \bar\theta=q(r,\theta)$$ of a solid torus into itself (here $\theta$ is the angular variable and $r$ is the vector of normal variables). Let the map $r\mapsto p(r,\theta)$ be a contraction for every fixed $\theta$, i.e. $$\left\|\frac{\partial p}{\partial r}\right\|_\circ<1$$ (where by $\|\cdot\|_\circ$ we denote the supremum of the norm over the solid torus under consideration) and let the map $\theta\mapsto q(r,\theta)$ be a diffeomorphism of a circle for every fixed $r$. 
Then it is well-known [@AfS3; @book2] that if $$1-\left\|\left(\frac{\partial q}{\partial \theta}\right)^{-1}\right\|_\circ \cdot \left\|\frac{\partial p}{\partial r}\right\|_\circ > 2\sqrt{\left\|\left(\frac{\partial q}{\partial \theta}\right)^{-1}\right\|_\circ \cdot \left\|\frac{\partial q}{\partial r}\right\|_\circ \left\|\frac{\partial p}{\partial \theta}\left(\frac{\partial q}{\partial \theta}\right)^{-1}\right\|_\circ},$$ then the map has a stable, smooth, closed invariant curve $r=r^*(\theta)$ which attracts all orbits from the solid torus. These conditions are clearly satisfied by map (\[mapt\]) at $|m|=1$ if (\[tor\]) is true (here $r=(X,Y)$, $p=(\alpha(\theta,0)^\nu +o(1), o(1))$, $q=\omega(\mu)+m\theta +h(\theta,0)-\frac{1}{\gamma}\ln\alpha(\theta,0)+o(1)$). Thus, the map $T$ has a closed invariant curve in this case. The restriction of $T$ to the invariant curve preserves orientation if $m=1$, while at $m=-1$ it is orientation-reversing. Therefore, this invariant curve on the cross-section corresponds to an invariant torus of the flow at $m=1$ or to a Klein bottle at $m=-1$. It remains to prove the theorem for the case $|m|\geq 2$. The proof is based on the following result.\ [**Lemma.**]{} Consider a diffeomorphism $T:(r,\theta)\mapsto (\bar r,\bar\theta)$ of a solid torus, where $$\label{maptr} \bar r=p(r,\theta),\qquad \bar\theta=m\theta+s(r,\theta)=q(r,\theta),$$ where $s$ and $p$ are periodic functions of $\theta$. Let $|m|\geq 2$, and $$\label{frc} \left\|\frac{\partial p}{\partial r}\right\|_\circ <1,$$ $$\label{cndir} \left(1-\left\|\frac{\partial p}{\partial r}\right\|_\circ\right) \left(1-\left\|\left(\frac{\partial q}{\partial \theta}\right)^{-1}\right\|_\circ\right)> \left\|\frac{\partial p}{\partial \theta}\right\|_\circ\; \left\|\left(\frac{\partial q}{\partial \theta}\right)^{-1} \frac{\partial q}{\partial r}\right\|_\circ.$$ Then the map has a uniformly-hyperbolic attractor, a Smale-Williams solenoid, on which it is topologically conjugate to the inverse spectrum limit of $\bar \theta=m\theta$, a degree-$m$ expanding map of the circle.\ [*Proof.*]{} It follows from (\[frc\]),(\[cndir\]) that $\|(\frac{\partial q}{\partial \theta})^{-1}\|$ is uniformly bounded. Therefore, $\theta$ is a uniquely defined smooth function of $(\bar\theta, r)$, so we may rewrite (\[maptr\]) in the “cross-form” $$\label{crmps} \bar r=p^\times(r,\bar\theta),\qquad \theta=q^\times(r,\bar\theta),$$ where $p^\times$ and $q^\times$ are smooth functions. It is easy to see that conditions (\[frc\]), (\[cndir\]) imply $$\label{frc0} \left\|\frac{\partial p^\times}{\partial r}\right\|_\circ <1,\qquad \left\|\frac{\partial q^\times}{\partial \bar\theta}\right\|_\circ <1$$ $$\label{cncrs} \left(1-\left\|\frac{\partial p^\times}{\partial r}\right\|_\circ\right) \left(1-\left\|\frac{\partial q^\times}{\partial \bar\theta}\right\|_\circ\right)\geq \left\|\frac{\partial p^\times}{\partial \bar\theta}\right\|_\circ\; \left\|\frac{\partial q^\times}{\partial r}\right\|_\circ.$$ These inequalities imply the uniform hyperbolicity of the map $T$ (note that (\[cndir\]) coincides with the hyperbolicity condition for the Poincaré map for the Lorenz attractor from [@ABS]).
Indeed, it is enough to show that there exists $L>0$ such that the derivative $T'$ of $T$ takes every cone $\|\Delta r\|\leq L\|\Delta \theta\|$ inside $\|\Delta \bar r\|\leq L\|\Delta \bar \theta\|$ and is uniformly expanding in $\theta$ in this cone, and that the inverse of $T'$ takes every cone $\|\Delta \bar\theta\|\leq L^{-1}\|\Delta \bar r\|$ inside $\|\Delta \theta\|\leq L^{-1}\|\Delta r\|$ and is uniformly expanding in $r$ in this cone. Let us check these properties. When $\|\Delta r\|\leq L\|\Delta \theta\|$, we find from (\[crmps\]) that $$\|\Delta\theta\|\leq \frac{\left\|\frac{\partial q^\times}{\partial \bar\theta}\right\|_\circ} {1-L\left\|\frac{\partial q^\times}{\partial r}\right\|_\circ} \|\Delta\bar\theta\|$$ and $$\|\Delta\bar r\|\leq \left\{\frac{L \left\|\frac{\partial p^\times}{\partial r}\right\|_\circ \left\|\frac{\partial q^\times}{\partial \bar\theta}\right\|_\circ} {1-L\left\|\frac{\partial q^\times}{\partial r}\right\|_\circ} + \left\|\frac{\partial p^\times}{\partial \bar\theta}\right\|_\circ\right\} \|\Delta\bar\theta\|.$$ Similarly, if $\|\Delta \bar\theta\|\leq L^{-1}\|\Delta \bar r\|$, we find from (\[crmps\]) that $$\|\Delta\bar r\|\leq \frac{\left\|\frac{\partial p^\times}{\partial r}\right\|_\circ} {1-L^{-1}\left\|\frac{\partial p^\times}{\partial \bar\theta}\right\|_\circ} \|\Delta r\|$$ and $$\|\Delta\theta\|\leq \left\{\frac{L^{-1} \left\|\frac{\partial q^\times}{\partial \bar\theta}\right\|_\circ \left\|\frac{\partial p^\times}{\partial r}\right\|_\circ} {1-L^{-1}\left\|\frac{\partial p^\times}{\partial \bar\theta}\right\|_\circ} + \left\|\frac{\partial q^\times}{\partial r}\right\|_\circ\right\} \|\Delta r\|.$$ Thus, we will prove hyperbolicity if we show that there exists $L$ such that $$\left\|\frac{\partial q^\times}{\partial \bar\theta}\right\|_\circ < 1- L\left\|\frac{\partial q^\times}{\partial r}\right\|_\circ$$ and $$\left\|\frac{\partial p^\times}{\partial r}\right\|_\circ < 1- L^{-1}\left\|\frac{\partial p^\times}{\partial \bar\theta}\right\|_\circ.$$ These conditions are solved by any $L$ such that $$\frac{\left\|\frac{\partial p^\times}{\partial \bar\theta}\right\|_\circ} {1-\left\|\frac{\partial p^\times}{\partial r}\right\|_\circ}<L< \frac{1-\left\|\frac{\partial q^\times}{\partial \bar\theta}\right\|_\circ} {\left\|\frac{\partial q^\times}{\partial r}\right\|_\circ}.$$ It remains to note that such $L$ exist indeed when (\[frc0\]),(\[cncrs\]) are satisfied. We have proved that the attractor $A$ of the map $T$ is uniformly hyperbolic. Such attractors are structurally stable, so $T|_A$ is topologically conjugate to the restriction to the attractor of any diffeomorphism which can be obtained by a continuous deformation of the map $T$ without violation of conditions (\[frc\]) and (\[cndir\]). An obvious example of such a diffeomorphism is given by the map $$\label{epd} \bar r=p(\delta r,\theta),\qquad \bar\theta=q(\delta r,\theta)$$ for any $0<\delta\leq 1$. Fix small $\delta>0$ and consider a family of maps $$\bar r=p(\delta r,\theta),\qquad \bar\theta=q(\varepsilon r,\theta),$$ where $\varepsilon$ runs from $\delta$ to zero. When $\delta$ is sufficiently small, every map in this family is a diffeomorphism (otherwise we would get that the curve $\{\bar r=p(0,\theta), \bar\theta= q(0,\theta)\}$ would have points of self-intersection, which is impossible since this curve is the image of the circle $r=0$ by the diffeomorphism $T$), and each satisfies inequalities (\[frc\]),(\[cndir\]). 
This family is a continuous deformation of map (\[epd\]) to the map $$\label{skd} \bar r=p(\delta r,\theta),\qquad \bar\theta=q(0,\theta)=m\theta+s(0,\theta).$$ Thus, we find that $T|_A$ is topologically conjugate to the restriction of diffeomorphism (\[skd\]) to its attractor. It remains to note that map (\[skd\]) is a skew-product map of the solid torus, which contracts along the fibers $\theta=const$ and, in the base, it is an expanding degree-$m$ map of a circle. By definition, the attractor of such map is the sought Smale-Williams solenoid [@Sm; @W]. This completes the proof of the lemma. Now, in order to finish the proof of the theorem, just note that map (\[mapt\]) satisfies the conditions of the Lemma when (\[hypat\]) is fulfilled. Acknowledgment {#acknowledgment .unnumbered} ============== This work was supported by RFFI Grant No. 08-01-00083 and the Grant 11.G34.31.0039 of the Government of the Russian Federation for state support of “Scientific research conducted under supervision of leading scientists in Russian educational institutions of higher professional education" (to L.S); NSF grant DMS-1009591, MESRF “Attracting leading scientists to Russian universities“ project 14.740.11.0919 (to A.S) and the Royal Society Grant ”Homoclinic bifurcations" (to L.S. and D.T.) [99]{} Andronov AA and Leontovich EA, Some cases of dependence of limit cycles on a parameter, Uchenye zapiski Gorkovskogo Universiteta (Research notes of Gorky University) 6, 3-24, 1937. Andronov AA, Leontovich EA, Gordon IE and Maier AG, The theory of bifurcations of dynamical systems on a plane, Wiley, New York, 1971. Shilnikov LP, Some cases of generation of periodic motion from singular trajectories, Math. USSR Sbornik 61, 443-466, 1963. Shilnikov LP, On the generation of a periodic motion from a trajectory which leaves and re-enters a saddlesaddle state of equilibrium, Sov. Math. Dokl. 7, 1155-1158, 1966. Shilnikov LP, On the generation of a periodic motion from trajectories doubly asymptotic to an equilibrium state of saddle type, Math. USSR Sbornik 6, 427-438, 1968. Palis J and Pugh Ch, Fifty problems in dynamical systems, Dynamical systems - Warwick, 1974, Springer Lecture Notes 468, 1975. Shilnikov LP, Shilnikov AL, Turaev DV and Chua LO, Methods of Qualitative Theory in Nonlinear Dynamics. Part II, World Scientific, 2001. Abraham RH, Catastrophes, intermittency, and noise, in Chaos, Fractals, and Dynamics, Lect. Notes Pure Appl. Math. 98, 3-22, 1985. Fuller F, An index of fixed point type for periodic orbits, Amer. J. Math. 89, 133-148, 1967. Medvedev VS, On a new type of bifurcations on manifolds, Math. USSR Sb. 41, 403-407, 1982. Afraimovich VS and Shilnikov LP, On bifurcation of codimension 1, leading to the appearance of a countable set of tori, Soviet Math. Dokl. 25, 101-105, 1982. Shilnikov LP and Turaev DV, A new simple bifurcation of a periodic orbit of blue sky catastrophe type, in Methods of qualitative theory of differential equations and related topics, AMS Transl. Series II, v.200, 165-188, 2000. Li W and Zhang ZF, The “blue sky catastrophe” on closed surfaces, Adv. Series Dynam. Syst. 9, 316-332, World Scientific, 1991. Ilyashenko Y and Li W, Nonlocal bifurcations, Math. Surveys and Monographs 66, AMS, 1999. Turaev DV and Shilnikov LP, On blue sky catastrophes, Dokl. Math. 51, 404-407, 1995. Shilnikov LP and Turaev DV, On simple bifurcations leading to hyperbolic attractors, Comput. Math. Appl. 34, 441-457, 1997. 
Shilnikov AL and Turaev DV, Blue Sky Catastrophe, Scholarpedia, 2006, 2(8):1889. Afraimovich VS and Shilnikov LP, On small periodic perturbations of autonomous systems, Sov. Math. Dokl. 15, 206-211, 1974. Afraimovich VS and Shilnikov LP, On some global bifurcations connected with the disappearance of a fixed point of saddle-node type, Sov. Math. Dokl. 15, 1761-1765, 1974. Afraimovich VS and Shilnikov LP, The annulus principle and problems of interaction between two self-oscillating systems, J. Appl. Math. Mech. 41(1977), 632-641, 1978. Afraimovich VS and Shilnikov LP, Invariant tori, their breakdown and stochasticity, Amer. Math. Soc. Transl. 149, 201211, 1991. Newhouse S, Palis J and Takens F, Bifurcations and stability of families of diffeomorphisms, Publ. Math. IHES 57, 5-71, 1983. Turaev DV and Shilnikov LP, Bifurcations of quasiattractors torus-chaos, in “Mathematical mechanisms of turbulence (modern nonlinear dynamics in application to turbulence simulation)”, 113-121, Kiev, 1986. Shilnikov AL, Shilnikov LP and Turaev DV, On some mathematical aspects of classical synchronization theory. Tutorial, Bifurcations and Chaos 14, 2143-2160, 2004. Shilnikov LP, The theory of bifurcations and turbulence, Selecta Math. Sovietica 10, 43-53, 1991. Smale S, Differentiable dynamical systems, Bull. AMS 73, 747-817, 1967. Williams RF, Expanding attractors, Publ. Math. IHES 43, 169-203, 1974. Gavrilov NK and Shilnikov AL, Example of a blue sky catastrophe, AMS Transl. Series II, v.200, 99-105, 2000. Shilnikov AL, Shilnikov LP and Turaev DV, Blue sky catastrophe in singularly perturbed systems, Moscow Math. J. 5, 205-218, 2005. Glyzin SD, Kolesov AYu and Rozov NKh, Blue sky catastrophe in relaxation systems with one fast and two slow variables, Differential Equations 44, 161-175, 2008. Shilnikov AL and Cymbalyuk G, Transition between tonic-spiking and bursting in a neuron model via the blue-sky catastrophe, Phys. Rev. Letters 94, 048101, 2005. Shilnikov AL and Cymbalyuk G, Homoclinic saddle-node orbit bifurcations en route between tonic spiking and bursting in neuron models, Regular & Chaotic Dynamics 3, 281-297, 2004. Shilnikov AL and Kolomiets ML, Methods of the qualitative theory for the Hindmarsh-Rose model: a case study, Bifurcations and Chaos 18, 1-27, 2008. Belykh IV and Shilnikov AL, \[David vs. Goliath:\] When weak inhibition synchronizes strongly desynchronizing networks of bursting neurons, Phys. Rev. Letters 101, 078102, 2008. Belykh I, Jalil S and Shilnikov AL, Burst-duration mechanism of in-phase bursting in inhibitory networks, Regular & Chaotic Dynamics 15, 148-160, 2010. Wojcik J, Clewley R and Shilnikov AL, Order parameter for bursting polyrhythms in multifunctional central pattern generators, Phys. Rev. E 83, 056209-6, 2011. Afraimovich VS, Bykov VV and Shilnikov LP, On attracting structurally unstable limit sets of Lorenz attractor type, Trans. Mosc. Math. Soc. 44(1982), 153-216 (1983). [^1]: an equilibrium state, alternatively called a Shilnikov saddle-node, due to a merger of two saddles of different topological types [^2]: the eigenvalues of the linearization of the Poincare map [^3]: the intersection of $W^s_\mu$ with any cross-section to $L_\mu$ is $(n-1)$-dimensional
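A toy illustration of the trichotomy in the Theorem above can be obtained by iterating only the leading-order part of the map (\[mapt\]), i.e. dropping the $o(1)$ terms. In the following Python sketch the choices $\alpha(\theta)=2+\cos\theta$, $h\equiv 0$, $\gamma=1$, $\nu=1.5$ and $\omega=2$ are arbitrary and not taken from the paper; they are picked so that (\[bsky\]), (\[tor\]) and (\[hypat\]) hold for $m=0$, $m=1$ and $m=2$, respectively. For $m=0$ the $\theta$-iteration converges to a fixed point (the single stable periodic orbit), for $m=1$ it is an invertible circle map and the orbit settles onto a closed invariant curve (the invariant torus), and for $m=2$ the circle map is uniformly expanding, which is the mechanism generating the Smale-Williams solenoid.

```python
import numpy as np

# Toy model of the leading-order part of the map (mapt):
#   X_bar     = alpha(theta)^nu
#   theta_bar = omega + m*theta + h(theta) - (1/gamma)*ln(alpha(theta))   (mod 2*pi)
# with arbitrary illustrative choices (NOT taken from the paper):
gamma, nu, omega = 1.0, 1.5, 2.0
alpha = lambda th: 2.0 + np.cos(th)
h = lambda th: 0.0

def step(X, th, m):
    X_new = alpha(th) ** nu
    th_new = (omega + m * th + h(th) - np.log(alpha(th)) / gamma) % (2 * np.pi)
    return X_new, th_new

for m in (0, 1, 2):
    # derivative of the circle map: m + h'(theta) + sin(theta)/(gamma*(2+cos(theta)))  (here h' = 0)
    th_grid = np.linspace(0.0, 2 * np.pi, 2000)
    deriv = m + np.sin(th_grid) / (gamma * alpha(th_grid))
    X, th = 1.0, 0.3
    for _ in range(200):
        X, th = step(X, th, m)
    print(f"m={m}: |d theta_bar / d theta| in [{abs(deriv).min():.2f}, {abs(deriv).max():.2f}]"
          f"   after 200 iterates: theta = {th:.4f}, X = {X:.4f}")
    # m=0: |derivative| < 1 everywhere (condition (bsky)) -> theta converges to a fixed point;
    # m=1: derivative > 0 (condition (tor))               -> an invertible circle map, orbit on a closed invariant curve;
    # m=2: derivative > 1 (condition (hypat))             -> an expanding circle map, a solenoid in the full solid-torus map.
```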
{ "pile_set_name": "ArXiv" }
--- abstract: | Recurrence and explicit formulae for contractions (partial traces) of antisymmetric and symmetric products of identical trace class operators are derived. Contractions of product density operators of systems of identical fermions and bosons are proved to be asymptotically equivalent to, respectively, antisymmetric and symmetric products of density operators of a single particle, multiplied by a normalization integer. The asymptotic equivalence relation is defined in terms of the thermodynamic limit of expectation values of observables in the states represented by given density operators. For some weaker relation of asymptotic equivalence, concerning the thermodynamic limit of expectation values of product observables, normalized antisymmetric and symmetric products of density operators of a single particle are shown to be equivalent to tensor products of density operators of a single particle. This paper presents the results of a part of the author’s thesis \[W. Radzki, *Kummer contractions of product density matrices of systems of $n$ fermions and $n$ bosons* (Polish), MS thesis, Institute of Physics, Nicolaus Copernicus University, Toruń, 1999\]. address: 'Faculty of Mathematics and Computer Science, Nicolaus Copernicus University, ul. Chopina $12 \slash 18$, 87-100 Toruń, Poland' author: - Wiktor Radzki date: 'October 25, 2008' title: Contractions of product density operators of systems of identical fermions and bosons --- [^1] Introduction ============ This paper, presenting the results of a part of the author’s thesis [@RadzkiUMK99], deals with contractions (partial traces) of antisymmetric and symmetric product density operators representing mixed states of systems of identical noninteracting fermions and bosons, respectively. If ${\mathcal{H}}$ is a separable Hilbert space of a single fermion (boson) then the space of the $n$-fermion (resp. $n$-boson) system is the antisymmetric (resp. symmetric) subspace ${\mathcal{H}}^{\wedge n}$ (resp. ${\mathcal{H}}^{\vee n}$) of ${\mathcal{H}}^{\otimes n}.$ Density operators of $n$-fermion (resp. $n$-boson) systems are identified with those defined on ${\mathcal{H}}^{\otimes n}$ and concentrated on ${\mathcal{H}}^{\wedge n}$ (resp. ${\mathcal{H}}^{\vee n}$). Recall that the expectation value of an observable represented by a bounded selfadjoint operator $B$ on a given Hilbert space in a state described by a density operator $\rho$ equals ${\mathrm{Tr}}\,B\rho.$ If $B$ is an unbounded selfadjoint operator on a dense subspace of a given Hilbert space, instead of $B$ one can consider its spectral measure $E_B(\Delta)$ (which is a bounded operator) of a Borel subset $\Delta$ of the spectrum of $B.$ Then ${\mathrm{Tr}}\,E_B(\Delta)\rho$ is the probability that the result of the measurement of the observable in question belongs to $\Delta$ [@vonNeumannS32].
$k$-particle observables of $n$-fermion and $n$-boson systems ($k<n$) are represented, respectively, by operators of the form $${\label{KE}} \stackrel{\wedge}{\Gamma^{n}_{k}}B =A^{(n)}_{{\mathcal{H}}}\left(B\otimes{I}^{\otimes(n-k)}\right)A^{(n)}_{{\mathcal{H}}}, \quad \stackrel{\vee}{\Gamma^{n}_{k}}B =S^{(n)}_{{\mathcal{H}}}\left(B\otimes{I}^{\otimes(n-k)}\right)S^{(n)}_{{\mathcal{H}}}$$ (multiplied by ${\tbinom{n}{k}}$), where $A^{(n)}_{{\mathcal{H}}}$ and $S^{(n)}_{{\mathcal{H}}}$ are projectors of ${\mathcal{H}}^{\otimes n}$ onto ${\mathcal{H}}^{\wedge n}$ and ${\mathcal{H}}^{\vee n},$ respectively, $I$ is the identity operator on ${\mathcal{H}}$ and $B$ is a selfadjoint operator on ${\mathcal{H}}^{\otimes k}$ (see [@KummerJMP67]). Operators (\[KE\]) are called *antisymmetric* and *symmetric expansions of $B$*. In view of the earlier remark it is assumed that $B$ is bounded. The expectation values of observables represented by $\stackrel{\wedge}{\Gamma^{n}_{k}}B$ and $\stackrel{\vee}{\Gamma^{n}_{k}}B$ in states represented by $n$-fermion and $n$-boson density operators $K$ and $G,$ respectively, can be expressed as $${\label{kontrKE}} {\mathrm{Tr}}\,K\stackrel{\wedge}{\Gamma^{n}_{k}}B ={\mathrm{Tr}}\,B{\mathrm{L}}^{k}_{n}K, \quad {\mathrm{Tr}}\,G\stackrel{\vee}{\Gamma^{n}_{k}}B ={\mathrm{Tr}}\,B{\mathrm{L}}^{k}_{n}G$$ (see [@KummerJMP67 Eqs. (1.7), (3.19)]), where the $k$-particle density operators ${\mathrm{L}}^{k}_{n}K$ and ${\mathrm{L}}^{k}_{n}G$ are *$k$-contractions of $K$* and *$G$* (see Definition \[kontr\]), called also *reduced density operators*. Such operators were investigated by Coleman [@ColemanRMP63], Garrod and Percus [@GarrodJMP64], and Kummer [@KummerJMP67]. In the present paper particular interest is taken in the case when $K$ and $G$ are *product density operators*, i.e. they are of the form $${\label{prod}} K =\frac{1}{{\mathrm{Tr}}\,\rho^{\wedge n}}\rho^{\wedge n}, \quad G =\frac{1}{{\mathrm{Tr}}\,\rho^{\vee n}}\rho^{\vee n},$$ where $\rho^{\wedge n} =A^{(n)}_{{\mathcal{H}}}\rho^{\otimes n}A^{(n)}_{{\mathcal{H}}},$ $\rho^{\vee n} =S^{(n)}_{{\mathcal{H}}}\rho^{\otimes n}S^{(n)}_{{\mathcal{H}}},$ and $\rho$ is a density operator of a single fermion or boson, respectively. The first objective of this paper is to find the recurrence and explicit formulae for ${\mathrm{L}}^{k}_{n}K$ and ${\mathrm{L}}^{k}_{n}G$ for $K$ and $G$ being, respectively, antisymmetric and symmetric products of identical trace class operators, including operators (\[prod\]). The explicit form of the operators ${\mathrm{L}}^{k}_{n}K$ and ${\mathrm{L}}^{k}_{n}G$ proves to be quite complex. However, they can be replaced by operators with simpler structure if only the limiting values of expectations (\[kontrKE\]), in the sense of the thermodynamic limit, are of interest. The second objective of this paper is to find such simpler forms of contractions ${\mathrm{L}}^{k}_{n}K$ and ${\mathrm{L}}^{k}_{n}G$ for product density operators (\[prod\]), equivalent to the complete expressions in the thermodynamic limit. The problems described above have been solved for $k=1,2$ by Kossakowski and Maćkowiak [@KossakowskiRMP86], and Maćkowiak [@MackowiakPR99]. The formulae they derived were exploited in calculations of the free energy density of large interacting $n$-fermion and $n$-boson systems [@KossakowskiRMP86; @MackowiakPR99], as well as in the perturbation expansion of the free energy density for the impurity Kondo Hamiltonian [@MackowiakPA97].
In the case of investigation of many-particle interactions of higher order [@VolovikWS92; @TarasewiczPC00; @MackowiakPC00; @SchneiderEL04], or higher order perturbation expansion terms of the free energy density, the expressions for $({\mathrm{Tr}}\,\rho^{\wedge n})^{-1}{\mathrm{L}}^{k}_{n}\rho^{\wedge n}$ and $({\mathrm{Tr}}\,\rho^{\vee n})^{-1}{\mathrm{L}}^{k}_{n}\rho^{\vee n}$ with $k\geq 3$ prove to be needed in the canonical and grand canonical ensemble approach, which is the physical motivation for the present paper. The main results of this paper are Theorems \[rek\], \[jawny\], \[glowne\], and \[zmiana\]. Preliminaries ============= [\[prelim\]]{} In this section notation and terminology are set up. Basic notation -------------- [\[oznacz\]]{} Let $({\mathcal{H}},{\left\langle\cdot,\cdot\right\rangle})$ be a separable Hilbert space over ${\mathbb{C}}$ or ${\mathbb{R}}$. The following notation is used in the sequel. $I$ – the identity operator on ${\mathcal{H}},$ ${\mathcal{B}}({\mathcal{H}})$ – the space of bounded linear operators on ${\mathcal{H}}$ with the operator norm ${\left\Vert\cdot\right\Vert}$, ${\mathcal{T}}({\mathcal{H}})$ – the space of trace class operators on ${\mathcal{H}}$ with the trace norm ${\mathrm{Tr}}{\left\vert\cdot\right\vert},$ ${\mathcal{B}}^{\ast}({\mathcal{H}})$ – the space of bounded selfadjoint operators on ${\mathcal{H}}$, ${\mathcal{B}}^{\ast}_{\geq 0}({\mathcal{H}})$ – the set of nonnegative definite bounded selfadjoint operators on ${\mathcal{H}}$, ${\mathcal{D}}({\mathcal{H}})$ – the set of density operators (matrices) on ${\mathcal{H}},$ i.e. $${\mathcal{D}}({\mathcal{H}}) ={\left\{D\in{\mathcal{T}}({\mathcal{H}}){\;\vert\;\;\;}D =D^{\ast}, D\geq 0, {\mathrm{Tr}}\,D=1\right\}}.$$ Set ${\mathcal{H}}^{\otimes n} =\underbrace{{\mathcal{H}}\otimes\cdots\otimes{\mathcal{H}}}_{n}$ and denote by $S_{n}$ the group of permutations of the set ${\left\{1,\ldots,n\right\}}.$ Let $A^{(n)}_{{\mathcal{H}}},S^{(n)}_{{\mathcal{H}}} \in{\mathcal{B}}({\mathcal{H}}^{\otimes n})$ be the projectors such that $$A^{(n)}_{{\mathcal{H}}}(\psi_{1}\otimes\cdots\otimes\psi_{n}) =\frac{1}{n!}\sum_{\pi\in S_{n}}{\mathrm{sgn}}\,\pi\,\psi_{\pi(1)}\otimes \cdots\otimes\psi_{\pi(n)},$$ $$S^{(n)}_{{\mathcal{H}}}(\psi_{1}\otimes\cdots\otimes\psi_{n}) =\frac{1}{n!}\sum_{\pi\in S_{n}}\psi_{\pi(1)}\otimes \cdots\otimes\psi_{\pi(n)}$$ for every $\psi_{1},\ldots,\psi_{n}\in{\mathcal{H}}.$ The closed linear subspaces ${\mathcal{H}}^{\wedge n} =A^{(n)}_{{\mathcal{H}}}{\mathcal{H}}^{\otimes n}$ and ${\mathcal{H}}^{\vee n} =S^{(n)}_{{\mathcal{H}}}{\mathcal{H}}^{\otimes n}$ of ${\mathcal{H}}^{\otimes n}$ are called the *antisymmetric* and *symmetric subspace*, respectively. The *antisymmetric* and *symmetric product* of operators $B\in{\mathcal{B}}\left({\mathcal{H}}^{\otimes k}\right),$ $C\in{\mathcal{B}}({\mathcal{H}}^{\otimes m})$ are defined as $B\wedge C =A^{(k+m)}_{{\mathcal{H}}}(B\otimes C)A^{(k+m)}_{{\mathcal{H}}}$ and $B\vee C =S^{(k+m)}_{{\mathcal{H}}}(B\otimes C)S^{(k+m)}_{{\mathcal{H}}},$ respectively. 
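The objects defined above are easy to realize numerically when ${\mathcal{H}}$ is finite dimensional, which also provides a quick consistency check of the recurrence formulae proved below. The following Python/NumPy sketch is an added illustration, not part of the paper, and works with ${\mathcal{H}}={\mathbb{C}}^d$ for small $d$: it constructs the projectors $A^{(n)}_{{\mathcal{H}}}$ and $S^{(n)}_{{\mathcal{H}}}$, the powers $\rho^{\wedge n}$ and $\rho^{\vee n}$ introduced in the next paragraph, the contraction ${\mathrm{L}}^{k}_{n}$ of Definition \[kontr\] realized as a partial trace over the last $n-k$ tensor factors, and then verifies Eqs. (\[rek3\]) and (\[rek4\]) of Theorem \[rek\] for $n=3$.

```python
import numpy as np
from itertools import permutations, product
from math import factorial

d, n = 3, 3                      # single-particle dimension and particle number

def perm_sign(p):
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def perm_operator(p, d):
    """Operator P_pi on (C^d)^{otimes n}: P_pi(e_{i_1} x ... x e_{i_n}) = e_{i_{pi(1)}} x ... x e_{i_{pi(n)}}."""
    m = len(p)
    P = np.zeros((d ** m, d ** m))
    for idx in product(range(d), repeat=m):
        col = np.ravel_multi_index(idx, (d,) * m)
        row = np.ravel_multi_index(tuple(idx[p[k]] for k in range(m)), (d,) * m)
        P[row, col] = 1.0
    return P

def projector(m, d, antisymmetric=True):
    """A^(m) (antisymmetric=True) or S^(m) (antisymmetric=False) on (C^d)^{otimes m}."""
    return sum((perm_sign(p) if antisymmetric else 1.0) * perm_operator(p, d)
               for p in permutations(range(m))) / factorial(m)

def tensor_power(rho, m):
    out = np.eye(1)
    for _ in range(m):
        out = np.kron(out, rho)
    return out

def contract(K, d, m, k):
    """L^k_m K: partial trace over the last m-k tensor factors (Definition [kontr])."""
    T = K.reshape((d,) * (2 * m))
    for _ in range(m - k):
        T = np.trace(T, axis1=T.ndim // 2 - 1, axis2=T.ndim - 1)
    return T.reshape(d ** k, d ** k)

rng = np.random.default_rng(0)
M = rng.standard_normal((d, d))
rho = M @ M.T
rho /= np.trace(rho)             # a single-particle density matrix

A2, A3 = projector(2, d, True), projector(3, d, True)
S2, S3 = projector(2, d, False), projector(3, d, False)
rho_w2, rho_w3 = A2 @ tensor_power(rho, 2) @ A2, A3 @ tensor_power(rho, 3) @ A3
rho_v2, rho_v3 = S2 @ tensor_power(rho, 2) @ S2, S3 @ tensor_power(rho, 3) @ S3

# Eq. (rek3):  n L^1_n rho^{wedge n} = (Tr rho^{wedge(n-1)}) rho - (n-1)(L^1_{n-1} rho^{wedge(n-1)}) rho
lhs_f = 3 * contract(rho_w3, d, 3, 1)
rhs_f = np.trace(rho_w2) * rho - 2 * contract(rho_w2, d, 2, 1) @ rho
# Eq. (rek4):  n L^1_n rho^{vee n}  = (Tr rho^{vee(n-1)}) rho + (n-1)(L^1_{n-1} rho^{vee(n-1)}) rho
lhs_b = 3 * contract(rho_v3, d, 3, 1)
rhs_b = np.trace(rho_v2) * rho + 2 * contract(rho_v2, d, 2, 1) @ rho

print("fermionic recurrence residual:", np.max(np.abs(lhs_f - rhs_f)))
print("bosonic recurrence residual :", np.max(np.abs(lhs_b - rhs_b)))
```

Both residuals printed at the end should be of the order of machine precision.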
It is assumed $B^{\wedge n} =\underbrace{B\wedge\ldots\wedge B}_{n},$ $B^{\vee n} =\underbrace{B\vee\ldots\vee B}_{n},$ and $B^{\wedge 1} =B^{\vee 1} =B.$ Clearly, if $B\in{\mathcal{B}}({\mathcal{H}})$ then $B^{\wedge n} =A^{(n)}_{{\mathcal{H}}}B^{\otimes n} =B^{\otimes n}A^{(n)}_{{\mathcal{H}}},$ $B^{\vee n} =S^{(n)}_{{\mathcal{H}}}B^{\otimes n} =B^{\otimes n}S^{(n)}_{{\mathcal{H}}},$ and if $B\in{\mathcal{B}}^{\ast}_{\geq 0}({\mathcal{H}})$ then $B^{\wedge n},B^{\vee n}\in{\mathcal{B}}^{\ast}_{\geq 0}({\mathcal{H}}^{\otimes n}).$ Set ${\mathbb{R}}_{+} =[0,+\infty)$ and $\overline{{\mathbb{R}}}_{+} ={\mathbb{R}}_+\cup{\left\{+\infty\right\}}.$ The product of measures $\mu,$ $\mu_1$ is denoted by $\mu\otimes\mu_{1}$ and $\mu^{\otimes n}$ stands for $\underbrace{\mu\otimes\cdots\otimes\mu}_{n}.$ In subsequent sections use is made of *product integral kernels*, described in Appendix \[sectjad\]. Contractions of operators ------------------------- [\[rozkontr\]]{} The definition and basic properties of contractions of operators are now recalled for the reader’s convenience. A discussion of these properties was carried out by Kummer [@KummerJMP67; @KummerJMP70]. Let ${\mathcal{H}}$ be a separable Hilbert space over the field ${\mathbb{K}}={\mathbb{C}}$ or ${\mathbb{R}}.$ [\[kontr\]]{} Let $k,n\in{\mathbb{N}},$ $k<n,$ and $K\in{\mathcal{T}}({\mathcal{H}}^{\otimes n}).$ Then the *$k$-contraction of $K$* is the operator ${\mathrm{L}}^{k}_{n}K\in{\mathcal{T}}({\mathcal{H}}^{\otimes k})$ such that $${\label{kontr1}} \forall_{C\in{\mathcal{B}}\left({\mathcal{H}}^{\otimes k}\right)} \colon \quad {\mathrm{Tr}}_{{\mathcal{H}}^{\otimes n}}(C\otimes{I}^{\otimes(n-k)})K ={\mathrm{Tr}}_{{\mathcal{H}}^{\otimes k}}C{\mathrm{L}}^{k}_{n}K.$$ It is also assumed ${\mathrm{L}}^{n}_{n}K =K.$ [\[popr\]]{} The operator ${\mathrm{L}}^{k}_{n}K$ always exists and is defined uniquely by Eq. (\[kontr1\]). ${\mathrm{L}}^{k}_{n}K$ is a partial trace of $K$ with respect to the component ${\mathcal{H}}^{\otimes (n-k)}$ of ${\mathcal{H}}^{\otimes n}={\mathcal{H}}^{\otimes k}\otimes{\mathcal{H}}^{\otimes (n-k)}.$ If ${\mathcal{H}}={\mathcal{H}}_{Y}{\mathrel{\mathop:}=}L^{2}(Y,\mu),$ where the measure $\mu$ is separable and $\sigma$-finite, and ${\mathcal{K}}$ is a product integral kernel of $K$ (see Appendix \[sectjad\]) then ${\mathrm{L}}^{k}_{n}K$ has an integral kernel ${\mathcal{K}}_{0}$ given by the formula from Appendix \[sectjad\], according to Lemma \[parttr\] and Corollary \[red\]. Under the assumptions of Definition \[kontr\] one has ${\mathrm{Tr}}_{{\mathcal{H}}^{\otimes k}}{\mathrm{L}}^{k}_{n}K ={\mathrm{Tr}}_{{\mathcal{H}}^{\otimes n}}K,$ and if $p\in{\mathbb{N}},$ $k<p<n,$ then ${\mathrm{L}}^{k}_{p}\left({\mathrm{L}}^{p}_{n}K\right) ={\mathrm{L}}^{k}_{n}K.$ Moreover, if $K\in{\mathcal{B}}^{\ast}({\mathcal{H}}^{\otimes n})$ then ${\mathrm{L}}^{k}_{n}K\in{\mathcal{B}}^{\ast}({\mathcal{H}}^{\otimes k}),$ and if $K\in{\mathcal{B}}^{\ast}_{\geq 0}({\mathcal{H}}^{\otimes n})$ then ${\mathrm{L}}^{k}_{n}K\in{\mathcal{B}}^{\ast}_{\geq 0}({\mathcal{H}}^{\otimes k}).$ Contractions of density operators are called *reduced density operators*. Contractions preserve the Fermi and the Bose-Einstein statistics of the contracted operator, i.e.
for $K\in A^{(n)}_{{\mathcal{H}}} {\mathcal{T}}({\mathcal{H}}^{\otimes n})A^{(n)}_{{\mathcal{H}}}$ and $G\in S^{(n)}_{{\mathcal{H}}}{\mathcal{T}}({\mathcal{H}}^{\otimes n}) S^{(n)}_{{\mathcal{H}}}$ one has ${\mathrm{L}}^{k}_{n}K\in A^{(k)}_{{\mathcal{H}}} {\mathcal{T}}({\mathcal{H}}^{\otimes k})A^{(k)}_{{\mathcal{H}}}$ and ${\mathrm{L}}^{k}_{n}G\in S^{(k)}_{{\mathcal{H}}}{\mathcal{T}}({\mathcal{H}}^{\otimes k})S^{(k)}_{{\mathcal{H}}}.$ For such $K$ and $G$ Eqs. (\[kontrKE\]) hold. The following theorem is a part of Coleman’s theorem [@ColemanRMP63; @KummerJMP67]. [\[colemferm\]]{} Let $n\in{\mathbb{N}},$ $n\geq 2.$ For every ($n$-fermion) density operator $D\in{\mathcal{D}}\left({\mathcal{H}}^{\otimes n}\right),$ $D =A^{(n)}_{{\mathcal{H}}}DA^{(n)}_{{\mathcal{H}}},$ one has ${\left\Vert{\mathrm{L}}^{1}_{n}D\right\Vert} \leq\frac{1}{n}{\left\Vert D\right\Vert}.$ Recurrence and explicit formulae for contractions of products of trace class operators ====================================================================================== [\[potkontr\]]{} In this section recurrence and explicit formulae for contractions of antisymmetric and symmetric powers of single-particle operators are derived. In the whole section use is made of the Hilbert space ${\mathcal{H}}_{Y} {\mathrel{\mathop:}=}L^{2}(Y,\mu)$ over the field ${\mathbb{K}}={\mathbb{C}}$ or ${\mathbb{R}},$ where the measure $\mu$ is separable and $\sigma$-finite. The following theorem in the case of $k=1,2$ was proved in [@KossakowskiRMP86; @MackowiakPR99]. [\[rek\]]{} Let $\rho\in{\mathcal{T}}({\mathcal{H}}_{Y}).$ If $k,n\in{\mathbb{N}},$ $1<k<n,$ then $$\begin{aligned} {\label{rek1}} \nonumber {\tbinom{n}{k}}{\mathrm{L}}^{k}_{n}\rho^{\wedge n} & ={\tbinom{n-1}{k-1}} \left({\mathrm{L}}^{k-1}_{n-1}\rho^{\wedge(n-1)}\right)\wedge \rho \\ &\quad -{\tbinom{n-1}{k}}\left({\mathrm{L}}^{k}_{n-1}\rho^{\wedge(n-1)}\right) \left({I}^{\otimes(k-1)}\otimes \rho\right)A^{(k)}_{{\mathcal{H}}_{Y}}, \end{aligned}$$ $$\begin{aligned} {\label{rek2}} \nonumber {\tbinom{n}{k}}{\mathrm{L}}^{k}_{n}\rho^{\vee n} & ={\tbinom{n-1}{k-1}} \left({\mathrm{L}}^{k-1}_{n-1}\rho^{\vee(n-1)}\right)\vee \rho \\ &\quad +{\tbinom{n-1}{k}}\left({\mathrm{L}}^{k}_{n-1}\rho^{\vee(n-1)}\right) \left({I}^{\otimes(k-1)}\otimes \rho\right)S^{(k)}_{{\mathcal{H}}_{Y}}, \end{aligned}$$ and if $n\in{\mathbb{N}},$ $n\geq 2,$ then $${\label{rek3}} n{\mathrm{L}}^{1}_{n}\rho^{\wedge n} =\left({\mathrm{Tr}}\,\rho^{\wedge(n-1)}\right)\rho -(n-1)\left({\mathrm{L}}^{1}_{n-1}\rho^{\wedge(n-1)}\right)\rho,$$ $${\label{rek4}} n{\mathrm{L}}^{1}_{n}\rho^{\vee n} =\left({\mathrm{Tr}}\,\rho^{\vee(n-1)}\right)\rho +(n-1)\left({\mathrm{L}}^{1}_{n-1}\rho^{\vee(n-1)}\right)\rho.$$ Let $\varrho\colon Y\times Y\to{\mathbb{K}}$ be a product integral kernel of $\rho.$ For every $m\in{\mathbb{N}}$ define the mapping $\varrho^{\wedge m}\colon Y^m\times Y^m\to{\mathbb{K}}$ by the formula $$\varrho^{\wedge m} \left( \begin{array}{c} x_1,\ldots,x_m \\ y_1,\ldots,y_m \end{array} \right) =\det\left[ \begin{array}{ccc} \varrho(x_1,y_1)&\cdots&\varrho(x_1,y_m) \\ \vdots&\cdots&\vdots\\ \varrho(x_m,y_1)&\cdots&\varrho(x_m,y_m) \end{array} \right].$$ Then the mapping ${\mathcal{K}}\colon Y^n\times Y^n\to{\mathbb{K}}$ given by $${\mathcal{K}}(x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}) =\frac{1}{n!}\varrho^{\wedge n} \left( \begin{array}{c} x_1,\ldots,x_n\\ y_1,\ldots,y_n \end{array} \right)$$ is a product integral kernel of $\rho^{\wedge n} =A^{(n)}_{{\mathcal{H}}_{Y}} \rho^{\otimes n}.$ Eq. (\[rek1\])
will be first proved for $n>k+1.$ In view of Remark \[popr\], an integral kernel ${\mathcal{L}}\colon Y^k\times Y^k\to{\mathbb{K}}$ of ${\tbinom{n}{k}}{\mathrm{L}}^{k}_{n}\rho^{\wedge n}$ can be given by $${\mathcal{L}}(x^{\prime},y^{\prime}) ={\tbinom{n}{k}}\int_{Y^{n-k}} {\mathcal{K}}(x^{\prime},x^{\prime\prime},y^{\prime},x^{\prime\prime}) \,{\mathrm{d}}\mu^{\otimes(n-k)}(x^{\prime\prime})$$ for $(x^{\prime},y^{\prime})\in Y^{k}\times Y^{k}.$ Performing $k!$ permutations of the first $k$ rows and $k!$ permutations of the first $k$ columns of the determinant defining ${\mathcal{K}}$ and expanding that determinant with respect to the $k$th column one obtains $$\begin{aligned} {\label{rek6}} \nonumber & {\mathcal{L}}(x_{1},\ldots,x_{k},y_{1},\ldots,y_{k})\\ \nonumber & ={\tbinom{n}{k}}\frac{1}{n!}\frac{1}{(k!)^{2}} \sum_{\pi,\sigma\in S_{k}} {\mathrm{sgn}}\,\pi\,{\mathrm{sgn}}\,\sigma \sum_{j=1}^{k}(-1)^{k+j} \int_{Y^{n-k}}\varrho(x_{\pi(j)},y_{\sigma(k)}) \\ \nonumber & \qquad \cdot \varrho^{\wedge(n-1)} \left( \begin{array}{c} x_{\pi(1)},\ldots,x_{\pi(j-1)},x_{\pi(j+1)}, \ldots,x_{\pi(k)},x_{k+1},\ldots,x_{n}\\ y_{\sigma(1)},\ldots,y_{\sigma(k-1)},x_{k+1},\ldots,x_{n} \end{array} \right) \\ \nonumber & \qquad {\mathrm{d}}\mu^{\otimes(n-k)}(x_{k+1},\ldots,x_{n}) \\ \nonumber & \quad +{\tbinom{n}{k}}\frac{1}{n!}\frac{1}{(k!)^{2}} \sum_{\pi,\sigma\in S_{k}} {\mathrm{sgn}}\,\pi\,{\mathrm{sgn}}\,\sigma \sum_{j=k+1}^{n}(-1)^{k+j} \int_{Y^{n-k}}\varrho(x_{j},y_{\sigma(k)}) \\ \nonumber & \qquad \cdot\varrho^{\wedge(n-1)} \left( \begin{array}{c} x_{\pi(1)},\ldots,x_{\pi(k)},x_{k+1}, \ldots,x_{j-1},x_{j+1},\ldots,x_{n}\\ y_{\sigma(1)},\ldots,y_{\sigma(k-1)},x_{k+1},\ldots,x_{n} \end{array} \right) \\ & \qquad {\mathrm{d}}\mu^{\otimes(n-k)}(x_{k+1},\ldots,x_{n}). \end{aligned}$$ Consider the first term on the r.h.s. of Eq. (\[rek6\]). In all summands of $\sum_{j=1}^{k}$ except the last one the $(k-1)$th row of the determinant (containing the variable $x_{\pi(k)}$) can be shifted into the $j$th position, changing thereby the sign of the determinant by $(-1)^{(k-2)-(j-1)} =(-1)^{-k-j+1}.$ Then the first term of sum (\[rek6\]) assumes the form $$\begin{aligned} {\label{rek9}} \nonumber & {\tbinom{n}{k}}\frac{1}{n!}\frac{1}{(k!)^{2}} \sum_{\pi,\sigma\in S_{k}} {\mathrm{sgn}}\,\pi\,{\mathrm{sgn}}\,\sigma\sum_{j=1}^{k-1}(-1)^{k+j}(-1)^{-k-j+1} \int_{Y^{n-k}}\varrho(x_{\pi(j)},y_{\sigma(k)}) \\ \nonumber & \quad \cdot \varrho^{\wedge(n-1)} \left( \begin{array}{c} x_{\pi(1)},\ldots,x_{\pi(j-1)},x_{\pi(k)},x_{\pi(j+1)}, \ldots,x_{\pi(k-1)},x_{k+1},\ldots,x_{n} \\ y_{\sigma(1)},\ldots,y_{\sigma(k-1)},x_{k+1},\ldots,x_{n} \end{array}\right) \\ \nonumber & \quad {\mathrm{d}}\mu^{\otimes(n-k)}(x_{k+1},\ldots,x_{n}) \\ \nonumber & +{\tbinom{n}{k}}\frac{1}{n!}\frac{1}{(k!)^{2}} \sum_{\pi,\sigma\in S_{k}} {\mathrm{sgn}}\,\pi\,{\mathrm{sgn}}\,\sigma\,(-1)^{k+k} \int_{Y^{n-k}}\varrho(x_{\pi(k)},y_{\sigma(k)}) \\ & \quad \cdot \varrho^{\wedge(n-1)} \left( \begin{array}{c} x_{\pi(1)},\ldots,x_{\pi(k-1)},x_{k+1},\ldots,x_{n} \\ y_{\sigma(1)},\ldots,y_{\sigma(k-1)},x_{k+1},\ldots,x_{n} \end{array} \right) {\mathrm{d}}\mu^{\otimes(n-k)}(x_{k+1},\ldots,x_{n}). \end{aligned}$$ Let $T_{jk}\in S_{k}$ denote the transposition $j\leftrightarrow k$ for $j<k$ (then $(-1)^{k+j}(-1)^{-k-j+1} =(-1) ={\mathrm{sgn}}\,T_{jk}$) and the identity permutation for $j=k$ (with ${\mathrm{sgn}}\,T_{kk}=1$).
Expression  can be written as $$\begin{aligned} & \sum_{j=1}^{k} {\tbinom{n}{k}}\frac{1}{n!}\frac{1}{(k!)^{2}} \sum_{\pi,\sigma\in S_{k}}({\mathrm{sgn}}\,\pi\,{\mathrm{sgn}}\,T_{jk})\,{\mathrm{sgn}}\,\sigma \int_{Y^{n-k}}\varrho(x_{(\pi\circ T_{jk})(k)},y_{\sigma(k)}) \\ & \quad \cdot \varrho^{\wedge(n-1)} \left( \begin{array}{c} x_{(\pi\circ T_{jk})(1)}, \ldots,x_{(\pi\circ T_{jk})(k-1)},x_{k+1},\ldots,x_{n} \\ y_{\sigma(1)},\ldots,y_{\sigma(k-1)},x_{k+1},\ldots,x_{n} \end{array}\right) \\ &\quad {\mathrm{d}}\mu^{\otimes(n-k)}(x_{k+1},\ldots,x_{n}) \end{aligned}$$ $$\begin{aligned} {\label{rek10}} \nonumber & ={\tbinom{n-1}{k-1}}\frac{1}{(k!)^{2}} \sum_{\tau,\sigma\in S_{k}} {\mathrm{sgn}}\,\tau\,{\mathrm{sgn}}\,\sigma \varrho(x_{\tau(k)},y_{\sigma(k)})\int_{Y^{n-k}}\frac{1}{(n-1)!} \\ & \quad \cdot \varrho^{\wedge(n-1)} \left( \begin{array}{c} x_{\tau(1)},\ldots,x_{\tau(k-1)},x_{k+1},\ldots,x_{n} \\ y_{\sigma(1)},\ldots,y_{\sigma(k-1)},x_{k+1},\ldots,x_{n} \end{array} \right) {\mathrm{d}}\mu^{\otimes(n-k)}(x_{k+1},\ldots,x_{n}). \end{aligned}$$ The function ${\mathcal{P}}_1\colon Y^k\times Y^k\to{\mathbb{K}},$ such that ${\mathcal{P}}_{1}(x_{1},\ldots,x_{k},y_{1},\ldots,y_{k})$ is equal to expression , is an integral kernel of the operator $${\tbinom{n-1}{k-1}}\left({\mathrm{L}}^{k-1}_{n-1}\rho^{\wedge(n-1)}\right)\wedge \rho,$$ which appears on the r.h.s. of Eq. . Consider now the second term of the sum on the r.h.s. of Eq. . One can change the indices of the integral variables $x_{k+1},\ldots,x_{j}$ in all summands of $\sum_{j=k+1}^{n}$ except the first one, according to the rule $x_{j}\to x_{k+1}\to x_{k+2} \to\cdots\to x_{j}$ for the summand, and simultaneously change the order of the columns of the determinant inversely (which changes the sign by $(-1)^{(j-1)-k} =(-1)^{(k+1)-j}$). The resulting sum $\sum_{j=k+1}^{n}$ then contains $n-k$ terms identical to the one with $j=k+1.$ Thus the second term of sum  equals $$\begin{aligned} &-(n-k){\tbinom{n}{k}}\frac{1}{n!}\frac{1}{(k!)^{2}} \sum_{\pi,\sigma\in S_{k}} {\mathrm{sgn}}\,\pi\,{\mathrm{sgn}}\,\sigma\int_{Y^{n-k}} \varrho(x_{k+1},y_{\sigma(k)}) \\ &\qquad \cdot \varrho^{\wedge(n-1)} \left( \begin{array}{c} x_{\pi(1)},\ldots,x_{\pi(k)},x_{k+2},\ldots,x_{n} \\ y_{\sigma(1)},\ldots,y_{\sigma(k-1)},x_{k+1},\ldots,x_{n} \end{array} \right) {\mathrm{d}}\mu^{\otimes(n-k)}(x_{k+1},\ldots,x_{n}). \end{aligned}$$ $$\begin{aligned} {\label{rek8}} &\nonumber =-{\tbinom{n-1}{k}}\frac{1}{k!}\sum_{\sigma\in S_{k}} {\mathrm{sgn}}\,\sigma\int_{Y}\varrho(x_{k+1},y_{\sigma(k)}) \left(\int_{Y^{n-1-k}}\frac{1}{(n-1)!} \right. \\ &\nonumber \qquad \cdot \varrho^{\wedge(n-1)} \left( \begin{array}{c} x_{1},\ldots,x_{k},x_{k+2},\ldots,x_{n} \\ y_{\sigma(1)},\ldots,y_{\sigma(k-1)},x_{k+1},\ldots,x_{n} \end{array} \right) \\ &\qquad \left. {\mathrm{d}}\mu^{\otimes(n-1-k)}(x_{k+2},\ldots,x_{n})\right) {\mathrm{d}}\mu(x_{k+1}). \end{aligned}$$ The function ${\mathcal{P}}_2\colon Y^k\times Y^k\to{\mathbb{K}},$ such that ${\mathcal{P}}_{2}(x_{1},\ldots,x_{k},y_{1},\ldots,y_{k})$ is equal to expression , is an integral kernel of the operator $$-{\tbinom{n-1}{k}}\left({\mathrm{L}}^{k}_{n-1}\rho^{\wedge(n-1)}\right) \left({I}^{\otimes(k-1)}\otimes \rho\right)A^{(k)}_{{\mathcal{H}}_{Y}},$$ which occurs on the r.h.s. of Eq. . One concludes that the kernel ${\mathcal{L}}$ of the operator on the l.h.s. of Eq.  is equal to the kernel ${\mathcal{P}}_{1}+{\mathcal{P}}_{2}$ of the operator on the r.h.s. of Eq. , which proves the equality of both operators. The proof of Eq. 
for $n=k+1$ and the proof of Eq.  proceed analogously. Similarly, the proof of Eqs. , is accomplished by changing the product $\wedge$ into $\vee$ and replacing determinants in all formulae by permanents, defined for every complex matrix $A=[a_{i,j}]_{i,j=1}^{m}$ as $${\mathrm{per}}A =\sum_{\pi\in S_{m}}a_{\pi(1),1} \cdots a_{\pi(m),m}.$$ Notice that signs of permutations are omitted in this case, as are the multipliers $\pm 1$ in the Laplace expansions. [\[komkontr\]]{} Let $k,m\in{\mathbb{N}},$ $1<k<m,$ $\rho\in{\mathcal{T}}\left({\mathcal{H}}_{Y}\right),$ $j_{k}\in{\left\{k,\ldots,m\right\}},$ and $$R{\mathrel{\mathop:}=}\sum_{j_{k-1}=k-1}^{j_{k}-1} \sum_{j_{k-2}=k-2}^{j_{k-1}-1} \ldots\sum_{j_{1}=1}^{j_{2}-1}\rho^{j_{1}} \otimes \rho^{j_{2}-j_{1}}\otimes\cdots \otimes \rho^{j_{k}-j_{k-1}}$$ (for $k=2$ the only summation index is $j_{k-1}=j_{1}).$ Then $A^{(k)}_{{\mathcal{H}}_{Y}}R =RA^{(k)}_{{\mathcal{H}}_{Y}}$ and $S^{(k)}_{{\mathcal{H}}_{Y}}R =RS^{(k)}_{{\mathcal{H}}_{Y}}.$ The proof of the above lemma consists in demonstrating the invariance of $R$ under permutations of factors in the tensor products. To this end it suffices to observe that $R$ is invariant under transpositions of neighbouring factors. [\[pomoc\]]{} Let $\rho\in{\mathcal{T}}({\mathcal{H}}_{Y}),$ $\xi^{\wedge}_{s} {\mathrel{\mathop:}=}{\mathrm{Tr}}\,\rho^{\wedge s},$ $\xi^{\vee}_{s} {\mathrel{\mathop:}=}{\mathrm{Tr}}\,\rho^{\vee s}$ for $s\in{\mathbb{N}},$ $\xi^{\wedge}_{0} {\mathrel{\mathop:}=}1,$ $\xi^{\vee}_{0} {\mathrel{\mathop:}=}1,$ and $$\Pi^{\wedge p}_{m}(\rho) {\mathrel{\mathop:}=}\sum_{j_{p}=p}^{m}\sum_{j_{p-1}=p-1}^{j_{p}-1} \ldots\sum_{j_{1}=1}^{j_{2}-1} {\xi}^{\wedge}_{m-j_{p}}(-1)^{p+j_{p}}\rho^{j_{1}} \wedge \rho^{j_{2}-j_{1}}\wedge\ldots\wedge \rho^{j_{p}-j_{p-1}},$$ $$\Pi^{\vee p}_{m}(\rho) {\mathrel{\mathop:}=}\sum_{j_{p}=p}^{m}\sum_{j_{p-1}=p-1}^{j_{p}-1} \ldots\sum_{j_{1}=1}^{j_{2}-1} {\xi}^{\vee}_{m-j_{p}}\rho^{j_{1}} \vee \rho^{j_{2}-j_{1}}\vee\cdots \vee \rho^{j_{p}-j_{p-1}}$$ for $p,m\in{\mathbb{N}},$ $p\leq m.$ (For $p=1$ the only summation index is $j_{1}$ and the summation runs over the operators $\rho^{j_{1}}.$) If $2\leq p<m$ then $${\label{pomoc1}} \Pi^{\wedge p}_{m}(\rho) =\left(\Pi^{\wedge(p-1)}_{m-1}(\rho)\right)\wedge\rho -\left(\Pi^{\wedge p}_{m-1}(\rho)\right) ({I}^{\otimes(p-1)}\otimes \rho)A^{(p)}_{{\mathcal{H}}_{Y}}$$ and $${\label{pomoc2}} \Pi^{\vee p}_{m}(\rho) =\left(\Pi^{\vee (p-1)}_{m-1}(\rho)\right)\vee \rho +\left(\Pi^{\vee p}_{m-1}(\rho)\right) ({I}^{\otimes(p-1)}\otimes \rho)S^{(p)}_{{\mathcal{H}}_{Y}}.$$ Eq. 
will be first proved for $p>2.$ One has $$\begin{aligned} & \Pi^{\wedge p}_{m}(\rho) =\xi^{\wedge}_{m-p}\rho\wedge\ldots\wedge \rho +\sum_{l_{p}=p}^{m-1}\sum_{l_{p-1}=p-1}^{l_{p}} \sum_{l_{p-2}=p-2}^{l_{p-1}-1}\ldots \sum_{l_{1}=1}^{l_{2}-1} \\ &\qquad {\xi}^{\wedge}_{m-l_{p}-1}(-1)^{p+l_{p}+1}\rho^{l_{1}} \wedge \rho^{l_{2}-l_{1}}\wedge\ldots\wedge \rho^{l_{p-1}-l_{p-2}} \wedge \rho^{l_{p}-l_{p-1}+1} \end{aligned}$$ $$\begin{aligned} {\label{pomoc4}} \nonumber & =\xi^{\wedge}_{m-p}\rho\wedge\ldots\wedge \rho +\sum_{l_{p}=p}^{m-1}\sum_{l_{p-1}=p-1}^{l_{p}-1} \sum_{l_{p-2}=p-2}^{l_{p-1}-1}\ldots \sum_{l_{1}=1}^{l_{2}-1} \\ &\nonumber \qquad {\xi}^{\wedge}_{m-l_{p}-1} (-1)^{p+l_{p}+1}\rho^{l_{1}} \wedge \rho^{l_{2}-l_{1}}\wedge\ldots\wedge \rho^{l_{p-1}-l_{p-2}} \wedge \rho^{l_{p}-l_{p-1}+1} \\ \nonumber &\quad +\sum_{l_{p}=p}^{m-1} \sum_{l_{p-2}=p-2}^{l_{p}-1}\sum_{l_{p-3}=p-3}^{l_{p-2}-1} \ldots \sum_{l_{1}=1}^{l_{2}-1} \\ &\qquad {\xi}^{\wedge}_{m-l_{p}-1}(-1)^{p+l_{p}+1}\rho^{l_{1}} \wedge \rho^{l_{2}-l_{1}}\wedge\ldots\wedge \rho^{l_{p-2}-l_{p-3}} \wedge \rho^{l_{p}-l_{p-2}}\wedge \rho. \end{aligned}$$ The first and the third term after the last of equalities  yield $$\begin{aligned} {\label{pomoc5}} \nonumber & \sum_{j_{p-1}=p-1}^{m-1} \sum_{j_{p-2}=p-2}^{j_{p-1}-1}\ldots \sum_{j_{1}=1}^{j_{2}-1} \\ &\quad \left({\xi}^{\wedge}_{(m-1)-j_{p-1}} (-1)^{(p-1)+j_{p-1}}\rho^{j_{1}} \wedge \rho^{j_{2}-j_{1}}\wedge \ldots\wedge \rho^{j_{p-1}-j_{p-2}}\right)\wedge \rho \end{aligned}$$ for $l_{p}\equiv j_{p-1},$ $l_{p-2}\equiv j_{p-2},\ldots,l_{1}\equiv j_{1}.$ By Lemma \[komkontr\], the second term after the last of equalities  equals $${\label{pomoc6}} -\left(\Pi^{\wedge p}_{m-1}(\rho)\right) ({I}^{\otimes(p-1)}\otimes \rho)A^{(p)}_{{\mathcal{H}}_{Y}}.$$ The sum of expressions  and  is equal to the r.h.s. of Eq.  for $p>2.$ After simplifications the proof also applies to the case of $p=2.$ The proof of Eq.  is analogous to that of Eq. . The next theorem provides the explicit form of contractions of product operators. The proof for $k=1,2$ was given in [@KossakowskiRMP86; @MackowiakPR99]. The author of [@MackowiakPR99] emphasized that formula  for $k=2$ was derived by S. Pruski in 1978. 
[\[jawny\]]{} Let $k,n\in{\mathbb{N}},$ $k<n,$ $\rho\in{\mathcal{T}}\left({\mathcal{H}}_{Y}\right),$ $\xi^{\wedge}_{s} {\mathrel{\mathop:}=}{\mathrm{Tr}}\,\rho^{\wedge s},$ $\xi^{\vee}_{s} {\mathrel{\mathop:}=}{\mathrm{Tr}}\,\rho^{\vee s}$ for $s\in{\mathbb{N}},$ and $\xi^{\wedge}_{0} {\mathrel{\mathop:}=}1,$ $\xi^{\vee}_{0} {\mathrel{\mathop:}=}1.$ Then $$\begin{aligned} {\label{jawny1th}} &\nonumber {\tbinom{n}{k}}{\mathrm{L}}^{k}_{n}\rho^{\wedge n} \\ &\nonumber =\sum_{j_{k}=k}^{n}\sum_{j_{k-1}=k-1}^{j_{k}-1} \ldots\sum_{j_{1}=1}^{j_{2}-1} {\xi}^{\wedge}_{n-j_{k}}(-1)^{k+j_{k}}\rho^{j_{1}} \wedge \rho^{j_{2}-j_{1}}\wedge \ldots \wedge \rho^{j_{k}-j_{k-1}} \\ \nonumber & =\sum_{i_{1}=1}^{n-(k-1)} \sum_{i_{2}=1}^{n-i_{1}-(k-2)}\ldots \sum_{i_{k-1}=1}^{n-i_{1}-\cdots-i_{k-2}-1} \sum_{i_{k}=1}^{n-i_{1}-\cdots-i_{k-1}} \\ &\quad {\xi}^{\wedge}_{n-i_{1}-\cdots-i_{k}} (-1)^{k+i_{1}+\cdots+i_{k}}\rho^{i_{1}}\wedge\ldots\wedge \rho^{i_{k}} \end{aligned}$$ and $$\begin{aligned} {\label{jawny2th}} \nonumber & {\tbinom{n}{k}}{\mathrm{L}}^{k}_{n}\rho^{\vee n} \\ \nonumber & =\sum_{j_{k}=k}^{n} \sum_{j_{k-1}=k-1}^{j_{k}-1} \ldots\sum_{j_{1}=1}^{j_{2}-1} {\xi}^{\vee}_{n-j_{k}}\rho^{j_{1}}\vee \rho^{j_{2}-j_{1}}\vee \cdots \vee \rho^{j_{k}-j_{k-1}} \\ &\nonumber =\sum_{i_{1}=1}^{n-(k-1)}\sum_{i_{2}=1}^{n-i_{1}-(k-2)} \ldots \sum_{i_{k-1}=1}^{n-i_{1}-\cdots-i_{k-2}-1} \sum_{i_{k}=1}^{n-i_{1}-\cdots-i_{k-1}} \\ &\quad {\xi}^{\vee}_{n-i_{1}-\cdots-i_{k}} \rho^{i_{1}}\vee\cdots\vee \rho^{i_{k}}. \end{aligned}$$ (For $k=1$ the only summation indices are $j_1$ and $i_1$ and the summation runs over the operators $\rho^{j_1}$ and $\rho^{i_1}$, respectively.) For every $p,m\in{\mathbb{N}},$ $p\leq m,$ let $\Pi^{\wedge p}_{m}(\rho)$ be defined as in Lemma \[pomoc\]. Then the first of equalities  can be written as $${\label{jawny1}} {\tbinom{n}{k}}{\mathrm{L}}^{k}_{n}\rho^{\wedge n} =\Pi^{\wedge k}_{n}(\rho).$$ The proof of Eq.  
will be carried out by (double) induction with respect to $k$ and, for fixed $k,$ with respect to $n>k.$ $1^{\circ}.$ ($k=1$) This part of the proof is by induction with respect to $n>1.$ a) ($n=2$) According to Theorem \[rek\], $2{\mathrm{L}}^{1}_{2}\rho^{\wedge 2} =\left({\mathrm{Tr}}\,\rho\right)\rho-\rho^{2} =\Pi^{\wedge 1}_{2}(\rho).$ b) Assuming validity of formula  (with $k=1)$ for $n\in{\left\{2,\ldots,m-1\right\}},$ where $m\in{\mathbb{N}},$ $m>2,$ its validity will be shown for $n=m.$ One has $$\Pi^{\wedge 1}_{m}(\rho) =\xi^{\wedge}_{m-1}\rho +\sum_{j_{1}=2}^{m}\xi^{\wedge}_{m-j_{1}} (-1)^{1+j_{1}}\rho^{j_{1}} =\xi^{\wedge}_{m-1}\rho -\left(\Pi^{\wedge 1}_{m-1}(\rho)\right)\rho.$$ Thus, according to the inductive hypothesis for $n\in{\left\{2,\ldots,m-1\right\}},$ $$\Pi^{\wedge 1}_{m}(\rho) =\xi^{\wedge}_{m-1}\rho -(m-1)\left({\mathrm{L}}^{1}_{m-1}\rho^{\wedge(m-1)}\right)\rho,$$ which, in view of Theorem \[rek\], yields ${\tbinom{m}{1}}{\mathrm{L}}^{1}_{m}\rho^{\wedge m} =\Pi^{\wedge 1}_{m}(\rho).$ $2^{\circ}.$ Assuming validity of formula  for $k\in{\left\{1,\ldots,p-1\right\}}$ (and every $n>k),$ where $p\in{\mathbb{N}},$ $p>1,$ its validity will be shown for $k=p.$ For arbitrarily fixed $p$ the proof will be carried out by induction with respect to $n>p.$ a) ($n=p+1$) By the inductive hypothesis with respect to $k$ and Lemma \[pomoc\], $$\Pi^{\wedge p}_{p+1}(\rho) ={\tbinom{(p+1)-1}{p-1}} \left({\mathrm{L}}^{p-1}_{(p+1)-1}\rho^{\wedge((p+1)-1)}\right)\wedge \rho -\rho^{\wedge p}({I}^{\otimes(p-1)}\otimes \rho)A^{(p)}_{{\mathcal{H}}_{Y}},$$ hence $\displaystyle {\tbinom{p+1}{p}}{\mathrm{L}}^{p}_{p+1}\rho^{\wedge (p+1)} =\Pi^{\wedge p}_{p+1}(\rho),$ according to Theorem \[rek\]. b) Assuming validity of formula  for $n\in{\left\{p+1,\ldots,m-1\right\}},$ where $k=p,$ $m\in{\mathbb{N}},$ $m>p+1,$ its validity will be shown for $n=m.$ By the inductive hypothesis for $k\in{\left\{1,\ldots,p-1\right\}}$ and Lemma \[pomoc\] one has $$\Pi^{\wedge p}_{m}(\rho) ={\tbinom{m-1}{p-1}} \left({\mathrm{L}}^{p-1}_{m-1}\rho^{\wedge(m-1)}\right)\wedge \rho -\left(\Pi^{\wedge p}_{m-1}(\rho)\right) ({I}^{\otimes(p-1)}\otimes \rho)A^{(p)}_{{\mathcal{H}}_{Y}}.$$ According to the inductive hypothesis for $n\in{\left\{p+1,\ldots,m-1\right\}}$ one thus obtains $$\begin{gathered} \Pi^{\wedge p}_{m}(\rho) ={\tbinom{m-1}{p-1}}\left({\mathrm{L}}^{p-1}_{m-1}\rho^{\wedge(m-1)}\right) \wedge \rho \\ -{\tbinom{m-1}{p}}\left({\mathrm{L}}^{p}_{m-1}\rho^{\wedge(m-1)}\right) \left({I}^{\otimes(p-1)}\otimes \rho\right)A^{(p)}_{{\mathcal{H}}_{Y}}, \end{gathered}$$ which, in view of Theorem \[rek\], yields ${\tbinom{m}{p}}{\mathrm{L}}^{p}_{m}\rho^{\wedge m} =\Pi^{\wedge p}_{m}(\rho).$ This completes the inductive proof for Eq.  with respect to $n>p$ and with respect to $k.$ Now turn to the second of equalities . For $k=1$ it is identity. Let $k\geq 2.$ Setting $j_{1}=i_{1},$ $j_{2}=i_{1}+i_{2},$ …, $j_{k}=i_{1}+\cdots+i_{k}$ or, equivalently, $i_{1}=j_{1},$ $i_{2}=j_{2}-j_{1},$ $i_{3}=j_{3}-j_{2},$ …, $i_{k}=j_{k}-j_{k-1},$ one checks that both sides of the equality in question are equal to $$\begin{gathered} \sum_{j_{1}=1}^{n-(k-1)}\sum_{j_{2}=j_{1}+1}^{n-(k-2)} \ldots \sum_{j_{k-1}=j_{k-2}+1}^{n-1}\sum_{j_{k}=j_{k-1}+1}^{n} \\ {\xi}^{\wedge}_{n-j_{k}}(-1)^{k+j_{k}}\rho^{j_{1}} \wedge \rho^{j_{2}-j_{1}}\wedge\ldots \wedge \rho^{j_{k}-j_{k-1}}. \end{gathered}$$ The proof of Eq.  is analogous to that of Eq. . 
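The explicit formulae of Theorem \[jawny\] can be checked numerically on a finite-dimensional example. The following Python fragment is only a schematic illustration, not part of the argument: it takes the single-particle space ${\mathbb{C}}^{4},$ $n=3,$ $k=1,$ realizes ${\mathrm{L}}^{1}_{3}$ as the partial trace over the last two tensor factors (cf. Lemma \[parttr\]) and $\rho^{\wedge 3}$ as $A^{(3)}\rho^{\otimes 3},$ and verifies that $3\,{\mathrm{L}}^{1}_{3}\rho^{\wedge 3} =\xi^{\wedge}_{2}\rho-\xi^{\wedge}_{1}\rho^{2}+\rho^{3}.$ The dimension, the value of $n$ and the random choice of $\rho$ are arbitrary.

```python
# Numerical sanity check of the k = 1 case of the explicit formula for
# binom(n,1) L^1_n rho^{wedge n} on a finite-dimensional single-particle space.
# All concrete choices (d, n, the random rho) are illustrative only.
import itertools
import math
import numpy as np

d, n = 4, 3
rng = np.random.default_rng(0)
B = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = B @ B.conj().T                      # a positive trace-class operator on C^d

def antisymmetrizer(d, n):
    """A^{(n)} = (1/n!) sum_pi sgn(pi) P_pi on (C^d)^{otimes n}, built entrywise."""
    A = np.zeros((d ** n, d ** n))
    for idx in itertools.product(range(d), repeat=n):
        col = int(np.ravel_multi_index(idx, (d,) * n))
        for pi in itertools.permutations(range(n)):
            inversions = sum(1 for a in range(n) for b in range(a + 1, n) if pi[a] > pi[b])
            row = int(np.ravel_multi_index(tuple(idx[p] for p in pi), (d,) * n))
            A[row, col] += (-1) ** inversions / math.factorial(n)
    return A

def contract(K, d, n, k):
    """Kummer contraction L^k_n K, i.e. the partial trace over the last n-k factors."""
    T = K.reshape((d,) * (2 * n))
    for m in range(n, k, -1):             # trace out the m-th row factor against the m-th column factor
        T = np.trace(T, axis1=m - 1, axis2=2 * m - 1)
    return T.reshape(d ** k, d ** k)

rho_wedge3 = antisymmetrizer(d, 3) @ np.kron(np.kron(rho, rho), rho)   # rho^{wedge 3}
lhs = 3 * contract(rho_wedge3, d, 3, 1)

xi1 = np.trace(rho)                                                    # Tr rho^{wedge 1}
xi2 = np.trace(antisymmetrizer(d, 2) @ np.kron(rho, rho))              # Tr rho^{wedge 2}
rhs = xi2 * rho - xi1 * (rho @ rho) + np.linalg.matrix_power(rho, 3)

print(np.allclose(lhs, rhs))              # expected output: True
```

The symmetric case can be checked in the same way, with the antisymmetrizer replaced by the symmetrizer and all permutation signs dropped.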
Asymptotic form for contractions of product states ================================================== [\[rozas\]]{} The explicit forms of the contractions of product states given by Theorem \[jawny\] are quite complex. In the present section they are replaced by simpler operators, equivalent in the thermodynamic limit. The main results in this section are Theorems \[glowne\] and \[zmiana\]. In what follows use is made of the Hilbert space ${\mathcal{H}}_{\Omega}{\mathrel{\mathop:}=}L^{2}(\Omega,\mu)$ (over ${\mathbb{C}}$ or ${\mathbb{R}}$), where the measure $\mu$ is separable, $\sigma$-finite, and satisfies the condition $\mu(\Omega)=+\infty.$ For every measurable subset $Y\subset\Omega$ it is assumed ${\mathcal{H}}_{Y}{\mathrel{\mathop:}=}L^{2}(Y,\mu).$ Let ${\mathcal{M}}(\Omega)$ be a fixed family of measurable subsets of $\Omega$ such that $0<\mu(Y)<+\infty$ for every $Y\in{\mathcal{M}}(\Omega)$ (it can be the family of all such subsets). Fix $d\in{\mathbb{R}},$ $d>0,$ and assume that there exists a sequence ${\left\{Y_{n}\right\}}_{n\in{\mathbb{N}}} \subset{\mathcal{M}}(\Omega)$ such that $\frac{n}{\mu(Y_{n})}\to d$ as $n\to\infty.$ [\[granica\]]{} Fix $d\in{\mathbb{R}},$ $d>0,$ and let ${\left\{b_{Y,n}\right\}}_{(Y,n)\in{\mathcal{M}}(\Omega)\times{\mathbb{N}}}$ be a family of complex numbers. A complex number $b$ is said to be the *thermodynamic limit* of this family if for every sequence ${\left\{Y_{n}\right\}}_{n\in{\mathbb{N}}}\subset{\mathcal{M}}(\Omega)$ such that $\displaystyle\lim_{n\to\infty}\frac{n}{\mu(Y_{n})}=d$ the condition $\displaystyle \lim_{n\to\infty}b_{Y_{n},n}=b$ is fulfilled. In such a case $b$ is denoted by $\displaystyle{\underset{n,\mu(Y)\to\infty}{d-\lim}}b_{Y,n}.$ Special attention will be given to the families of complex numbers of the form ${\mathrm{Tr}}\,({\mathrm{L}}^{k}_{n}K_{Y,n})C_{Y},$ where $k,n\in{\mathbb{N}},$ $n>k,$ $K_{Y,n}\in{\mathcal{T}}({\mathcal{H}}_{Y}^{\otimes n}),$ and $C_{Y}\in{\mathcal{B}}({\mathcal{H}}_{Y}^{\otimes k}).$ Definition \[granica\] does not guarantee the convergence of families ${\left\{b_{Y,n}\right\}}$ of interest in physics. To obtain such a convergence, additional conditions (such as conditions of uniform growth [@RuelleB69]) are usually imposed on the sequence ${\left\{Y_n\right\}}_{n\in{\mathbb{N}}}$ in question. However, those additional conditions do not affect considerations in this paper. The expression of expectation values of observables in mixed states by means of the trace, mentioned in the Introduction, is the motivation for the following definition. 
[\[relacja\]]{} Fix $k\in{\mathbb{N}}$ and $d\in{\mathbb{R}},$ $d>0.$ Families ${\mbox{$\left\{A_{Y,n}\right\}_{(Y,n) \in\mathcal{M}(\Omega)\times\mathbb{N}}$}}$ and ${\mbox{$\left\{B_{Y,n}\right\}_{(Y,n) \in\mathcal{M}(\Omega)\times\mathbb{N}}$}}$ of operators $A_{Y,n},B_{Y,n}\in{\mathcal{T}}({\mathcal{H}}_{Y}^{\otimes k})$ are said to be *asymptotically equivalent* (symbolically: $A_{Y,n}\approx B_{Y,n}),$ if for every family ${\mbox{$\left\{C_{Y,n}\right\}_{(Y,n) \in\mathcal{M}(\Omega)\times\mathbb{N}}$}}$ of operators $C_{Y,n}\in{\mathcal{B}}({\mathcal{H}}_{Y}^{\otimes k})$ with uniformly bounded operator norms one has $${\label{relacja1}} {\underset{n,\mu(Y)\to\infty}{d-\lim}}{\mathrm{Tr}}\,(A_{Y,n}-B_{Y,n})C_{Y,n}=0.$$ Condition  is required to hold in particular for families ${\mbox{$\left\{C_{Y,n}\right\}_{(Y,n) \in\mathcal{M}(\Omega)\times\mathbb{N}}$}}$ such that $C_{Y,n}=C_{Y,m}$ for all $Y\in{\mathcal{M}}(\Omega),$ $n,m\in{\mathbb{N}}.$ [\[defasdiff\]]{} The authors of [@KossakowskiRMP86; @MackowiakPR99] used some different definition of asymptotic equivalence of families of operators, closer to Definition \[rel\] in this paper. [\[strongprop\]]{} For fixed $k\in{\mathbb{N}}$ and $d\in{\mathbb{R}},$ $d>0,$ the relation $\approx$ is an equivalence relation. If $A_{Y,n}\approx B_{Y,n}$ then for every family of operators $C_{Y,n}$ as in Definition \[relacja\] the limit $\displaystyle{\underset{n,\mu(Y)\to\infty}{d-\lim}}{\mathrm{Tr}}\,A_{Y,n}C_{Y,n}$ exists iff the limit $\displaystyle{\underset{n,\mu(Y)\to\infty}{d-\lim}}{\mathrm{Tr}}\,B_{Y,n}C_{Y,n}$ exists, in which case both limits are equal. Notice also that if $A_{Y,n}\approx B_{Y,n}$ then $A_{Y,n}+D_{Y,n} \approx B_{Y,n}+D_{Y,n}$ and $aA_{Y,n}\approx aB_{Y,n}$ for every family ${\mbox{$\left\{D_{Y,n}\right\}_{(Y,n) \in\mathcal{M}(\Omega)\times\mathbb{N}}$}}\subset {\mathcal{T}}({\mathcal{H}}_{Y}^{\otimes k})$ and $a\in{\mathbb{C}}.$ Furthermore, for every family ${\mbox{$\left\{A_{Y,n}\right\}_{(Y,n) \in\mathcal{M}(\Omega)\times\mathbb{N}}$}}\subset {\mathcal{T}}\left({\mathcal{H}}_{Y}^{\otimes k}\right)$ with uniformly bounded trace norms ${\mathrm{Tr}}\,{\left\vertA_{Y,n}\right\vert}$ and for every sequence ${\left\{a_{n}\right\}}_{n\in{\mathbb{N}}}\subset{\mathbb{C}}$ convergent to $a\in{\mathbb{C}}$ one has $a_{n}A_{Y,n}\approx aA_{Y,n}.$ [\[mocna\]]{} Let ${\mbox{$\left\{A_{Y,n}\right\}_{(Y,n) \in\mathcal{M}(\Omega)\times\mathbb{N}}$}}$ and ${\mbox{$\left\{B_{Y,n}\right\}_{(Y,n) \in\mathcal{M}(\Omega)\times\mathbb{N}}$}}$ be as in Definition \[relacja\]. 
Then $${\label{mocna1}} {\underset{n,\mu(Y)\to\infty}{d-\lim}}{\left\vertA_{Y,n}-B_{Y,n}\right\vert}=0 \quad \Rightarrow \quad A_{Y,n}\approx B_{Y,n}.$$ Moreover, if the operators $A_{Y,n},$ $B_{Y,n}$ are selfadjoint then $${\label{mocna2}} A_{Y,n} \approx B_{Y,n} \quad \Rightarrow \quad {\underset{n,\mu(Y)\to\infty}{d-\lim}}{\left\vertA_{Y,n}-B_{Y,n}\right\vert}=0.$$ Implication follows from Definition \[relacja\] and the estimate $${\left\vert{\mathrm{Tr}}\,(A_{Y,n}-B_{Y,n})C_{Y,n}\right\vert} \leq{\left\VertC_{Y,n}\right\Vert}\,{\mathrm{Tr}}\,{\left\vertA_{Y,n}-B_{Y,n}\right\vert}.$$ Now assume that $A_{Y,n}\approx B_{Y,n},$ which is equivalent to the condition $${\label{mocna5}} D_{Y,n}\approx 0,$$ where $D_{Y,n} {\mathrel{\mathop:}=}A_{Y,n}-B_{Y,n}.$ The operators $D_{Y,n}$ have the spectral representations $$D_{Y,n} =\sum_{i=1}^{\infty}\lambda_{i}(Y,n)P_{\varphi_{i}(Y,n)},$$ where $P_{\varphi_{i}(Y,n)}$ are the projectors onto orthogonal one dimensional subspaces of eigenvectors $\varphi_{i}(Y,n)$ of $D_{Y,n},$ corresponding to eigenvalues $\lambda_{i}(Y,n)\in{\mathbb{R}}.$ Since $\displaystyle \sum_{i=1}^{\infty}{\left\vert\lambda_{i}(Y,n)\right\vert} ={\mathrm{Tr}}\,{\left\vertD_{Y,n}\right\vert}<+\infty,$ for every $(Y,n)\in{\mathcal{M}}(\Omega)\times{\mathbb{N}}$ there exists $m(Y,n)\in{\mathbb{N}}$ such that $\displaystyle\sum_{i=m(Y,n)+1}^{\infty} {\left\vert\lambda_{i}(Y,n)\right\vert}<\frac{1}{n}.$ Thus the operators $$F_{Y,n} =\sum_{i=1}^{m(Y,n)}\lambda_{i}(Y,n)P_{\varphi_{i}(Y,n)}$$ satisfy the condition $${\label{mocna7}} {\underset{n,\mu(Y)\to\infty}{d-\lim}}{\mathrm{Tr}}\,{\left\vertD_{Y,n}-F_{Y,n}\right\vert} ={\underset{n,\mu(Y)\to\infty}{d-\lim}}\sum_{i=m(Y,n)+1}^{\infty}{\left\vert\lambda_{i}(Y,n)\right\vert}=0,$$ which, in view of implication  proved and condition , yields $F_{Y,n}\approx D_{Y,n}\approx 0.$ In particular, $${\label{mocna6}} {\underset{n,\mu(Y)\to\infty}{d-\lim}}{\mathrm{Tr}}\,F_{Y,n}C_{Y,n}=0,$$ where $$C_{Y,n} =\sum_{i=1}^{m(Y,n)} {\mathrm{sgn}}\left(\lambda_{i}(Y,n)\right)P_{\varphi_{i}(Y,n)}, \qquad {\left\VertC_{Y,n}\right\Vert}=1.$$ Observe that ${\mathrm{Tr}}\,F_{Y,n}C_{Y,n} ={\mathrm{Tr}}\,{\left\vertF_{Y,n}\right\vert},$ hence condition  gives $${\label{mocnad}} {\underset{n,\mu(Y)\to\infty}{d-\lim}}{\mathrm{Tr}}\,{\left\vertF_{Y,n}\right\vert}=0.$$ Since ${\mathrm{Tr}}\,{\left\vertD_{Y,n}\right\vert} \leq{\mathrm{Tr}}\,{\left\vertD_{Y,n}-F_{Y,n}\right\vert} +{\mathrm{Tr}}\,{\left\vertF_{Y,n}\right\vert},$ conditions  and  yield $${\underset{n,\mu(Y)\to\infty}{d-\lim}}{\mathrm{Tr}}\,{\left\vertA_{Y,n}-B_{Y,n}\right\vert} \equiv{\underset{n,\mu(Y)\to\infty}{d-\lim}}{\mathrm{Tr}}\,{\left\vertD_{Y,n}\right\vert}=0,$$ which proves implication . The following lemma follows from Lemma \[mocna\]. 
[\[iloczyny\]]{} Fix $k,m\in{\mathbb{N}}.$ Let ${\mbox{$\left\{A_{Y,n}\right\}_{(Y,n) \in\mathcal{M}(\Omega)\times\mathbb{N}}$}}$ and ${\mbox{$\left\{B_{Y,n}\right\}_{(Y,n) \in\mathcal{M}(\Omega)\times\mathbb{N}}$}}$ be families of selfadjoint operators $A_{Y,n},B_{Y,n}\in{\mathcal{T}}({\mathcal{H}}_{Y}^{\otimes k})$ such that $A_{Y,n}\approx B_{Y,n},$ and let ${\mbox{$\left\{D_{Y,n}\right\}_{(Y,n) \in\mathcal{M}(\Omega)\times\mathbb{N}}$}}$ be a family of operators $D_{Y,n}\in{\mathcal{T}}({\mathcal{H}}_{Y}^{\otimes m})$ with uniformly bounded trace norms ${\mathrm{Tr}}\,{\left\vertD_{Y,n}\right\vert}.$ Then $$A_{Y,n}\otimes D_{Y,n}\approx B_{Y,n}\otimes D_{Y,n}, \quad D_{Y,n}\otimes A_{Y,n}\approx D_{Y,n}\otimes B_{Y,n},$$ $$A_{Y,n}\wedge D_{Y,n}\approx B_{Y,n}\wedge D_{Y,n}, \quad D_{Y,n}\wedge A_{Y,n}\approx D_{Y,n}\wedge B_{Y,n},$$ $$A_{Y,n}\vee D_{Y,n}\approx B_{Y,n}\vee D_{Y,n}, \quad D_{Y,n}\vee A_{Y,n}\approx D_{Y,n}\vee B_{Y,n}.$$ In the sequel ${\mbox{$\left\{\rho_{Y}\right\}_{Y \in\mathcal{M}(\Omega)}$}}$ denotes a family of nonnegative definite selfadjoint operators $\rho_{Y}\in{\mathcal{T}}\left({\mathcal{H}}_{Y}\right),$ and for every $(Y,n)\in{\mathcal{M}}(\Omega)\times{\mathbb{N}}$ it is assumed that $$\xi^{\wedge}_{Y,0} {\mathrel{\mathop:}=}1, \quad \xi^{\vee}_{Y,0} {\mathrel{\mathop:}=}1, \qquad \rho_{Y}^{\wedge 1} {\mathrel{\mathop:}=}\rho_{Y}, \quad \rho_{Y}^{\vee 1} {\mathrel{\mathop:}=}\rho_{Y},$$ $$\xi^{\wedge}_{Y,n} {\mathrel{\mathop:}=}{\mathrm{Tr}}\,\rho_{Y}^{\wedge n}>0, \qquad \xi^{\vee}_{Y,n} {\mathrel{\mathop:}=}{\mathrm{Tr}}\,\rho_{Y}^{\vee n}>0,$$ $$s^{\wedge}_{Y,n} {\mathrel{\mathop:}=}\frac{\xi^{\wedge}_{Y,n-1}}{\xi^{\wedge}_{Y,n}}, \qquad s^{\vee}_{Y,n} {\mathrel{\mathop:}=}\frac{\xi^{\vee}_{Y,n-1}}{\xi^{\vee}_{Y,n}}.$$ The objective of this section is to find density operators of the most simple form which are asymptotically equivalent to the operators $$\stackrel{\wedge}{\sigma}^{(k)}_{Y,n} {\mathrel{\mathop:}=}{\mathrm{L}}^{k}_{n} \left(\frac{1}{\xi^{\wedge}_{Y,n}}\rho_{Y}^{\wedge n}\right), \quad \stackrel{\vee}{\sigma}^{(k)}_{Y,n} {\mathrel{\mathop:}=}{\mathrm{L}}^{k}_{n} \left(\frac{1}{\xi^{\vee}_{Y,n}}\rho_{Y}^{\vee n}\right),$$ defined for fixed $k\in{\mathbb{N}}$ and every $(Y,n)\in{\mathcal{M}}(\Omega)\times{\mathbb{N}},$ $n>k.$ [\[odwr\]]{} For every $(Y,n)\in{\mathcal{M}}(\Omega)\times{\mathbb{N}}$ the operator ${I}+s^{\wedge}_{Y,n+1}\rho_{Y}$ is invertible and ${\left\Vert({I}+s^{\wedge}_{Y,n+1}\rho_{Y})^{-1}\right\Vert}=1.$ Furthermore, if $s^{\vee}_{Y,n+1}{\left\Vert\rho_{Y}\right\Vert}<1$ then ${I}-s^{\vee}_{Y,n+1}\rho_{Y}$ is invertible and $\displaystyle {\left\Vert({I}-s^{\vee}_{Y,n+1}\rho_{Y})^{-1}\right\Vert} =(1-s^{\vee}_{Y,n+1}{\left\Vert\rho_{Y}\right\Vert})^{-1}.$ The next theorem is a version of a theorem studied in [@KossakowskiRMP86; @MackowiakPR99] (see Remark \[defasdiff\]). 
[\[as\]]{} If $\stackrel{\wedge}{\sigma}^{(1)}_{Y,n} \approx \stackrel{\wedge}{\sigma}^{(1)}_{Y,n+1}$ and the reals $s^{\wedge}_{Y,n+1}{\left\Vert\rho_{Y}\right\Vert},$ $(Y,n)\in{\mathcal{M}}(\Omega)\times{\mathbb{N}},$ are uniformly bounded then $${\label{as1}} \stackrel{\wedge}{\sigma}^{(1)}_{Y,n} ({I}+s^{\wedge}_{Y,n+1}\rho_{Y}) \approx (n+1)^{-1}s^{\wedge}_{Y,n+1}\rho_{Y},$$ $${\label{as2}} \stackrel{\wedge}{\sigma}^{(1)}_{Y,n} \approx (n+1)^{-1}s^{\wedge}_{Y,n+1}\rho_{Y} ({I}+s^{\wedge}_{Y,n+1}\rho_{Y})^{-1}.$$ If $\stackrel{\vee}{\sigma}^{(1)}_{Y,n} \approx \stackrel{\vee}{\sigma}^{(1)}_{Y,n+1}$ and the reals $s^{\vee}_{Y,n}{\left\Vert\rho_{Y}\right\Vert},$ $(Y,n)\in{\mathcal{M}}(\Omega)\times{\mathbb{N}},$ are uniformly bounded then $${\label{as3}} \stackrel{\vee}{\sigma}^{(1)}_{Y,n}({I}-s^{\vee}_{Y,n+1}\rho_{Y}) \approx (n+1)^{-1}s^{\vee}_{Y,n+1}\rho_{Y}.$$ If, additionally, $s^{\vee}_{Y,n}{\left\Vert\rho_{Y}\right\Vert}\leq\epsilon$ for some $\epsilon<1$ and every $(Y,n)\in{\mathcal{M}}(\Omega)\times{\mathbb{N}}$ then $${\label{as4}} \stackrel{\vee}{\sigma}^{(1)}_{Y,n} \approx (n+1)^{-1}s^{\vee}_{Y,n+1}\rho_{Y} ({I}-s^{\vee}_{Y,n+1}\rho_{Y})^{-1}.$$ By Theorem \[rek\] and the assumption $\stackrel{\wedge}{\sigma}^{(1)}_{Y,n} \approx \stackrel{\wedge}{\sigma}^{(1)}_{Y,n+1}$ one has $${\label{strongprel}} \stackrel{\wedge}{\sigma}^{(1)}_{Y,n} -(n+1)^{-1}s^{\wedge}_{Y,n+1}\rho_{Y} \approx -(n+1)^{-1}n\stackrel{\wedge}{\sigma}^{(1)}_{Y,n} (s^{\wedge}_{Y,n+1}\rho_{Y}).$$ Since ${\mathrm{Tr}}\,{\left\vert\stackrel{\wedge}{\sigma}^{(1)}_{Y,n} (s^{\wedge}_{Y,n+1}\rho_{Y})\right\vert} \leq s^{\wedge}_{Y,n+1}{\left\Vert\rho_{Y}\right\Vert}\, {\mathrm{Tr}}\,{\left\vert\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\right\vert} =s^{\wedge}_{Y,n+1}{\left\Vert\rho_{Y}\right\Vert},$ relation  yields , in view of Remark \[strongprop\]. Now turn to the proof of relation . According to Remark \[odwr\], $$\begin{aligned} {\label{as5}} \nonumber & {\mathrm{Tr}}\,{\left\vert\stackrel{\wedge}{\sigma}^{(1)}_{Y,n} -(n+1)^{-1}s^{\wedge}_{Y,n+1}\rho_{Y} ({I}+s^{\wedge}_{Y,n+1}\rho_{Y})^{-1}\right\vert} \\ \nonumber &\leq {\left\Vert({I}+s^{\wedge}_{Y,n+1}\rho_{Y})^{-1}\right\Vert} \,{\mathrm{Tr}}\,{\left\vert\stackrel{\wedge}{\sigma}^{(1)}_{Y,n} ({I}+s^{\wedge}_{Y,n+1}\rho_{Y}) -(n+1)^{-1}s^{\wedge}_{Y,n+1}\rho_{Y}\right\vert} \\ & ={\mathrm{Tr}}\,{\left\vert\stackrel{\wedge}{\sigma}^{(1)}_{Y,n} ({I}+s^{\wedge}_{Y,n+1}\rho_{Y}) -(n+1)^{-1}s^{\wedge}_{Y,n+1}\rho_{Y}\right\vert}. \end{aligned}$$ The explicit form of $\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}$ given by Theorem \[jawny\] shows that $\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}$ commutes with ${I}+s^{\wedge}_{Y,n+1}\rho_{Y},$ and since both operators are selfadjoint, $\stackrel{\wedge}{\sigma}^{(1)}_{Y,n} ({I}+s^{\wedge}_{Y,n+1}\rho_{Y})$ is also selfadjoint. Thus conditions , , and Lemma \[mocna\] yield . The proof of relations ,  runs parallel to that of , . Notice that in this case the expression ${\left\Vert({I}+s^{\wedge}_{Y,n+1}\rho_{Y})^{-1}\right\Vert}=1$ from estimate  is replaced by ${\left\Vert({I}-s^{\wedge}_{Y,n+1}\rho_{Y})^{-1}\right\Vert} =(1-s^{\wedge}_{Y,n+1}{\left\Vert\rho_{Y}\right\Vert})^{-1} \leq(1-\epsilon)^{-1}$ (see Remark \[odwr\]). The following theorem for $k=2$ (with the reservation of Remark \[defasdiff\]) was obtained in [@KossakowskiRMP86; @MackowiakPR99]. The author of [@MackowiakPR99] gave also arguments that can be used to check the assumptions of this theorem. 
[\[glowne\]]{} If $\stackrel{\wedge}{\sigma}^{(k)}_{Y,n} \approx \stackrel{\wedge}{\sigma}^{(k)}_{Y,n+1}$ for every $k\in{\mathbb{N}}$ and $${\label{assferm}} s^{\wedge}_{Y,n}{\left\Vert\rho_{Y}\right\Vert} \leq 2 \quad \text{for every $(Y,n)\in{\mathcal{M}}(\Omega)\times{\mathbb{N}}$}$$ then, for every $k\in{\mathbb{N}},$ $k\geq 2,$ $${\label{glowne2}} \stackrel{\wedge}{\sigma}^{(k)}_{Y,n} \approx k!\underbrace{\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\wedge \ldots\wedge\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}}_{k}.$$ If $\stackrel{\vee}{\sigma}^{(k)}_{Y,n} \approx \stackrel{\vee}{\sigma}^{(k)}_{Y,n+1}$ for every $k\in{\mathbb{N}}$ and $${\label{assbos}} s^{\vee}_{Y,n}{\left\Vert\rho_{Y}\right\Vert} \leq\epsilon \quad \text{for some $\epsilon<1$ and every $(Y,n)\in{\mathcal{M}}(\Omega)\times{\mathbb{N}}$}$$ then, for every $k\in{\mathbb{N}},$ $k\geq 2,$ $${\label{glowne4}} \stackrel{\vee}{\sigma}^{(k)}_{Y,n} \approx k!\underbrace{\stackrel{\vee}{\sigma}^{(1)}_{Y,n}\vee \cdots \vee\stackrel{\vee}{\sigma}^{(1)}_{Y,n}}_{k}.$$ First equivalence  will be proved. Observe that $$\begin{aligned} & 2\,{\mathrm{Tr}}{\left\vert\stackrel{\wedge}{\sigma}^{(q)}_{Y,n} -q!\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\wedge \ldots \wedge\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\right\vert} \\ & ={\mathrm{Tr}}{\left\vert\left(\stackrel{\wedge}{\sigma}^{(q)}_{Y,n} -q!\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\wedge \ldots \wedge\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\right) \left({I}^{\otimes(q-1)} \otimes ({I}+s^{\wedge}_{Y,n+1}\rho_{Y})\right) A^{(q)}_{{\mathcal{H}}_{Y}} \right. \\ &\quad \left. +\left(\stackrel{\wedge}{\sigma}^{(q)}_{Y,n} -q!\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\wedge \ldots \wedge\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\right) \left({I}^{\otimes(q-1)} \otimes ({I}-s^{\wedge}_{Y,n+1}\rho_{Y})\right) A^{(q)}_{{\mathcal{H}}_{Y}}\right\vert} \end{aligned}$$ $$\begin{aligned} & \leq {\mathrm{Tr}}{\left\vert\left(\stackrel{\wedge}{\sigma}^{(q)}_{Y,n} -q!\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\wedge \ldots \wedge\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\right) \left({I}^{\otimes(q-1)} \otimes ({I}+s^{\wedge}_{Y,n+1}\rho_{Y})\right) A^{(q)}_{{\mathcal{H}}_{Y}}\right\vert} \\ & \quad +{\left\Vert{I}-s^{\wedge}_{Y,n+1}\rho_{Y}\right\Vert} \,{\mathrm{Tr}}{\left\vert\stackrel{\wedge}{\sigma}^{(q)}_{Y,n} -q!\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\wedge \ldots \wedge\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\right\vert}, \end{aligned}$$ hence $$\begin{gathered} {\label{glowne12}} \left(2-{\left\Vert{I}-s^{\wedge}_{Y,n+1}\rho_{Y}\right\Vert}\right) \,{\mathrm{Tr}}{\left\vert\stackrel{\wedge}{\sigma}^{(q)}_{Y,n} -q!\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\wedge \ldots \wedge\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\right\vert} \\ \leq {\mathrm{Tr}}{\left\vert\left(\stackrel{\wedge}{\sigma}^{(q)}_{Y,n} -q!\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\wedge \ldots \wedge\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\right) \left({I}^{\otimes(q-1)} \otimes ({I}+s^{\wedge}_{Y,n+1}\rho_{Y})\right) A^{(q)}_{{\mathcal{H}}_{Y}}\right\vert}. 
\end{gathered}$$ Since the operators $\rho_{Y}$ are trace class, $\displaystyle\inf_{\varphi\in{\mathcal{H}}_{Y};\,{\left\Vert\varphi\right\Vert}=1} {\left\langle\varphi,\rho_{Y}\varphi\right\rangle}=0.$ Thus, by assumption  and the selfadjointness of the operators ${I}-s^{\wedge}_{Y,n+1}\rho_{Y},$ one obtains $$\begin{aligned} {\label{glowne13}} &\nonumber {\left\Vert{I}-s^{\wedge}_{Y,n+1}\rho_{Y}\right\Vert} =\sup_{\substack{\varphi\in{\mathcal{H}}_{Y} \\ {\left\Vert\varphi\right\Vert}=1}} {\left\vert{\left\langle\varphi,({I}-s^{\wedge}_{Y,n+1}\rho_{Y})\varphi\right\rangle}\right\vert} \\ & =\max{\left\{1-s^{\wedge}_{Y,n+1} \inf_{\substack{\varphi\in{\mathcal{H}}_{Y} \\ {\left\Vert\varphi\right\Vert}=1}} {\left\langle\varphi,\rho_{Y}\varphi\right\rangle}, \; s^{\wedge}_{Y,n+1} \sup_{\substack{\varphi\in{\mathcal{H}}_{Y} \\ {\left\Vert\varphi\right\Vert}=1}} {\left\langle\varphi,\rho_{Y}\varphi\right\rangle}-1\right\}}=1. \end{aligned}$$ The rest of the proof of  is by induction with respect to $k\geq 2.$ $1^{\circ}.$ ($k=2$) By Theorem \[rek\] for $n\geq 2$ one has $$\begin{aligned} {\label{glowne5}} \nonumber \frac{1}{(n+1)^{2}}{\tbinom{n+1}{2}} \stackrel{\wedge}{\sigma}^{(2)}_{Y,n+1} & =\frac{n}{n+1}\stackrel{\wedge}{\sigma}^{(1)}_{Y,n} \wedge \left((n+1)^{-1}s^{\wedge}_{Y,n+1}\rho_{Y}\right) \\ &\quad -\frac{1}{(n+1)^{2}}{\tbinom{n}{2}} \stackrel{\wedge}{\sigma}^{(2)}_{Y,n} \left({I}\otimes (s^{\wedge}_{Y,n+1}\rho_{Y})\right) A^{(2)}_{{\mathcal{H}}_{Y}}. \end{aligned}$$ Assumption  gives ${\mathrm{Tr}}{\left\vert\stackrel{\wedge}{\sigma}^{(2)}_{Y,n} \left({I}\otimes (s^{\wedge}_{Y,n+1}\rho_{Y})\right) A^{(2)}_{{\mathcal{H}}_{Y}}\right\vert} \leq s^{\wedge}_{Y,n+1}{\left\Vert\rho_{Y}\right\Vert} \,{\mathrm{Tr}}{\left\vert\stackrel{\wedge}{\sigma}^{(2)}_{Y,n}\right\vert} \leq 2,$ hence, by Eq. , Remark \[strongprop\], and the assumption $\stackrel{\wedge}{\sigma}^{(2)}_{Y,n} \approx \stackrel{\wedge}{\sigma}^{(2)}_{Y,n+1},$ one obtains $$\stackrel{\wedge}{\sigma}^{(2)}_{Y,n} +\stackrel{\wedge}{\sigma}^{(2)}_{Y,n} \left({I}\otimes (s^{\wedge}_{Y,n+1}\rho_{Y})\right) A^{(2)}_{{\mathcal{H}}_{Y}} \approx 2\frac{n}{n+1}\stackrel{\wedge}{\sigma}^{(1)}_{Y,n} \wedge\left((n+1)^{-1}s^{\wedge}_{Y,n+1}\rho_{Y}\right).$$ Thus, in view of equivalence  from Theorem \[as\] and Lemma \[iloczyny\], one has $$\begin{gathered} {\label{rown1}} \stackrel{\wedge}{\sigma}^{(2)}_{Y,n} +\stackrel{\wedge}{\sigma}^{(2)}_{Y,n} \left({I}\otimes (s^{\wedge}_{Y,n+1}\rho_{Y})\right) A^{(2)}_{{\mathcal{H}}_{Y}} \\ \approx 2\frac{n}{n+1}\stackrel{\wedge}{\sigma}^{(1)}_{Y,n} \wedge\left(\stackrel{\wedge}{\sigma}^{(1)}_{Y,n} ({I}+s^{\wedge}_{Y,n+1}\rho_{Y})\right). \end{gathered}$$ Furthermore, assumption  implies that the trace norms of the operators on the r.h.s of  are uniformly bounded. 
Therefore, according to Remark \[strongprop\], $${\label{glowne6}} \left(\stackrel{\wedge}{\sigma}^{(2)}_{Y,n} -2\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\wedge \stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\right) \left({I}\otimes ({I}+s^{\wedge}_{Y,n+1}\rho_{Y})\right) A^{(2)}_{{\mathcal{H}}_{Y}}\approx 0.$$ The explicit form of $\stackrel{\wedge}{\sigma}^{(2)}_{Y,n},$ $\stackrel{\wedge}{\sigma}^{(1)}_{Y,n} \wedge \stackrel{\wedge}{\sigma}^{(1)}_{Y,n}$ given by Theorem \[jawny\] implies that $\stackrel{\wedge}{\sigma}^{(2)}_{Y,n} -2\stackrel{\wedge}{\sigma}^{(1)}_{Y,n} \wedge \stackrel{\wedge}{\sigma}^{(1)}_{Y,n}$ and $\left({I}\otimes ({I}+s^{\wedge}_{Y,n+1}\rho_{Y})\right) A^{(2)}_{{\mathcal{H}}_{Y}}$ commute, which proves the selfadjointness of the operator on the l.h.s of . Thus conditions , for $q=2$, , and Lemma \[mocna\] yield relation  for $k=2.$ $2^{\circ}.$ Assuming validity of equivalence  for $k\in{\left\{2,\ldots,q-1\right\}},$ where $q\in{\mathbb{N}},$ $q>2,$ its validity will be proved for $k=q.$ By Theorem \[rek\] for $n\geq q$ one has $$\begin{aligned} {\label{glowne9}} \nonumber \frac{1}{(n+1)^{q}}{\tbinom{n+1}{q}} \stackrel{\wedge}{\sigma}^{(q)}_{Y,n+1} & =\frac{1}{(n+1)^{q-1}}{\tbinom{n}{q-1}} \stackrel{\wedge}{\sigma}^{(q-1)}_{Y,n} \wedge \left((n+1)^{-1}s^{\wedge}_{Y,n+1}\rho_{Y}\right) \\ & \quad -\frac{1}{(n+1)^{q}}{\tbinom{n}{q}} \stackrel{\wedge}{\sigma}^{(q)}_{Y,n} \left({I}^{\otimes(q-1)} \otimes (s^{\wedge}_{Y,n+1}\rho_{Y})\right) A^{(q)}_{{\mathcal{H}}_{Y}}. \end{aligned}$$ Assumption  implies $${\mathrm{Tr}}{\left\vert\stackrel{\wedge}{\sigma}^{(q)}_{Y,n} \left({I}^{\otimes(q-1)} \otimes (s^{\wedge}_{Y,n+1}\rho_{Y})\right) A^{(q)}_{{\mathcal{H}}_{Y}}\right\vert} \leq s^{\wedge}_{Y,n+1}{\left\Vert\rho_{Y}\right\Vert} \,{\mathrm{Tr}}{\left\vert\stackrel{\wedge}{\sigma}^{(q)}_{Y,n}\right\vert} \leq 2,$$ hence, in view of Eq. , Remark \[strongprop\], and the assumption $\stackrel{\wedge}{\sigma}^{(q)}_{Y,n} \approx \stackrel{\wedge}{\sigma}^{(q)}_{Y,n+1},$ $$\begin{gathered} \stackrel{\wedge}{\sigma}^{(q)}_{Y,n} +\stackrel{\wedge}{\sigma}^{(q)}_{Y,n} \left({I}^{\otimes(q-1)} \otimes (s^{\wedge}_{Y,n+1}\rho_{Y})\right) A^{(q)}_{{\mathcal{H}}_{Y}} \\ \approx \frac{q!}{(n+1)^{q-1}}{\tbinom{n}{q-1}} \stackrel{\wedge}{\sigma}^{(q-1)}_{Y,n} \wedge\left((n+1)^{-1}s^{\wedge}_{Y,n+1}\rho_{Y}\right). \end{gathered}$$ Thus, by relation  from Theorem \[as\], Lemma \[iloczyny\], and Remark \[strongprop\], one has $$\begin{gathered} {\label{rown2}} \stackrel{\wedge}{\sigma}^{(q)}_{Y,n} +\stackrel{\wedge}{\sigma}^{(q)}_{Y,n} \left({I}^{\otimes(q-1)} \otimes (s^{\wedge}_{Y,n+1}\rho_{Y})\right) A^{(q)}_{{\mathcal{H}}_{Y}} \\ \approx \frac{q!}{(q-1)!}\stackrel{\wedge}{\sigma}^{(q-1)}_{Y,n} \wedge\left(\stackrel{\wedge}{\sigma}^{(1)}_{Y,n} ({I}+s^{\wedge}_{Y,n+1}\rho_{Y})\right), \end{gathered}$$ since the trace norms of the operators on the r.h.s. of  are uniformly bounded, by assumption . 
Furthermore, in view of Lemma \[iloczyny\] and the inductive hypothesis $\displaystyle \stackrel{\wedge}{\sigma}^{(q-1)}_{Y,n} \approx (q-1)!\underbrace{\stackrel{\wedge}{\sigma}^{(1)}_{Y,n} \wedge \ldots \wedge\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}}_{q-1},$ condition  yields $$\begin{gathered} \stackrel{\wedge}{\sigma}^{(q)}_{Y,n} +\stackrel{\wedge}{\sigma}^{(q)}_{Y,n} \left({I}^{\otimes(q-1)} \otimes (s^{\wedge}_{Y,n+1}\rho_{Y})\right) A^{(q)}_{{\mathcal{H}}_{Y}} \\ \approx q!\underbrace{\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\wedge \ldots \wedge\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}}_{q-1} \wedge\left(\stackrel{\wedge}{\sigma}^{(1)}_{Y,n} ({I}+s^{\wedge}_{Y,n+1}\rho_{Y})\right), \end{gathered}$$ hence $${\label{glowne10}} \left(\stackrel{\wedge}{\sigma}^{(q)}_{Y,n} -q!\underbrace{\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\wedge \ldots \wedge\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}}_{q}\right) \left({I}^{\otimes(q-1)} \otimes ({I}+s^{\wedge}_{Y,n+1}\rho_{Y})\right) A^{(q)}_{{\mathcal{H}}_{Y}}\approx 0.$$ From the explicit form of $\stackrel{\wedge}{\sigma}^{(q)}_{Y,n},$ $\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\wedge \ldots \wedge\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}$ given by Theorem \[jawny\] one finds that $\stackrel{\wedge}{\sigma}^{(q)}_{Y,n} -q!\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\wedge \ldots \wedge\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}$ and $\left({I}^{\otimes(q-1)} \otimes ({I}+s^{\wedge}_{Y,n+1}\rho_{Y})\right) A^{(q)}_{{\mathcal{H}}_{Y}}$ commute, which proves the selfadjointness of the operator on the l.h.s of . Thus conditions , , , and Lemma \[mocna\] yield $\stackrel{\wedge}{\sigma}^{(q)}_{Y,n} \approx q!\underbrace{\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\wedge \ldots \wedge\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}}_{q}.$ Validity of relation  has been proved. Now turn to equivalence . Similarly to  one has $$\begin{gathered} \left(2-{\left\Vert{I}+s^{\vee}_{Y,n+1}\rho_{Y}\right\Vert}\right) \,{\mathrm{Tr}}{\left\vert\stackrel{\vee}{\sigma}^{(q)}_{Y,n} -q!\stackrel{\vee}{\sigma}^{(1)}_{Y,n}\vee \cdots \vee\stackrel{\vee}{\sigma}^{(1)}_{Y,n}\right\vert} \\ \leq {\mathrm{Tr}}{\left\vert\left(\stackrel{\vee}{\sigma}^{(q)}_{Y,n} -q!\stackrel{\vee}{\sigma}^{(1)}_{Y,n}\vee \cdots \vee\stackrel{\vee}{\sigma}^{(1)}_{Y,n}\right) \left({I}^{\otimes(q-1)} \otimes ({I}-s^{\vee}_{Y,n+1}\rho_{Y})\right) S^{(q)}_{{\mathcal{H}}_{Y}}\right\vert}. \end{gathered}$$ Furthermore, according to assumption , $${\label{epsestpr}} 2-{\left\Vert{I}+s^{\vee}_{Y,n+1}\rho_{Y}\right\Vert} \geq 2-(1+s^{\vee}_{Y,n+1}{\left\Vert\rho_{Y}\right\Vert}) \geq 1-\epsilon>0.$$ The rest of the proof of  is by induction with respect to $k\geq 2$ and proceeds analogously to the proof of  with condition  replaced by  and the operators ${I}\mp s^{\wedge}_{Y,n+1}\rho_{Y}$ replaced by ${I}\pm s^{\vee}_{Y,n+1}\rho_{Y}$ (inversion of signs). Theorem \[glowne\] allows one to replace contractions of antisymmetric and symmetric product density operators by antisymmetric and symmetric products of one-particle contractions, respectively, if the number $n$ of particles in the system is large. Further simplification, consisting in the replacement of antisymmetric and symmetric products by tensor products, will now be proved possible. To this end weaker conditions on the asymptotic equivalence relation will be imposed. 
[\[rel\]]{} Fix $k\in{\mathbb{N}}$ and $d\in{\mathbb{R}},$ $d>0.$ Families ${\mbox{$\left\{A_{Y,n}\right\}_{(Y,n) \in\mathcal{M}(\Omega)\times\mathbb{N}}$}},$ ${\mbox{$\left\{B_{Y,n}\right\}_{(Y,n) \in\mathcal{M}(\Omega)\times\mathbb{N}}$}}$ of operators $A_{Y,n},B_{Y,n}\in{\mathcal{T}}\left({\mathcal{H}}_{Y}^{\otimes k}\right)$ are called *weakly asymptotically equivalent* (symbolically: $A_{Y,n}\sim B_{Y,n}$), if $\displaystyle{\underset{n,\mu(Y)\to\infty}{d-\lim}}{\mathrm{Tr}}\,(A_{Y,n}-B_{Y,n})C_{Y,n}=0$ for every family ${\mbox{$\left\{C_{Y,n}\right\}_{(Y,n) \in\mathcal{M}(\Omega)\times\mathbb{N}}$}}$ of operators of the form $\displaystyle C_{Y,n} =\bigotimes_{i=1}^{k}C_{Y,n}^{(i)},$ where $C_{Y,n}^{(i)}\in{\mathcal{B}}\left({\mathcal{H}}_{Y}\right)$ ($i\in{\left\{1,\ldots,k\right\}},$ $(Y,n)\in{\mathcal{M}}(\Omega)\times{\mathbb{N}}$) are operators with uniformly bounded operator norms. The relation $\sim$ has the properties analogous to the properties of the relation $\approx$ from Remark \[strongprop\]. [\[cykliczny\]]{} Let $k\in{\mathbb{N}},$ $k\geq 2.$ Fix $\pi\in S_k.$ A set $X\subset{\left\{1,\ldots,k\right\}}$ is called a *cyclic set of the permutation $\pi$*, if $X={\left\{l_1,\ldots,l_q\right\}}$ for some $l_1,\ldots, l_q\in{\left\{1,\ldots,k\right\}},$ $q\in{\left\{2,\ldots,k\right\}},$ such that $\pi(l_{s})=l_{s+1}$ for every $s\in{\left\{1,\ldots,q-1\right\}},$ and $\pi(l_{q})=l_{1}.$ A singleton ${\left\{l\right\}}\subset{\left\{1,\ldots,k\right\}}$ such that $\pi(l)=l$ is also called a cyclic set of the permutation $\pi.$ Note that the set ${\left\{1,\ldots,k\right\}}$ from the above definition can be represented as the union of disjoint cyclic sets of $\pi.$ [\[pars\]]{} Let $k\in{\mathbb{N}},$ $k\geq 2.$ If $B^{(1)},\ldots,B^{(k)}\in{\mathcal{T}}\left({\mathcal{H}}_{Y}\right)$ then $${\label{pars1}} k!\,{\mathrm{Tr}}\left(B_{Y,n}^{(1)}\otimes \cdots\otimes B_{Y,n}^{(k)}\right) A^{(k)}_{{\mathcal{H}}_{Y}} =\sum_{\pi\in S_{k}}{\mathrm{sgn}}\,\pi\prod_{j=1}^{p(\pi)} {\mathrm{Tr}}\prod_{s=1}^{q_{j}}B_{Y,n}^{(l_{j,s})},$$ where $p(\pi)\in{\left\{1,\ldots,k\right\}}$ is the number of disjoint cyclic sets of $\pi,$ indexed by $j,$ and $q_{j}$ denotes the number of elements of the cyclic set of $\pi,$ which is ${\left\{l_{j,1},\ldots,l_{j,q_{j}}\right\}},$ where $${\label{lemcyclord}} \pi(l_{j,q_j})=l_{j,1} \quad \text{and, for $q_j\geq 2,$} \quad \pi(l_{j,s})=l_{j,s+1}, \quad s=1,\ldots,q_j-1.$$ Clearly, $\displaystyle \sum_{j=1}^{p(\pi)}q_{j}=k$ and $\displaystyle \bigcup_{j=1}^{p(\pi)}\bigcup_{s=1}^{q_{j}}{\left\{l_{j,s}\right\}} ={\left\{1,\ldots,k\right\}}.$ Let ${\left\{\varphi_{i}\right\}}_{i\in{\mathbb{N}}}$ be an orthonormal basis of ${\mathcal{H}}_{Y}.$ One has $${\label{pars6}} k!\,{\mathrm{Tr}}\left(B_{Y,n}^{(1)}\otimes \cdots\otimes B_{Y,n}^{(k)}\right) A^{(k)}_{{\mathcal{H}}_{Y}} =\sum_{\pi\in S_{k}}{\mathrm{sgn}}\,\pi\prod_{j=1}^{p(\pi)}M_j,$$ where $$M_j{\mathrel{\mathop:}=}\sum_{i_{l_{j,1}}=1}^{\infty} \ldots\sum_{i_{l_{j,q_{j}}}=1}^{\infty} {\left\langle\varphi_{i_{l_{j,1}}},B^{(l_{j,1})} \varphi_{i_{\pi(l_{j,1})}}\right\rangle} \cdots {\left\langle\varphi_{i_{l_{j,q_{j}}}},B^{(l_{j,q_{j}})} \varphi_{i_{\pi(l_{j,q_{j}})}}\right\rangle}.$$ If $q_{j}>2$ for some $j\in{\left\{1,\ldots,p(\pi)\right\}}$ then, by condition  and Parseval’s formula, $$\begin{aligned} M_j &=\sum_{i_{l_{j,1}}=1}^{\infty} \ldots \sum_{i_{l_{j,q_{j}}}=1}^{\infty} \\ &\quad {\left\langle\varphi_{i_{l_{j,1}}},B^{(l_{j,1})} 
\varphi_{i_{l_{j,2}}}\right\rangle}{\left\langle\varphi_{i_{l_{j,2}}},B^{(l_{j,2})} \varphi_{i_{l_{j,3}}}\right\rangle} \cdots \cdots {\left\langle\varphi_{i_{l_{j,q_{j}}}},B^{(l_{j,q_{j}})} \varphi_{i_{l_{j,1}}}\right\rangle} \end{aligned}$$ $$\begin{aligned} &=\sum_{i_{l_{j,1}}=1}^{\infty} \sum_{i_{l_{j,3}}=1}^{\infty} \ldots \sum_{i_{l_{j,q_{j}}}=1}^{\infty} \\ &\qquad {\left\langle\varphi_{i_{l_{j,1}}},B^{(l_{j,1})}B^{(l_{j,2})} \varphi_{i_{l_{j,3}}}\right\rangle} \cdots {\left\langle\varphi_{i_{l_{j,q_{j}}}},B^{(l_{j,q_{j}})} \varphi_{i_{l_{j,1}}}\right\rangle}. \end{aligned}$$ Performing successive summations one then obtains $$M_j =\sum_{i_{l_{j,1}}=1}^{\infty} {\left\langle\varphi_{i_{l_{j,1}}},\left( \prod_{s=1}^{q_{j}}B^{(l_{j,s})}\right) \varphi_{i_{l_{j,1}}}\right\rangle} ={\mathrm{Tr}}\prod_{s=1}^{q_{j}}B^{(l_{j,s})}.$$ The derivation of the above formula for $q_j=1,2,$ after simplifications, proceeds analogously. This completes the proof of Eq. , in view of Eq. . [\[asnorm\]]{} One has $${\label{normasferm}} {\underset{n,\mu(Y)\to\infty}{d-\lim}}{\left\Vert\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\right\Vert}=0,$$ and if $\stackrel{\vee}{\sigma}^{(2)}_{Y,n} \approx 2\stackrel{\vee}{\sigma}^{(1)}_{Y,n} \vee\stackrel{\vee}{\sigma}^{(1)}_{Y,n}$ (see Theorem \[glowne\]) then $${\label{normasbos}} {\underset{n,\mu(Y)\to\infty}{d-\lim}}{\left\Vert\stackrel{\vee}{\sigma}^{(1)}_{Y,n}\right\Vert}=0.$$ To prove Eq.  it suffices to observe that, according to Theorem \[colemferm\], $${\left\Vert\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\right\Vert} ={\left\Vert{\mathrm{L}}^{1}_{n} \left(\frac{1}{\xi^{\wedge}_{Y,n}}\rho_{Y}^{\wedge n}\right)\right\Vert} \leq\frac{1}{n}\frac{1}{\xi^{\wedge}_{Y,n}} {\left\Vert\rho_{Y}^{\wedge n}\right\Vert} \leq\frac{1}{n}\frac{1}{\xi^{\wedge}_{Y,n}}{\mathrm{Tr}}\rho_{Y}^{\wedge n} =\frac{1}{n}.$$ Now Eq.  will be proved. Let ${\left\{\varphi_{i}\right\}}_{i\in{\mathbb{N}}}$ be an orthonormal basis of ${\mathcal{H}}_{Y}$ for fixed $Y\in{\mathcal{M}}(\Omega).$ Then $$\begin{aligned} {\label{asnorm6}} \nonumber {\mathrm{Tr}}\,2\stackrel{\vee}{\sigma}^{(1)}_{Y,n} \vee\stackrel{\vee}{\sigma}^{(1)}_{Y,n} &=2\,{\mathrm{Tr}}\left(\stackrel{\vee}{\sigma}^{(1)}_{Y,n} \otimes\stackrel{\vee}{\sigma}^{(1)}_{Y,n}\right) S^{(2)}_{{\mathcal{H}}_{Y}} \\ &\nonumber =\sum_{\pi\in S_{2}}\sum_{i_{1},i_{2}=1}^{\infty} {\left\langle\varphi_{i_{1}},\stackrel{\vee}{\sigma}^{(1)}_{Y,n} \varphi_{i_{\pi(1)}}\right\rangle} {\left\langle\varphi_{i_{2}},\stackrel{\vee}{\sigma}^{(1)}_{Y,n} \varphi_{i_{\pi(2)}}\right\rangle} \\ & =\left({\mathrm{Tr}}\,\stackrel{\vee}{\sigma}^{(1)}_{Y,n}\right)^{2} +{\mathrm{Tr}}\left(\stackrel{\vee}{\sigma}^{(1)}_{Y,n} \stackrel{\vee}{\sigma}^{(1)}_{Y,n}\right). \end{aligned}$$ Taking into account Eq. 
, the relation $\stackrel{\vee}{\sigma}^{(2)}_{Y,n} \approx 2\stackrel{\vee}{\sigma}^{(1)}_{Y,n} \vee\stackrel{\vee}{\sigma}^{(1)}_{Y,n},$ Definition \[relacja\] for $C_{Y,n} ={I}^{\otimes 2},$ and the equality ${\mathrm{Tr}}\,\stackrel{\vee}{\sigma}^{(2)}_{Y,n} ={\mathrm{Tr}}\,\stackrel{\vee}{\sigma}^{(1)}_{Y,n}=1,$ one obtains $${\label{asnorm7}} {\underset{n,\mu(Y)\to\infty}{d-\lim}}{\mathrm{Tr}}\,\left(\stackrel{\vee}{\sigma}^{(1)}_{Y,n} \stackrel{\vee}{\sigma}^{(1)}_{Y,n}\right)=0.$$ Furthermore, $${\left\Vert\stackrel{\vee}{\sigma}^{(1)}_{Y,n}\varphi\right\Vert}^{2} ={\left\langle\varphi,\stackrel{\vee}{\sigma}^{(1)}_{Y,n} \stackrel{\vee}{\sigma}^{(1)}_{Y,n}\varphi\right\rangle} \leq{\mathrm{Tr}}\,\left(\stackrel{\vee}{\sigma}^{(1)}_{Y,n} \stackrel{\vee}{\sigma}^{(1)}_{Y,n}\right)$$ for every $\varphi\in{\mathcal{H}}_{Y}$ such that ${\left\Vert\varphi\right\Vert}=1,$ hence Eq.  yields Eq. . Notice that Eq.  can be also proved analogously to Eq.  under the additional assumption $\stackrel{\wedge}{\sigma}^{(2)}_{Y,n} \approx 2\stackrel{\wedge}{\sigma}^{(1)}_{Y,n} \wedge\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}.$ The proof of the next theorem for $k=2$ was given in [@KossakowskiRMP86; @MackowiakPR99]. [\[zmiana\]]{} Let $k\in{\mathbb{N}},$ $k\geq 2.$ One has $${\label{zmiana1}} k!\underbrace{\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\wedge \ldots \wedge\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}}_{k} \sim \underbrace{\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\otimes \cdots \otimes\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}}_{k},$$ and if $\displaystyle {\underset{n,\mu(Y)\to\infty}{d-\lim}} {\left\Vert\stackrel{\vee}{\sigma}^{(1)}_{Y,n}\right\Vert}=0$ (see Lemma \[asnorm\]) then $${\label{zmiana2}} k!\underbrace{\stackrel{\vee}{\sigma}^{(1)}_{Y,n}\vee \cdots \vee\stackrel{\vee}{\sigma}^{(1)}_{Y,n}}_{k} \sim \underbrace{\stackrel{\vee}{\sigma}^{(1)}_{Y,n}\otimes \cdots \otimes\stackrel{\vee}{\sigma}^{(1)}_{Y,n}}_{k}.$$ First Eq.  will be proved. Fix a family ${\mbox{$\left\{C_{Y,n}\right\}_{(Y,n) \in\mathcal{M}(\Omega)\times\mathbb{N}}$}}$ of operators such as in Definition \[rel\] and set $$B_{Y,n}^{(r)} {\mathrel{\mathop:}=}\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}C_{Y,n}^{(r)}, \quad r=1,\ldots,k.$$ Then, by Lemma \[pars\], one has $$\begin{aligned} & {\mathrm{Tr}}\,k!\left(\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\wedge \ldots \wedge\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\right) \left(C_{Y,n}^{(1)}\otimes\cdots\otimes C_{Y,n}^{(k)}\right) \\ & =k!\,{\mathrm{Tr}}\left(B_{Y,n}^{(1)}\otimes \cdots\otimes B_{Y,n}^{(k)}\right) A^{(k)}_{{\mathcal{H}}_{Y}} =\sum_{\pi\in S_{k}}{\mathrm{sgn}}\,\pi\prod_{j=1}^{p(\pi)} {\mathrm{Tr}}\prod_{s=1}^{q_{j}}B_{Y,n}^{(l_{j,s})} \end{aligned}$$ $$\begin{aligned} & ={\mathrm{Tr}}\left(\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\otimes \cdots \otimes\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\right) \left(C_{Y,n}^{(1)}\otimes\cdots\otimes C_{Y,n}^{(k)}\right) \\ &\quad +\sum_{\substack{\pi\in S_{k} \\ \pi\not =\mathrm{Id}}}{\mathrm{sgn}}\,\pi \prod_{j=1}^{p(\pi)} {\mathrm{Tr}}\prod_{s=1}^{q_{j}}B_{Y,n}^{(l_{j,s})}. \end{aligned}$$ Thus $$\begin{gathered} {\label{zmiana3}} {\mathrm{Tr}}\left(k!\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\wedge \ldots \wedge\stackrel{\wedge}{\sigma}^{(1)}_{Y,n} -\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\otimes \cdots\otimes\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\right) \left(C_{Y,n}^{(1)}\otimes\cdots\otimes C_{Y,n}^{(k)}\right) \\ =\sum_{\substack{\pi\in S_{k} \\ \pi\not=\mathrm{Id}}}{\mathrm{sgn}}\,\pi \prod_{j=1}^{p(\pi)} {\mathrm{Tr}}\prod_{s=1}^{q_{j}}B_{Y,n}^{(l_{j,s})}. 
\end{gathered}$$ Now, let $\pi\in S_{k},$ $\pi\not=\mathrm{Id},$ be fixed. If $q_{j}=1$ for some $j\in{\left\{1,\ldots,p(\pi)\right\}}$ then $${\left\vert{\mathrm{Tr}}\prod_{s=1}^{q_{j}}B_{Y,n}^{(l_{j,s})}\right\vert} \equiv{\left\vert{\mathrm{Tr}}\,B_{Y,n}^{(l_{j,1})}\right\vert} \leq{\left\Vert C_{Y,n}^{(l_{j,1})}\right\Vert} \,{\mathrm{Tr}}{\left\vert\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\right\vert} ={\left\Vert C_{Y,n}^{(l_{j,1})}\right\Vert},$$ whereas if $q_{j}\geq 2$ then $$\begin{aligned} {\left\vert{\mathrm{Tr}}\prod_{s=1}^{q_{j}}B_{Y,n}^{(l_{j,s})}\right\vert} &\leq{\left\Vert\prod_{s=1}^{q_{j}-1}B_{Y,n}^{(l_{j,s})}\right\Vert} \,{\mathrm{Tr}}{\left\vert B_{Y,n}^{(l_{j,q_{j}})}\right\vert} \\ & \leq{\left\Vert\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\right\Vert}^{q_{j}-1} \left(\prod_{s=1}^{q_{j}-1}{\left\Vert C_{Y,n}^{(l_{j,s})}\right\Vert}\right) {\left\Vert C_{Y,n}^{(l_{j,q_{j}})}\right\Vert} \,{\mathrm{Tr}}{\left\vert\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\right\vert} \\ & \leq{\left\Vert\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\right\Vert}^{q_{j}-1} \prod_{s=1}^{q_{j}}{\left\Vert C_{Y,n}^{(l_{j,s})}\right\Vert}. \end{aligned}$$ Since $\pi\not=\mathrm{Id},$ there exists at least one $j\in{\left\{1,\ldots,p(\pi)\right\}}$ such that $q_{j}\geq 2,$ hence $$\begin{aligned} {\left\vert\prod_{j=1}^{p(\pi)} {\mathrm{Tr}}\prod_{s=1}^{q_{j}}B_{Y,n}^{(l_{j,s})}\right\vert} & \leq\left(\prod_{j=1}^{p(\pi)} {\left\Vert\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\right\Vert}^{q_{j}-1}\right) \prod_{j=1}^{p(\pi)} \prod_{s=1}^{q_{j}} {\left\Vert C_{Y,n}^{(l_{j,s})}\right\Vert} \\ & ={\left\Vert C_{Y,n}\right\Vert}\prod_{j=1}^{p(\pi)} {\left\Vert\stackrel{\wedge}{\sigma}^{(1)}_{Y,n}\right\Vert}^{q_{j}-1} \end{aligned}$$ and at least one exponent $q_{j}-1$ is nonzero. Thus, by the uniform boundedness of the norms ${\left\Vert C_{Y,n}\right\Vert}$ and Lemma \[asnorm\], the thermodynamic limit of the l.h.s. of  equals $0,$ which proves the validity of relation . The proof of relation , after discarding the permutation signs and replacing $\wedge$ by $\vee,$ proceeds analogously. Product integral kernels of trace class operators ================================================= [\[sectjad\]]{} In this section theorems concerning product integral kernels, exploited in Section \[potkontr\], are formulated. Fix the Hilbert space ${\mathcal{H}}_Y {\mathrel{\mathop:}=}L^{2}(Y,\mu)$ over the field ${\mathbb{K}}={\mathbb{C}}$ or ${\mathbb{R}},$ where the measure $\mu$ is separable and finite. For every $n\in{\mathbb{N}}$ the space ${\mathcal{H}}_Y^{\otimes n}$ is identified with $L^2(Y^n,\mu^{\otimes n}).$ Unless otherwise stated, elements of $L^2$ spaces are identified with their representatives and denoted by the same symbols. Let ${\mathcal{K}}\in L^2(Y^2,\mu^{\otimes 2}).$ In the case of the integral operator $K\colon H_Y\to H_Y$ defined for every $\varphi\in H_Y$ and $x\in Y$ by $${\label{intdef}} (K\varphi)(x) =\int_{Y}{\mathcal{K}}(x,y)\varphi(y)\,{\mathrm{d}}\mu(y)$$ both ${\mathcal{K}}$ regarded as an element of $L^2(Y^2,\mu^{\otimes 2})$ as well as its arbitrary representative is called an *integral kernel of $K$*. 
The kernel ${\mathcal{K}}$ is unique as an element of $L^2(Y^2,\mu^{\otimes 2})$ but a representative of ${\mathcal{K}}$ of a special form, given in Lemma \[tr\] and Definition \[ilkern\], is useful in computations of the trace of $K.$ Let ${\mathcal{HS}}({\mathcal{H}}_Y)$ be the space of Hilbert-Schmidt operators on ${\mathcal{H}}_Y$ with the inner product defined by ${\left\langle A,B\right\rangle}_{{\mathcal{HS}}({\mathcal{H}}_Y)} {\mathrel{\mathop:}=}{\mathrm{Tr}}\,A^{\ast}B$ and the induced norm denoted by ${\left\Vert\cdot\right\Vert}_{{\mathcal{HS}}({\mathcal{H}}_Y)}.$ In the sequel use is made of the following theorem, the proof of which can be found in [@SchattenSV60]. [\[hscalk\]]{} An operator $K\in{\mathcal{B}}(H_Y)$ is Hilbert-Schmidt iff it is an integral operator with an integral kernel ${\mathcal{K}}\in L^2(Y^2,\mu^{\otimes 2}).$ Furthermore, ${\left\Vert K\right\Vert}_{{\mathcal{HS}}(H_Y)} ={\left\Vert{\mathcal{K}}\right\Vert}_{L^2(Y^2,\mu^{\otimes 2})}.$ [\[hscalkcor\]]{} Let $K,G\in{\mathcal{HS}}(H_Y)$ and let ${\mathcal{K}},{\mathcal{G}}\in L^2(Y^2,\mu^{\otimes 2})$ be integral kernels of the operators $K,$ $G,$ respectively. Then ${\left\langle K,G\right\rangle}_{{\mathcal{HS}}(H_Y)} ={\left\langle{\mathcal{K}},{\mathcal{G}}\right\rangle}_{L^2(Y^2,\mu^{\otimes 2})}.$ Recall that $K\in{\mathcal{B}}(H_Y)$ is a trace class operator iff there exist operators $K_{1},K_{2}\in{\mathcal{HS}}(H_Y)$ such that $K=K_{1}K_{2}.$ Moreover, ${\mathrm{Tr}}\,K ={\left\langle K_{1}^{\ast},K_{2}\right\rangle}_{{\mathcal{HS}}(H_Y)}.$ This fact, Theorem \[hscalk\], and Corollary \[hscalkcor\] imply the following lemma, in which elements of the $L^2$ space are distinguished from their representatives. The element of the $L^2$ space represented by a function $f$ is denoted by $[f].$ [\[tr\]]{} Let $K\in{\mathcal{T}}\left(H_Y\right),$ $K =K_{1}K_{2},$ where $K_{1},K_{2}\in{\mathcal{HS}}\left(H_Y\right).$ Let $[{\mathcal{K}}_{1}],[{\mathcal{K}}_{2}]\in L^2(Y^2,\mu^{\otimes 2})$ be integral kernels of $K_{1},K_{2}.$ Then for any choice of representatives ${\mathcal{K}}_1\in [{\mathcal{K}}_1],$ ${\mathcal{K}}_2\in [{\mathcal{K}}_2]$ the function ${\mathcal{K}}\colon Y\times Y\to{\mathbb{K}}$ defined for $(x,y)\in Y\times Y$ by $${\label{tr1}} {\mathcal{K}}(x,y) =\int_{Y}{\mathcal{K}}_{1}(x,z){\mathcal{K}}_{2}(z,y)\,{\mathrm{d}}\mu(z)$$ is integrable and it is an integral kernel of $K.$ The function ${\mathcal{L}}\colon Y\to{\mathbb{K}}$ defined for $x\in Y$ by ${\mathcal{L}}(x) ={\mathcal{K}}(x,x)$ is integrable. Moreover, $${\label{tr2}} {\mathrm{Tr}}\,K =\int_{Y}{\mathcal{L}}(x){\mathrm{d}}\mu(x) \equiv\int_{Y}{\mathcal{K}}(x,x)\,{\mathrm{d}}\mu(x).$$ [\[ilkern\]]{} Under the assumptions of Lemma \[tr\], the function ${\mathcal{K}}$ given by formula  (for any choice of representatives ${\mathcal{K}}_1,$ ${\mathcal{K}}_2$ of $[{\mathcal{K}}_1],[{\mathcal{K}}_2]$) is called a *product integral kernel of $K$*. Notice that for $\mu$ being the Lebesgue measure on $[0,1]\times[0,1]$ formula  is valid, for example, if ${\mathcal{K}}$ is any continuous function. 
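As a purely numerical illustration of the trace formula of Lemma \[tr\] one may discretize the simplest example: take $Y=[0,1]$ with the Lebesgue measure, ${\mathcal{K}}_{1}(x,z)=\mathbf{1}_{\{z\leq x\}}$ and $K_{2}=K_{1}^{\ast},$ so that $K=K_{1}K_{1}^{\ast}$ is trace class with product kernel ${\mathcal{K}}(x,y)=\min(x,y)$ and ${\mathrm{Tr}}\,K=\int_{0}^{1}x\,{\mathrm{d}}x=1/2.$ The Python sketch below is not part of the lemma; the grid size and the choice of kernel are arbitrary.

```python
# Discretized illustration of Tr K = int_Y K(x,x) dmu(x) for K = K_1 K_1^* with
# the (purely illustrative) kernel K_1(x,z) = 1_{z <= x} on Y = [0,1], Lebesgue
# measure.  The product kernel is approximately min(x,y); the exact trace is 1/2.
import numpy as np

N = 800
x = (np.arange(N) + 0.5) / N                      # midpoint grid on [0, 1]
dx = 1.0 / N

K1 = (x[None, :] <= x[:, None]).astype(float)     # kernel 1_{z <= x}
K = K1 @ K1.T * dx                                # product kernel, approx. min(x, y)

trace_from_diagonal = np.sum(np.diag(K)) * dx               # int_0^1 K(x, x) dx
trace_from_spectrum = np.sum(np.linalg.eigvalsh(dx * K))    # trace of the discretized operator

print(trace_from_diagonal, trace_from_spectrum)   # both are close to 0.5
```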
In the following lemma, which follows from Lemma \[tr\], the function ${\mathcal{K}}_0$ need not be a product integral kernel of $K_0$ but the integral formula for the trace of $K_0$ still holds for ${\mathcal{K}}_0.$ [\[parttr\]]{} Let $k,n\in{\mathbb{N}},$ $k<n,$ and let ${\mathcal{K}}$ be a product integral kernel of $K\in{\mathcal{T}}\left(H_Y^{\otimes n}\right) \equiv{\mathcal{T}}\left(L^2(Y^n,\mu^{\otimes n})\right).$ Then the function ${\mathcal{K}}_{0}\colon Y^{k}\times Y^{k}\to{\mathbb{K}}$ defined for $(x^{\prime},y^{\prime})\in Y^{k}\times Y^{k}$ by $${\label{parttr1}} {\mathcal{K}}_{0}(x^{\prime},y^{\prime}) =\int_{Y^{n-k}} {\mathcal{K}}(x^{\prime},x^{\prime\prime},y^{\prime},x^{\prime\prime})\, {\mathrm{d}}\mu^{\otimes(n-k)}(x^{\prime\prime})$$ is integrable and the integral operator $K_0$ with the kernel ${\mathcal{K}}_0$ belongs to ${\mathcal{T}}\left(H_Y^{\otimes k}\right).$ For every $\chi,\varphi\in H_Y^{\otimes k}$ and every orthonormal basis ${\left\{\psi_i\right\}}_{i\in{\mathbb{N}}}$ of $H_Y^{\otimes (n-k)}$ one has $${\left\langle\chi,K_0\varphi\right\rangle}_{H_Y^{\otimes k}} =\sum_{i=1}^{\infty} {\left\langle\chi\otimes\psi_i, K(\varphi\otimes\psi_i)\right\rangle}_{H_Y^{\otimes n}}.$$ The function ${\mathcal{L}}_{0}\colon Y^{k}\to{\mathbb{K}}$ defined for $x^{\prime}\in Y^{k}$ by ${\mathcal{L}}_0(x^{\prime}) ={\mathcal{K}}_0(x^{\prime},x^{\prime})$ is integrable. Moreover, $$\int_{Y^{k}}{\mathcal{K}}_{0}(x^{\prime},x^{\prime}) \,{\mathrm{d}}\mu^{\otimes k}(x^{\prime}) \equiv\int_{Y^{k}}{\mathcal{L}}_{0}(x^{\prime}) \,{\mathrm{d}}\mu^{\otimes k}(x^{\prime}) ={\mathrm{Tr}}\,K_{0} ={\mathrm{Tr}}\,K.$$ [\[red\]]{} Under the assumptions of Lemma \[parttr\], if $C\in{\mathcal{B}}\left(H_Y^{\otimes k}\right)$ then ${\mathrm{Tr}}\,CK_{0} ={\mathrm{Tr}}\,(C\otimes{I}^{\otimes(n-k)})K.$ Acknowledgements {#acknowledgements .unnumbered} ================ This article presents the results of a part of the author’s MS thesis [@RadzkiUMK99] written at the Institute of Physics, Nicolaus Copernicus University, Toruń, under the supervision of Professor Jan Maćkowiak. The author wishes to express his gratitude to Professor Maćkowiak for helpful suggestions and remarks. Professor Maćkowiak also prepared, on his own initiative, the English translation of appropriate parts of the author’s thesis, which was useful for the author in editing the present paper. [99]{} A. J. Coleman, *Structure of fermion density matrices*, Rev. Mod. Phys. **35** (3) (1963), 668-687. C. Garrod and J. K. Percus, *Reduction of the $N$-particle variational problem*, J. Math. Phys. **5** (12) (1964), 1756-1776. A. Kossakowski, J. Ma[ć]{}kowiak, *Minimization of the free energy of large continuous quantum systems over product states*, Rep. Math. Phys. **24** (3) (1986), 365-376. H. Kummer, *[$n$]{}-representability problem for reduced density matrices*, J. Math. Phys. **8** (10) (1967), 2063-2081. H. Kummer, *Mathematical description of a system consisting of identical quantum-mechanical particles*, J. Math. Phys. **11** (2) (1970), 449–474. J. Ma[ć]{}kowiak, *Infinite-volume limit of continuous $n$-particle quantum systems*, Phys. Rep. **308** (4) (1999), 235–331. J. Ma[ć]{}kowiak, P. Tarasewicz, *An extension of the [B]{}ardeen-[C]{}ooper-[S]{}chrieffer model of superconductivity*, Physica C **331** (1) (2000), 25-37. J. Ma[ć]{}kowiak, M. Wiśniewski, *A perturbation expansion for the free energy of the exchange [H]{}amiltonian*, Physica A **242** (3-4) (1997), 482-500. J. 
von Neumann, *Mathematische Grundlagen der Quantenmechanik*, Springer-Verlag, Berlin, 1932. W. Radzki, *Kummer contractions of product density matrices of systems of $n$ fermions and $n$ bosons* (Polish), MS thesis, Institute of Physics, Nicolaus Copernicus University, Toruń, 1999. D. Ruelle, *Statistical Mechanics*, Benjamin, New York, 1969. R. Schatten, *Norm ideals of completely continuous operators*, Springer-Verlag, Berlin, G[ö]{}ttingen, Heidelberg, 1960. C. W. Schneider, G. Hammerl, G. Logvenov, T. Kopp, J. R. Kirtley, P. J. Hirschfeld, J. Mannhart, *critical current – Oscillations of SQUIDs*, Europhys. Lett. **68** (1) (2004), 86-92. P. Tarasewicz, J. Ma[ć]{}kowiak, *Thermodynamic functions of Fermi gas with quadruple [BCS]{}-type binding potential*, Physica C **329** (2) (2000), 130-148. G. E. Volovik, *Exotic Properties of Superfluid $^{3}\mathrm{He}$*, World Scientific, Singapore, 1992. [^1]: *PACS numbers*: 02.30.Tb, 03.65.-w, 05.30.-d, 05.30.Fk, 05.30.Jp, 05.70.-a
{ "pile_set_name": "ArXiv" }
--- abstract: 'We investigate the conditions for reactivity enhancement of catalytic processes in porous solids by use of molecular traffic control (MTC) as a function of reaction rate and grain size. With dynamic Monte-Carlo simulations and continuous-time random walk theory applied to the low concentration regime we obtain a quantitative description of the MTC effect for a network of intersecting single-file channels in a wide range of grain parameters and for optimal external operating conditions. The efficiency ratio (compared with a topologically and structurally similar reference system without MTC) is inversely proportional to the grain diameter. However, for small grains MTC leads to a reactivity enhancement of up to approximately 30% of the catalytic conversion $A\to B$ even for short intersecting channels. This suggests that MTC may significantly enhance the efficiency of a catalytic process for small porous nanoparticles with a suitably chosen binary channel topology.' author: - 'Andreas Brzank^1,2^' - Gunter Schütz^1^ title: Molecular Traffic Control in Porous Nanoparticles --- Introduction ============ Zeolites are used for catalytic processes in a variety of applications, e.g. cracking of large hydrocarbon molecules. In a number of zeolites diffusive transport occurs along quasi-one-dimensional channels which do not allow guest molecules to pass each other [@Baer01]. Due to mutual blockage of reactand $A$ and product molecules $B$ under such [*single-file conditions*]{} [@Karg92] the effective reactivity of a catalytic process $A\to B$ – determined by the residence time of molecules in the zeolite – may be considerably reduced as compared to the reactivity in the absence of single-file behaviour. It has been suggested that the single-file effect may be circumvented by the so far controversial concept of molecular traffic control (MTC) [@Dero80; @Dero94]. This notion rests on the assumption that reactands and product molecules resp. may prefer spatially separated diffusion pathways and thus avoid mutual suppression of self-diffusion inside the grain channels. The necessary (but not sufficient) requirement for the MTC effect, a channel selectivity of two different species of molecules, has been verified by means of molecular dynamic (MD) simulations of two-component mixtures in the zeolite ZSM-5 [@Snur97] and relaxation simulations of a mixture of differently sized molecules (Xe and SF$_6$) in a bimodal structure possessing dual-sized pores (Boggsite with 10-ring and 12-ring pores) [@Clar00]. Also equilibrium Monte-Carlo simulations demonstrate that the residence probability in different areas of the intracrystalline pore space may be notably different for the two components of a binary mixture [@Clar99] and thus provide further support for the notion of channel selectivity in suitable bimodal channel topologies. Whether a MTC effect leading to reactivity enhancement actually takes place was addressed by a series of dynamic Monte Carlo simulations (DMCS) of a stochastic model system with a network of perpendicular sets of bimodal intersecting channels and with catalytic sites located at the intersecting pores (NBK model) [@Neug00; @Karg00; @Karg01]. The authors of these studies found numerically the occurrence of the MTC effect by comparing the outflow of reaction products in the MTC system with the outflow from a reference system with equal internal and external system parameters, but no channel selectivity (Fig. \[systemPics\]). 
The dependency of the MTC effect as a function of the system size has been investigated in [@Brau03]. The MTC effect is favored by a small number of channels and occurs only for long channels between intersections, which by themselves lead to a very low absolute outflow compared to a similar system with shorter channels. A recent analytical treatment of the master equation for this stochastic many-particle model revealed the origin of this effect at high reactivities [@Brza04]. It results from an interplay of the long residence time of guest molecules under single-file conditions with a saturation effect that leads to a depletion of the bulk of the crystallite. Thus the MTC effect is firmly established, but the question of its relevance for applications remains open. ![REF system (left) with $N=3$ channels and MTC system (right) of the the same size. In contrast to the REF case, where we allow both types of particles ($A$ and $B$ particles) to enter any channel, in the MTC system $A$ particles are carried through the vertical $\alpha$ channels whereas the $B$ particles diffuse along the horizontal $\beta$ channels. Black squares indicate catalytic sites where a catalytic transformation $A\to B$ is allowed.[]{data-label="systemPics"}](refpicbw.ps "fig:"){width="6cm"} ![REF system (left) with $N=3$ channels and MTC system (right) of the the same size. In contrast to the REF case, where we allow both types of particles ($A$ and $B$ particles) to enter any channel, in the MTC system $A$ particles are carried through the vertical $\alpha$ channels whereas the $B$ particles diffuse along the horizontal $\beta$ channels. Black squares indicate catalytic sites where a catalytic transformation $A\to B$ is allowed.[]{data-label="systemPics"}](mtcpicbw.ps "fig:"){width="6cm"} Here we address this question by a systematic study of the MTC effect as a function of the reactivity of the catalytic sites and as a function of grain size, but using fixed small channel length. This choice is motivated by potential relevance from an applied perspective. Moreover, for the same reason we determine the MTC effect by making a comparison with the reference system using the same set of fixed internal (material-dependent) parameters, but (unlike in previous studies [@Karg01; @Brau03; @Brza04]) for each case (MTC and REF resp.) different optimal external (operation-dependent) parameters which one would try to implement in an industrially relevant process. It will transpire that a significant MTC effect (reactivity enhancement up to $\approx 30\%$) occurs in our model system even for small channel length at realistic intermediate reaction rates of the catalyst, provided the grain size is sufficiently small. This may be of interest as since the first successful synthesis of mesoporous MCM-41 nanoparticles [@Beck92], there has been intense research activity in the design and synthesis of structured mesoporous solids with a controlled pore size. In particular, synthesis of bimodal nanostructures with independently controlled small and large mesopore sizes has become feasible [@Sun03].\ In this work we do not study a specific process in a specific material, but we demonstrate the validity of the MTC concept even if channels inside the porous material are short. This is novel and – from an applied viewpoint – crucial since it is a necessary condition for starting expensive and time-consuming quantitative investigations in specific settings. 
NBK Model ========= As in [@Brau03; @Brza04] we consider the NBK lattice model [@Neug00] with a quadratic array of $N\times N$ channels (Fig. \[systemPics\]) which is a measure of the grain size of the crystallite. Each channel has $L$ sites between the intersection points where the irreversible catalytic process $A\to B$ takes place. We assume the boundary channels of the grain to be connected to the surrounding gas phase, modelled by reservoirs of constant densities such that the entrances of the respective channels (extra reservoir sites) have a fixed $A$ particle density $\rho$. We assume the reaction products $B$ which leave the crystallite to be removed immediately from the gas phase such that the density of $B$ particles in the reservoir is always 0. Short-range interaction between particles inside the narrow pores is described by an idealized hard core repulsion which forbids double-occupancy of lattice sites. The underlying dynamics are stochastic. We work in a continuous time description where the transition probabilities become transition rates and no multiple transitions occur at the same infinitesimal time unit. Each elementary transition between microscopic configurations of the system takes place randomly with an exponential waiting-time distribution. Diffusion is modelled by jump processes between neighbouring lattice sites. $D$ is the elementary (attempt) rate of hopping and is assumed to be the same for both species $A,B$ of particles. In the absence of other particles $D$ is the self-diffusion coefficient for the mean-square displacement along a channel. If a neighboring site is occupied by a particle then a hopping attempt is rejected (single-file effect). The dynamics inside a channel are thus given by the symmetric exclusion process [@Spit70; @Spoh83; @vanB83; @Schu94] which is well-studied in the probabilistic [@Ligg99] and statistical mechanics literature [@Schu01]. The self-diffusion along a channel is anomalous, the effective diffusion rate between intersection points decays asymptotically as $1/L$, see [@vanB83] and references therein. At the intersections the reaction $A\to B$ occurs with a reaction rate $c$. This reaction rate influences, but is distinct from, the effective grain reactivity which is largely determined by the residence time of guest molecules inside the grain which under single-file conditions grows in the reference system with the third power of the channel length $L$ [@Rode99]. At the boundary sites particles jump into the reservoir with a rate $D(1-\rho_A-\rho_B)$ in the general case. Correspondingly particles are injected into the grain with rates $D\rho_{A,B}$ respectively. As discussed above here we consider only $\rho_A=\rho$, $\rho_B=0$. For the REF system A and B particles are allowed to enter and leave both types of channels, the vertical ($\alpha$) and horizontal ($\beta$) ones. In case of MTC $A$($B$) particles will enter $\alpha$($\beta$)-channels only, mimicking complete channel selectivity. Therefore all channel segments carry only one type of particles in the MTC case. For the boundary channels complete selectivity implies that $\alpha$-channels are effectively described by connection with an $A$-reservoir of density $\rho_A=\rho$ ($B$-particles do not block the boundary sites of $\alpha$-channels) and $\beta$-channels are effectively described by connection with a $B$-reservoir of density $\rho_B=0$, respectively. ($A$-particles do not block the boundary sites of $\beta$-channels.) 
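To make these elementary rates concrete, the following sketch implements them with a rejection-type random sequential update. It is a deliberately reduced setting — a single channel with one catalytic site in its interior instead of the full $N\times N$ network — and the values of $D$, $c$, $\rho$ and the channel length are arbitrary illustrative choices; the sketch therefore illustrates only the update rules, not the NBK geometry.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- illustrative parameters (a single channel, not the full N x N NBK grain) ---
M   = 21      # number of lattice sites in the channel
D   = 1.0     # elementary hopping (attempt) rate, equal for A and B
c   = 0.1     # reaction rate A -> B at the catalytic site
rho = 0.5     # A density of the surrounding gas phase (its B density is 0)

EMPTY, A, B = 0, 1, 2
site = np.zeros(M, dtype=int)
mid  = M // 2                         # the single catalytic site of this toy channel

# Event channels: directed bonds (rate D), injection of A at the two boundary
# sites (rate D*rho), extraction at the two boundary sites (rate D*(1-rho)),
# and the reaction at the catalytic site (rate c).
bonds  = [(i, i + 1) for i in range(M - 1)] + [(i + 1, i) for i in range(M - 1)]
n_chan = len(bonds) + 4 + 1
r_max  = max(D, c)

def attempt():
    """One elementary update; returns 1 if a B particle left the channel."""
    k = rng.integers(n_chan)
    if k < len(bonds):                                    # hopping with exclusion
        i, j = bonds[k]
        if site[i] != EMPTY and site[j] == EMPTY and rng.random() < D / r_max:
            site[j], site[i] = site[i], EMPTY
    elif k < len(bonds) + 2:                              # injection of an A particle
        b = 0 if k == len(bonds) else M - 1
        if site[b] == EMPTY and rng.random() < D * rho / r_max:
            site[b] = A
    elif k < len(bonds) + 4:                              # jump into the reservoir
        b = 0 if k == len(bonds) + 2 else M - 1
        if site[b] != EMPTY and rng.random() < D * (1.0 - rho) / r_max:
            was_B = site[b] == B
            site[b] = EMPTY
            return 1 if was_B else 0
    else:                                                 # catalytic conversion A -> B
        if site[mid] == A and rng.random() < c / r_max:
            site[mid] = B
    return 0

for _ in range(100_000):                                  # relax towards the steady state
    attempt()
n_meas, out = 1_000_000, 0
for _ in range(n_meas):
    out += attempt()
time = n_meas / (n_chan * r_max)       # elapsed time represented by the measurement run
print("output current of B particles:", out / time)
```

Since every elementary rate is bounded by $r_{max}$, the discrete chain generated in this way has the same stationary distribution as the continuous-time dynamics, which is all that is needed for steady-state measurements.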
This stochastic dynamics, which is a Markov process, fully defines the NBK model. In both cases, MTC and REF system, the external concentration gradient between $A$ and $B$ reservoir densities induces a particle current inside the grain which drives the system in a stationary nonequilibrium state. For this reason there is no Gibbs measure and equilibrium Monte-Carlo algorithms cannot be applied for determining steady state properties. Instead we use dynamic Monte-Carlo simulation (DMCS) with random sequential update. This ensures that the simulation algorithm yields the correct stationary distribution of the model. Monte Carlo results =================== Anticipating concentration gradients between intersection points we expect due to the exclusion dynamics linear density profiles within the channel segments [@Spoh83; @Schu01; @Brza04], the slope and hence the current being inversely proportional to the number of lattice sites $L$. The total output current $j$ of $B$ particles, defined as the number of $B$-particles leaving the grain per time unit in the stationary state, is the main quantity of interest. It determines the effective reactivity of the grain. We are particularly interested in studying the system in its maximal current state for given reactivity $c$ and size constants $N$, $L$, which are intrinsic material properties of a given grain. The $A$ particle reservoir density $\rho$, determined by the density in the gas phase, can be tuned in a possible experimental setting. Let us therefore denote the reservoir density which maximizes the output current by $\rho^*$ and the maximal current by $j^*$. For MTC systems as defined above we always expect $\rho_{MTC}^*=1$, since the highly charged entrances of $\alpha$-channels do not block the exit of $B$-particles and hence do not prevent them from leaving the system. Fig. \[rhostar\] shows $j$ as a function of $\rho$ for both a MTC and REF system of $N=5$, $L=1$ and reactivity $c=0.1$. Indeed for MTC the maximal output occurs for the maximal reservoir density. In case of the REF system $\rho_{REF}^*$ as well as $j_{REF}^*$ need to be found by simulation. We iteratively approach the maximal current by a set of 9 datapoints. The “best” datapoint has been chosen in order to approximate the maximum. Statistical errors are displayed. They are, however, mostly within symbol size. ![$j_{REF}$ (solid symbols) and $j_{MTC}$ (open symbols) as a function of the reservoir density for a system with $N=5$, $L=1$, $c=0.1$.[]{data-label="rhostar"}](rhomax.eps){width="7cm"} In order to measure the efficiency of a MTC system over the associated REF system we define the efficiency ratio $$\begin{aligned} R(c,N,L)=\frac{j_{MTC}^*}{j_{REF}^*}\end{aligned}$$ which is a function of the system size $N$, $L$ and reactivity $c$. Fig. \[RMTC\] (left) shows the measured ratio $R$ for a large range of reactivities. We plotted systems with $L=2$ and different $N$. We note that the MTC effect has a strong negative dependence on increasing $N$ for all $N$ and increasing $c$ above some optimal value $c^\ast$. We denote this optimal value by $R_{max}$. Fig. \[RMTC\] (right) shows $R_{max}^*(N)$ for different $L$ and proves that there is an MTC effect for any $L$ and any $N$. Notice, however, that with increasing $N$ the optimal ratio not only decreases, but appears at unnaturally small reactivities $c$. This is highly undesirable as then the actual output current shrinks to zero. 
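The determination of $\rho^*$, $j^*$ and $R$ from such data reduces to a maximization over the sampled reservoir densities; a schematic example is given below. The nine current values per system are invented numbers with the qualitative shapes described above (REF maximal at an intermediate $\rho$, MTC maximal at $\rho=1$) and are not simulation results.

```python
import numpy as np

# Schematic determination of rho*, j* and the ratio R from a scan over the
# reservoir density.  The current values are illustrative placeholders only.
rho_grid = np.linspace(0.2, 1.0, 9)
j_ref = np.array([0.014, 0.020, 0.024, 0.026, 0.027, 0.026, 0.023, 0.018, 0.011])
j_mtc = np.array([0.011, 0.016, 0.020, 0.023, 0.026, 0.028, 0.030, 0.031, 0.032])

def maximal_current(rho, j):
    """Return (rho*, j*): the sampled reservoir density that maximizes the output."""
    k = int(np.argmax(j))
    return rho[k], j[k]

rho_ref_star, j_ref_star = maximal_current(rho_grid, j_ref)
rho_mtc_star, j_mtc_star = maximal_current(rho_grid, j_mtc)

R = j_mtc_star / j_ref_star            # efficiency ratio R(c, N, L)
print(rho_ref_star, rho_mtc_star, R)   # here: 0.6, 1.0 and R ~ 1.19
```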
Even though more efficient than a REF system, a MTC grain with large $N$ would not operate under practically relevant conditions. ![Left: Ratio $R(c)$ for different number of channels $N$ and $L=2$. Right: Maximal ratio $R^*$ for different $L$[]{data-label="RMTC"}](Rmodel1l2.eps "fig:"){width="5cm"} ![Left: Ratio $R(c)$ for different number of channels $N$ and $L=2$. Right: Maximal ratio $R^*$ for different $L$[]{data-label="RMTC"}](RmaxMTC.eps "fig:"){width="5cm"} As expected from previous results [@Brau03; @Brza04] the MTC effect is seen to increase with increasing $L$, even though the measure $R$ used here is different (Fig. \[increasingL\]). This follows from theoretical studies of single-file systems. The mean traveling time of a product molecule through a channel of length $L$ is proportional to $L^2$ in the MTC case as in ordinary diffusion, but proportional to $L^3$ in the REF case due to mutual blockage. Hence the current is proportional to $1/L$ in a MTC system, but proportional to $1/L^2$ in a REF system. This holds for all values of the parameters and hence for sufficiently large $L$ the MTC system becomes more efficient. ![Dependence of the output ratio on the channel length $L$.[]{data-label="increasingL"}](Rmodel1.eps){width="7cm"} In order to understand the strong decrease of $R$ for large reactivities and large number of channels it is instructive to study the stationary density profiles. Fig. \[profiles\] shows the $B$ particle densities for a REF system (left) and MTC system (right) with $N=L=9$ and large reactivity $c\to\infty$. Theoretical investigations of MTC systems in the large reactivity case [@Brza04] show that in the state of maximal current ($\rho=1$) the output of $B$ particles is independent of the number of channels. For large $L$ the maximal output current becomes $j^*_{c\to\infty} \simeq \frac{4D}{L}$ where $D$ is the diffusion constant. A nonvanishing current of $B$ particles can be observed only at the four corners of the lattice (Fig. \[max\] right). For fixed moderate $c$ this extreme situation is not realized, but nevertheless with increasing $N$ one expects that the bulk gets increasingly depleted, since in each layer a fraction of $A$ particles is converted into $B$ particles. Thus the total $A$-density in each layer may be described by the form $$\begin{aligned} \frac{d}{dx} N_A(x) = - \gamma N_A(x)\end{aligned}$$ The coarse-grained ansatz for the number $N_A(x)$ of $A$-particles in layer $x$ predicts an exponential decrease of the $A$ density, leaving only an active boundary layer of finite thickness $$\begin{aligned} \xi = 1/\gamma \propto 1/c\end{aligned}$$ at the top and bottom respectively of the (in our simulation two-dimensional) grain. Hence, as a function of $N$, $j^\ast_{MTC}$ saturates at some constant $$\begin{aligned} \lim_{N \to\infty} j^\ast_{MTC}(c,N,L)= C^\ast_{MTC}(c,L).\end{aligned}$$ On the other hand, in the REF system the output current scales linearly with increasing $N$ for all, even large, $c$ (Fig. \[max\] left). This is because even though again the bulk depletes with increasing $N$ the active boundary layer is a surface scaling linearly with $N$. Thus $$\begin{aligned} \lim_{N \to\infty} j^\ast_{REF}(c,N,L)= N C^\ast_{REF}(c,L)\end{aligned}$$ Hence $$\begin{aligned} R(c,N,L) \propto 1/N\end{aligned}$$ and the MTC effect vanishes at some $N$ for fixed reactivity $c$ and channel length $L$. Looking on a wider range of reactivities (Fig. 
\[currents\]) we notice that the maximal currents for MTC systems saturate for some intermediate $c$ whereas in the REF case the current reaches a plateau for large reactivities. This observation can be rationalized by noticing that an increase of the output with increasing $c$ is limited by the incoming current of available $A$ particles. Since in the REF system $A$ and $B$ particles block each other an increasing current of $B$-particles always restricts the number of $A$ particles diffusing in. Hence the saturation due to high reactivity sets in only for larger values of $c$ than in the MTC system. ![Profiles of the REF (left) and MTC (right) system in the large-reactivity case. $N=L=9$[]{data-label="profiles"}](profileREFB.eps "fig:"){width="5cm"} ![Profiles of the REF (left) and MTC (right) system in the large-reactivity case. $N=L=9$[]{data-label="profiles"}](profileMTCBrho1.eps "fig:"){width="5cm"} ![Active channel segments in the large-reactivity case. REF system (left), MTC system (right)[]{data-label="max"}](refmaxbw.ps "fig:"){width="6cm"} ![Active channel segments in the large-reactivity case. REF system (left), MTC system (right)[]{data-label="max"}](mtcmaxbw.ps "fig:"){width="6cm"} ![Maximal currents for the REF system (left) and the MTC system (right). $L=2$ and different $N$.[]{data-label="currents"}](jmodel0l2.eps "fig:"){width="5cm"} ![Maximal currents for the REF system (left) and the MTC system (right). $L=2$ and different $N$.[]{data-label="currents"}](jmodel1l2.eps "fig:"){width="5cm"} REF with small reactivities =========================== The arguments put forward above for explaining qualitative and quantitative features of the MTC effect at intermediate and large reactivities $c$ fail for small reactivities, i.e., when $c$ is of the order of the inverse mean intracrystalline residence time. We first consider the reference system. In this case the grain contains only a very small number of $B$ particles at any time. In this low-concentration regime the diffusion of $B$ particles inside the grain may be described by a linear diffusion equation with an effective diffusion coefficient determined by the interaction with $A$ particles as follows. The $A$ particles are considered as a medium with constant equilibrium density $\rho$ throughout the system. At the intersections $B$ particles are created randomly with effective rate $c\rho$. We then describe the diffusion of a $B$ particle in the channel between intersections by a random walk from intersection to intersection with an effective diffusion constant $D_{eff}$ given by the (inverse) mean travelling time between intersections. Let $\rho_{(x,y)}$ be the $B$ particle density at the intersection denoted by $(x,y)$ and $\Delta$ a discrete two-dimensional Laplace operator $$\begin{aligned} \label{laplace} \Delta\rho_{(x,y)}=\frac{1}{4}\left( \rho_{(x+1,y)}+\rho_{(x-1,y)}+ \rho_{(x,y+1)}+\rho_{(x,y-1)}-4\rho_{(x,y)}\right).\end{aligned}$$ Here the lattice unit is given by the channel length rather than the pore size inside a channel. For the $B$ particle density at intersections we thus obtain $$\begin{aligned} \label{diffeq}\frac{\partial}{\partial t}\rho_{(x,y)}=D_{eff}\Delta\rho_{(x,y)}+c\rho.\end{aligned}$$ A stationary solution to can be found by use of the discrete sine transform $\tilde{\rho}_{(q,p)}=\sum_{x=1}^N\sum_{y=1}^N \rho_{(x,y)} \sin\frac{q\pi x}{N+1}\sin\frac{p\pi y}{N+1}$. We express (with vanishing time derivative) in terms of the transformed density $\tilde{\rho}_{(q,p)}$. 
Taking into account the boundary conditions $\rho_{0,y}=\rho_{x,0}=\rho_{x,N+1}=\rho_{N+1,y}=0$ with $0\le x,y \le N+1$ we find $$\begin{aligned} \label{diffeqtrns} \tilde{\rho}_{(q,p)}=\frac{2c\rho_A} {\cos\frac{q\pi}{N+1}+\cos\frac{p\pi}{N+1}-2} \sum_{n=1}^N\sum_{m=1}^N\sin\frac{q\pi n}{N+1}\sin\frac{p\pi m}{N+1}.\end{aligned}$$ The non zero contributions of the double sum can be expressed as a product of two Cotangents. Transforming back finally yields $$\begin{aligned} \label{REFsolution} {\rho}_{(x,y)}&=\lambda\sum_{n=1}^N\sum_{m=1}^N \frac{B_{(n,m)}}{\cos\frac{n\pi}{N+1}+\cos\frac{m\pi}{N+1}-2} \sin\frac{n\pi x}{N+1}sin\frac{n\pi y}{N+1}\\ B_{(n,m)}&= \begin{cases} 0 & \text{if $n$ or $m$ even} \\ \frac{1}{(N+1)^2}\cot\frac{m\pi}{2(N+1)}\cot\frac{n\pi}{2(N+1)} & \text{else} \end{cases}\end{aligned}$$ $D_{eff}$ together with the reactivity and reservoir density, is absorbed into a fitting parameter $\lambda \sim \frac{c\rho}{D_{eff}}$. With ${\rho}^{Sim}_{(x,y)}$ being the $B$ particle densities obtained from simulations and ${\rho}^{Th}_{(x,y)}$ the theoretical densities, we define the homogenized mean square deviation $$\begin{aligned} \label{quality} Q:=\frac{2}{N}\sqrt{\sum_\text{intersections} \left(\frac{{\rho}^{Sim}_{(x,y)}-{\rho}^{Th}_{(x,y)}}{{\rho}^{Sim}_{(x,y)}+{\rho}^{Th}_{(x,y)}}\right)^2}\end{aligned}$$ as a measure for the quality of the approximation. The sum runs over all intersections, with the local deviation normalized by the local mean. This assures that all intersections contribute with their proper weight. For large $c$ the profile flattens (see Fig. \[profiles\]) and is not very well described by . Fig. \[csmall1\] shows $Q$ as a function of $c$. The boundary density has been chosen to ${\rho}=0.5$. For small reactivies the collapse of the simulated and calculated profile is fairly good ($Q\approx 0.1$). However, the best collapse occurs for small intermediate reactivities. This somewhat peculiar behavior can be explained by assuming that this random walk is not a Markov process as implied in the derivation of . The structural change in the occupation of the channel segment, once a particle covered the distance between two adjacent intersections, implies a “memory” effect. Hence we cannot assume perfect statistical independence of subsequent random traveling times between intersections which implies that the Markov assumption is not very well satisfied. As we increase the reactivity more than one B particle may be present in the system. This corresponds to an ensemble average over almost independent random walkers which cancels the time correlations of each individual walker and thus improves the validity of the Markov assumption. This leads to the good data collapse for some intermediate reactivities. As $c$ increases further both the assumption of equilibrium of $A$ particles and the low-concentration approximation (\[diffeq\]) for $B$ particles fail.\ ![Collapse of the simulated and calculated profiles for REF systems. Left: $N=5$, $\rho=0.5$ and different $L$. (right) $N=5$, $L=1$ and different $\rho$.[]{data-label="csmall1"}](REFcsmalln5.eps "fig:"){width="5cm"} ![Collapse of the simulated and calculated profiles for REF systems. Left: $N=5$, $\rho=0.5$ and different $L$. (right) $N=5$, $L=1$ and different $\rho$.[]{data-label="csmall1"}](REFcsmalln5rho0.5.eps "fig:"){width="5cm"} MTC with small reactivities =========================== Fig. \[MTCprofile\] shows the stationary density profiles of a MTC system with small reactivity. 
We first note that the output current is proportional to the number $N$ of $\beta$ exit channels, in agreement with the observation that $R_{max}$ approaches a constant for large $N$, rather than decaying proportionally to $1/N$ as $R$ for fixed reactivity does. Moreover, due to rare transition events all $\alpha$-channels are almost in equilibrium with the reservoir (Fig. \[MTCprofile\] left). Thus it is sufficient to single out only one $\beta$ channel. ![Profiles for a MTC system $N=L=5$ and small reactivity $c=0.005$. (left) densities of A and (right) of B particles. $\rho=0.3$[]{data-label="MTCprofile"}](profileMTCAsmallc.eps "fig:"){width="5cm"} ![Profiles for a MTC system $N=L=5$ and small reactivity $c=0.005$. (left) densities of A and (right) of B particles. $\rho=0.3$[]{data-label="MTCprofile"}](profileMTCBsmallc.eps "fig:"){width="5cm"} We adapt the approach we took for the REF system to the present case. Due to the low concentration we use a linear diffusion equation for the $B$-density inside a $\beta$-channel. There are, however, two essential differences. (1) Because of the absence of $A$-particles inside the channel segments we do not consider the occupation of the intersections alone, but track the motion of $B$ particles inside a channel. (2) Due to the inequivalence of the sites inside the $\beta$-channels and the intersection points, where also $A$ may be located, we need to describe a random walker with space-dependent hopping rates. Intersections, on the one hand, serve as sites of $B$ particle creation with a rate proportional to the reactivity. On the other hand, intersections are occupied by $A$ particles with probability $\rho$ and hence block $B$ particles. This leads to different hopping rates onto and from an intersection, as displayed in Fig. \[MTCbeta\]. ![$\beta$ channel with hopping rates. $L=4$[]{data-label="MTCbeta"}](MTCbeta.eps){width="12cm"} The master equation description leads to a set of equations for the local $B$ particle densities ${\left\langle}n_x {\right\rangle}$. $$\begin{aligned} \label{densities} \frac{d}{dt}{\left\langle}n_x {\right\rangle}&= \begin{cases} D\left( {\left\langle}n_{x-1} {\right\rangle}+ {\left\langle}n_{x+1} {\right\rangle}- {\left\langle}n_{x} {\right\rangle}- (1-\rho){\left\langle}n_{x} {\right\rangle}\right) &x=(L+1)r\pm1\\ D\left( 1-\rho \right)\left( {\left\langle}n_{x-1} {\right\rangle}+ {\left\langle}n_{x+1} {\right\rangle}\right)-2D{\left\langle}n_{x} {\right\rangle}+ c\rho &x=(L+1)r\\ D\left( {\left\langle}n_{x-1} {\right\rangle}+ {\left\langle}n_{x+1} {\right\rangle}- 2{\left\langle}n_{x} {\right\rangle}\right) &\text{else}\\ \end{cases}\end{aligned}$$ Here $x$ is the lattice position inside a channel in units of the pore size. In the stationary state the left-hand side vanishes, and a recurrence relation for the $B$-particle intersection densities $\rho_r\equiv{\left\langle}n_{(L+1)r}{\right\rangle}$ follows. $$\begin{aligned} \label{MTCrecurence} \rho_{r}&=\frac{1}{2}\left(\rho_{r+1}+\rho_{r-1}+K \right)\\ K&=\frac{c\rho}{D}\left[ \left(L+1\right) - \left( L-1 \right)\rho \right]\end{aligned}$$ The inhomogeneity $K$ depends on the segment length $L$ and the transition rates. A solution satisfying this recurrence and the boundary condition $\rho_0=\rho_{N+1}=0$ can be obtained by use of a quadratic ansatz.
We find $$\begin{aligned} \label{MTCsolution} \rho_r\equiv{\left\langle}n_{(L+1)r} {\right\rangle}=\frac{1}{2}rK\left[N+1-r\right]\end{aligned}$$ Due to the modified hopping rates in the neighborhood of the intersections the density profile is linear between ${\left\langle}n_{(L+1)r-1}{\right\rangle}$ and ${\left\langle}n_{(L+1)r-L}{\right\rangle}$ rather than between the intersections $\rho_r$ and $\rho_{r-1}$ itself. This becomes noticeable for large reservoir densities $\rho$. There is a “jump” between intersections and their adjacent lattice sites. Fig. \[MTCsmalldemo\] illustrates this by means of a profile obtained from MC simulations). Let us consider a channel segment embedded by the two intersections $\rho_r$ and $\rho_{r-1}$. Solving for the lattice sites located next to the intersections leads to $$\begin{aligned} \label{MTCnextsites} {\left\langle}n_{(L+1)r-1} {\right\rangle}&=\frac{\rho_{r-1}+\rho_r\left( L-(L-1)\rho \right)} {(L+1)-2L\rho+(L-1)\rho^2}\\ {\left\langle}n_{(L+1)r-L} {\right\rangle}&=\frac{\rho_{r}+\rho_{r-1}\left( L-(L-1)\rho \right)} {(L+1)-2L\rho+(L-1)\rho^2}\end{aligned}$$ and for the very first channel segment we find $$\begin{aligned} \label{MTCboundsegment} {\left\langle}n_{L} {\right\rangle}&= \frac{L\rho_{1}}{L+1-L\rho}.\end{aligned}$$ We have now fully determined the profile for MTC systems with small reactivity. The currents between neighboring lattice sites are proportional to the density difference $j_{(x,x+1)}\sim ({\left\langle}n_{x+1} {\right\rangle}- {\left\langle}n_x {\right\rangle})$ with a proportional constant being the actual hopping rate. ![Demonstration of “jumps” at intersections for large $\rho$. Here we choose $\rho=0.9$.[]{data-label="MTCsmalldemo"}](profileMTCBdemo.eps){width="8cm"} Fig. \[MTCprofiledemo\] shows the simulated profile of a $\beta$ channel together with the theoretical densities. We chose an intermediate reservoir density $\rho=0.5$. The collapse of the two curves is very good. In order to study the quality of the approximation for different sets of parameter we use the definition above. The normalized mean square deviation $Q$ is plotted for different $L$ and different $\rho$ (Fig. \[MTCparameters\]). In the regime of interest, i.e. for small reactivities, the profile is well described by the theory even for small $L$ and rather high reservoir densities. ![Densities of a $\beta$ channel. Small open circles connected with the dotted line show the theoretical densities. Solid circles are densities obtained by MC simulation ($N=L=5$, $c=0.0005$, $\rho=0.5$) .[]{data-label="MTCprofiledemo"}](MTCsmallcdemo.eps){width="8cm"} ![Collapse of the simulated and calculated profiles for MTC systems. Left: $N=5$, $\rho=0.5$ and different $L$. Right: $N=5$, $L=5$ and different $\rho$.[]{data-label="MTCparameters"}](MTCcsmallrho0.5.eps "fig:"){width="6cm"} ![Collapse of the simulated and calculated profiles for MTC systems. Left: $N=5$, $\rho=0.5$ and different $L$. Right: $N=5$, $L=5$ and different $\rho$.[]{data-label="MTCparameters"}](MTCcsmall.eps "fig:"){width="6cm"} Conclusion ========== Our simulations and analytical results describe the MTC effect quantitatively over a wide range of parameters within the NBK model. Moreover, trends which are independent of model details have been identified by analytical calculations and lead to a more positive result than concluded by [@Brau03] where no MTC effect at all was reported for short interconnecting channels $L=1$. 
For reasonable reactivities and channel lengths the MTC effect vanishes proportionally to $1/N$, i.e., is inversely proportional to the grain diameter. This was shown explicitly for a two-dimensional simulation model, but the reasoning that led to this conclusion extends straightforwardly to a three-dimensional system. Nevertheless, for optimized external process parameters the NBK model exhibits an enhancement of the effective reactivity of up to approx. $30\%$ for small grains and any (even short) channel length and reactivity $c$. In the present study we have not taken into consideration that smaller molecules may diffuse into larger channels, not only the smaller channels. This was done in order to keep the model simple so that the origin of the observed MTC effect becomes as transparent as possible. We did make simulations for a different model where this mechanism is taken into account. We have found in these simulations that the MTC effect becomes stronger. In summary, our investigations suggest that MTC may enhance significantly the effective reactivity in zeolitic nanoparticles with suitable binary channel systems and thus may be of practical relevance in applications. [10]{} Ch. Baerlocher, W. M. Meier and D. H Olson, Atlas of Zeolite Structure Types, Elsevier: London 2001. J. Kärger and D.M. Ruthven, Diffusion in Zeolites and Other Microporous Solids, Wiley: New York 1992. E.G. Derouane and Z. Gabelica, J. Catal. 65 (1980) 486. E.G. Derouane, Appl. Catal. A, N2 (1994) 115. R. Q. Snurr and J. Kärger, Phys. Chem. B 101 (1997) 6469. L. A. Clark,G. T. Ye, R. Q. Snurr, Phys. Rev. Lett. 84 (2000) 2893. L. A. Clark,G. T. Ye,A. Gupta,L. L. Hall,R. Q. Snurr, J. Chem. Phys. 111 (1999) 1209. M. Heuchel, R.Q. Snurr, E. Buss, Langmuir 13 (1997) 6249. N. Neugebauer, P. Bräuer, J. Kärger, J. Catal. 194 (2000) 1. J. Kärger, P. Bräuer, H. Pfeifer, Z. Phys. Chem. 104 (2000) 1707. J. Kärger, P. Bräuer, A. Neugebauer, Europhys. Lett. 53 (2001) 8. P. Bräuer, A. Brzank,J. Kärger, J. Phys. Chem. B 107 (2003) 1821. A. Brzank, G. M. Schütz, P. Bräuer, and J. Kärger, Phys. Rev. E 69 (2004) 031102. J.S. Beck, J.C. Vartulli, W.J. Roth, M.E. Leonowice, C.T. Kresge, K.D. Schmitt, C. T-W. Chu, D.H. Olsen, E.W. Sheppard, S.B. McCullen, J.B. Higgins, J.L. Schlenker, J. Am. Chem. Soc. 114 (1992) 10834. J.H. Sun, Z. Shan, Th. Maschmeyer, M.-O. Coppens, Langmuir 19 (2003) 8395. F. Spitzer, Adv. Math. 5 (1970) 246. H. Spohn, J. Phys. A 16 (1983) 4275. H. van Beijeren, K.W. Kehr, R. Kutner, Phys. Rev. B 28 (1983) 5711. G. Schütz, S. Sandow, Phys. Rev. E 49 (1994) 2726. T.M. Liggett: [*Stochastic Interacting Systems: Contact, Voter and Exclusion Processes*]{} (Springer, Berlin, 1999). G.M. Schütz, in: [*Phase Transitions and Critical Phenomena*]{}, C. Domb and J. Lebowitz (eds.), (Academic, London, 2001). C. Rödenbeck, J. Kärger, J. Chem. Phys. 110 (1999) 3970.
{ "pile_set_name": "ArXiv" }
--- abstract: 'The engineered spin structures recently built and measured in scanning tunneling microscope experiments are calculated using density functional theory. By determining the precise local structure around the surface impurities, we find the Mn atoms can form molecular structures with the binding surface, behaving like surface molecular magnets. The spin structures are confirmed to be antiferromagnetic, and the exchange couplings are calculated within $8\%$ of the experimental values simply by collinear-spin GGA+U calculations. We can also explain why the exchange couplings significantly change with different impurity binding sites from the determined local structure. The bond polarity is studied by calculating the atomic charges with and without the Mn adatoms. In addition, we study a second adatom, Co. We study the surface Kondo effect of Co by calculating the surrounding local density of states and the on-site Coulomb $U$, and compare and contrast the behavior of Co and Mn. Finally, our calculations confirm that the Mn and Co spins of these structures are 5/2 and 3/2 respectively, as also measured indirectly by STM.' author: - 'Chiung-Yuan Lin' - 'B. A. Jones' title: 'First-principles Calculations of Engineered Surface Spin Structures' --- Introduction ============ Assembling and manipulating a few spins (1$\sim$20) is essential for the development of nano magnetic devices. During the past decades, chemists have been able to synthesize molecular magnets that carry giant molecular spins. Potential applications of molecular magnets have been extensively proposed in the literature [@MM], such as magnetic storage bits, quantum computation, and magnetooptical switches. The atoms within a molecular magnet form chemical bonds with each other, and therefore are very difficult to manipulate. Instead of assembling atomic spins chemically to form isolated molecules, the advance of manipulating atoms on surfaces by scanning tunnelling microscope (STM) has made it possible to make, probe, and manipulate individual atomic spins. In a pioneering experiment, Hirjibehedin et al. [@MnScience] carried out low-temperature STM measurements of atomic chains of up to 10 Mn atoms. These magnetic chains are assembled by atomic manipulation on copper nitride islands that provide an insulating monolayer between the chains and a Cu(100) substrate (to be called CuN surface later in this paper). Ref. 2 shows the calculation of exchange coupling $J$ using the Heisenberg Hamiltonian to be successful. It demonstrated that the exchange coupling $J$ can be tuned by placing the magnetic atoms at different binding sites on the substrate. Nevertheless, the STM experiments can not provide either a detailed study of the single CuN layer or the sub-atomic spatial structures around the Mn atoms. As we will show in this work [@Mn2009PRB], the former can explain why tunnelling current and spin can both be preserved, and the later is essential for realizing the molecular magnetism of the Mn-surface complex as well as understanding how $J$ depends on the Mn binding site. Moreover, the $5/2$ spin of the Mn atoms on such a CuN surface is calculated directly here rather than indirectly extracting from inelastic-tunnelling-spectroscopy steps in the experiments. In addition to the interatomic magnetic coupling, the surface Kondo effect is also an interesting topic in engineered spin systems. 
Recent studies show that the surface Kondo effect is influenced in interesting ways either by the magnetic anisotropy of the Kondo atom itself [@CoNaturePhysics] or by coupling to a second magnetic atom [@CoPRL]. These systems both have Co as the adatom for their Kondo impurity and are built on the CuN surface that was previously used to study the coupling of Mn atoms. These experimental studies explain the surface Kondo effect under such external influences using phenomenological models, with considerable success. However, a detailed microscopic understanding of quantities such as the local density of states (LDOS) around the Co and the on-site Coulomb repulsion $U$ has not yet been achieved. Also, Ref. 4 concludes indirectly that the Co spin on this surface is $S=3/2$ by first excluding $S=1/2$ and integer $S$ from the experimental fitting and then excluding $S\geq5/2$ based on the experience that the spins of surface-adsorbed atoms are generally unchanged or reduced relative to the free atom. Yet a direct measurement or calculation was not done. In this work, we perform first-principles calculations of the clean CuN surface and of Mn and Co adatoms on this surface with structure optimization. We find, surprisingly, that when the Mn atoms are deposited on the Cu sites of the CuN surface, the nearby N atoms break bonds with their neighboring Cu and form a “quasi” molecular structure on the surface, a situation which does not happen for Mn at the N sites. This fact itself was not determined from experiment, and can only be realized from a first-principles calculation. As a comparison, we study the clean CuN surface and find that the CuN monolayer is formed by polar covalently bonded Cu and N, and such a layer is shown to provide a semi-metallic surface layer on the underlying Cu substrate, allowing the coexistence of the Mn spin and the STM current. We also accurately calculate the exchange coupling $J$ using the GGA+U method, from which we demonstrate that first-principles calculation has the capability of predicting $J$ for given physical systems. For a Co atom on the same surface, we determine the on-site Coulomb $U$, which is very important in understanding the Kondo effect. We also compare the LDOS of Co on the Cu and N sites, and explain why the Kondo effect is observed in the experiments on the Cu site but not on the N site. Finally, we determine, by analyzing the Co density of states, a Co spin that matches what was measured indirectly from STM experiments [@CoNaturePhysics]. Theory ====== The CuN monolayer between the magnetic atoms and the Cu substrate originates from the idea of preserving the atomic spins from being screened by the underlying conduction electrons, while at the same time allowing enough tunneling current from an STM tip to probe the spin excitations. To understand this further in a microscopic picture, we simulate both the Cu(100) and CuN surfaces by a supercell of 7-layer slabs separated by 8 vacuum layers, where for the CuN surface, each slab has CuN monolayers on both sides and three Cu layers in between. The electronic structure is calculated, in the framework of density functional theory, using the all-electron full-potential linearized augmented plane wave (FLAPW) method [@win2k] with the exchange-correlation potential in the generalized gradient approximation (GGA) [@PBE96]. We calculate the LDOS of both the Cu(100) and CuN surfaces at the Fermi energy as a function of position along the $z$ direction through the surface Cu atom. As seen from Fig.
\[fig-CuN-wfn\], the LDOS of the clean Cu(100) surface has a much longer tail into vacuum than the CuN surface. The calculated work functions are 4.6 and 5.2 eV respectively, a difference of 0.6 eV, much smaller than a typical bulk insulator, which has a work function $>\!\!\!\!\!\!\!_{_{_{\textstyle \sim}}}\,3$ eV more than copper. This shows that the CuN monolayer provides the Cu substrate a moderate conduction that makes possible the coexistence of the atomic spin and STM current. ![\[fig-CuN-wfn\] LDOS($\epsilon_{F}$) along the out-of-surface direction with the surface Cu atom as the origin, for both the clean Cu(100) (orange) and CuN (green) surfaces. (The vacuum corresponds to positive values of $z$.)](LDOS-Cu-vs-CuN.ps){width="6.5cm"} ![\[Mn2-unitcell\] Unit cell of a Mn dimer on the CuN surface.](Mn2unitcell.ps){width="8cm"} To calculate the electronic structures of Mn(Co) on the CuN surface, we simulate the single magnetic atom on this surface by a supercell of 5-layer slabs similar to the one for CuN surface with the Mn(Co) atoms placed on top of the CuN surface at $7.24 \AA$ separation. The crystal structure is optimized until the maximum force among all the atoms reduces to $<\!\!\!\!\!\!\!_{_{_{\textstyle \sim}}}\,2$ mRy$/a_{0}$. The $3d$ orbital can in general have strong Coulomb repulsion $U$ that can not be taken into account by GGA. Using a constraint-GGA method [@Novak], we obtain the $U$ value of a single Mn at the Cu site of the CuN surface to be 4.9 eV, and 3.9 eV at the N site. Since the calculated Mn $U$’s fall in the range of strong correlation, they are then used in the GGA+U calculation [@Anisimov] for Mn $3d$. To calculate a Mn dimer on the CuN surface, we simulate the system by the same slab setup as the single Mn except that the Mn atoms on the surface are arranged as in Fig. \[Mn2-unitcell\]. The electronic structure is also calculated using GGA+U with $U$ on the Mn $3d$ orbitals. For Co on the Cu site, we also apply the constraint-GGA method and obtain $U=0.8$ eV. We then calculate this system by GGA with no additional $U$. In fact, since the experiments show such a Co adatom exhibits the Kondo effect, it does not make sense to apply the $U$ statically in a dynamical process (Kondo). Results and Discussion ====================== ![\[fig-CuN-rho\] Electron density contour of the CuN surface along the N raw and the out-of-plane direction.](CuN-charge-density.ps){width="8cm"} ![\[fig-MnCuN-rho\] Electron density contour of a single Mn on the CuN surface along the N-Mn-N raw and the out-of-plane direction.](MnCuN-charge-density.ps){width="8cm"} ![\[fig-MnCuN-dos\] Mn $d$-projected density of states of a single Mn on the CuN surface (the leftmost curve (blue in color) for spin up and the rightmost curve (pink) for spin down).](DOS_Mn_d.ps){width="6.5cm"} ----------------------- ------------------ ------------------ Exchange coupling $J$ Cu-site Mn dimer Cu-site Mn dimer (meV) GGA$+$U $6.50\pm0.05$ $2.5$ (calculated $U$) ($U=4.9$eV) ($U=3.9$eV) STM $6.2\pm0.3$ $2.7$ GGA $18.5$ $-1.8$ GGA+U 5.4 5.1 (calculated $U$+1eV) ----------------------- ------------------ ------------------ : \[J\] Calculated exchange coupling $J$ at different $U$ values, compared with the STM measurements. To see the effect on the surface of the presence of Mn atoms, we plot the electron density of the clean CuN surface in Fig. \[fig-CuN-rho\]. We find the N atoms snug in between the surface Cu atoms to form a CuN surface layer, joined by shared charge densities as well as proximity. 
The vertical distance between N and the surface Cu is only 0.26 Å, essentially collinear. The density contour shared by N and Cu indicates that a polar covalent bond is formed between Cu (metallic) and N (larger electronegativity). In fact, a Bader analysis [@Bader] on our calculated electron-density distribution shows that N and the surface Cu are $-1.2$ and $+0.6$ charged, respectively. Fig. \[fig-MnCuN-rho\] shows the electron density contour of a single Mn atom placed on the Cu atom of this surface. As one can see, the atomic structure is substantially rearranged. The Mn atom attracts its neighboring N atoms remarkably far out of the surface, forming a new polar covalent bond that replaces the CuN binding network, and the Cu atom underneath Mn moves towards the bulk. We have calculated that Mn and its neighboring N are $+1.0$ and $-1.3$ charged, respectively, indicating that the Mn-N bond has a stronger polarity than the Cu-N bond. The calculated density of states for a single Mn on the CuN surface is plotted in Fig. \[fig-MnCuN-dos\]. It is clearly seen that the Mn $3d$ majority spin states are all below the Fermi level and the minority states are all above, which implies a $3d^{5}$ configuration for Mn, i.e., a spin $S=5/2$ configuration. We also do the same analysis for Mn at the N site of the CuN surface, and this structure exhibits the same unchanged Mn spin. This verifies the same conclusion drawn from comparing spin chains of different lengths in the STM experiment [@MnScience]. We now consider the exchange coupling of a dimer of Mn. The spin excitation measured by STM [@MnScience] occurs between the antisymmetric spin ground state and the first excited state. These quantum atomic-spin states are not accessible by density-functional electronic-structure calculation. However, the collinear spin states (with parallel and antiparallel spins) of a Heisenberg spin dimer exactly correspond to the collinear magnetic-moment configurations of the real crystal system of the Mn dimer adsorbed on the CuN substrate. The parallel and antiparallel spin states have energy expectation values $\pm JS^2$ respectively. One simply takes the difference of the total energies between the parallel- and antiparallel-spin dimer on the CuN surface, and then extracts $J$ from this energy difference $\delta E$ and $S$ by the following equation, $$\delta E=JS^2-(-JS^2)=2JS^2 \label{EJS}$$ For a Mn dimer at the Cu site of a CuN surface, we obtain an exchange coupling of $J=6.4$ meV from (\[EJS\]), which shows excellent agreement with the STM-measured $J=6.2\pm0.2$ meV. In order to show that this agreement is not just a coincidence, we do the same calculation for a Mn dimer on the N site. The exchange coupling $J$ turns out to be $2.5$ meV, which is also close to the STM measurement ($J=2.7$ meV), and is roughly half of the Cu-site $J$. Thus, we have demonstrated that DFT reproduces the exchange coupling between these engineered spins, and should have the capability of predicting it in similar systems. In order to check whether it is reasonable to use the $U$ values determined by the constraint-GGA method in calculating $J$, we also calculate $J$ using other $U$ values. The resulting $J$’s are listed in Table \[J\]. We note the significant lack of agreement between the $J$ values calculated by the alternative methods and the experimental values. This strongly suggests that the constraint-GGA method can very likely be used to correctly predict the exchange couplings of similar spin systems.
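The final step, extracting $J$ from the two collinear total energies via (\[EJS\]), is elementary; the snippet below spells it out. The energy difference used there is not a number quoted in this paper — it is back-computed from the Cu-site $J$ purely for illustration.

```python
# Extracting the Heisenberg exchange J from the two collinear DFT total energies
# via Eq. (EJS): E(parallel) - E(antiparallel) = 2 J S^2.  The energy difference
# below is hypothetical (back-computed from the reported Cu-site J), not a value
# quoted in the text.
S = 2.5                      # Mn spin on the CuN surface, S = 5/2
delta_E_meV = 80.0           # E(parallel) - E(antiparallel) per dimer, in meV
J_meV = delta_E_meV / (2.0 * S ** 2)
print(J_meV)                 # 6.4 meV, i.e. the Cu-site value discussed above
```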
![\[fig-Mn2CuN-Nsite-rho\] Electron density contour of a Mn dimer at the N site of the CuN surface along the Mn-N raw and the out-of-plane direction. The white solid line shows our proposed coupling path between Mn spins. The dashed shows how corrugated the clean CuN surface becomes in the presence of Mn adatoms.](Mn2CuN-NSite-charge-density.ps){width="8cm"} ![\[fig-Mn2CuN-Cusite-rho\] Electron density contour of a Mn dimer at the Cu site of the CuN surface along the N-Mn-N raw and the out-of-plane direction. The white solid line shows our proposed coupling path between Mn spins. The dashed shows how corrugated the clean CuN surface becomes in the presence of Mn adatoms.](Mn2CuN-CuSite-charge-density.ps){width="8cm"} The electron density contour of the N-site Mn dimer in Fig. \[fig-Mn2CuN-Nsite-rho\] shows a structure completely different from Mn on the Cu site in Fig. \[fig-Mn2CuN-Cusite-rho\]. The Mn dimer on the Cu site forms a chain-like structure bridged by the significantly lifted middle N atom, while on the N site the Mn is attached to the surface like a crown. The binding structures of the Mn atoms strongly suggest that the Mn spins are coupled through the N atoms. The electron density contours indicate that the Mn dimer at the Cu site has a coupling path considerably shorter than when in the surprisingly different structure at the N site. We propose that this explains why the exchange coupling $J$ measured by STM for the Cu-site Mn dimer has a value twice that of the N-site. ![\[fig-MnCo-wfn\] LDOS($\epsilon_{F}$) along the out-of-surface direction through the adatoms Mn (the larger, solid blue circle in the upper plot) and Co (the larger, solid purple circle in the lower) on the CuN surfaces. The smaller, solid green circles are the Cu atoms underneath the adatoms. The origin is chosen at location of the surface Cu atom of the clean CuN surface, and the vacuum corresponds to positive values of $z$.](Mn-Co-LDOS.ps){width="6.5cm"} ![\[fig-CoCuN-rho\] Electron density contour of a single Co on the CuN surface along the N-Co-N raw and the out-of-plane direction.](CoCuN-charge-density.ps){width="8cm"} Co atoms on the CuN surface behave quite differently from Mn as experiments [@CoNaturePhysics; @CoPRL] show. Co displays a Kondo effect, while Mn does not. The relaxed structure via our calculation shows Co settles lower in the surface than Mn (see Fig. \[fig-MnCo-wfn\]), and so interacts more with the conduction electrons. We also compare the surface LDOS with Co and Mn as in Fig. \[fig-MnCo-wfn\], and find that there is more LDOS between Co and the Cu underneath it than for Mn. This fact can also be seen by comparing the charge contour plots of these two systems (see Fig.\[fig-MnCuN-rho\] and \[fig-CoCuN-rho\]). Such substantial LDOS near Co provides the conduction electrons needed for a Kondo effect to happen. ![\[fig-Co-spin-up\] The Co $3d$ spin-up total density of states on the CuN surface.](Co-DOS-up.ps){width="9cm"} To find the Co spin from our calculation, we plot the densities of states of the $3d$ Co on the CuN surface as in Fig. \[fig-Co-spin-up\], \[fig-Co-spin-down-z2\], and \[fig-Co-spin-down-xy\]. One clearly sees that the spin-up total density of states and the spin-down $3d_{z^2}$ and $3d_{xz}$ ones are all occupied, while the rest are majority unoccupied. This density-of-state analysis gives $S=3/2$ for Co on the CuN surface by approximating the Co $3d$ in terms of an atomic-like electron configuration of 5 spin-up and 2 spin-down electrons. 
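The spin assignment can be made explicit by integrating the spin-resolved, $d$-projected densities of states up to the Fermi level and taking $S=(n_\uparrow-n_\downarrow)/2$. The sketch below does this with a stand-in model DOS (sums of unit-area Gaussians) constructed only to reproduce the occupations read off from Figs. \[fig-Co-spin-up\]–\[fig-Co-spin-down-xy\]; with the actual calculated curves one would integrate those instead.

```python
import numpy as np

# Local moment from spin-resolved, d-projected densities of states: integrate each
# spin channel up to the Fermi level and take S = (n_up - n_down) / 2.  The Gaussian
# "DOS" below is a stand-in built only to reproduce the occupations of the figures
# (five occupied d_up states; d_z2 and d_xz occupied in the down channel).
E  = np.linspace(-8.0, 4.0, 4000)          # energy grid relative to E_F (eV)
dE = E[1] - E[0]

def model_dos(centers, width=0.3):
    """Sum of unit-area Gaussians, one per d level."""
    return sum(np.exp(-(E - c) ** 2 / (2 * width ** 2)) / (width * np.sqrt(2 * np.pi))
               for c in centers)

dos_up   = model_dos([-3.0, -2.6, -2.2, -1.8, -1.4])   # all five d_up below E_F
dos_down = model_dos([-1.2, -0.8, 1.0, 1.4, 1.8])      # two occupied, three empty

n_up   = np.sum(dos_up[E < 0.0])   * dE
n_down = np.sum(dos_down[E < 0.0]) * dE
print("S =", 0.5 * (n_up - n_down))        # ~1.5, i.e. S = 3/2
```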
Another interesting point is to compare the $U$ values of Co on Au(111) and this CuN/Cu(100) surface since Co/Au(111) [@Kondoscience] is one of the most extensively studied surface Kondo systems. The on-site Coulomb repulsion of Co on Au(111) was extracted to be $2.8$ eV from a previous first-principles calculation [@Ujsaghy]. The present study has obtained $U=0.8$ eV for Co on the CuN surface. The substantial difference of Co $U$ of the two systems can be explained in the way that Co surrounded by N is more positively charged than that on Cu(111), so adding an electron into Co on CuN is easier because a Co ion attracts an electron more strongly. ![\[fig-Co-spin-down-z2\] The Co $3d$ spin-down densities of states of the $d_{z^2}$ and $d_{x^2-y^2}$ subshells on the CuN surface.](Co-DOS-z2.ps){width="9cm"} ![\[fig-Co-spin-down-xy\] The Co $3d$ spin-down densities of states of the $d_{xy}$, $d_{xz}$, and $d_{yz}$ subshells on the CuN surface.](Co-DOS-xy.ps){width="9cm"} Conclusions =========== In summary, we have calculated the electronic structures of novelly engineered spin systems. The precise atomic charges and positions of those systems, not accessible by experimental techniques, are determined by structure relaxation and Bader analysis respectively in our calculations. The charge analysis shows that the Mn-N bond formed by Mn adsorbed on the CuN surface has stronger bond polarity than the Cu-N bond. The presence of Mn gives rise to substantial rearrangement of the atomic structure: the Mn atoms at the Cu sites perturb their surrounding atomic positions, while those at the N sites do not. The calculated $J$’s agree excellently with the STM measurements for two different Mn binding sites. Such agreement serves as a touchstone of DFT’s future predictability in similar systems, and is important in searching for a desired $J$ (e.g. large value or ferromagnetic) for device applications, with the goal of avoiding multiple experimental trials. The electronic structures of the Co atoms on the same surface is also calculated. From that we explain why Co has Kondo effect while Mn doesn’t. We also find the Co spin to be $S=3/2$, in agreement with the STM’s indirect derivation [@CoNaturePhysics]. The on-site Coulomb is calculated to be $U=0.8$ eV, much smaller than that of the popular surface Kondo system Co/Au(111), which we explain by the polarities of Co to its nearest neighbor atoms. We thank C. F. Hirjibehedin, C. P. Lutz, and A. J. Heinrich for stimulating discussions, and the technical help of the IBM Almaden Blue Gene support team. C. Y. Lin acknowledges supports from the Taiwan National Science Foundation, the Taiwan National Center for High-performance Computing, and the Taiwan National Center for Theoretical Sciences (South). [99]{} B. Barbara and L. Gunther, Physics World [**12**]{}, 35 (1999); D. Gatteschi and R. Sessoli, Angew. Chem. Int. Ed. [**42**]{}, 268 (2003); M. Verdaguer, Polyhedron [**20**]{}, 1115 (2001). C. F. Hirjibehedin, C. P. Lutz, and A. J. Heinrich, Science [**312**]{}, 1021 (2006). Some of the results of our work were previously presented in APS March Meeting 2006, which was cited by the paper: A. N. Rudenko [*et al.*]{}, Phys. Rev. B [**79**]{}, 144418 (2009). A. F. Otte [*et al.*]{}, Nature Physics, [**4**]{}, 847 (2008). A. F. Otte [*et al.*]{}, Phys. Rev. Lett. [**103**]{}, 107203 (2009). P. Blaha [*et al.*]{}, WIEN2k (Karlheinz Schwarz, Techn. Universität Wien, Austria) (1999), ISBN 3-9501031-1-2. J. P. Perdew, K. Burke, and M. Ernzerhof, Phys. Rev. Lett. 
[**77**]{}, 3865 (1996). G. K. H. Madsen and P. Novák, Europhys. Lett. [**69**]{}, 777 (2005). R. F. W. Bader, Atoms in Molecules: A Quantum Theory (Clarendon Press, Oxford, 1994). V. I. Anisimov, I. V. Solovyev, M. A. Korotin, M. T. Czyzyk, and G. A. Sawatzky, Phys. Rev. B [**48**]{}, 16929 (1993); A. I. Liechtenstein, V. I. Anisimov, J. Zaanen, Phys. Rev. B [**52**]{}, R5467 (1995). V. Madhavan, W. Chen, T. Jamneala, M. F. Crommie, and N. S. Wingreen, Science [**280**]{}, 567 (1998). O. Újsághy, J. Kroha, L. Szunyogh, and A. Zawadowski, Phys. Rev. Lett. [**85**]{}, 2557 (2000).
{ "pile_set_name": "ArXiv" }
--- abstract: | The large-distance behaviors of the random field and random anisotropy $O(N)$ models are studied with the functional renormalization group in $4-\epsilon$ dimensions. The random anisotropy Heisenberg $(N=3)$ model is found to have a phase with an infinite correlation length at low temperatures and weak disorder. The correlation function of the magnetization obeys a power law $\langle {\bf m}({\bf r}_1) {\bf m}({\bf r}_2)\rangle\sim| {\bf r}_1-{\bf r}_2|^{-0.62\epsilon}$. The magnetic susceptibility diverges at low fields as $\chi\sim H^{-1+0.15\epsilon}$. In the random field $O(N)$ model the correlation length is found to be finite at the arbitrarily weak disorder for any $N>3$. The random field case is studied with a new simple method, based on a rigorous inequality. This approach allows one to avoid the integration of the functional renormalization group equations. address: | Department of Condensed Matter Physics, Weizmann Institute of Science, 76100 Rehovot, Israel\ Landau Institute for Theoretical Physics, 142432 Chernogolovka, Moscow region, Russia author: - 'D.E. Feldman' title: 'Quasi-long-range order in the random anisotropy Heisenberg model: functional renormalization group in $4-\epsilon$ dimensions' --- Introduction ============ The effect of impurities on the order in condensed matter is interesting, since the disorder is almost inevitably present in any system. If the disorder is weak the short-range order is the same as in the pure system. However, the large-distance behavior can be strongly modified by the arbitrarily weak disorder. This happens in the systems of continuous symmetry in the presence of the random symmetry breaking field [@IM]. The first experimental example of this kind is the amorphous magnet [@HPZ; @OS]. During the last decade a lot of other related objects were found. These are liquid crystals in the porous media [@LQ], nematic elastomers [@NE], He-3 in aerogel [@He3] and vortex phases of impure superconductors [@HTSC]. The nature of the low-temperature phases of these systems is still unclear. The only reliable statement is that a long-range order is absent [@IM; @L; @P; @AW]. However, other details of the large-distance behavior are poorly understood. The neutron scattering [@Xray] reveals sharp Bragg peaks in impure superconductors at low temperatures and weak external magnetic fields. Since the vortices can not form a regular lattice [@L] it is tempting to assume that there is a quasi-long-range order (QLRO), that is the correlation length is infinite and correlation functions depend on the distance slow. Recent theoretical [@RFXY; @12a] and numerical [@numXY] studies of the random field XY model, which is the simplest model of the vortex system in the impure superconductor [@HTSC], support this picture. The theoretical advances [@RFXY; @12a] are afforded by two new technical approaches: the functional renormalization group [@FRG] and the replica variational method [@MP]. These methods are free from drawbacks of the standard renormalization group and give reasonable results. The variational method regards the possibility of spontaneous replica symmetry breaking and treats the fluctuations approximately. On the other hand the functional renormalization group provides a subtle analysis of the fluctuations about the replica symmetrical ground state. Surprisingly, the methods give close and sometimes even the same results. 
Both techniques were originally suggested for the random manifolds [@FRG; @MP] and then allowed to obtain information about some other disordered systems with the Abelian symmetry [@RFXY; @12a; @F; @EN; @RT]. Less is known about the non-Abelian systems. The simplest of them are the random field (RF) [@IM] and random anisotropy (RA) [@HPZ] Heisenberg models. The latter was introduced as a model of the amorphous magnet [@HPZ; @OS]. In spite of a long discussion, initiated by Ref. [@AP], the question of QLRO in these models is still open. There is an experimental evidence in favor of no QLRO [@B]. On the other hand recent numerical simulations [@num] support the possibility of QLRO in these systems. The only theoretical approach, developed up to now, is based on the spherical approximation [@P; @spher; @G] and predicts the absence of QLRO at $N\gg 1$ magnetization components. However, there is no reason for this approximation to be valid at $N\sim 1$. In this paper we study the RF and RA $O(N)$ models in $4-\epsilon$ dimensions with the functional renormalization group. The large-distance behaviors of the systems are found to be quite different. Whereas in the RF $O(N)$ model with $N>3$ the correlation length is always finite, the RA Heisenberg $(N=3)$ model has a phase with QLRO. In that phase the correlation function of the magnetization obeys a power law and the magnetic susceptibility diverges at low fields. The paper has the following structure. In the second section the models are formulated and a qualitative discussion is given. The third section contains a derivation of the one-loop renormalization group (RG) equations. The 4th section is devoted to the RF model. The absence of QLRO in that model at $N>3$ is shown with a new method, based on a rigorous inequality. This approach simplifies tedious RG calculations and can be useful in other problems. The RA case is considered in the 5th section. The stable RG fixed point corresponds to QLRO in $4-\epsilon$ dimensions at $N\sim 1$. In particular, at the weak disorder the correlation length is infinite in the low temperature phases of the RA XY ($N=2$) and Heisenberg ($N=3$) models. However, QLRO is absent at $N\ge 10$. In 4 dimensions the correlation functions of the RA Heisenberg model depend on the distance logarithmically. The exact result for the two-spin correlator is given by the $\ln^{-0.62}R$ law. The Conclusion contains a discussion of the results. Appendix A is devoted to a generalization of the Schwartz-Soffer inequality [@SwSo]. The generalized inequality is applied to the stability analyses of the RG fixed points. Appendix B describes a simple Migdal-Kadanoff renormalization group approach that reproduces qualitatively the results of the rigorous method. This approximation provides good estimations of the critical exponents of the RA XY and Heisenberg models. Appendix C includes some technical details of the functional RG in the spherical model. Model ===== To describe the large-distance behavior at low temperatures we use the classical nonlinear $\sigma$-model with the Hamiltonian \[1\] H=d\^D x\[J\_\_([**x**]{}) \_([**x**]{}) + V\_[imp]{}([**x**]{})\], where ${\bf n}({\bf x})$ is the unit vector of the magnetization, $V_{\rm imp}({\bf x})$ the random potential. In the RF case it has the form \[2\] V\_[imp]{}=-\_h\_([**x**]{})n\_([**x**]{}); =1,...,N, where the random field ${\bf h}({\bf x})$ has a Gaussian distribution and $\langle h_\alpha({\bf x})h_\beta({\bf x}')\rangle=A^2\delta({\bf x}-{\bf x}')\delta_{\alpha\beta}$. 
In the RA case the random potential is given by the equation $$\label{3} V_{\rm imp}=-\sum_{\alpha,\beta}\tau_{\alpha\beta}({\bf x})n_\alpha({\bf x})n_\beta({\bf x}); \qquad \alpha,\beta=1,\ldots,N,$$ where $\tau_{\alpha\beta}({\bf x})$ is a Gaussian random variable, $\langle\tau_{\alpha\beta}({\bf x})\tau_{\gamma\delta}({\bf x}')\rangle=A^2\delta_{\alpha\gamma}\delta_{\beta\delta}\delta({\bf x}-{\bf x}')$. The random potential (\[3\]) has the same symmetry as the more conventional choice $V_{\rm imp}=-({\bf hn})^2$ but is more convenient for the further discussion. We assume that the temperature is low and the thermal fluctuations are negligible. The Imry-Ma argument [@IM; @P] suggests that in our problem the long-range order is absent at any dimension $D<4$. One can estimate the Larkin length, up to which there are strong ferromagnetic correlations, with the following qualitative RG approach. One removes the fast modes and rewrites the Hamiltonian in terms of block spins corresponding to the scale $L=ba$, where $a$ is the ultraviolet cut-off and $b>1$, and then rescales so that the Hamiltonian recovers its initial form with new constants $A(L), J(L)$. Dimensional analysis provides the estimates $$\label{4} J(L)\sim b^{D-2} J(a); \qquad A(L)\sim b^{D/2}A(a).$$ To estimate the typical angle $\phi$ between neighboring block spins, one notes that the effective field acting on each spin has two contributions: the exchange contribution and the random one. The exchange contribution of order $J(L)$ is oriented along the local average direction of the magnetization. The random contribution of order $A(L)$ may have any direction. This allows one to write at low temperatures that $\phi(L)\sim A(L)/J(L)$. The Larkin length corresponds to the condition $\phi(L)\sim 1$ and equals $L\sim (J/A)^{2/(4-D)}$, in agreement with the Imry-Ma argument [@IM]. If Eq. (\[4\]) were exact the Larkin length could be interpreted as the correlation length. However, there are two sources of corrections to Eq. (\[4\]). Both of them already appear in the derivation of the RG equation for the pure system in $2+\epsilon$ dimensions [@Pol]. The first source is the renormalization due to the interaction, and the second results from the rescaling of the magnetization which is necessary to ensure the fixed length condition ${\bf n}^2=1$. The leading corrections to Eq. (\[4\]) are proportional to $\phi^2 J, \phi^2 A$. Thus, the RG equation for the combination $(A(L)/J(L))^2$ reads $$\label{6} \frac{d}{d\ln L}\left(\frac{A(L)}{J(L)}\right)^2=\epsilon\left(\frac{A(L)}{J(L)}\right)^2+ c\left(\frac{A(L)}{J(L)}\right)^4, \qquad \epsilon=4-D.$$ If the constant $c$ in Eq. (\[6\]) is positive the Larkin length is indeed the correlation length. But if $c<0$ the RG equation has a fixed point, corresponding to a phase with an infinite correlation length. As seen below, both situations are possible, depending on the system. The large-distance behaviors of the RF and RA $O(N)$ models are known in two limiting cases: $N=2$ and $N=\infty$. In the spherical limit ($N=\infty$) QLRO is absent (Appendix C, [@spher]), while the XY model possesses QLRO [@RFXY; @12a; @prim]. Hence, the ordering disappears at some critical number $N_c$ of magnetization components. This critical number is larger in the RA model, since the fluctuations of the magnetization are stronger in the RF case. Indeed, in the RF model the magnetization tends to be oriented along the random field, whereas in the RA case there are two preferable local magnetization directions, so that the spins tend to lie in the same semispace. RG equations ============ In the previous section the RG equations were discussed from the qualitative point of view. Eq.
(\[6\]) corresponds to the Migdal-Kadanoff approach of Appendix B. In the present section we derive the RG equations in a systematic way. The one-loop RG equations for the $N$-component RF and RA models in $4+\epsilon$ dimensions were already derived in Ref. [@DF]. We can directly use that result, since the RG equations in dimensions $D<4$ can be obtained by just changing the sign of $\epsilon$. However, the approach [@DF] is cumbersome and we provide below a simpler derivation. We use the method, suggested by Polyakov [@Pol] for the pure system in $2+\epsilon$ dimensions. This method is technically simpler and closer to the intuition than the other approaches. A disadvantage of the method is the difficulty of the generalization for the higher orders in $\epsilon$. This generalization requires the field-theoretical approach [@ZJ]. The same consideration as in the XY [@12a] and random manifold [@FRG] models suggests that near a zero-temperature fixed point in $4-\epsilon$ dimensions there is an infinite set of relevant operators. Let us show that after the replica averaging the relevant part of the effective replica Hamiltonian can be represented in the form \[7\] H\_R=d\^D x\[\_a\_\_\_a\_\_a - \_[ab]{}\], where $a,b$ are replica indices, $R(z)$ is some function, $T$ the temperature. We ascribe to the field ${\bf n}$ the scaling dimension $0$. We also assume that the coefficient before the gradient term in (\[7\]) is $1/(2T)$ at any scale. Then in the $(4-\epsilon)$-dimensional space the scaling dimension of the temperature $\Delta_T=-2+O(\epsilon)$. Any operator $A$ containing $m$ different replica indices is proportional [@FRG] to $1/T^m$. Hence, the scaling dimension $\Delta_A$ of the operator $A$ satisfies the relation $\Delta_A=4-n+m\Delta_T+O(\epsilon)$, where $n$ is the number of the spatial derivatives in the operator. The relevant operators have $\Delta_A\ge 0$. Hence, the relevant operators can not contain more than two different replica indices. A symmetry consideration shows that all the possible relevant operators are included into Eq. (\[7\]). The function $R(z)$ is arbitrary in the RF case. In the RA case $R(z)$ is even due to the symmetry with respect to changing the sign of the magnetization. The one-loop RG equations for the $N$-component model in $4-\epsilon$ dimensions are obtained by a straightforward combination of the methods of Refs. [@FRG] and [@Pol]. We express each replica ${\bf n}^a({\bf x})$ of the magnetization as a combination of fast fields $\phi_i^a({\bf x}), i=1,...,N-1$ and a slow field ${\bf n}'^a({\bf x})$ of the unit length. We use the representation \[dec\] [**n**]{}\^a([**x**]{})=[**n**]{}’\^a([**x**]{})+ \_i\_i\^a([**x**]{})[**e**]{}\_i\^a([**x**]{}), where the unit vectors ${\bf e}_i^a({\bf x})$ are perpendicular to each other and the vector ${\bf n}'^a({\bf x})$. The slow field ${\bf n}'^a$ can be chosen in different ways. For example, one can cut the system into blocks of size $L\gg a$, where $a$ is the ultra-violet cut-off. In the center ${\bf x}$ of a block the vector ${\bf n}'^a({\bf x})$ should be parallel to the total magnetization of the block. In the other points the field ${\bf n}'^a$ should be interpolated. We assume that the fluctuations of the magnetization are weak, that is $\langle\phi_i^2\rangle\ll 1$. Then the fluctuations of the field ${\bf n}^a$ are orthogonal to the vector ${\bf n}'^a$ because of the fixed length constraint $({\bf n}^a)^2=1$. 
To substitute the expression (\[dec\]) into the Hamiltonian we have to differentiate the vectors ${\bf e}_i^a$. We use the following definition \[dife\] = -c\^a\_[i]{}[**n**]{}’\^a+\_k f\^a\_[, ik]{}[**e**]{}\_k\^a. It is easy to show [@Pol] that $\sum_{\mu i}(c^a_{\mu i})^2= \sum_{\mu}(\partial_{\mu}{\bf n}'^a)^2$. With the accuracy up to the second order in $\phi$ the replica Hamiltonian (\[7\]) can be represented as follows \[Hphi\] H\_[R]{}=d\^D x \[ \_[a]{} { (\_[**n**]{}’\^a)\^2 (1-(\_i\^a)\^2)+ c\^a\_[i]{}c\^a\_[k]{}\^a\_i\^a\_k + ( \_ \_i\^a - f\^a\_[, ik]{} \^a\_k)\^2 } & &\ - \_[ab]{} { R([**n**]{}’\^a[**n**]{}’\^b)+A\^[ab]{}(\^a\_i)\^2+ B\^[ab]{}\_[ij]{}\^a\_i\^a\_j + C\^[ab]{}\_[ij]{}\_i\^a\^b\_j } \], & & where the summation over the repeated indices $i,j,k,\mu$ is assumed and \[coef\] A\^[ab]{}=-([**n**]{}’\^a[**n**]{}’\^b)R’([**n**]{}’\^a[**n**]{}’\^b); B\^[ab]{}\_[ij]{}= ([**n**]{}’\^b[**e**]{}\^a\_i)([**n**]{}’\^b[**e**]{}\^a\_j) R”([**n**]{}’\^a[**n**]{}’\^b); & &\ C\^[ab]{}\_[ij]{}= ([**e**]{}\^a\_i[**e**]{}\_j\^b)R’([**n**]{}’\^a[**n**]{}’\^b) + ([**n**]{}’\^a[**e**]{}\^b\_j)([**n**]{}’\^b[**e**]{}\^a\_i) R”([**n**]{}’\^a[**n**]{}’\^b). & & In the last formula $R'$ and $R''$ denote the first and second derivatives of the function $R(z)$. We have omitted the terms of the first order in $\phi$ in Eq. (\[Hphi\]). These terms are proportional to the products of the fast field $\phi$ and some slow fields. Hence, they are non-zero only in narrow shells of the momentum space. One can show that their contributions to the RG equations are negligible. To obtain the RG equations we have to integrate out the fast variables $\phi^a_i$. Near a zero-temperature fixed point the Jacobian of the transformation ${\bf n}\rightarrow ({\bf n}', \phi_i)$ can be ignored, since the Jacobian does not depend on the temperature. The integration measure is determined from the condition that the field ${\bf n}'^a$ is a slow part of the magnetization. This condition imposes restrictions on the fields $\phi$. The expression (\[Hphi\]) depends on the choice of the vectors ${\bf e}^a_i$ (\[dec\]). However, after integrating out the fields $\phi$ the Hamiltonian can depend only on the slow part ${\bf n}'^a$ of the magnetization. One can make the calculations simpler, considering special realizations of the field ${\bf n}'^a$. To find the renormalization of the disorder-induced term $R(z)$ (\[7\]) we can assume that the field ${\bf n}'^a$ does not depend on the coordinates. The renormalization of the gradient energy can be determined, assuming that the vectors ${\bf n}'^a({\bf x})$ depend on one spatial coordinate only and lie in the same plane. In both cases the vectors ${\bf e}_i^a$ can be chosen such that in Eq. (\[dife\]) $f^a_{\mu, ik}=0$. At such a choice the integration measure can be omitted and the fields $\phi_i^a$ can be considered as weakly interacting fields with the wave vectors from the interval $1/a>q>1/L$. To derive the one-loop RG equations we express the free energy via the Hamiltonian (\[Hphi\]). Then we expand the exponent in the partition function up to the second order in $(H_R-\int d^D x \sum_{\mu i}(\partial_{\mu}\phi_i)^2/(2T))$ and integrate over $\phi^a_i$. Finally, we make a rescaling. The vectors ${\bf e}^a_i$ can be excluded from the final expressions with the relation $\sum_i({\bf ae}^a_i)({\bf be}^a_i)= ({\bf ab})-({\bf an}'^a)({\bf bn}'^a)$. 
In a zero-temperature fixed point the RG equations are \[Tz\] = -(D-2) + 2(N-2)R’(1)+O(R\^2,T); $$\begin{aligned} \label{Rz} 0=\frac{dR(z)}{d \ln L}=\epsilon R(z) + 4(N-2)R(z)R'(1)- 2(N-1)zR'(1)R'(z) +2(1-z^2)R'(1)R''(z) & & \nonumber \\ + (R'(z))^2(N-2+z^2)-2R'(z)R''(z)z(1-z^2)+ (R''(z))^2(1-z^2)^2, & &\end{aligned}$$ where the factor $1/(8\pi^2)$ is absorbed into $R(z)$ to simplify notations. The RG equations become simpler after the substitution of the argument of the function $R(z)$: $z=\cos\phi$. In terms of this new variable one has to find even periodic solutions $R(\phi)$. The period is $2\pi$ in the RF case and $\pi$ in the RA case due to the additional symmetry of the RA model. The one-loop equations get the form \[8\] = -(D-2) - 2(N-2)R”(0)+O(R\^2,T); $$\begin{aligned} 0=\frac{dR(\phi)}{d \ln L}=\epsilon R(\phi) + (R''(\phi))^2 - 2R''(\phi) R''(0) - & & \nonumber\\ \label{9} (N-2)[4R(\phi)R''(0)+2{\rm ctg}\phi R'(\phi) R''(0) - \left(\frac{R'(\phi)}{\sin\phi}\right)^2] + O(R^3,T) & &\end{aligned}$$ Eq. (\[8\]) provides the following result for the scaling dimension $\Delta_T$ of the temperature \[DT\] \_T=-2+-2(N-2)R”(0). The two-spin correlation function is given in the one-loop order [@Pol] by the expression \[cf\] \^a([**x**]{})[**n**]{}\^a([**x**]{}’)= ’\^a([**x**]{})[**n**]{}’\^a([**x**]{}’)(1-\_i(\_i\^a)\^2). Hence, in the fixed point $\langle{\bf n}({\bf x}){\bf n}({\bf x}')\rangle\sim|{\bf x}-{\bf x}'|^{-\eta}$, where \[11\] =-2(N-1)R”(=0) Let us find the magnetic susceptibility in the weak uniform external field $H$. We add to the Hamiltonian (\[7\]) the term $-\sum_a\int d^Dx Hn^a_z/T$ (the field is directed along the z-axis). The renormalization of the field $H$ is determined by the renormalization of the temperature (\[8\]) and the field ${\bf n}$. In the zero-loop order the renormalized magnetic field $h(L)$ depends on the scale as $h(L)=H\times(L/a)^2$. Hence, the correlation length $R_c\sim H^{-1/2}$. The magnetization, averaged over a block of size $R_c$, is oriented along the field. The absolute value of this average magnetization is proportional to $R_c^{-\eta/2}$. This allows us to calculate the critical exponent $\gamma$ of the magnetic susceptibility $\chi(H)\sim H^{-\gamma}$ in a zero-temperature fixed point: \[15\] =1+(N-1)R”(=0)/2 . In Ref. [@DF] Eqs. (\[Tz\],\[Rz\]) were derived with a different method. In that paper the critical behavior in $4+\epsilon$ dimensions was studied by considering analytical fixed point solutions $R(z)$. In the Heisenberg model, analytical solutions are absent and they are unphysical for $N\ne 3$ [@DF]. In $4-\epsilon$ dimensions appropriate analytical solutions are absent for any $N$. To demonstrate this let us differentiate Eq. (\[Rz\]) over $z$ at $z=1$. For any analytical $R(z)$ we obtain the following flow equation \[flow\] =R’(z=1) + 2(N-2)(R’(z=1))\^2. At $N> 2$ the fixed point of this equation $R'(z=1)=-\epsilon/[2(N-2)]<0$. It corresponds to the negative critical exponent $\eta$ (\[11\]) and hence is unphysical. However, we shall see that in the RA model some appropriate non-analytical fixed points $R(z)$ appear. In these fixed points $R''(z=1)=\infty$. In Ref. [@DF] the RG charges are the derivatives of the function $R(z)$ at $z=1$. Thus, in a non-analytical fixed point these charges diverge. In the systems with a finite number of the charges their divergence implies the absence of a fixed point. However, if the number of the RG charges is infinite such a criterion does not work and is even ambiguous. 
Indeed, the set of charges can be chosen in different ways and e.g. the coefficients of the Taylor expansion about $z=0$ remain finite in our problem. Random field {#sec:IV} ============ For the RF XY model the one-loop RG equations (\[8\],\[9\]) can be solved exactly [@12a]. The solution corresponds to QLRO with the critical exponents $\eta=\pi^2/9\epsilon, \gamma=1-\pi^2/18\epsilon$. In the first order in $\epsilon$ the exponent $\eta$ equals the prefactor $C$ before the logarithm in the correlation function [@12a] of the angles $\phi({\bf x})$ between the spins ${\bf n}({\bf x})$ and some fixed direction: $\langle(\phi({\bf x}_1)-\phi({\bf x}_2))^2\rangle=C\ln|{\bf x}_1-{\bf x}_2|$. We expect that this coincidence does not extend to the higher orders. If $N\ne 2$ the RG equation (\[9\]) is more complex. Fortunately, at $N>3$ there is still a simple method to study the large-distance behavior. The method is based on the Schwartz-Soffer inequality [@SwSo] and shows that QLRO is absent. In Ref. [@SwSo] the inequality is proven for the Gaussian distribution of the random field. It can also be proved for the arbitrary RF distribution (Appendix A). Let us demonstrate the absence of physically acceptable fixed points in the RF case at $N>3$. We derive some inequality for critical exponents. Then we show that the inequality has no solutions. We use a rigorous inequality for the connected and disconnected correlation functions [@SwSo] \[17\] ([**q**]{})[**n**]{}(-[**q**]{}) = \_a([**q**]{})[**n**]{}\_a(-[**q**]{})- \_a([**q**]{})[**n**]{}\_b(-[**q**]{}), where ${\bf n}({\bf q})$ is a Fourier-component of the magnetization, $a,b$ are replica indices. The disconnected correlation function is described by the critical exponent (\[11\]). The large-distance behavior of the connected correlation function in a zero-temperature fixed point can be derived from the expression \[eqchi\] \~([**0**]{})[**n**]{}([**x**]{})d\^D x and the critical exponent of the susceptibility (\[15\]). The integral in the right hand side of Eq. (\[eqchi\]) is proportional to $R_c^{D-\eta_1}$, where $R_c$ is the correlation length in the external field $H$, $\eta_1$ the critical exponent of the connected correlation function. For the calculation of the exponent $\gamma$ (\[15\]) we used the zero-loop expression of $R_c$ via $H$. Now we need the one-loop accuracy. In this order $R_c\sim H^{-1/[2-(N-3)R''(0)]}$. This allows us to get the following equation for the exponent $\eta_1$ \[e1\] \_1=D-2-2R”(0). In a fixed point Eq. (\[17\]) provides an inequality for the critical exponents of the connected and disconnected correlation functions [@SwSo]. The inequality has the form \[ein\] 2(2-D+\_1)4-D+. This allow us to obtain the following relation \[18\] 4-D +o(R), where $\eta$ is given by Eq. (\[11\]). The two-spin correlation function can not increase up to the infinity as the distance increases. Hence, the critical exponent $\eta$ is positive. At $N>3$ this is incompatible with Eq. (\[18\]) at small $\epsilon$. Thus, there are no accessible fixed points for $N>3$. This suggests the strong coupling regime with a presumably finite correlation length. Certainly, in the RF XY model [@RFXY; @12a] Eq. (\[18\]) is satisfied. However, the unstable fixed points of the RG equations [@12a] do not satisfy the inequality. The marginal Heisenberg case $N=3$ is the most difficult, since in the one-loop order the right hand side of Eq. (\[18\]) equals zero at $N=3$. Hence, the two-loop corrections may be relevant. 
The RF Heisenberg model is beyond the scope of the present paper. Random anisotropy ================= In this section we investigate the possibility of QLRO in the RA $O(N)$ model. The first subsection is devoted to the simplest case of the XY model. The second subsection contains an inequality for the critical exponent $\eta$. The derivation of the inequality is analogous to Eq. (\[18\]). This inequality is applied in the next subsections. The third subsection contains the results for the Heisenberg model. In the last subsection we consider the case $N>3$. $N=2$ ----- This case is studied analogously to the RF XY model [@12a]. At $N=2$ the RG equation (\[9\]) can be solved analytically. Its solution is a periodical function with period $\pi$. In interval $0<\phi<\pi$ the fixed point solution $R(\phi)$ is given by the formula \[RAXYsol\] R()=. It is a stable fixed point. This can be verified with the linearization of the flow equation (\[9\]) for the small deviations from the fixed point. Another proof of the stability is based on the inequality of the next subsection. The stable fixed point corresponds to the QLRO phase at low temperatures and weak disorder. The critical exponents $\eta=\pi^2\epsilon/36, \gamma=1-\pi^2\epsilon/72$. The solution (\[RAXYsol\]) is non-analytical at $\phi=0$, since $R^{IV}(\phi=0)=\infty$. Hence, the Taylor expansion over $\phi$ is absent. However, a power expansion over $|\phi|$ exists. We shall see below that the same behavior at small $\phi$ conserves also at other $N$. An inequality for a critical exponent {#sec:V.B} ------------------------------------- We use the same approach as in the RF model. Since in the RA case the random field is conjugated with a second order polynomial of the magnetization, the Schwartz-Soffer inequality [@SwSo] should be applied to correlation functions of the field $m({\bf x})=(n_z({\bf x}))^2-1/N$, where $n_z$ denotes one of the magnetization components, $1/N$ is subtracted to ensure the relation $\langle m\rangle=0$. To calculate the critical exponent $\mu$ of the disconnected correlation function we use the representation (\[dec\]) and obtain the relation \[RGm\] m\^a([**x**]{})m\^a([**x**]{}’)= m’\^a([**x**]{})m’\^a([**x**]{}’)(1-), where $a$ is a replica index, $m'=(n'_z)^2-1/N$ the slow part of the field $m$. One finds $\mu=-4NR''(0)$. The critical exponent $\mu_1$ of the connected correlation function is determined analogously to the RF case. We apply a weak uniform field $\tilde H$, conjugated with the field $m$, and calculate the susceptibility $dm/d\tilde H$ in two ways. The result for the critical exponent is $\mu_1=D-2-2(N+2)R''(0)$. The Schwartz-Soffer inequality provides a relation between the exponents $\mu$ and $\mu_1$. It has the same structure as Eq. (\[ein\]). Finally, we obtain the following equation \[RAineq\] (N-1)+o(R). In terms of the RG charge $R(\phi)$ this inequality can be rewritten in the form \[Rin\] R”(0)-/8+o(R). $N=3$ {#sec:V.C} ----- In this case we solve Eq. (\[9\]) numerically. Since coefficients of Eq. (\[9\]) are large as $\phi\rightarrow 0$, it is convenient to use a series expansion of the fixed-point solution $R(\phi)$ at small $\phi$. At the larger $\phi$ the equation can be integrated with the Runge-Kutta method. The following expansion over $t=\sqrt{(1-z)/2}=|\sin(\phi/2)|$ holds \[smphi\] R()/=+2a\^2 |\^3| & &\ +(-)\^4+ O(|\^5|), & & where $a=R''(\phi=0)/\epsilon$. We see that the RG charge $R(\phi)$ is non-analytical at small $\phi$. 
Similar to the random manifold [@FRG] and random field XY [@12a] models $R^{IV}(0)= \infty$. Numerical calculations show that at any $N$ the solutions, compatible with the inequality (\[Rin\]), have sign “$+$” before the third term of Eq. (\[smphi\]). The solutions to be found are even periodical functions with period $\pi$. Hence, their derivative is zero at $\phi=\pi/2$. At $N=3$ there is only one solution that satisfies Eq. (\[Rin\]). It corresponds to $R''(\phi=0)=-0.1543\epsilon$. If this solution is stable Eqs. (\[11\],\[15\]) provide the following results for the critical exponents \[crexp\] =0.62; =1-0.15. All the other solutions of Eq. (\[9\]) do not satisfy Eq. (\[Rin\]) and hence are unstable. We have still to test the stability of the solution found. For this aim we use an approximate method. First, we find an approximate analytical solution of Eq. (\[9\]). We rewrite Eq. (\[9\]), substituting $\omega(R''(\phi))^2$ for $(R''(\phi))^2$. The case of interest is $\omega=1$ but at $\omega=0$ the equation can be solved exactly. The solution at $\omega=1$ can then be found with the perturbation theory over $\omega$. The exact solution at $\omega=0$ is $R_{\omega=0}(\phi)=\epsilon(\cos 2\phi/24+1/120)$. The corrections of order $\omega^k$ are trigonometric polynomials of order $2(k+1)$. The first correction is \[fc\] R\_1()=-2+4+[const]{} After the calculation of the corrections we can write an asymptotic series for the critical exponent $\eta$ (\[11\]): $\eta=\epsilon(0.67-0.08\omega+0.14\omega^2-\dots)$. The resulting estimation $\eta=\epsilon(0.67\pm0.08)$ agrees with the numerical result (\[crexp\]) well. This allows us to expect that the stability analysis of the solution $R_{\omega=0}$ of the equation with $\omega=0$ provides information about the stability of the solution of Eq. (\[9\]). To study the stability of the exact solution of the equation with $\omega=0$ is a simple problem. We introduce a small deviation $r(\phi)$: $R(\phi)=R_{\omega=0}(\phi)+r(\phi)$ and write the flow equation for this deviation: \[fl\] =(5r()+r”()+ r”(0)2)/3+[const]{}r”(0). It is convenient to use the Fourier expansion $r(\phi)= \sum_m a_m\cos2m\phi$. The flow equations for the Fourier harmonics can be easily integrated. We see that $a_m\rightarrow 0$ as $L\rightarrow\infty$ for any $m>0$. The solution is unstable with respect to the constant shift $a_0$. However, this instability has no interest for us, since the correlation functions do not change at such shifts [@FRG]. Indeed, the constant shift corresponds to the addition of just a random term, independent of the magnetization, to the Hamiltonian (\[1\]). Thus, the RG equation possesses a stable fixed point. This fixed point describes the QLRO phase with the critical exponents (\[crexp\]). In the Abelian systems the results of the functional RG are supported by the variational method [@MP]. In our problem this method can not be applied. However, it is interesting that in the Abelian systems the functional RG equations without $(R''(\phi))^2$ reproduce the variational results. As usual in critical phenomena, in $4$ dimensions the one-loop RG equations allow one to obtain the exact large-distance asymptotics of the correlation function. In the 4-dimensional case $R(\phi)=\tilde R(\phi)/\ln L$, where $\tilde R(\phi)$ satisfies Eq. (\[9\]) at $\epsilon=1$. We obtain the following result for the two-spin correlation function with Eq. (\[cf\]) \[elg\] ([**x**]{})[**n**]{}([**x**]{}’)\~\^[-0.62]{}|[**x**]{}-[**x**]{}’|. 
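The $\omega$-truncation used in the stability analysis above can be checked by direct computation. The following short script (a sketch added for illustration, not part of the original text) verifies symbolically that $R_{\omega=0}(\phi)=\epsilon(\cos 2\phi/24+1/120)$ solves the $N=3$ fixed-point equation with the $(R''(\phi))^2$ term dropped, and reproduces the zeroth-order estimate $\eta=-2(N-1)R''(0)=2\epsilon/3\approx 0.67\epsilon$ quoted above; the form of Eq. (\[9\]) entering the script is the one displayed in the RG-equations section.

```python
import sympy as sp

phi, eps = sp.symbols('phi epsilon', positive=True)
N = 3

# claimed exact solution of the omega=0 truncation of Eq. (9) at N=3
R    = eps*(sp.cos(2*phi)/24 + sp.Rational(1, 120))
Rp   = sp.diff(R, phi)
Rpp  = sp.diff(R, phi, 2)
Rpp0 = Rpp.subs(phi, 0)                     # R''(0) = -epsilon/6

# Eq. (9) with the (R''(phi))^2 term multiplied by omega=0
E = (eps*R - 2*Rpp*Rpp0
     - (N - 2)*(4*R*Rpp0 + 2*sp.cot(phi)*Rp*Rpp0 - (Rp/sp.sin(phi))**2))

print(sp.simplify(E))                       # should simplify to 0
# numerical spot check of the same identity
assert abs(float(E.subs({phi: 0.73, eps: 1}).evalf())) < 1e-12

# zeroth order of the omega-expansion for the exponent, Eq. (11)
print(-2*(N - 1)*Rpp0)                      # -> 2*epsilon/3, i.e. about 0.67*epsilon
# with the full numerical fixed point R''(0) = -0.1543*epsilon one recovers eta ~ 0.62*epsilon
print(-2*(N - 1)*(-0.1543))                 # -> 0.6172
```

The same few lines can be reused to check the consistency of the critical exponents quoted for other values of $N$ once the corresponding fixed-point value of $R''(0)$ is known.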
$N>3$ {#n3} ----- Numerical analysis of Eq. (\[9\]) shows that solutions, compatible with Eq. (\[Rin\]), are absent at $N\ge 10$. Hence, QLRO is absent for any $N\ge 10$. In the spherical model ($N=\infty$) the absence of fixed points can be demonstrated analytically (Appendix C). This agrees with the previous results [@P; @spher]. For each integer $N<10$ the RG equation (\[9\]) has exactly one solution, satisfying the inequality of section \[sec:V.B\]. These solutions are described in Table I. In the table, $\eta$ is the critical exponent of the two-spin correlation function, $\Delta_T$ the scaling dimension of the temperature (\[DT\]). Unfortunately, it is not clear if the fixed points, found at $N>3$, survive in 3 dimensions. A zero-temperature fixed point can exist only if the scaling dimension of the temperature is negative. Table \[table1\] shows that scaling dimension is positive in the one-loop approximation at $\epsilon=1$ and $N\ge 5$. In the 3-dimensional $O(4)$ model the one-loop correction to the scaling dimension $-2(N-2)R''(0)\approx 0.7\epsilon$ is close to the zero-loop approximation $-2+\epsilon$. Thus, the next orders of the perturbation theory are crucial to understand what happens in 3 dimensions. In the $O(2)$ model the scaling dimension $\Delta_T=-2+\epsilon$ is exact [@12a; @FRG]. Hence, QLRO disappears in 2 dimensions. In systems with a larger numbers of magnetization components fluctuations become stronger. Thus, one expects the absence of QLRO in all the two-dimensional $O(N)$ models. At the zero temperature Eq. (\[9\]) is valid independently of the scaling dimension $\Delta_T$. It is tempting to assume that at the zero temperature QLRO still exists in the RA $O(N>3)$ models below the critical dimension, in which $\Delta_T=0$. However, the experience of the two-dimensional RF XY model does not support such an expectation. Recent numerical simulations show that QLRO is absent even in the ground state of that model [@2DGSsim]. ------------ -------------------- ------------------ ------------------ ------------------ ------------------ ------------------ ------------------ ----------------- $N$ 2 3 4 5 6 7 8 9 $\eta$ $\pi^2\epsilon/36$ $0.62\epsilon$ $1.1\epsilon$ $1.7\epsilon$ $2.7\epsilon$ $4.6\epsilon$ $9.0\epsilon$ $33\epsilon$ $\Delta_T$ -2+$\epsilon$ -2+$1.3\epsilon$ -2+$1.7\epsilon$ -2+$2.3\epsilon$ -2+$3.2\epsilon$ -2+$4.8\epsilon$ -2+$8.7\epsilon$ -2+$30\epsilon$ ------------ -------------------- ------------------ ------------------ ------------------ ------------------ ------------------ ------------------ ----------------- : Critical exponents of the RA $O(N)$ model.[]{data-label="table1"} Conclusion ========== We have obtained QLRO in the RA Heisenberg model. This is the first example of QLRO in a non-Abelian system. The RF disorder tends to destroy the ordering which exists in the RA case. This difference between the RF and RA models is not surprising, since the same difference was already obtained in Ref. [@F] for the two-dimensional RF and RA XY models with the dipole forces. We have not yet discussed the role of the topological defects. The contribution of the topological excitations to the RG equations (\[8\],\[9\]) is determined by the rare regions where the random field is sufficiently strong to compensate the core energy. Hence, similar to the pure system in $2+\epsilon$ dimensions they are responsible for the non-perturbative corrections of order $\exp(-1/\epsilon)$. Thus, their effect is negligible at small $\epsilon$. 
Several studies were devoted to the role of the vortices in the RF XY model [@vortex]. The theoretical prediction of QLRO in this system is based on the vortexless version of the model [@RFXY; @12a]. A qualitative estimation [@12a] and variational calculations [@vortex] suggest that the topological defects do not change the behavior of the RF XY model at the weak disorder. Our approach allows us to consider the XY model, including vortices. We see that QLRO does exist in the model with the defects. However, in our problem there may be a more important source of the non-perturbative corrections. The effect of the multiple energy minima can lead to corrections of order $\epsilon^{5/2}$ to the RG equations [@FRG]. Unfortunately, the non-perturbative effects in the RF systems are not well understood. The present paper uses a systematic RG approach. However, some results can be reproduced more simply with an approximate Migdal-Kadanoff renormalization group (Appendix B). The question of the large-distance behavior of the RF and RA Heisenberg models was discussed in Ref. [@AP] on the basis of an approximate equation of state. In that paper QLRO was also obtained in the RA case. However, we believe that this is an accidental coincidence, since the equation of state [@AP] is valid only in the first order in the strength of the disorder, while higher orders are crucial for critical properties [@G]. In particular, the approach [@AP] incorrectly predicts the absence of QLRO in the RF XY model and its presence in the exactly solvable RA spherical model. It also provides incorrect critical exponents in the Heisenberg case. The reason of the mistakes is the fact that in the weak external uniform field the perturbation parameter of Ref. [@AP] is large. The RA Heisenberg model is relevant for the amorphous magnets [@HPZ]. At the same time, for their large-distance behavior the dipole interaction may be important [@B]. Besides, a weak nonrandom anisotropy is inevitably present due to mechanical stresses. In conclusion, we have found that the random anisotropy Heisenberg model has an infinite correlation length and a power dependence of the correlation function of the magnetization on the distance at low temperatures and weak disorder in $4-\epsilon$ dimensions. On the other hand, the correlation length of the random field $O(N>3)$ model is always finite. The author thanks E. Domany, G. Falkovich, M.V. Feigelman, Y. Gefen, S.E. Korshunov, Y.B. Levinson, A.I. Larkin, V.L. Pokrovsky and A.V. Shytov for useful discussions. This work was supported by RFBR grant 96-02-18985 and by grant 96-15-96756 of the Russian Program of Leading Scientific Schools. INEQUALITY FOR CORRELATION FUNCTIONS ==================================== In this appendix we derive an inequality for the correlation functions of the disordered systems. We consider the system with the Hamiltonian \[A1\] H=dx\^D \[ H\_1(([**x**]{})) - h([**x**]{})m(([**x**]{}))\], where $\phi$ is the order parameter, $h$ the random field with short range correlations, $H_1$ may depend on some other random fields. We prove an inequality for the Fourier components of the field $m$: \[A2\] G\_[con]{}([**q**]{}), where $G_{dis}({\bf q})=\overline{\langle{\bf m}({\bf q}){\bf m}(-{\bf q}) \rangle} , G_{con}({\bf q})= \overline{\langle{\bf m}({\bf q}){\bf m}(-{\bf q})\rangle} - \overline{\langle{\bf m}({\bf q})\rangle\langle{\bf m}(-{\bf q})\rangle}$, the angular brackets denote the thermal averaging, the bar denotes the disorder averaging. 
This inequality can be easily obtained in the case of the Gaussian distribution $P(h)$ of the field $h$ [@SwSo]. Indeed, in the Gaussian case G\_[dis]{}([**q**]{})= ( P(h) m\_[**q**]{}(h)) D{h}= & &\ \[A3\] -(P(h) m\_[**q**]{}(h)) D{h} =[const]{}(P(h)h(-[**q**]{})m\_[**q**]{}(h))D{h}, where $\int D\{h\}$ denotes the integration over the realizations of the random field, $m_{\bf q}(h)$ $=$ ${\int D\{\phi\}\exp(-{H}/{T})m({\bf q})}$ $/$ ${\int D\{\phi\}\exp(-{H}/{T})}$. Applying the Cauchy-Bunyakovsky inequality to Eq. (\[A3\]) one gets Eq. (\[A2\]). However, the assumption about the Gaussian distribution of the random field is not necessary. The inequality (\[A2\]) can also be extended to a more general situation, corresponding to the effective replica Hamiltonian (\[7\]). Indeed, if one adds to any Hamiltonian a weak Gaussian random field $\tilde h$ , conjugated with the field $m$, it suffices for Eq. (\[A2\]) to become valid. The addition of the Gaussian random field corresponds to the transformation $R({\bf n}_a{\bf n}_b)\rightarrow R({\bf n}_a{\bf n}_b)+\Delta{\bf n}_a{\bf n}_b$ in Eq. (\[7\]) where $\Delta\sim\tilde h^2$ is a positive constant. Thus, Eq. (\[A2\]) is invalid only, if for the arbitrarily small $\Delta$ the replica Hamiltonians can not contain the two-replica contribution $\tilde R({\bf n}_a{\bf n}_b) = R({\bf n}_a{\bf n}_b) - \Delta{\bf n}_a{\bf n}_b$. This corresponds to the border of the region of the possible Hamiltonians and has zero probability. For systems in the critical domain there is a simple way to understand why the inequality is valid not only in the Gaussian case but also in the general situation. This is just a consequence of the universality. MIGDAL-KADANOFF RENORMALIZATION GROUP ===================================== This appendix contains a simple approximate version of the renormalization group. The results for the critical exponents of the XY and Heisenberg models have a very good accuracy. The value of the magnetization component number $N_c$, at which QLRO disappears in the RF model, is probably exact. However, the critical number of the components in the RA model is underestimated. Random field {#random-field} ------------ We use the following ansatz for the disorder-induced term in the Hamiltonian (\[1\]): $R({\bf n}_a{\bf n}_b)=\alpha{\bf n}_a {\bf n}_b+\beta$, where $\alpha$ and $\beta$ are constants. This expression corresponds to the Gaussian RF disorder (\[2\]). Below we ignore the generation of the other contributions to the function $R(z)$. The missed contributions are related with random anisotropies of different orders. In terms of the angle variable $\phi$ (\[8\],\[9\]) \[B1\] R()=+. To ensure consistency we have to truncate the RG equation (\[9\]). We substitute the ansatz (\[B1\]) into Eq. (\[9\]) and retain only the terms, proportional to $\cos\phi$ or independent of $\phi$. This leads to the following RG equation for the constant $\alpha$ (\[B1\]) \[B2\] =+2\^2(N-3). For $N<3$ Eq. (\[B2\]) has a stable solution $\alpha=\epsilon/[2(3-N)]$. The critical exponent (\[11\]) equals \[B3\] =. At $N=2$ this result has less than ten percent difference with the systematic theory [@12a]. QLRO disappears at $N=3$. This is the same critical number which is found in section \[sec:IV\]. For $N>3$ a fixed point exists in $4+\epsilon$ dimensions. It describes the transition between the ferromagnetic and paramagnetic phases. In this fixed point the critical exponent (\[B3\]) satisfies the modified dimensional reduction hypothesis [@mdr]. 
However, we believe that this is an artifact of the Migdal-Kadanoff approximation, since the correct value of the critical exponent differs form Eq. (\[B3\]). Random anisotropy ----------------- In this case we use the ansatz $R({\bf n}_a{\bf n}_b)= A({\bf n}_a{\bf n}_b)^2+B$. In terms of the variable $\phi$ (\[8\],\[9\]) $R(\phi)=\alpha\cos2\phi+\beta$. We again substitute our ansatz into Eq. (\[9\]) and retain the terms, proportional to $\cos 2\phi$, and the terms, independent of $\phi$. The RG equation for the constant $\alpha$ has the form \[B4\] =+8(N-6)\^2. The fixed point solution of this equation is $\alpha=\epsilon/[8(6-N)]$. It describes the QLRO phase at $N<6$. At $N=3$ the function $R(\phi)=\alpha\cos2\phi+\beta$ is just $R_{\omega=0}$ of section \[sec:V.C\]. The critical exponent of the two-spin correlation function is given by the following equation \[B5\] =. At $N=2,3$ this value is close to the results of the systematic approach (Table I). SPHERICAL MODEL =============== In this appendix we consider the spherical RA model with the functional RG. We show that QLRO is absent in this model. In the spherical limit $N=\infty$ only the terms, proportional to $N$, and the term $\epsilon R(z)$ should be retained in the right hand side of Eq. (\[Rz\]). After the change of the variable $R(z)=\epsilon r(z)/N$ one obtains \[RGr\] 0=r(z) \[1+4r’(1)\] - 2zr’(1)r’(z) + (r’(z))\^2. It is convenient to differentiate Eq. (\[RGr\]) over $z$. One gets \[difRG\] 0=r’(z)\[1+2r’(1)\]+2r”(z)\[r’(z)-zr’(1)\]. Analytical functions $r(z)$ can satisfy Eq. (\[difRG\]) at $z=1$ only if $r'(1)=0$ or $r'(1)=-1/2$. In both cases Eq. (\[difRG\]) can be easily solved. There are three analytical non-zero solutions: $r(z)=-z/2+1/4; r(z)=-(1-z)^2/4; r(z)=-z^2/4$. The last solution only has the necessary symmetry. The non-analytical solutions are absent. Indeed, Eq. (\[difRG\]) can be integrated with the substitution $r'(z)=zt(z)$. The general integral has the form \[genint\] =Cz. Besides, there are special solutions. They all satisfy the relation $t(z)=t(1)$. Hence, the special solutions are analytical. Thus, the function $t(z)$ can be non-analytical at $z=1$ only under the condition that $z=1$ is a peculiar point of Eq. (\[genint\]). This means that $t(1)=0$ or $t(1)=-1/2$. However, it is easy to verify that in both cases the solution is one of the found above. We see that the only fixed point of the spherical RA model is $R(z)=-\epsilon z^2/(4N)$. With Eq. (\[11\]) one finds the critical exponent $\eta=-\epsilon/2$. Since $\eta>0$ the solution found is applicable at $D>4$. At $D<4$ the fixed points are absent. Thus, QLRO is absent too. Y. Imry and S.K. Ma, Phys. Rev. Lett. [**35**]{}, 1399 (1975). R. Harris, M. Plischke, and M.J. Zuckermann, Phys. Rev. Lett. [**31**]{}, 160 (1973). D.J. Sellmyer and M.J. O’Shea, in [*Recent Progress in Random Magnetism*]{}, ed. D. Ryan (World Scientific, Singapore, 1992) p. 71. N.A. Clark, T. Bellini, R.M. Malzbender, B.N. Thomas, A.G. Rappaport, C.D. Muzny, D.W. Shaefer, and L. Hrubesh, Phys. Rev. Lett. [**71**]{}, 3505 (1993). T.Bellini, N.A. Clark, and D.W. Schaefer, Phys. Rev. Lett. [**74**]{}, 2740 (1995). H. Haga and C.W. Garland, Liq. Cryst. [**22**]{}, 275 (1997). S.V. Fridrikh and E.M. Terentjev, Phys. Rev. Lett. [**79**]{}, 4661 (1997). T. Emig, Phys. Rev. Lett. [**82**]{}, C3380 (1999). J.V. Porto III and J.M. Parpia, Phys. Rev. Lett. [**74**]{}, 4667 (1995). K. Matsumoto, J.V. Porto, L. Pollak, E.N. Smith, T.L. Ho, and J.M. Parpia, Phys. Rev. Lett. 
[**79**]{}, 253 (1997). G. Blatter, M.V. Feigel’man, V.B. Geshkenbein, A.I. Larkin, V.M. Vinokur, Rev. Mod. Phys. [**66**]{}, 1125 (1994). A.I. Larkin, Zh. Eksp. Teor. Fiz. [**58**]{}, 1466 (1970) \[Sov. Phys. JETP [**31**]{}, 784 (1970)\]. R.A. Pelcovits, E. Pytte, and J. Rudnik, Phys. Rev. Lett. [**40**]{}, 476 (1978). M. Aizeman and J. Wehr, Phys. Rev. Lett. [**62**]{}, 2503 (1989); Commun. Math. Phys. [**150**]{}, 489 (1990). U. Yaron, P.L. Gammel, D.A. Huse, R.N. Kleiman, C.S. Oglesby, E. Bucher, B. Batlogg, D.J. Bishop, K. Mortensen, K. Clausen, C.A. Bolle, and F. De La Cruz, Phys. Rev. Lett. [**73**]{}, 2748 (1994). S.E. Korshunov, Phys. Rev. B [**48**]{}, 3969 (1993). T. Giamarchi and P. Le Doussal, Phys. Rev. Lett. [**72**]{}, 1530 (1994); Phys. Rev. B [**52**]{}, 1242 (1995). M.J.P. Gingras and D.A. Huse, Phys. Rev. B [**53**]{}, 15193 (1996). D.S. Fisher, Phys. Rev. Lett. [**56**]{}, 1964 (1986). L. Balents and D.S. Fisher, Phys. Rev. B [**48**]{}, 5949 (1993). M. Mezard and G. Parisi, J. Phys. A [**23**]{}, L1229 (1990); J. Phys. I France [**1**]{}, 809 (1991). D.E. Feldman, Pis’ma ZhETF [**65**]{}, 108 (1997) \[JETP Lett. [**65**]{}, 114 (1997)\]; Phys. Rev. B [**56**]{}, 3167 (1997). T. Emig and T. Nattermann, Phys. Rev. Lett. [**81**]{}, 1469 (1998); cond-mat/9810367. A. Hazareesing and J.-P. Bouchaud, cond-mat/9810097. L. Radzihovsky and J. Toner, cond-mat/9811105. A. Aharony and E. Pytte, Phys. Rev. Lett. [**45**]{}, 1583 (1980). B. Barbara, M. Coauch, and B. Dieny, Europhys. Lett. [**3**]{}, 1129 (1987). R. Fisch, Phys. Rev. B [**57**]{}, 269 (1998); [*ibid*]{} [**58**]{}, 5684 (1998). J. Chakrabaty, Phys. Rev. Lett. [**81**]{}, 385 (1998). P. Lacour-Gayet and G. Toulouse, J. Phys. (Paris) [**35**]{}, 425 (1974). S.L. Ginzburg, Zh. Eksp. Teor. Fiz. [**80**]{}, 244 (1981). A. Khurana, A. Jagannathan, and J.M. Kosterlitz, Nucl. Phys. B [**240**]{}, 1 (1984). M.V. Feigelman and M.V. Tsodyks. Zh. Eksp. Teor. Fiz. [**91**]{}, 955 (1986) \[Sov. Phys. JETP [**64**]{}, 562 (1986)\]. Y.Y. Goldshmidt, Nucl. Phys. B [**225**]{}, 123 (1983). M. Schwartz and A. Soffer, Phys. Rev. Lett. [**55**]{}, 2499 (1985). A.M. Polyakov, Phys. Lett. [**59B**]{}, 79 (1975); [*Gauge Fields and Strings*]{} (Harwood Academic Publishers, Chur, 1987). The results about QLRO, obtained for the RF XY model in Ref. [@RFXY; @12a], can be easily extended to the RA XY model, since in terms of the angles $\phi({\bf x})$ between the spins ${\bf n}({\bf x})$ and some fixed direction the Hamiltonians of these models are almost identical. D.S. Fisher, Phys. Rev. B [**31**]{}, 7233 (1985). J. Zinn-Justin, [*Quantum Field Theory and Critical Phenomena*]{} (Oxford University Press, Oxford, 1993). C. Zeng, P.L. Leath, and D.S. Fisher, cond-mat/9807281. T. Garel, G. Iori, and H. Orland, Phys. Rev. B [**53**]{}, R2941 (1996). D. Carpentier, P. Le Doussal, and T. Giamarchi, Europhys. Lett. [**35**]{}, 379 (1996). J. Kierfeld, T. Nattermann, and T. Hwa, Phys. Rev. B [**55**]{}, 626 (1997). M. Schwartz and A. Soffer, Phys. Rev. B [**33**]{}, 2059 (1986).
{ "pile_set_name": "ArXiv" }
--- abstract: 'We prove two new estimates for the level set flow of mean convex domains in Riemannian manifolds. Our estimates give control - exponential in time - for the infimum of the mean curvature, and the ratio between the norm of the second fundamental form and the mean curvature. In particular, the estimates remove a stumbling block that has been left after the work of White [@white_size; @white_nature; @white_subsequent], and Haslhofer-Kleiner [@HK_meanconvex], and thus allow us to extend the structure theory for mean convex level set flow to general ambient manifolds of arbitrary dimension.' author: - Robert Haslhofer and Or Hershkovits bibliography: - 'HaslhoferHershkovits\_levelset.bib' title: Singularities of mean convex level set flow in general ambient manifolds --- Introduction ============ Let $N$ be a Riemannian manifold. For any mean convex domain $K_0 \subset N$ we consider the level set flow $\{K_t\}_{t\geq 0}$ starting at $K_0$, i.e. the maximal family of closed sets starting at $K_0$ that satisfies the avoidance principle when compared with any smooth mean curvature flow [@evans-spruck; @CGG; @Ilmanen]. The level set flow of $K_0$ coincides with the smooth mean curvature flow of $K_0$ for as long as the latter is defined, but provides a canonical way to continue the evolution beyond the first singular time. Mean convexity is preserved also beyond the first singular time in the sense that $K_{t_2}\subseteq K_{t_1}$ whenever $t_2\geq t_1$. In the last 15 years, Brian White developed a deep regularity and structure theory for mean convex level set flow [@white_size; @white_nature; @white_subsequent], and recently the first author and Kleiner gave a new treatment of this theory [@HK_meanconvex]. Concerning the size of the singular set, White proved that the singular set $\mathcal{S}\subset N^n\times \mathbb{R}$ of any mean convex flow has parabolic Hausdorff dimension at most $n-2$ [@white_size Thm. 1.1], see also [@HK_meanconvex Thm. 1.15]. Concerning the structure of the singular set, the main assertion one wants to prove is that all blowup limits of a mean convex flow are smooth and convex until they become extinct. In particular, one wants to conclude that all tangent flows of a mean convex flow are round shrinking spheres, round shrinking cylinders, or static planes of multiplicity one. While the theorem about the size of the singular set is known in full generality, the structure theorem has been proved up to now only under some additional assumptions [@white_nature Thm. 1], [@white_subsequent Thm. 3] and [@HK_meanconvex Thm. 1.14]. Namely one has to restrict either to blowups at the first singular time, or to low dimensions, or to the case where the ambient manifold is Euclidean space. As explained in [@white_subsequent Appendix B], the missing step to extend the structure theorem to general ambient manifolds of arbitrary dimension is to prove that the ratio between the smallest principal curvature $\lambda_1$ and the mean curvature $H$ has a finite lower bound on the regular points contained in any compact subset of space-time. The purpose of this work is to remove this stumbling block. To this end, we prove two new estimates for the level set flow of mean convex domains in Riemannian manifolds. To state our estimates, we denote by $\partial K_t^\textrm{reg}$ the set of regular boundary points at time $t$. Our first main estimate gives a lower bound for the mean curvature. 
\[estimate1\] There exist constants $H_0=H_0(K_0)>0$ and $\rho=\rho(K_0)<\infty$ such that $$\inf_{\partial K_t^\textrm{reg}}H\geq H_0 e^{-\rho t}.$$ Our estimate from Theorem \[estimate1\], as well as our second main estimate below, depends exponentially on time. It is clear from simple examples (e.g. flows in hyperbolic space), that this exponential behavior in time is the best one can possibly get. Our second main estimate controls the ratio between the norm of the second fundamental form and the mean curvature. \[estimate2\] There exist constants $C=C(K_0)<\infty$ and $\rho=\rho(K_0)<\infty$ such that $$\inf_{ \partial K_t^\textrm{reg}}\frac{{\lvert A\rvert}}{H}\leq C e^{\rho t}.$$ Theorem \[estimate2\] shows that all principal curvatures are controlled by the mean curvature, and thus in particular provides a (two-sided) bound for the ratio $\lambda_1/H$. As explained above, this exactly fills in the missing piece that is needed to extend the structure theorem for mean convex level set flow to the general case without restrictions on subsequent singularities, the ambient manifold, and the dimension. We thus obtain: \[structurethm\] Let $K_0\subset N$ be a mean convex domain in a Riemannian manifold. Then all blowup limits of its level set flow $\{K_t\}_{t\geq 0}$ are smooth and convex until they become extinct. In particular, all backwardly selfsimilar blowup limits are round shrinking spheres, round shrinking cylinders, or static planes of multiplicity one. Theorem \[structurethm\] gives a general description of the nature of singularities of mean convex level set flow in arbitrary ambient manifolds. As mentioned above, this generalizes the structure theorems from [@white_nature Thm. 1], [@white_subsequent Thm. 3] and [@HK_meanconvex Thm. 1.14].\ **Applications.** Let us now discuss some applications of the above theorems. Our first application concerns topological changes in mean convex mean curvature flow. In [@White_topo_change], White proved that under mean convex level set flow elements of the $m$-th homotopy group of the complementary region can die only if there is a shrinking $S^k\times \mathbb{R}^{n-1-k}$ singularity for some $k\leq m$, assuming that $n\leq7$ or that the ambient manifold is Euclidean. Thanks to Theorem \[structurethm\] we can remove the assumption on the dimension and the ambient manifold, and thus obtain: Let $K_0\subset N^{n}$ be a mean convex domain in a Riemannian manifold. If for some $0\leq t_1<t_2$ there is a map of the $m$-sphere into $N\setminus K_{t_1}$ that is homotopically trivial in $N\setminus K_{t_2}$ but not in $N\setminus K_{t_1}$, then at some $t\in [t_1,t_2)$ there is a singularity of the flow at which the tangent flow is a shrinking $S^k\times \mathbb{R}^{n-1-k}$ for some $k\in \{1,\ldots,m\}$. Our second application concerns the estimates for mean convex level set flow in the setting of Haslhofer-Kleiner [@HK_meanconvex]. These estimates are based on the noncollapsing condition that each boundary point admits interior and exterior balls of radius comparable to the reciprocal of the mean curvature at that point [@white_size; @sheng_wang; @andrews1]. It has been unknown up to now if this noncollapsing condition holds for mean convex level set flow in general ambient manifolds of arbitrary dimension. Combining Theorem \[estimate1\] and Theorem \[structurethm\] we can answer this in the affirmative: \[cor\_noncollapsing\] Let $K_0\subset N^{n}$ be a mean convex domain in a Riemannian manifold. 
Then there exists a positive nonincreasing function $\alpha:[0,\infty)\to (0,\infty)$ such that each $p\in \partial K_t^\textrm{reg}$ admits an interior and exterior ball tangent at $p$ of radius at least $\alpha(t)/H(p,t)$. In particular, all estimates from [@HK_meanconvex] apply in the setting of mean convex level set flow in general ambient manifolds of arbitrary dimension. We conjecture that the conclusion of Corollary \[cor\_noncollapsing\] actually holds for some $\alpha(t)\geq \alpha_0e^{-\rho t}$ for some $\alpha_0=\alpha_0(K_0)>0$ and $\rho=\rho(K_0)<\infty$. It would also be interesting to find a proof of the noncollapsing which is independent of Theorem \[structurethm\]. Our third application concerns a sharp estimate for the inscribed and outer radius for mean convex level set flow in Riemannian manifolds. In [@Brendle_inscribed] and [@Brendle_inscribed_manifolds], Brendle proved sharp bounds for the inscribed radius and outer radius at points in a smooth mean convex mean curvature flow where the mean curvature is large. The first author and Kleiner [@HK_inscribed] found a shorter proof of Brendle’s estimate, which also works in the nonsmooth setting provided that one has some noncollapsing parameter to get started. Thanks to Corollary \[cor\_noncollapsing\] the argument from [@HK_inscribed] is applicable for mean convex level set flow in general ambient manifolds, and we thus obtain: Let $K_0\subset N$ be a mean convex domain in a Riemannian manifold. Then for any positive nonincreasing function $\delta:[0,\infty)\to (0,\infty)$, there exists a positive nonincreasing function $H_0:[0,\infty)\to (0,\infty)$ depending only on $K_0$ and $\delta$ such that every $p\in K_t^\textrm{reg}$ with $H(p,t)\geq H_0(t)$ admits an interior ball of radius at least $\tfrac{1}{(1+\delta(t))H(p,t)}$ and an exterior ball of radius at least $\tfrac{1}{\delta(t)H(p,t)}$. **Outline.** To finish this introduction, let us now describe some of the key ideas behind the proofs of our two main estimates (Theorem \[estimate1\] and Theorem \[estimate2\]). The estimates are very easy to prove for smooth flows, so let us start by explaining this: First, from the evolution equation for the mean curvature [@Huisken_manifolds Cor. 3.5], $$\partial_t H={\Delta}H+{\lvert A\rvert}^2 H + \textrm{Rc}(\nu,\nu)H,$$ and the maximum principle, one sees that the minimum of the mean curvature can deteriorate at most exponentially in time. Second, combining the evolution equation for the square norm of the second fundamental form [@Huisken_manifolds Cor. 3.5], $$\begin{aligned} \partial_t {\lvert A\rvert}^2=&{\Delta}{\lvert A\rvert}^2-2{\lvert \nabla A\rvert}^2+2{\lvert A\rvert}^4+2 \textrm{Rc}(\nu,\nu){\lvert A\rvert}^2\nonumber\\ &-4(h_{ij}h_{jm}R_{mlil}-h_{ij}h_{lm}R_{milj})-2h_{ij}(\nabla R_{0lil}+\nabla_l R_{0ijl}),\end{aligned}$$ and the evolution equation for the mean curvature, one sees that the maximum of ${\lvert A\rvert}/H$ increases at most exponentially in time. We emphasize that the above estimates crucially rely on one another. Namely, to control the reaction terms in the evolution for ${\lvert A\rvert}/H$ we need the lower bound for $H$ from the first step. Having sketched the argument in the smooth case, the main difficulty is to generalize this argument to the level set flow beyond the first singular time. As in White [@white_subsequent], a natural first approach to try would be to use elliptic regularization. 
Recall that the time of arrival function $u$ of a mean convex flow $\{ K_t\}_{t\geq 0}$ is defined by $u(x)=t$ if and only if $x\in\partial K_t$. For mean convex flows in Euclidean space, the time of arrival function $u:K_0\to \mathbb{R}$ is a bounded real valued function with domain $K_0$, and can be approximated by solutions of the Dirichlet problem $$\begin{aligned} \textrm{div}\left(\frac{Du_{\varepsilon}}{\sqrt{{\varepsilon}^2+{\lvert Du_{\varepsilon}\rvert}^2}}\right)+\frac{1}{\sqrt{{\varepsilon}^2+{\lvert Du_{\varepsilon}\rvert}^2}}=0 & \qquad \textrm{in} \,\,\textrm{Int}(K_0),\nonumber\\ u_{\varepsilon}=0 &\qquad \textrm{on} \,\,\partial K_0.\end{aligned}$$ The elliptic regularization technique has been known for a long time [@evans-spruck; @CGG], see also [@Ilmanen], and arguing as in [@white_subsequent; @HK_meanconvex] can be used to prove that the two main estimates (with $\rho=0$) hold for the level set flow in Euclidean space. However, extending these arguments to level set flow in Riemannian manifolds is not straightforward. The key difference between level set flow in Euclidean space and level set in general ambient manifolds, is that in the latter case the flow generally does not become extinct in finite time, but converges to a nonempty limit $K_\infty$ for $t\to\infty$. Consequently, the time of arrival function $u$ is only defined on the set $K_0\setminus K_\infty$. Thus, it is (a) not clear a priori how to approximate $u$ by smooth solutions, and (b) even if one succeeds in approximating $u$ by smooth solutions it is not obvious how to prove our main estimates using the approximators, since one would have to somehow bring in the exponential in time factor and would have to cut off all quantities under consideration for $t\to \infty$. To overcome the above difficulties, we consider a new double-approximation scheme. Namely, we consider functions $u_{{\varepsilon},\sigma}$ solving the Dirichlet problem $$\begin{aligned} \label{double_reg} \textrm{div}\left(\frac{Du_{{\varepsilon},\sigma}}{\sqrt{{\varepsilon}^2+{\lvert Du_{{\varepsilon},\sigma}\rvert}^2}}\right)+\frac{1}{\sqrt{{\varepsilon}^2+{\lvert Du_{{\varepsilon},\sigma}\rvert}^2}}&=\sigma u_{{\varepsilon},\sigma} & \textrm{in} \,\,\textrm{Int}(K_0),\nonumber\\ u_{{\varepsilon},\sigma}&= 0 & \textrm{on} \,\,\partial K_0.\end{aligned}$$ The idea, inspired in part by the Schoen-Yau proof of the positive mass theorem [@SchoenYau], is that for $\sigma>0$ the maximum principle gives the a-priori sup bound $u_{{\varepsilon},\sigma}\leq \frac{1}{{\varepsilon}\sigma}$. Thus, as we will see in Section \[double\_app\_ex\_sec\], for positive $\sigma$ the Dirichlet problem can be solved using a standard continuity argument. We then argue that for $\sigma\to 0$ we have convergence in an appropriate sense to functions $u_{\varepsilon}$, which in turn for ${\varepsilon}\to 0$ converge to the time of arrival function $u:K_0\setminus K_\infty\to \mathbb{R}$, see Section \[pass\_lim\]. This solves the above difficulty (a). More fundamentally, we use our double approximation to also solve the difficulty (b). Namely, in Section \[sec\_estforH\] and Section \[sec\_estforAH\] we prove two estimates for carefully chosen quantities at the level of the double approximators $M^{{\varepsilon},\sigma}=\textrm{graph}(u_{{\varepsilon},\sigma}/{\varepsilon})$. 
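To fix ideas about what the time of arrival function and the Dirichlet problem above encode, here is a quick symbolic sanity check (a sketch added for illustration, not taken from the paper): the arrival time of a shrinking round ball in Euclidean space $\mathbb{R}^n$, namely $u(x)=(R^2-|x|^2)/(2(n-1))$, solves the formal ${\varepsilon}=0$ limit $\textrm{div}(Du/{\lvert Du\rvert})+1/{\lvert Du\rvert}=0$. The radial expression used for the divergence is assumed.

```python
import sympy as sp

n = sp.Symbol('n', positive=True)        # spatial dimension, assume n > 1 below
R, r = sp.symbols('R r', positive=True)  # initial radius and radial coordinate |x|

u   = (R**2 - r**2) / (2*(n - 1))        # arrival time of the shrinking round ball
u_r = sp.diff(u, r)                      # = -r/(n-1) < 0 for n > 1
grad_norm = -u_r                         # |Du|, using that u is radially decreasing

# radial divergence of the unit field Du/|Du| in R^n, plus 1/|Du|
lhs = sp.diff(r**(n - 1) * u_r/grad_norm, r) / r**(n - 1) + 1/grad_norm
print(sp.simplify(lhs))                  # -> 0, so u solves div(Du/|Du|) + 1/|Du| = 0
```

In this Euclidean example the flow becomes extinct in finite time and $u$ is defined on all of $K_0$; the difficulties described next arise precisely when this fails.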
We choose our quantities in such a way that, on the one hand, they satisfy the maximum principle and, on the other hand, taking the limits $\sigma\to 0$ and ${\varepsilon}\to 0$ of the estimates for the double approximators yields the two main estimates for the actual level set flow. There is obviously quite some tension between these two desired properties, and we thus have to design our quantities for the double approximate estimates very carefully. For example, to estimate ${\lvert A\rvert}/H$ we consider the quantity $$\frac{{\lvert A\rvert}+\Lambda\sigma u_{{\varepsilon},\sigma}}{(H+\sigma u_{{\varepsilon},\sigma})e^{\rho u_{{\varepsilon},\sigma}}},$$ which turns out to indeed satisfy the maximum principle after also taking into account an improved Kato inequality at points where ${\lvert A\rvert}/H$ is large, see Section \[sec\_estforAH\]. Finally, in Section \[pass\_lim\] we show that taking the limits $\sigma\to 0$ and ${\varepsilon}\to 0$ of our double approximate estimates indeed yields Theorem \[estimate1\] and Theorem \[estimate2\], and thus Theorem \[structurethm\].\ **Acknowledgements.** We thank Brian White for bringing the problem of subsequent singularities in Riemannian manifolds to our attention. This work has been partially supported by the NSF grants DMS-1406394 and DMS-1406407. The second author wishes to thank Jeff Cheeger for his generous support during the work on this project. Existence of double approximators {#double_app_ex_sec} ================================= The goal of this section is to prove the existence of double approximators. \[existence\_thm\] If $K_0\subset N$ is a mean convex domain in a Riemannian manifold, then the Dirichlet problem has a unique smooth solution $u_{{\varepsilon},\sigma}$ for every ${\varepsilon},\sigma>0$. To prove Theorem \[existence\_thm\] we will use the continuity method (see e.g. [@SchoenYau; @Schulze] for the continuity method applied to related equations). Namely, we consider the Dirichlet problem $$\begin{aligned} \label{trip_app_eq} \textrm{div}\left(\frac{Du_{{\varepsilon},\sigma,\kappa}}{\sqrt{{\varepsilon}^2+{\lvert Du_{{\varepsilon},\sigma,\kappa}\rvert}^2}}\right)+\frac{\kappa}{\sqrt{{\varepsilon}^2+{\lvert Du_{{\varepsilon},\sigma,\kappa}\rvert}^2}}&=\sigma u_{{\varepsilon},\sigma,\kappa} & \textrm{in} \,\,\textrm{Int}(K_0),\nonumber\\ u_{{\varepsilon},\sigma,\kappa}&= 0 & \textrm{on} \,\,\partial K_0.\end{aligned}$$ For $\kappa=0$ the problem has the obvious solution $u_{{\varepsilon},\sigma,0}=0$. We will now derive the needed a priori estimates for $\kappa\in[0,1]$. Note first that we have the sup-bound $$\label{easy_sup_bound} 0 \leq u_{{\varepsilon},{\sigma},{\kappa}}\leq \frac{{\kappa}}{{\sigma}{\varepsilon}},$$ which follows directly from the maximum principle. To proceed further, we consider the graph $M^{{\varepsilon},{\sigma},{\kappa}}={\mathrm{graph}}(u_{{\varepsilon},{\sigma},{\kappa}}/{\varepsilon})\subset N\times \mathbb{R}_+$. We write $\tau=\tfrac{\partial}{\partial z}$ for the unit vector in the $\mathbb{R}_+$ direction, and $\nu$ for the upward pointing unit normal of $M$ (here and in the following we drop the dependence on $({\varepsilon},\sigma,\kappa)$ in the notation when there is no risk of confusion). Written more geometrically, equation takes the form $$\label{equation_geom} H+\sigma u=\kappa V,$$ where $H$ is the mean curvature of $M\subset N\times \mathbb{R}_+$, and $V=\tfrac{1}{{\varepsilon}}\langle \tau,\nu\rangle$.
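For later use, we note two elementary consequences of the graph description (a routine verification, included here for the reader's convenience). First, since $M^{{\varepsilon},{\sigma},{\kappa}}$ is the graph of $u_{{\varepsilon},{\sigma},{\kappa}}/{\varepsilon}$, the upward unit normal satisfies $$\langle \tau,\nu\rangle=\Bigl(1+{\lvert Du_{{\varepsilon},{\sigma},{\kappa}}\rvert}^2/{\varepsilon}^2\Bigr)^{-1/2}, \qquad \textrm{hence}\qquad V=\tfrac{1}{{\varepsilon}}\langle \tau,\nu\rangle=\bigl({\varepsilon}^2+{\lvert Du_{{\varepsilon},{\sigma},{\kappa}}\rvert}^2\bigr)^{-1/2}.$$ Second, the sup-bound above follows from the maximum principle in one line: at an interior maximum of $u_{{\varepsilon},{\sigma},{\kappa}}$ we have $Du_{{\varepsilon},{\sigma},{\kappa}}=0$ and the divergence term is nonpositive, so the equation gives $$\sigma u_{{\varepsilon},{\sigma},{\kappa}}\leq \frac{{\kappa}}{\sqrt{{\varepsilon}^2+{\lvert Du_{{\varepsilon},{\sigma},{\kappa}}\rvert}^2}}=\frac{{\kappa}}{{\varepsilon}}$$ at that point; together with $u_{{\varepsilon},{\sigma},{\kappa}}=0$ on $\partial K_0$ this yields $0\leq u_{{\varepsilon},{\sigma},{\kappa}}\leq \tfrac{{\kappa}}{{\sigma}{\varepsilon}}$ (the lower bound follows similarly at an interior minimum).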
We write $\langle\cdot,\cdot\rangle$ for the product metric on $N\times \mathbb{R}_+$, and $\nabla$ for the covariant derivative on $M$. We will frequently use the following general lemma about graphs. \[lap\_calc\] On any graph $M\subset N\times\mathbb{R}_+$ we have $$\label{prod_lap} \Delta \langle \tau,\nu \rangle=\langle \tau, \nabla H \rangle-\left(\textrm{Rc}(\nu,\nu)+|A|^2\right)\langle\tau,\nu\rangle.$$ Moreover, the weight function $w=e^{mz}$, where $m$ is a constant, satisfies $$\label{dump_der} \nabla w= mw\tau^\top,\qquad\qquad \Delta w=\left(m^2{\lvert \tau^\top\rvert}^2-m\langle \tau, \nu \rangle H\right)w,$$ where $\tau^\top=\tau-\langle \tau,\nu\rangle \nu$ denotes the tangential part of $\tau$. Let $e_i$ be an orthonormal frame with $\nabla_{e_i} e_j=0$ at the point in consideration, and let $h_{ij}=A(e_i,e_j)$ be the components of the second fundamental form. Note that $$\nabla \langle \tau,\nu\rangle=\nabla_{e_i}\langle \tau,\nu\rangle\, e_i=h_{ij}\langle \tau, e_j\rangle\, e_i,$$ where here and in the following repeated indices are summed over. Using this, we compute $$\Delta \langle \tau,\nu \rangle =\textrm{div}(\nabla \langle \tau,\nu\rangle)= \nabla_{e_i}h_{ij}\langle \tau, e_j \rangle-h_{ij}h_{ij}\langle \tau, \nu \rangle.$$ The Codazzi identity gives $\nabla_{e_i}h_{ij}=\nabla_{e_j}H+\textrm{Rc}(\nu,e_j)$. Since there is no curvature in $\tau$-direction we have $\textrm{Rc}(\nu,\tau^\top)=- \langle \tau,\nu\rangle\, \textrm{Rc}(\nu,\nu)$, and equation follows. Arguing similarly, we compute $\nabla w= \nabla_{e_i} w\, e_i=mw\langle \tau,e_i \rangle\, e_i$, and $$\Delta w=\textrm{div}(\nabla w)=m^2w\langle \tau,e_i \rangle\langle \tau,e_i \rangle-mwh_{ii}\langle \tau,\nu \rangle.$$ This proves the lemma. \[cor\_evolution\_vw\] On $M^{{\varepsilon},\sigma,\kappa}={\mathrm{graph}}(u_{{\varepsilon},\sigma,\kappa}/{\varepsilon})\subset N\times\mathbb{R}_+$ we have $$\begin{gathered} \Delta (V w)=-\left(\textrm{Rc}(\nu,\nu)+|A|^2+m^2{\lvert \tau^\top\rvert}^2+m (\kappa V-\sigma u){\varepsilon}V\right)Vw\\ +2m\langle \tau,\nabla(V w)\rangle+\tfrac{1}{{\varepsilon}}\langle \tau, \kappa\nabla V-\sigma\nabla u \rangle w.\end{gathered}$$ Recall that $V=\tfrac{1}{{\varepsilon}}\langle \tau,\nu\rangle$ and that $H=\kappa V-\sigma u$. Using this and the formula $$\Delta (Vw)=V\Delta w+w\Delta V+\frac{2}{w}\langle \nabla w,\nabla (Vw) \rangle -\frac{2V}{w}{\lvert \nabla w\rvert}^2,$$ the claim follows from a short computation. \[V\_bd\_1\] Choosing $m> 2\max_{K_0}{\lvert {\textrm{Rc}}\rvert}^{1/2}$ the function $V:M^{{\varepsilon},\sigma,\kappa}\to \mathbb{R}$ satisfies $$V(x,z) \geq \min\left(\frac{1}{2{\varepsilon}},\min_{\partial K_0}V\right)\cdot e^{-mz}.$$ If $Vw$ attains its minimum on $\partial M=\partial K_0$ we are done. Suppose now $Vw$ attains its minimum at an interior point $(x_0,z_0)\in M\setminus \partial M$. If $V(x_0,z_0)\geq \tfrac{1}{2{\varepsilon}}$ there is nothing to prove. Suppose now $V(x_0,z_0)< \tfrac{1}{2{\varepsilon}}$. Since $M$ is the graph of $u/{\varepsilon}$, we have $\langle \nabla u,\tau\rangle\geq0$, and since $(x_0,z_0)$ is a critical point of $Vw$, we have $\nabla V=-mV\tau^\top$, and thus $\langle \tau,\nabla V\rangle=-mV{\lvert \tau^\top\rvert}^2$. Using this, and dropping some terms with the good sign, Corollary \[cor\_evolution\_vw\] implies that $$\label{Vw_inequatmin} {\textrm{Rc}}(\nu,\nu)+m^2{\lvert \tau^\top\rvert}^2-m {\varepsilon}\sigma u V+\tfrac{m\kappa}{{\varepsilon}}{\lvert \tau^\top\rvert}^2\leq 0$$ at $(x_0,z_0)$. 
On the other hand, recalling that ${\varepsilon}V< \tfrac12$, we have ${\lvert \tau^\top\rvert}^2=1-{\varepsilon}^2 V^2\geq \tfrac34$. Together with the bound $\max_{K_0}{\lvert Rc\rvert}\leq \tfrac{1}{4}m^2$ and the estimate we thus obtain $${\textrm{Rc}}(\nu,\nu)+m^2{\lvert \tau^\top\rvert}^2-m {\varepsilon}\sigma u V+\tfrac{m\kappa}{{\varepsilon}}{\lvert \tau^\top\rvert}^2\geq \tfrac{1}{2}m^2+\tfrac{1}{4{\varepsilon}}m\kappa>0;$$ a contradiction. This proves the proposition. \[remark\_upperlower\] Recalling that $V=({\varepsilon}^2+{\lvert Du\rvert}^2)^{-1/2}$, we see that the lower bound for $V$ from Proposition \[V\_bd\_1\] is equivalent to an upper bound for ${\lvert D u\rvert}$. \[boundary\_grad\_est1\] There exists a constant $C=C({\varepsilon},{\sigma},K_0)<\infty$ such that $$\sup_{\partial K_0}|D u_{{\varepsilon},{\sigma},{\kappa}}| \leq C.$$ Let $r$ be the distance function to $\partial K_0$, and let $\delta>0$, to be chosen later, be such that $r$ is smooth on $T_{\delta}=\{x\in K_0\, |\, r(x)<\delta\}$. By estimate , for any $C\geq\frac{1}{{\sigma}{\varepsilon}\delta}$, the quantity $v=C r$ satisfies $v\geq u_{{\varepsilon},{\sigma},{\kappa}}$ on $\partial T_\delta$. We will now show that, for $C$ large enough, $v$ is a supersolution of equation . To this end we compute $$\mathrm{div}\left(\frac{D v}{\sqrt{{\varepsilon}^2+|D v|^2}}\right)+{\kappa}\frac{1}{\sqrt{{\varepsilon}^2+|D v|^2}}-{\sigma}v \leq C \frac{\Delta r}{\sqrt{{\varepsilon}^2+C^2}}+\frac{1}{\sqrt{{\varepsilon}^2+C^2}}.$$ Note that $\Delta r=-H_{S_r}$ where $S_r=\{x\in K_0 \, |\, d(x,\partial K_0)=r\}$. Since $H_0:=\min_{\partial K_0}H>0$, by the smoothness of $K_0$ and the Riccati equation, there exists a $\delta=\delta(K_0)>0$ such that $r$ is smooth on $T_\delta$ and $\Delta r \leq -\tfrac{1}{2}H_0$ there. Thus, for $C = \max\{\tfrac{1}{{\sigma}{\varepsilon}\delta},\tfrac{2}{H_0}\}$, the function $v$ is a supersolution of . Since $|D r|=1$, this implies that $\sup_{\partial K_0}|D u|\leq C$. We can now prove the main theorem of this section. Note first that equation is of the form $$a_{ij}(D u_{{\varepsilon},{\sigma}})D_iD_ju_{{\varepsilon},{\sigma}}+b(D u_{{\varepsilon},{\sigma}})-{\sigma}u_{{\varepsilon},{\sigma}}=0.$$ If $u_{{\varepsilon},{\sigma}}$ and $\hat{u}_{{\varepsilon},{\sigma}}$ are two solutions of the Dirichlet problem, then at an interior minimum of $v=u_{{\varepsilon},{\sigma}}-\hat{u}_{{\varepsilon},{\sigma}}$ we have $D u_{{\varepsilon},{\sigma}}=D \hat{u}_{{\varepsilon},{\sigma}}$ and thus $$a_{ij}(D u_{{\varepsilon},{\sigma}})D_iD_j v-{\sigma}v=0,$$ which implies $v \geq 0$. Changing the roles of $u_{{\varepsilon},{\sigma}}$ and $\hat{u}_{{\varepsilon},{\sigma}}$, this proves uniqueness. To prove existence, fix ${\varepsilon},{\sigma}>0$, and let $$I=\{{\kappa}\in [0,1]\,|\, \textrm{equation \eqref{trip_app_eq} has a solution with the parameters } ({\varepsilon},{\sigma},{\kappa})\}.$$ We want to show that $1\in I$. Since $0\in I$, it suffices to show that $I$ is open and closed. To show closedness, we first recall the sup-bound $u\leq \tfrac{1}{{\varepsilon}\sigma}$ from , and observe that Proposition \[V\_bd\_1\], Remark \[remark\_upperlower\] and Lemma \[boundary\_grad\_est1\] give the estimate $$\sup_{K_0}{\lvert Du\rvert}\leq C,$$ where $C$ is independent of $\kappa$. By DeGiorgi-Nash-Moser and Schauder estimates we get ${\kappa}$-independent higher derivative bounds up to the boundary for solutions of the $({\varepsilon},{\sigma},{\kappa})$-problem if ${\kappa}\in I$.
If $\{{\kappa}_m\}\subseteq I$ and ${\kappa}_m \rightarrow {\kappa}$, it follows that a subsequence of $u_{{\varepsilon},{\sigma},{\kappa}_m}$ converges to a solution $u_{{\varepsilon},{\sigma},{\kappa}}$ of the $({\varepsilon},{\sigma},{\kappa})$-problem, which implies that ${\kappa}\in I$.\ To show that $I$ is open, consider the operator $\mathcal{M}_{\kappa}:C^{2,\alpha}_0(K_0)\rightarrow C^{\alpha}(K_0)$ given by $$\mathcal{M}_{\kappa}(u)=\mathrm{div}\left(\frac{D u}{\sqrt{{\varepsilon}^2+|D u|^2}}\right)+\frac{{\kappa}}{\sqrt{{\varepsilon}^2+|D u|^2}}-{\sigma}u.$$ Assuming ${\kappa}\in I$, its linearization at $u_{{\varepsilon},{\sigma},{\kappa}}$ is given by $$\mathcal{L}_{{\kappa}}(v)=\mathrm{div}\left(\frac{D v}{\sqrt{{\varepsilon}^2+|D u_{{\varepsilon},{\sigma},{\kappa}}|^2}}-\frac{\langle D u_{{\varepsilon},{\sigma},{\kappa}},D v \rangle D u_{{\varepsilon},{\sigma},{\kappa}}}{\left({\varepsilon}^2+|D u_{{\varepsilon},{\sigma},{\kappa}}|^2\right)^{3/2}} \right)-\frac{{\kappa}\langle D u_{{\varepsilon},{\sigma},{\kappa}},D v \rangle}{\left({\varepsilon}^2+|D u_{{\varepsilon},{\sigma},{\kappa}}|^2\right)^{3/2}}-{\sigma}v.$$ Note that at a positive maximum of $v$, $$\mathcal{L}_{{\kappa}}(v)\leq \frac{1}{\sqrt{{\varepsilon}^2+|D u_{{\varepsilon},{\sigma},{\kappa}}|^2}}\left(\Delta v-\frac{\mathrm{Hess}\,v\,(D u_{{\varepsilon},{\sigma},{\kappa}},D u_{{\varepsilon},{\sigma},{\kappa}})}{{\varepsilon}^2+|D u_{{\varepsilon},{\sigma},{\kappa}}|^2}\right)-\sigma v< 0,$$ and similarly at a negative minimum point, $\mathcal{L}_{{\kappa}}(v)> 0$. Hence, $v=0$ is the unique solution to $\mathcal{L}_{{\kappa}}(v)=0$ with zero boundary. Thus, by standard elliptic theory, the map $\mathcal{L}_{{\kappa}}:C^{2,\alpha}_0(K_0)\rightarrow C^{\alpha}(K_0)$ is invertible, and by the inverse function theorem, the map $\mathcal{M}:[0,1]\times C^{2,\alpha}_0(K_0)\rightarrow [0,1]\times C^{\alpha}(K_0)$ given by $\mathcal{M}({\kappa},u)=({\kappa},\mathcal{M}_{\kappa}(u))$ is locally invertible. Taking also into account the higher derivative estimates we conclude that $I$ is open, and we are done. Double approximate estimate for $H$ {#sec_estforH} =================================== The goal of this section is to derive a lower bound for the mean curvature. As explained in the introduction, we will work at the level of the double approximators $M^{{\varepsilon},{\sigma}}={\mathrm{graph}}(u_{{\varepsilon},\sigma}/{\varepsilon})$, where $u_{{\varepsilon},\sigma}$ is a solution of with ${\varepsilon},\sigma\in(0,1)$. The task is then to find a suitable quantity that on the one hand satisfies the maximum principle and on the other hand gives the desired mean curvature bound in the limit $\sigma,{\varepsilon}\to 0$. It turns out that for the mean curvature estimate the quantity $H+\sigma u_{{\varepsilon},\sigma}$ does the job. \[V\_bd\_2\] There exist constants $c=c(K_0)>0$ and $\rho=\rho(K_0)<\infty$ such that $$H\left(x,\tfrac{1}{{\varepsilon}}u_{{\varepsilon},\sigma}(x)\right)+\sigma u_{{\varepsilon},\sigma}(x) \geq c e^{-\rho u_{{\varepsilon},{\sigma}}(x)}.$$ for every $x\in K_0$, whenever $u_{{\varepsilon},\sigma}$ is a solution of with ${\varepsilon},\sigma\in(0,1)$. \[lim\_H\_rk\] Taking the limits $\sigma\to 0$ and ${\varepsilon}\to 0$ the estimate from Theorem \[V\_bd\_2\] yields the mean curvature lower bound from Theorem \[estimate1\], see Section \[pass\_lim\] for the proof. 
In view of the equation $V=H+\sigma u$, proving Theorem \[V\_bd\_2\] amounts to improving the lower bound for $V$ from Section \[double\_app\_ex\_sec\] in two ways. Namely, we will argue that in the case $\kappa=1$ the factor $e^{-mz}$ in Proposition \[V\_bd\_1\] can be replaced by the better factor $e^{-\rho{\varepsilon}z}$, and we will replace Lemma \[boundary\_grad\_est1\] by a boundary estimate which is uniform in ${\varepsilon}$ and $\sigma$. \[V\_better\_bd\] Choosing $\rho>4\max_{K_0}{\lvert {\textrm{Rc}}\rvert}$ the function $V:M^{{\varepsilon},\sigma}\to \mathbb{R}$ satisfies $$V(x,z) \geq \min\left(\frac{1}{2{\varepsilon}},\min_{\partial K_0}V\right)\cdot e^{-{\varepsilon}\rho z}.$$ Consider the function $Vw$ where $w=e^{\rho{\varepsilon}z}$. As in the proof of Proposition \[V\_bd\_1\] we can assume that $Vw$ attains its minimum at an interior point $(x_0,z_0)\in M\setminus\partial M$ and that $V(x_0,z_0)<\tfrac{1}{2{\varepsilon}}$ (otherwise there is nothing to prove). The estimate with $\kappa=1$ and $m=\rho{\varepsilon}$ reads $${\textrm{Rc}}(\nu,\nu)- {\varepsilon}^2\rho \sigma u V+(\rho+{\varepsilon}^2\rho^2){\lvert \tau^\top\rvert}^2\leq 0.$$ Combining this with the inequalities ${\varepsilon}\sigma u\leq 1$, $V<\tfrac{1}{2{\varepsilon}}$, and ${\lvert \tau^\top\rvert}\geq \tfrac34$ yields $${\textrm{Rc}}(\nu,\nu)+\tfrac14 \rho< 0,$$ which contradicts our choice of $\rho$. This proves the proposition. \[boundary\_grad\_est2\] There exists a constant $C=C(K_0)<\infty $ such that $$\sup_{\partial K_0}|D u_{{\varepsilon},{\sigma}}| \leq C.$$ As in the proof of Lemma \[boundary\_grad\_est1\] we will construct a suitable barrier function, but this time by bending the smooth solution to infinity (c.f. [@BM Lemma 18]). By mean convexity, for $T_0=T_0(K_0)>0$ small enough the restricted time of arrival function $u:K_0\setminus K_{T_0}\to \mathbb{R}$ is smooth and satisfies the estimates $$\label{smooth_der_bds} C^{-1} \leq |D u| \leq C,\;\;\;\; |\mathrm{Hess} u|\leq C,$$ for some $C=C(K_0)<\infty$. Recall also that $u$ satisfies the equation $$\label{levelseteq} \mathrm{div}\left(\frac{Du}{{\lvert Du\rvert}}\right)+\frac{1}{{\lvert Du\rvert}}=0.$$ For $T\in(0,T_0)$, let ${\phi}:[0,T)\rightarrow [0,\infty)$, ${\phi}(t)=\frac{1}{T-t}-\frac{1}{T}$. We will now show that for $T$ small enough the function $v={\phi}(u)$ is a supersolution of equation . To this end, we compute $$\begin{aligned} &\mathrm{div}\left(\frac{D v}{\sqrt{{\varepsilon}^2+|D v|^2}}\right)=\mathrm{div}\left(\frac{{\phi}' Du }{\sqrt{{\varepsilon}^2+|{\phi}' D u|^2}}\right)\\ &\qquad=\frac{{\phi}''|D u|^2}{\sqrt{{\varepsilon}^2+|D v|^2}}+\frac{{\phi}'}{\sqrt{{\varepsilon}^2+|D v|^2}}\left(\Delta u -\frac{{\phi}'{\phi}''|D u|^4+{\phi}'^2\mathrm{Hess}\,u\, (D u,D u)}{{\varepsilon}^2+|D v|^2} \right)\nonumber\\ &\qquad=\frac{{\varepsilon}^2{\phi}''|D u|^2}{\left({\varepsilon}^2+|D v|^2\right)^{3/2}}-\frac{{\phi}'}{\sqrt{{\varepsilon}^2+|D v|^2}}+\frac{{\varepsilon}^2{\phi}'}{({\varepsilon}^2+{\lvert Dv\rvert}^2)^{3/2}} \mathrm{Hess}\,u\left(\frac{Du}{{\lvert Du\rvert}},\frac{Du}{{\lvert Du\rvert}}\right),\nonumber\end{aligned}$$ where we used equation in the last step. 
Now observe that $$\frac{|D u|^2}{{\varepsilon}^2+|D v|^2} \leq \frac{1}{{\phi}'^2},\qquad\qquad \frac{{\phi}'}{{\varepsilon}^2+{\lvert Dv\rvert}^2}\leq \frac{1}{{\phi}' {\lvert Du\rvert}^2}.$$ Thus, taking also into account we conclude that $$\begin{gathered} \sqrt{{\varepsilon}^2+|D v|^2}\left(\textrm{div}\left( \frac{Dv}{\sqrt{{\varepsilon}^2+{\lvert Dv\rvert}^2}}\right)-\sigma v\right)+1,\\ \leq 2{\varepsilon}^2(T-t)-\frac{1}{(T-t)^2}+C{\varepsilon}^2 (T-t)^2+1\end{gathered}$$ which is negative if $T=T(K_0)$ is sufficiently small. Thus, for such $T$, the function $v$ is a supersolution of equation with $v=0$ on $\partial K_0$ and $v\to\infty$ on $\partial K_T$. Therefore, $$\sup_{\partial K_0}|D u_{{\varepsilon},{\sigma}}| \leq \sup_{\partial K_0}|D v| \leq \frac{C}{T^2}.$$ This proves the lemma. \[remark\_uniformlower\] Similarly, considering the function ${\phi}(t)=ct(T-t)$ we see that there is a constant $c=c(K_0)>0$ such that $\inf_{\partial K_0}{\lvert Du_{{\varepsilon},\sigma}\rvert}\geq c>0$. Recalling that $V=H+\sigma u_{{\varepsilon},\sigma}=({\varepsilon}^2+{\lvert Du_{{\varepsilon},\sigma}\rvert}^2)^{-1/2}$, the theorem follows by combining Proposition \[V\_better\_bd\] and Lemma \[boundary\_grad\_est2\]. Double approximate estimate for $|A|/H$ {#sec_estforAH} ======================================= The purpose of this section is to prove the following estimate. \[main\_curv\_bound\] There exist constants $\rho=\rho(K_0)<\infty$, $C=C(K_0)<\infty$, ${\varepsilon}_0={\varepsilon}_0(K_0)>0$ and ${\sigma}_0={\sigma}_0(K_0)>0$, such that $$\frac{{\lvert A\rvert}\left(x,\tfrac{1}{{\varepsilon}}u_{{\varepsilon},\sigma}(x)\right)}{H\left(x,\tfrac{1}{{\varepsilon}}u_{{\varepsilon},\sigma}(x)\right)+\sigma u_{{\varepsilon},\sigma}(x)} \leq C e^{\rho u_{{\varepsilon},{\sigma}}(x)}$$ for all $x\in K_0$, whenever $u_{{\varepsilon},\sigma}$ is a solution of with ${\varepsilon}<{\varepsilon}_0$ and $\sigma<\sigma_0$. Taking the limits $\sigma\to 0$ and ${\varepsilon}\to 0$ the estimate from Theorem \[main\_curv\_bound\] yields the estimate for ${\lvert A\rvert}/H$ from Theorem \[estimate2\], see Section \[pass\_lim\] for the proof. We will prove Theorem \[main\_curv\_bound\] by applying the maximum principle to the function $$\label{def_ofg} G=\frac{{\lvert A\rvert}+\Lambda\sigma u}{Vw},$$ where $V=H+\sigma u$, $w=e^{{\varepsilon}\rho z}$, and where $\rho<\infty$ and $\Lambda<\infty$ will be specified later. As will become clear below, the extra term $\Lambda \sigma u$ is crucial for the maximum principle. We begin by computing the Laplacian of the norm of the second fundamental form. \[A\_lap\] At any interior point with ${\lvert A\rvert}\neq 0$ we have $$\Delta |A| -\tfrac{|\nabla A|^2-|\nabla|A||^2}{|A|}\geq \tfrac{1}{{\varepsilon}}\langle \tau,\nabla |A| \rangle-{\lvert A\rvert}^3 -C{\sigma}u|A|^2-C\max\left(1,\sigma u,{\lvert A\rvert}\right).$$ We recall Simon’s inequality for hypersurfaces in Riemannian manifolds [@Simons_identity], $$\tfrac{1}{2}\Delta |A|^2-|\nabla A|^2 \geq \langle A,\nabla^2 H \rangle -|A|^4+H\textrm{tr}(A^3)-C(|A|+|A|^2),$$ where $C=C(\max_{K_0}{\lvert \textrm{Rm}\rvert},\max_{K_0}{\lvert \nabla\textrm{Rm}\rvert})$. To find the Hessian of the mean curvature in our case we use the formula $H=\tfrac{1}{{\varepsilon}}\langle \tau,\nu\rangle-\sigma u$, and compute (c.f. 
Lemma \[lap\_calc\]): $$\nabla^2 \langle \tau,\nu\rangle=\nabla_{\tau^\top} A-\left(A^2+\textrm{Rm}(\nu,\cdot,\nu,\cdot)\right)\langle \tau,\nu\rangle,$$ and $$\label{hessianofu} \nabla^2 u=-{\varepsilon}\langle \tau,\nu\rangle A.$$ It follows that $$\langle A,\nabla^2 H\rangle \geq \tfrac{1}{{\varepsilon}} \langle A,\nabla_{\tau^\top} A\rangle -(H+\sigma u)\textrm{tr}(A^3)-C(\sigma u+{\lvert A\rvert}){\lvert A\rvert},$$ and thus $$\tfrac{1}{2}\Delta |A|^2 -{\lvert \nabla A\rvert}^2\geq \frac{1}{2{\varepsilon}}\langle \tau,\nabla |A|^2 \rangle-{\lvert A\rvert}^4-\sigma u\textrm{tr}(A^3)-C(1+\sigma u+{\lvert A\rvert}){\lvert A\rvert}.$$ This implies the claim. To make use of the gradient term, we prove the following improved Kato inequality. \[strong\_kato\] There exist constants $c=c(n)<1$ and $C=C(\max_{K_0}{\lvert \textrm{Rm}\rvert})<\infty$ such that $${\lvert {\nabla}{{\lvert A\rvert}}\rvert}\leq c {\lvert {\nabla}A\rvert}+2{\lvert {\nabla}H\rvert}+C.$$ For any unit vector $X$, we will derive an estimate for the quantity $${\lvert A\rvert}{\lvert {\nabla}_X{\lvert A\rvert}\rvert}=\frac{1}{2}{\lvert {\nabla}_X{\lvert A\rvert}^2\rvert}={\lvert \langle {\nabla}A,X\otimes A\rangle\rvert}\, .$$ Let $({\nabla}A)^{\textrm{sym}}$ be the totally symmetric part of the 3-tensor ${\nabla}A$, i.e. $$({\nabla}A)^{\textrm{sym}}_{ijk}=\tfrac{1}{3}({\nabla}_i A_{jk}+{\nabla}_j A_{ik}+{\nabla}_k A_{ij}).$$ Using the Codazzi identity and the bound ${\lvert \textrm{Rm}\rvert}\leq C$ we see that $$\label{est_cod1} {\lvert ({\nabla}A)^{\textrm{sym}}-{\nabla}A\rvert}\leq C.$$ Next, observe that any totally symmetric 3-tensor $T$ can be decomposed as $T=T^{\textrm{tr}}+T^0$, where $$T^{\textrm{tr}}_{ijk}=\frac{1}{n+2}\left( T_{ppi}g_{jk}+T_{ppj}g_{ik}+T_{ppk}g_{ij} \right)$$ is the trace-part, and $T^0$ is the totally traceless part.\ Using again the Codazzi identity and the bound ${\lvert \textrm{Rm}\rvert}\leq C$ we see that $$\label{est_cod2} {\lvert ({\nabla}A)^{\textrm{sym,tr}}\rvert}\leq \frac{3\sqrt{n}}{n+2}{\lvert \nabla H\rvert}+C.$$ Combining and we obtain the estimate $${\lvert \langle {\nabla}A,X\otimes A\rangle\rvert}\leq {\lvert \langle ({\nabla}A)^{\textrm{sym,0}},X\otimes A\rangle\rvert}+\frac{3\sqrt{n}}{n+2}{\lvert \nabla H\rvert}{\lvert A\rvert}+C{\lvert A\rvert}.$$ Observing that ${\lvert ({\nabla}A)^{\textrm{sym,0}},(X\otimes A)\rangle\rvert}\leq {\lvert {\nabla}A\rvert} {\lvert (X\otimes A)^{\textrm{sym,0}}\rvert}$, the remaining task is to estimate the norm of $(X\otimes A)^{\textrm{sym,0}}$. This can be done by a straightforward computation: $$\begin{aligned} {\lvert (X\otimes A)^{\textrm{sym,0}}\rvert}^2&={\lvert (X\otimes A)^{\textrm{sym}}\rvert}^2-\frac{4}{3(n+2)}{\lvert A(X,.)\rvert}^2\\ &=\frac{1}{3}{\lvert A\rvert}^2+\left(\frac{2}{3}-\frac{4}{3(n+2)}\right){\lvert A(X,.)\rvert}^2\, .\end{aligned}$$ Putting everything together, the proposition follows. We will apply Proposition \[strong\_kato\] in combination with the following lemma. 
\[lemma\_gradh\] At any critical point of $G$ we have the estimate $${\lvert \nabla H\rvert}\leq \frac{V}{{\lvert A\rvert}}{\lvert \nabla{{\lvert A\rvert}}\rvert}+\frac{1}{{\lvert A\rvert}}{\varepsilon}\sigma \Lambda V+{\varepsilon}\rho V+{\varepsilon}\sigma.$$ The equation $\nabla \log G=0$ can be written in the form $$\label{eq_critical} \frac{1}{V}(\nabla H+\sigma \nabla u)=\nabla\log({\lvert A\rvert}+\Lambda\sigma u)-\nabla\log w.$$ Observing that $\nabla\log w={\varepsilon}\rho\tau^\top$ and $\nabla u={\varepsilon}\tau^\top$, and solving for $\nabla H$ we obtain $$\nabla{H}=\left(\frac{\nabla{{\lvert A\rvert}}+\Lambda\sigma{\varepsilon}\tau^\top}{{\lvert A\rvert}+\Lambda\sigma u}-{\varepsilon}\rho\tau^\top\right)V-{\varepsilon}\sigma \tau^\top.$$ The claim follows. We are now ready to prove the main theorem of this section. Throughout the proof we write $C=C(K_0)<\infty$ for a constant that can change from line to line. This should not be confused with $c=c(n)<1$, which is a fixed dimensional constant given by Proposition \[strong\_kato\]. Consider the function $G$ defined in . The parameters $\rho$ and $\Lambda$ will be specified in the last line of the proof (depending only on the dimension and geometry of $K_0$). For now, we only impose the condition that $\rho\geq 2\rho_1$, where $\rho_1=\rho_1(K_0)$ is the constant from Theorem \[V\_bd\_2\]. We will choose ${\varepsilon}_0=\sigma_0=\max(\rho,\Lambda)^{-1}$. Thus, tacitly assuming that ${\varepsilon}<{\varepsilon}_0$ and $\sigma<\sigma_0$, we have inequalities like $\sigma \Lambda< 1$ and ${\varepsilon}\rho< 1$ at our disposal. Theorem \[V\_bd\_2\], Remark \[remark\_uniformlower\] and DeGiorgi-Nash-Moser and Schauder estimates up to the boundary give a uniform upper bound for $\sup_{\partial K_0}G$. Thus, if the maximum of $G$ occurs at the boundary $\partial K_0$ we are done. Suppose now the maximum of $G$ is attained at an interior point $(x_0,z_0)\in M\setminus \partial M$. If ${\lvert A\rvert}< \tfrac{4}{1-c} \max(1,\sigma u,V)$ at $(x_0,z_0)$, then Theorem \[V\_bd\_2\] together with the constraint $\rho\geq 2\rho_1$ yields $G\leq C$ and we are done. Suppose now $$\label{ass_max} {\lvert A\rvert}\geq \tfrac{4}{1-c} \max(1,\sigma u,V).$$ Condition will allow us to absorb lower order terms. For example, all lower order terms in the inequality from Lemma \[lemma\_gradh\] can be safely estimated by $C{\lvert A\rvert}$, giving: $${\lvert \nabla{H}\rvert}\leq \tfrac{1-c}{4}{\lvert \nabla{{\lvert A\rvert}}\rvert}+C{\lvert A\rvert}.$$ Combining this with Proposition \[strong\_kato\] and using again condition we infer that $$\label{improved_kato_version} {\lvert \nabla A\rvert}^2-{\lvert \nabla{\lvert A\rvert}\rvert}^2\geq \delta {\lvert \nabla{\lvert A\rvert}\rvert}^2-C{\lvert A\rvert}^2,$$ for some $\delta=\delta(n)>0$. This will be an important ingredient for the estimate below. Since $(x_0,z_0)$ is a maximum point of $G$ we have ${\Delta}G\leq 0$ and thus $$\label{longeq1} \Delta(|A|+\Lambda{\sigma}u)Vw-(|A|+\Lambda{\sigma}u)\Delta(Vw) \leq 0,$$ where we also used that $\nabla G=0$. 
Using Proposition \[A\_lap\], the improved Kato estimate , the trace of equation and condition we obtain $$\label{longeq2} \Delta(|A|+\Lambda{\sigma}u)\geq \tfrac{\delta {\lvert \nabla {\lvert A\rvert}\rvert}^2}{|A|}+ \tfrac{1}{{\varepsilon}}\langle \tau,\nabla |A| \rangle-{\lvert A\rvert}^3 -C{\sigma}u |A|^2-C{\lvert A\rvert}.$$ Similarly, by Corollary \[cor\_evolution\_vw\] and condition we have $$\label{longeq3} -\Delta (V w)\geq\left(-C+|A|^2+\rho\langle\tau,\nu\rangle^2-\sigma u\right)Vw-2{\varepsilon}\rho\langle \tau,\nabla(V w)\rangle-\tfrac{1}{{\varepsilon}}\langle \tau, \nabla V \rangle w.$$ When substituting the two estimates above into the inequality at the maximum point, we will use the following claim. \[claim\_gradientterms\] The contribution from the $\langle \tau,\nabla\,\cdot\,\rangle$-terms can be estimated as: $$\begin{gathered} \tfrac{1}{{\varepsilon}}\langle\tau,\nabla{\lvert A\rvert}\rangle Vw-\left({\lvert A\rvert}+\Lambda\sigma u\right)\left(2{\varepsilon}\rho \langle \tau,\nabla (Vw)\rangle+\tfrac{1}{{\varepsilon}}\langle \tau,\nabla V\rangle w\right)\\ \geq \left(-2{\varepsilon}\rho\langle \tau,\nabla{\lvert A\rvert}\rangle+(\rho{\lvert A\rvert}-3){\lvert \tau^\top\rvert}^2 \right)Vw.\end{gathered}$$ The equation $\nabla \log G=0$ can be written in the form $$({\lvert A\rvert}+\Lambda\sigma u)\nabla(Vw)=Vw(\nabla{\lvert A\rvert}+\Lambda\sigma{\varepsilon}\tau^\top).$$ Using this, and the formula $\nabla(Vw)=w\nabla V+Vw{\varepsilon}\rho \tau^\top$, we compute $$\begin{gathered} \tfrac{1}{{\varepsilon}}\langle\tau,\nabla{\lvert A\rvert}\rangle Vw-\left({\lvert A\rvert}+\Lambda\sigma u\right)\left(2{\varepsilon}\rho \langle \tau,\nabla (Vw)\rangle+\tfrac{1}{{\varepsilon}}\langle \tau,\nabla V\rangle w\right)\\ =\left(-2{\varepsilon}\rho \langle \tau,\nabla{\lvert A\rvert}\rangle +\left(\rho({\lvert A\rvert}+\Lambda\sigma u)-(1+2{\varepsilon}^2\rho)\sigma \Lambda\right){\lvert \tau^\top\rvert}^2 \right)Vw.\end{gathered}$$ Dropping the term $\rho \Lambda\sigma u$ and estimating $(1+2{\varepsilon}^2\rho)\sigma \Lambda<3$, the claim follows. Now, substituting and into , and using Claim \[claim\_gradientterms\], we arrive at $$\begin{gathered} \label{long_ineq} 0\geq \tfrac{\delta {\lvert \nabla {\lvert A\rvert}\rvert}^2}{|A|}-{\lvert A\rvert}^3 -C{\sigma}u |A|^2-C{\lvert A\rvert}\\ +\left({\lvert A\rvert}+\Lambda\sigma u\right)\left(-C+{\lvert A\rvert}^2+\rho\langle \tau,\nu\rangle^2-\sigma u\right) -2{\varepsilon}\rho\langle \tau,\nabla{\lvert A\rvert}\rangle +(\rho{\lvert A\rvert}-3){\lvert \tau^\top\rvert}^2.\end{gathered}$$ Observe that the ${\lvert A\rvert}^3$-terms cancel, and that we have the estimate $$-2{\varepsilon}\rho\langle \tau,\nabla{\lvert A\rvert}\rangle\geq - \tfrac{\delta {\lvert \nabla {\lvert A\rvert}\rvert}^2}{|A|}-\delta^{-1}{\lvert A\rvert}.$$ Also note that the identity $\langle \tau,\nu\rangle^2+{\lvert \tau^\top\rvert}^2=1$ enables us to extract a positive term $\rho{\lvert A\rvert}$. The idea is now that the good terms $\Lambda\sigma u{\lvert A\rvert}^2$ and $\rho {\lvert A\rvert}$ win against all other terms. Namely, from , the discussion following it, and condition we obtain $$(\tfrac{1}{2}\Lambda-C)\sigma u {\lvert A\rvert}^2+(\rho -C-C\Lambda){\lvert A\rvert}\leq 0.$$ Choosing $\Lambda=3C$ and $\rho=2(C+C\Lambda)$ this gives the desired contradiction.
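For completeness, we also record the elementary identity behind the passage from ${\Delta}G\leq 0$ to the inequality used at the maximum point (a routine computation, included only for the reader's convenience). Writing $G=f/g$ with $f={\lvert A\rvert}+\Lambda\sigma u$ and $g=Vw$, the quotient rule gives $$\Delta G=\frac{\Delta f}{g}-\frac{f\,\Delta g}{g^2}-\frac{2}{g}\langle \nabla g,\nabla G\rangle,$$ so at a critical point of $G$ the last term vanishes, and multiplying ${\Delta}G\leq 0$ by $g^2=(Vw)^2>0$ gives precisely $\Delta({\lvert A\rvert}+\Lambda{\sigma}u)Vw-({\lvert A\rvert}+\Lambda{\sigma}u)\Delta(Vw)\leq 0$.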
Passing to the limits {#pass_lim} ===================== In this final section, we explain how the double approximators $u_{{\varepsilon},{\sigma}}$ converge to the arrival time $u$ of the mean curvature flow of $K_0$, and how the estimates of Theorem \[V\_bd\_2\] and Theorem \[main\_curv\_bound\] can be passed to the limit. This will be done in two steps, first taking the limit as ${\sigma}\rightarrow 0$ to obtain approximating translators, then taking the limit as ${\varepsilon}\rightarrow 0$. \[app\_existence\] For every ${\varepsilon}\in(0,{\varepsilon}_0)$ there exists a relatively open set $\Omega_{\varepsilon}\subseteq K_0$ containing the boundary $\partial K_0$ such that the following holds. 1. For $\sigma\to 0$, we can take a limit $u_{{\varepsilon},{\sigma}}\to u_{{\varepsilon}}$ in $C^\infty_{\textrm{loc}}(\Omega_{\varepsilon})$, and the limit solves the equation $$\label{eq_ueps} \begin{aligned} \mathrm{div}\left(\frac{D u_{\varepsilon}}{\sqrt{{\varepsilon}^2+|D u_{\varepsilon}|^2}}\right)+\frac{1}{\sqrt{{\varepsilon}^2+|D u_{\varepsilon}|^2}}=0\qquad \textrm{in} \,\, \Omega_{\varepsilon}. \end{aligned}$$ 2. We have $u_{\varepsilon}=0$ on $\partial K_0$, and $u_{\varepsilon}(x)\to \infty$ uniformly as $x\to\partial\Omega_{\varepsilon}\setminus\partial K_0$. 3. For $(x,z)\in {\mathrm{graph}}({u_{\varepsilon}}/{{\varepsilon}})$ we have the estimates $$\begin{aligned} \label{est_ueps} H\left(x,z\right) \geq c e^{-\rho u_{{\varepsilon}}(x)},\quad \frac{{\lvert A\rvert}}{H}\left(x,z\right) \leq C e^{\rho u_{{\varepsilon}}(x)}.\end{aligned}$$ Equation says that $L^{\varepsilon}_t=\{(x,z)\in \Omega_{\varepsilon}\,|\, z\leq \tfrac{u_{\varepsilon}(x)-t}{{\varepsilon}}\}$ is a selfsimilar solution of the mean curvature flow in $N\times \mathbb{R}$, translating downwards with speed $1/{\varepsilon}$. Fix ${\varepsilon}\in(0,{\varepsilon}_0)$. First observe that we have the monotonicity $$u_{{\varepsilon},{\sigma}_1}\geq u_{{\varepsilon},{\sigma}_2}\qquad \textrm{for}\quad {\sigma}_1\leq {\sigma}_2,$$ since $u_{{\sigma}_1}$ is a supersolution to the $({\varepsilon},{\sigma}_2)$-equation . Thus, for every $x\in K_0$ we can pass to a pointwise (possibly improper) limit $u_{\varepsilon}(x)=\lim_{\sigma\to 0} u_{{\varepsilon},\sigma}(x)\in[0,\infty]$. Let $\Omega_{\varepsilon}=\{x\in K_0 \,| \, u_{\varepsilon}(x)<\infty\}$. By Theorem \[V\_bd\_2\] we have the gradient estimate $$\label{gradestagain} {\lvert D u_{{\varepsilon},\sigma}\rvert}\leq Ce^{\rho u_{{\varepsilon},\sigma}},$$ where $C=C(K_0)<\infty$. By the gradient estimate, if $u_{{\varepsilon},\sigma}\leq \Lambda$ at some point, then $u_{{\varepsilon},\sigma}\leq 2 \Lambda$ in a neighborhood of definite size. In particular, $\Omega_{\varepsilon}\subset K_0$ is open and contains a neighborhood of the boundary $\partial K_0$. Moreover, combining the gradient estimate with DeGiorgi-Nash-Moser and Schauder estimates we see that the convergence $u_{{\varepsilon},\sigma}\to u_{\varepsilon}$ is locally smooth in $\Omega_{\varepsilon}$. In particular, since we have smooth convergence we can easily take the limit $\sigma\to 0$ in to obtain , and take the limit $\sigma\to 0$ in Theorem \[V\_bd\_2\] and Theorem \[main\_curv\_bound\] to obtain . Finally, suppose there is a sequence $x_i\in \Omega_{\varepsilon}$ with $x_i\to x\in\partial \Omega_{\varepsilon}\setminus\partial K_0$, but $\sup_i u_{{\varepsilon}}(x_i)<\infty$. 
Then the gradient estimate gives an open neighborhood of $x$ where $u_{\varepsilon}$ is bounded; this contradicts $x\in\partial \Omega_{\varepsilon}\setminus\partial K_0$, and thus proves property 2. \[u\_hat\_lim\] Let $K_0\subset N$ be a mean convex domain, and let $u:K_0\setminus K_\infty\to \mathbb{R}$ be the time of arrival function of its level set flow $\{K_t\}_{t\geq 0}$. Let $u_{\varepsilon}:\Omega_{\varepsilon}\to\mathbb{R}$, ${\varepsilon}\in(0,{\varepsilon}_0)$, be the family of functions given by Theorem \[app\_existence\], and let $L^{\varepsilon}_t=\{(x,z)\in \Omega_{\varepsilon}\,|\, z\leq \tfrac{u_{\varepsilon}(x)-t}{{\varepsilon}}\}$. 1. For ${\varepsilon}\to 0$, the functions $u_{\varepsilon}$ converge locally uniformly to $u$, and the family of mean curvature flows $\{L^{\varepsilon}_t\}$ converges to the mean curvature flow $\{K_t\times \mathbb{R}\}$ in the strong Hausdorff sense (see [@HK_meanconvex]) and in the sense of Brakke flows (see [@Ilmanen]). 2. The level set flow $\{K_t\}$ satisfies the estimates $$\label{mainesttoprove} \inf_{\partial K_t^\textrm{reg}}H\geq c e^{-\rho t},\qquad \inf_{ \partial K_t^\textrm{reg}}\frac{{\lvert A\rvert}}{H}\leq C e^{\rho t}.$$ By the first item of Theorem \[app\_existence\] and equation we have the gradient estimate $${\lvert D u_{{\varepsilon}}\rvert}(x) \leq Ce^{\rho u_{{\varepsilon}}(x)},$$ where $x\in\Omega_{\varepsilon}$, and $C=C(K_0)<\infty$. The gradient estimate implies that for every sequence ${\varepsilon}_k\to 0$ there exists a subsequence ${\varepsilon}_k'\to 0$ and a relatively open set $\Omega \subseteq K_0$ containing the boundary $\partial K_0$ such that $u_{{\varepsilon}_k'}\to \hat{u}$ locally uniformly in $\Omega$ and $u_{{\varepsilon}_k'}\to\infty$ uniformly as $x\to\partial \Omega\setminus \partial K_0$. Since $\hat{u}$ arises as a limit of locally uniform Lipschitz functions, we can take the limit ${\varepsilon}_k'\to 0$ in and infer that $\hat{u}$ solves the equation $$\mathrm{div}\left(\frac{D \hat{u}}{|D \hat{u}|}\right)+\frac{1}{|D \hat{u}|}=0,$$ in the viscosity sense. By the definition of viscosity solutions, the family of closed sets $\widehat{M}_t=\{x\in K_0\, | \, \hat{u}(x)= t\}_{t\geq 0}$ satisfies the avoidance principle, and thus is a set-theoretic subsolution of the mean curvature flow. Since $\{\partial K_t\}_{t\geq 0}$ is the maximal set theoretic subsolution starting at $\partial K_0$, we have the inclusion $\widehat{M}_t\subseteq \partial K_t$. Let $I=\{ t\in [0,\infty) \, | \, \widehat{M}_t=\partial K_t\}$. We will show that $I=[0,\infty)$. Clearly $0\in I$. Consider $\{t_n\}\subseteq I$ with $t_n \nearrow t<\infty$, and let $x\in \partial K_t$. Choose $x_n\in \partial K_{t_n}$ with $x_n\to x$. Since $\hat{u}(x_n)= t_n$ and $(x_n,t_n)\to (x,t)$ it follows that $\hat{u}(x)= t$, and thus $x\in \widehat{M}_t$. Consider now $T\in I$ and $x\in \partial K_t$ for $t\in(T,T+\delta)$. If $\delta$ is small enough, then by the gradient estimate $x\in \Omega$ and $\hat{u}(x)=t'$ for some $t'$ close to $T$. Thus, $x\in \widehat{M}_{t'}\subseteq \partial K_{t'}$. Since by mean convexity $\partial K_{t}\cap \partial K_{t'}=\emptyset$ for $t\neq t'$, it follows that $t=t'$, and thus $x\in \widehat{M}_t$. We have thus identified the limit with the unique mean convex level set flow, namely $\Omega=K_0\setminus K_\infty$, $\hat{u}=u$ and $\widehat{K}_t=K_t$. By uniqueness of the limit, the subsequential convergence $u_{{\varepsilon}_k'}\to u$ actually entails a full limit. 
Note that the time of arrival function of $\{L_t^{\varepsilon}\}$ is given by $U_{\varepsilon}(x,z)=u_{{\varepsilon}}(x)-{\varepsilon}z$. For ${\varepsilon}\to 0$ it converges locally uniformly to $U(x,z)=u(x)$, which is the time of arrival function of $\{{K}_t\times \mathbb{R}\}$. In particular, $\{L_t^{\varepsilon}\}$ converges to $\{K_t\times \mathbb{R}\}$ in the strong Hausdorff sense [@HK_meanconvex Def. 4.10]. By the compactness theorem for Brakke flows [@Ilmanen Thm. 7.1] and the uniqueness of the limit it also converges in the sense of Brakke flows. Finally, having established the convergence, we can now use the local regularity theorem for the mean curvature flow [@brakke; @White_regularity] to conclude that the limit for ${\varepsilon}\to 0$ of the estimates in yields the estimates in . \ *E-mail:* robert.haslhofer@cims.nyu.edu, or.hershkovits@cims.nyu.edu
--- abstract: 'We discuss a novel sampling theorem on the sphere developed by McEwen & Wiaux recently through an association between the sphere and the torus. To represent a band-limited signal exactly, this new sampling theorem requires less than half the number of samples of other equiangular sampling theorems on the sphere, such as the canonical Driscoll & Healy sampling theorem. A reduction in the number of samples required to represent a band-limited signal on the sphere has important implications for compressive sensing, both in terms of the dimensionality and sparsity of signals. We illustrate the impact of this property with an inpainting problem on the sphere, where we show superior reconstruction performance when adopting the new sampling theorem.' author: - | Jason D. McEwen, Gilles Puy, Jean-Philippe Thiran, Pierre Vandergheynst,\ Dimitri Van De Ville and Yves Wiaux Institute of Electrical Engineering, Ecole Polytechnique Fédérale de Lausanne (EPFL),\ Lausanne 1015, Switzerland;\ Institute of Bioengineering, Ecole Polytechnique Fédérale de Lausanne (EPFL),\ Lausanne 1015, Switzerland;\ Department of Radiology and Medical Informatics, University of Geneva (UniGE),\ Geneva 1211, Switzerland bibliography: - 'bib.bib' title: Sampling theorems and compressive sensing on the sphere --- INTRODUCTION ============ The fast Fourier transform[@cooley:1965] (FFT) is arguably the most important and widely used numerical algorithm of our era, rendering the frequency content of signals accessible. Moreover, Shannon’s sampling theorem[@shannon:1949] states that all of the information content of a band-limited continuous signal can be captured through a finite number of samples. Typically, standard Fourier analyses are performed in Euclidean space where Shannon’s theory holds and where FFTs are directly applicable. However, in many applications data are observed on non-Euclidean manifolds, such as the sphere. Fourier analysis is performed on the sphere in the basis of spherical harmonics, which are the eigenfunctions of the spherical Laplacian operator and form the canonical orthonormal basis on the sphere. Sampling theorems and fast algorithms to perform spherical harmonic analyses exist but the field is much less mature than its elder Euclidean sibling. A novel sampling theorem on the sphere has been developed recently by two of the authors of this article [@mcewen:fssht] (hereafter referred to as the MW sampling theorem). From an information theoretic viewpoint, the fundamental property of any sampling theorem is the number of samples required to capture all of the information content of a band-limited signal. To represent exactly a signal on the sphere band-limited at $\elmax$, all sampling theorems on the sphere require $\order(\elmax^2)$ samples. However, the MW sampling theorem requires only $\sim2\elmax^2$ samples, less than half of the $\sim4\elmax^2$ samples of other equiangular sampling theorems on the sphere, such as the canonical Driscoll & Healy sampling theorem[@driscoll:1994] (hereafter referred to as the DH sampling theorem). Not only is the MW sampling theorem of theoretical interest, particularly regarding the information content of signals on the sphere, but it also has important practical implications in the emerging field of compressive sensing. The theory of compressive sensing states that it is possible to acquire sparse or compressible signals with fewer samples than standard sampling theorems would suggest [@candes:2006a; @donoho:2006].
In these settings, the ratio of the required number of measurements to the dimensionality of the signal scales linearly with its sparsity [@candes:2006a]. The more efficient sampling of the MW sampling theorem reduces the dimensionality of the signal in the spatial domain, thereby improving the performance of compressive sensing reconstruction on the sphere when compared to alternative sampling theorems.[@mcewen:css2] Furthermore, for sparsity priors defined in the spatial domain, such as signals sparse in the magnitude of their gradient, sparsity is also directly related to the sampling of the signal. For this class of signals, an additional enhancement in compressive sensing reconstruction performance is thus achieved when adopting the MW sampling theorem. [@mcewen:css2] In this article we first review sampling theorems on the sphere in , focussing on the MW and DH sampling theorems. In we discuss the superior performance achieved when solving compressive sensing problems on the sphere using the MW sampling theorem, as opposed to the DH sampling theorem. We illustrate our arguments with an inpainting problem on the sphere, where we adopt the prior assumption that the signal to be recovered is sparse in the magnitude of its gradient. Simulations are performed, verifying our theoretical arguments. Finally, concluding remarks are made in . SAMPLING THEOREMS ON THE SPHERE {#sec:sampling_theorems} =============================== Sampling theorems on the sphere state that all of the information contained in a band-limited signal may be represented by a finite set of samples in the spatial domain. On the sphere, unlike Euclidean space, the number of samples required in the harmonic and spatial domains differ, with different sampling theorems on the sphere requiring a different number of samples in the spatial domain. For an equiangular sampling of the sphere, the DH sampling theorem has become the standard, while the MW sampling theorem has emerged only recently.[^1] The MW sampling theorem achieves a more efficient sampling of the sphere, with a reduction by a factor of two in the number of samples required to represent a band-limited signal.[^2] In this section we outline the harmonic structure of the sphere in the continuous setting, before reviewing concisely the DH and MW sampling theorems. Harmonic Analysis on the Sphere ------------------------------- We consider the space of square integrable functions on the sphere $\ltwo(\sphere)$, with the inner product of $f,g\in\ltwo(\sphere)$ defined by $$\innerp{f}{g} = \int_\sphere \dmu{\sas} \: f(\sas) \: g^\cconj(\sas) \spcend ,$$ where $\dmu{\sas} = \sin \saa \dx \saa \dx \sab$ is the usual invariant measure on the sphere and $(\sas)$ define spherical coordinates with colatitude $\saa \in [0,\pi]$ and longitude $\sab \in [0,2\pi)$. Complex conjugation is denoted by the superscript ${}^\cconj$. The spherical harmonic functions form the canonical orthogonal basis for the space of $\ltwo(\sphere)$ functions on the sphere and are defined by $$\shfarg{\el}{\m}{\sas} = \sqrt{\frac{2\el+1}{4\pi} \elmfact} \: \aleg{\el}{\m}{\cos\saa} \: \exp{\img \m \sab} \spcend ,$$ for natural $\el\in\naturals$ and integer $\m\in\integers$, $|\m|\leq\el$, where $\aleg{\el}{\m}{x}$ are the associated Legendre functions. We adopt the Condon-Shortley phase convention, with the $(-1)^\m$ phase factor included in the definition of the associated Legendre functions. 
The orthogonality and completeness relations for the spherical harmonics read $ \innerp{\shf{\el}{\m}}{\shf{\el\p}{\m\p}} = \kron{\el}{\el\p} \kron{\m}{\m\p} $ and $$\sumlm \shfarg{\el}{\m}{\saa,\sab} \: \shfargc{\el}{\m}{\saa\p,\sab\p} = \delta(\cos\saa - \cos\saa\p) \: \delta(\sab - \sab\p)$$ respectively, where $\kron{i}{j}$ is the Kronecker delta symbol and $\delta(x)$ is the Dirac delta function. Due to the orthogonality and completeness of the spherical harmonics, any square integrable function on the sphere $\f \in \ltwo(\sphere)$ may be represented by its spherical harmonic expansion $$\label{eqn:sht_inverse} \f(\sas) = \sum_{\el=0}^\infty \sum_{\m=-\el}^\el \flm \: \shfarg{\el}{\m}{\sas} \spcend ,$$ where the spherical harmonic coefficients are given by the usual projection onto each basis function: $$\label{eqn:sht_forward} \shc{\f}{\el}{\m} = \innerp{f}{\shf{\el}{\m}} = \int_\sphere \dmu{\sas} \: f(\sas) \: \shfargc{\el}{\m}{\sas} \spcend .$$ Throughout, we consider signals on the sphere band-limited at $\elmax$, that is signals such that $\shc{f}{\el}{\m}=0$, $\forall \el\geq\elmax$. In this case the summation over  in [Eqn. (\[eqn:sht\_inverse\])]{} may be truncated to $\elmax-1$. We also adopt the convention that harmonic coefficients are defined to be zero for $|\m|>\el$ (which enforces the contraint $|\m| \leq \el$ when summations are interchanged). The forward and inverse spherical harmonic transforms, given by [Eqn. (\[eqn:sht\_forward\])]{} and [Eqn. (\[eqn:sht\_inverse\])]{} respectively, are exact in the continuous setting. A sampling theorem on the sphere states how to sample a band-limited function $\f(\sas)$ at a finite number of locations, such that all of the information content of the continuous function is captured. Since the frequency domain of a function on the sphere is discrete, the spherical harmonic coefficients describe the continuous function exactly. A sampling theorem thus describes how to exactly recover the spherical harmonic coefficients  of the continuous function from its samples. Consequently, sampling theorems effectively encode (often implicitly) an exact quadrature rule for evaluating the integral of a band-limited function on the sphere. Driscoll & Healy Sampling Theorem --------------------------------- The DH sampling theorem[@driscoll:1994] gives an explicit quadrature rule for the evaluation of spherical harmonic coefficients: $$\label{eqn:dh_quad} \shc{\f}{\el}{\m} = \sum_{\saai=0}^{2\elmax-1} \: \sum_{\sabi=0}^{2\elmax-1} \: \qweightdh(\saaiang) \: \f(\saisang) \: \shfargc{\el}{\m}{\saisang} \spcend ,$$ where the equiangular sample positions are defined by $\saaiang= \pi \saai / 2\elmax $, for $\saai = 0, \dotsc, 2\elmax-1$, and $\sabiang = \pi\sabi / \elmax$, for $\sabi = 0, \dotsc, 2\elmax -1$, giving $\Ndh=(2\elmax-1)2\elmax + 1 \sim 4\elmax^2$ samples on the sphere.[^3] The quadrature used to evaluate [Eqn. (\[eqn:sht\_forward\])]{} is exact for a function band-limited at , with quadrature weights given implicitly by the solution to $$\label{eqn:dh_quad_weight_implicit} \sum_{\saai=0}^{2\elmax-1} \: \qweightdh(\saaiang) \: \leg{\el}{\cos \saaiang} = \frac{2\pi}{\elmax} \: \kron{\el}{0} \spcend , \quad \forall \el < 2\elmax \spcend .$$ The quadrature weights satisfying [Eqn. 
(\[eqn:dh\_quad\_weight\_implicit\])]{} are given by $$\qweightdh(\saaiang) = \frac{2\pi}{\elmax^2} \: \sin\saaiang \: \sum_{k=0}^{\elmax-1} \: \frac{\sin((2k+1)\saaiang)}{2k+1} \spcend .$$ The exactness of the quadrature rule is proved by considering the sampling distribution of Dirac delta functions defined by $$\label{eqn:dh_sampling_distn} s(\sas) = \sum_{\saai=0}^{2\elmax-1} \: \sum_{\sabi=0}^{2\elmax-1} \: \qweightdh(\saaiang) \: \delta(\cos\saa - \cos\saaiang) \: \delta(\sab - \sabiang) \spcend .$$ For quadrature weights satisfying [Eqn. (\[eqn:dh\_quad\_weight\_implicit\])]{}, it can be shown that $\shc{s}{0}{0}=\sqrt{4\pi}$ and $\shc{s}{\el}{\m}=0$ for $0<\el<2\elmax$, $\forall \m$. Consequently, the sampling distribution may be written $$\label{eqn:dh_sampling_distn_2} s(\sas) = 1 + \sum_{\el=2\elmax}^{\infty} \: \summ \: \shc{s}{\el}{\m} \: \shfarg{\el}{\m}{\sas} \spcend .$$ The harmonic coefficients of the product of the original band-limited function and the sampling distribution $\f^s = \f \cdot s $ are given by $$\shc{\f}{\el}{\m}^s = \sum_{\saai=0}^{2\elmax-1} \: \sum_{\sabi=0}^{2\elmax-1} \: \qweightdh(\saaiang) \: \f(\saisang) \: \shfargc{\el}{\m}{\saisang} \spcend ,$$ which follows from [Eqn. (\[eqn:dh\_sampling\_distn\])]{}. Notice that these harmonic coefficients are given by the quadrature rule specified in [Eqn. (\[eqn:dh\_quad\])]{} and it simply remains to prove that the harmonic coefficients of $\f^s$ agree with those of  for the harmonic range of interest ( for $0\leq\el<\elmax$). Noting [Eqn. (\[eqn:dh\_sampling\_distn\_2\])]{}, we may write $\f^s(\sas) = \f(\sas) + \alpha(\sas)$, where $$\alpha(\sas) = \sum_{\el=2\elmax}^{\infty} \: \summ \: \shc{s}{\el}{\m} \: \shfarg{\el}{\m}{\sas} \: \sum_{\el\p=0}^{\elmax-1} \: \sum_{\m\p=-\el\p}^{\el\p} \: \shc{\f}{\el\p}{\m\p} \: \shfarg{\el\p}{\m\p}{\sas} \spcend .$$ Since the product of two spherical harmonic functions $\shfarg{\el}{\m}{\sas} \: \shfarg{\el\p}{\m\p}{\sas}$ can be written as a sum of spherical harmonics with minimum degree $| \el - \el\p| $, [@driscoll:1994] the aliasing error $\alpha(\sas)$ contains non-zero harmonic content for $\el>\elmax$ only. Aliasing is therefore outside of the harmonic range of interest and $\shc{\f}{\el}{\m}^s = \shc{\f}{\el}{\m}$ for $0\leq\el<\elmax$, $|m|<\el$, thus proving the exact quadrature rule given by [Eqn. (\[eqn:dh\_quad\])]{}. McEwen & Wiaux Sampling Theorem ------------------------------- The MW sampling theorem [@mcewen:fssht] follows by a factoring of rotations [@risbo:1996] and a periodic extension in colatitude , so that the orthogonality of the complex exponentials over $[0, 2\pi)$ may be exploited. This approach encodes an implicit quadrature rule on the sphere, which can then be made explicit. The spherical harmonics are related to the Wigner functions through [@goldberg:1967] $$\label{eqn:ssh_wigner} \sshfarg{\el}{\m}{\sas}{{{}}} = \sqrt{\frac{2\el+1}{4\pi} } \: \dmatbig_{\m 0}^{\el\:\cconj}(\sab,\saa ,0) \spcend ,$$ where the Wigner functions form the canonical orthogonal basis on the rotation group . The Wigner functions may be decomposed as [@varshalovich:1989] $$\label{eqn:d_decomp} \dmatbig_{\m\n}^{\el}(\euls) = {\rm e}^{-\img \m\eula} \: \dmatsmall_{\m\n}^\el(\eulb) \: {\rm e}^{-\img \n\eulc} \spcend ,$$ where the rotation group is parameterised by the Euler angles $(\euls)$, with $\eula \in [0,2\pi)$, $\eulb \in [0,\pi]$ and $\eulc \in [0,2\pi)$. 
The Fourier series decomposition of the -functions is given by [@nikiforov:1991] $$\label{eqn:wigner_sum_reln} \dlmnb = \img^{\n-\m} \sum_{\m\p=-\el}^\el \dlmnhalfpi{\el}{\m\p}{\m} \: \dlmnhalfpi{\el}{\m\p}{\n} \: \exp{\img \m\p \eulb} \spcend ,$$ with . This expression follows from a factoring of rotations. [@risbo:1996] The Fourier series representation of  given by [Eqn. (\[eqn:wigner\_sum\_reln\])]{} allows one to write the spherical harmonic expansion of $\f(\sas)$ in terms of a Fourier series expansion of the function extended appropriately to the two-torus . Noting [Eqn. (\[eqn:ssh\_wigner\])]{} – [Eqn. (\[eqn:wigner\_sum\_reln\])]{}, the forward spherical harmonic transform may be written $$\label{eqn:flm_wig} \fslm = \img^{\m} \nl \summptrunc \dlmnhalfpim \: \dlmnhalfpi{\el}{\m\p}{0} \: \Gsmm \spcend ,$$ where $$\label{eqn:Gmm} \Gsmm = \intsaa \: \Gsmt \: \exp{-\img \m\p \saa}$$ and $$\label{eqn:Gmt} \Gsmt = \intsab \: \fs(\sas) \: \exp{-\img \m \sab} \spcend .$$ Since [Eqn. (\[eqn:Gmt\])]{} is simply a Fourier transform, the discrete and continuous orthogonality of the complex exponentials may be exploited to evaluate this integral exactly by $$\Gsmti = \frac{2 \pi}{2\elmax-1} \sumsabi \: \fs(\saisang) \: \exp{-\img \m \sabiang} \spcend ,$$ where $\sab_\sabi = 2 \pi \sabi/(2\elmax-1)$, for $\sabi = 0,\dotsc,2\elmax-2$, and $\saa_\saai = \pi(2\saai+1)/(2\elmax-1)$, for $\saai = 0,\dotsc,\elmax-1$, giving $\Nmw = (\elmax - 1) (2 \elmax - 1) + 1 \sim 2 \elmax^2$ samples on the sphere. It remains to develop a quadrature rule to evaluate [Eqn. (\[eqn:Gmm\])]{}. This is achieved by extending  to the domain $\saa \in [0,2\pi)$ through the construction $$\begin{aligned} \rGsmti = &\quad \begin{cases} \Gsmti \: , & \saai \in \{ 0, 1, \dotsc, \elmax-1 \} \\ (-1)^{\m} \: \Gsm(\saa_{2\elmax-2-\saai}) \: , & \saai \in \{ \elmax, \dotsc, 2\elmax-2 \} \end{cases} \spcend ,\end{aligned}$$ so that  may be expressed by a Fourier series. The factor $(-1)^\m$ is required to ensure that the symmetry in the domain $[0,2\pi)$ dictated by the inverse transform is preserved. Substituting the Fourier expansion of  into [Eqn. (\[eqn:Gmm\])]{} yields $$\Gsmm = 2 \pi \sum_{\m{\p}{\p}=-(\elmax-1)}^{\elmax-1} \: \Fsmmp \: \weight(\m{\p}{\p} - \m\p) \spcend , \label{eqn:Gmm_convolution}$$ where the weights are given by $$\weight(\m\p) = \intsaa \: \exp{\img \m\p \saa} = \begin{cases} \: \pm \img \pi/2, & \m\p=\pm 1\\ \: 0, & \m\p \text{ odd},\:\m\p\neq\pm1\\ \: 2/(1-{\m\p}^2), & \m\p \text{ even} \end{cases} \spcend ,$$ with $$\Fsmm = \frac{1}{2 \pi(2\elmax-1)} \sumsaai \: \rGsmti \: \exp{- \img \m\p \saaiang} \spcend .$$ Since the spherical harmonic coefficients  are recovered exactly, all of the information content of the function $\f(\sas)$ is captured in the finite set of samples. The derivation above effectively gives an implicit quadrature rule for the exact integration of a band-limited function on the sphere. 
This quadrature rule can be written explicitly as [@mcewen:fssht] $$\label{eqn:quadrature} \int_\sphere \dmu{\sas} \: \fs(\sas) = \sum_{\saai=0}^{\elmax-1} \: \sum_{\sabi=0}^{2\elmax-2} \: \qweightmw(\saaiang) \: \fs(\saisang) \spcend ,$$ where the quadrature weights are defined by $$\qweightmw(\saaiang) = \frac{2\pi}{2\elmax-1} \Bigl[ \weighttrans(\saaiang) + (1 - \kron{t,}{\elmax-1}) \: \weighttrans(\saa_{2\elmax-2-\saai}) \Bigr] \spcend ,$$ and where $\weighttrans(\saaiang)$ is the inverse discrete Fourier transform of the reflected weights $\weight(-\m\p)$: $$\weighttrans(\saaiang) = \frac{1}{2\elmax-1} \: \sum_{\m\p=-(\elmax-1)}^{\elmax-1} \: \weight(-\m\p) \: \exp{\img \m\p \saaiang} \spcend .$$ COMPRESSIVE SENSING ON THE SPHERE {#sec:compressive_sensing} ================================= Compressive sensing on the sphere has been studied recently for signals sparse in harmonic space or in a redundant set of overcomplete dictionaries. [@abrial:2007; @rauhut:2011] However, many natural signals are sparse in measures defined in the spatial domain, such as in the magnitude of their gradient. A more efficient sampling of a band-limited signal on the sphere, as afforded by the MW sampling theorem, improves both the dimensionality and sparsity of the signal in the spatial domain, which has been shown to improve the quality of compressive sampling reconstruction.[@mcewen:css2] We review this very recent work, discussing the impact of efficient sampling on the sphere in the context of a total variation (TV) inpainting problem, after first defining the discrete TV norm on the sphere. TV Norm on the Sphere --------------------- The continuous TV norm on the sphere is defined by $$\| \f \|_{\rm TV} = \int_\sphere \dmun \: | \nabla \f | \spcend ,$$ where the magnitude of the gradient of the signal $\f$ is given by $$| \: \nabla \f \:| = \sqrt{ \Biggl ( \frac{\partial \f}{\partial \saa} \Biggr)^2 + \frac{1}{\sin^2\saa}\Biggl ( \frac{\partial \f}{\partial \sab} \Biggr )^2 } \spcend .$$ In practice, however, one must consider the TV norm of the sampled signal, where the samples of $\f(\sas)$ are denoted by the concatenated vector $\vect{x}\in\complex^{\N}$, where  is the number of samples on the sphere of the chosen sampling theorem (hereafter harmonic coefficients  are also represented by a concatenated vector, denoted $\vect{\hat{x}}\in\complex^{\elmax^2}$). A discrete TV norm on the sphere is defined by approximating the continuous norm in the context of either the DH or MW sampling theorem. The integral of the continuous TV norm can be approximated using the quadrature rule corresponding to the sampling theorem on the sphere adopted: $$\label{eqn:tvnorm_cts_approx} \| \f \|_{\rm TV} \simeq \sum_{\saai=0}^{\N_{\saa}-1} \: \sum_{\sabi=0}^{\N_{\sab}-1} \: | \nabla \f | \: \qweight(\saa_\saai) \spcend ,$$ where the number of samples in $(\sas)$, given by $\N_\saa$ and $\N_\sab$ respectively, and the quadrature weights $\qweight(\saa_\saai)$ depend on the choice of sampling theorem. If $| \nabla \f |$ were band-limited, then [Eqn. (\[eqn:tvnorm\_cts\_approx\])]{} would be exact. Although this is not likely to be the case, [Eqn. (\[eqn:tvnorm\_cts\_approx\])]{} is nevertheless a reasonable approximation of the continuous TV norm. The magnitude of the gradient $|\nabla \f|$ can be approximated from the samples $\vect{x}$ using finite differences, to give a discrete TV norm on the sphere that approximates the continuous norm closely: $\| \vect{x} \|_{\rm TV} \simeq \| \f \|_{\rm TV}$. 
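To make the preceding formulas concrete, the following minimal NumPy sketch (not the authors' code; the band-limit, the toy signal and the one-sided finite differences are illustrative choices only, and the small guard on $\sin\saa$ at the polar sample is an implementation convenience) computes the quadrature weights $\qweightmw(\saaiang)$ from the expressions above, checks that the resulting rule integrates the band-limited constant function to $4\pi$, compares the sample counts $\Nmw$ and $\Ndh$, and evaluates a discrete TV norm of the form of [Eqn. (\[eqn:tvnorm\_cts\_approx\])]{}.

```python
import numpy as np

def mw_quadrature_weights(L):
    """MW quadrature weights q(theta_t), t = 0, ..., L-1, implementing the
    explicit formulas quoted in the text (weights w(m'), their inverse DFT v,
    and the reflection step)."""
    m = np.arange(-(L - 1), L)              # m' = -(L-1), ..., L-1
    w = np.zeros(m.size, dtype=complex)     # w(m') = int_0^pi e^{i m' theta} sin(theta) dtheta
    w[m == 1] = 1j * np.pi / 2
    w[m == -1] = -1j * np.pi / 2
    even = (m % 2 == 0)
    w[even] = 2.0 / (1.0 - m[even] ** 2)
    t = np.arange(2 * L - 1)                # extended colatitude grid
    theta = np.pi * (2 * t + 1) / (2 * L - 1)
    # v(theta_t) = (1/(2L-1)) sum_{m'} w(-m') exp(i m' theta_t)
    v = (w[::-1][None, :] * np.exp(1j * np.outer(theta, m))).sum(axis=1).real / (2 * L - 1)
    # q(theta_t) = 2 pi/(2L-1) * [ v(theta_t) + (1 - delta_{t,L-1}) v(theta_{2L-2-t}) ]
    q = np.array([2 * np.pi / (2 * L - 1) *
                  (v[tt] + (0.0 if tt == L - 1 else v[2 * L - 2 - tt]))
                  for tt in range(L)])
    return theta[:L], q

L = 32
theta, q = mw_quadrature_weights(L)

# Sanity check: the rule is exact for band-limited integrands, so it should
# integrate the constant f = 1 (band-limited at any L >= 1) to 4*pi.
print(np.allclose((2 * L - 1) * q.sum(), 4 * np.pi))      # expected: True

# Sample counts of the two equiangular sampling theorems (formulas from the text).
N_mw = (L - 1) * (2 * L - 1) + 1                           # ~ 2 L^2
N_dh = (2 * L - 1) * 2 * L + 1                             # ~ 4 L^2
print(N_mw, N_dh)

def discrete_tv_norm(f, theta, phi, q):
    """Discrete TV norm: one-sided finite differences for |grad f|, weighted by
    the quadrature weights q(theta_t).  Forward differences are zero-padded at
    the last row/column and the sine is guarded away from zero at the polar
    sample -- both are implementation conveniences, not part of the text."""
    dtheta = np.diff(theta).mean()
    dphi = np.diff(phi).mean()
    df_dt = np.zeros_like(f)
    df_dp = np.zeros_like(f)
    df_dt[:-1, :] = (f[1:, :] - f[:-1, :]) / dtheta
    df_dp[:, :-1] = (f[:, 1:] - f[:, :-1]) / dphi
    sin_t = np.maximum(np.sin(theta), 1e-12)[:, None]
    grad = np.sqrt(df_dt ** 2 + df_dp ** 2 / sin_t ** 2)
    return (q[:, None] * grad).sum()

phi = 2 * np.pi * np.arange(2 * L - 1) / (2 * L - 1)
f = np.cos(theta)[:, None] * np.ones(phi.size)[None, :]    # toy band-limited signal
print(discrete_tv_norm(f, theta, phi, q))
```

In practice one would of course use the fast algorithms referenced above rather than this naive evaluation; the sketch is only meant to make the structure of the weights and of the discrete TV norm explicit.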
Notice that the inclusion of the quadrature weights $\qweight(\saa_\saai)$ regularises the $\sin{\saa}$ term that arises from the definition of the gradient on the sphere, eliminating numerical instabilities that would otherwise occur. TV Inpainting on the Sphere --------------------------- We illustrate the impact of the number of samples of the DH and MW sampling theorems on compressive sensing reconstruction with an inpainting problem, where measurements are made in the spatial domain. A test signal sparse in its gradient is constructed from a binary Earth map, smoothed to give a signal band-limited at $\elmax=32$.[^4] The real inpainting problem $\vect{y} = \sensmat \vect{x} + \vect{n}$ is considered, where $\nmeas$ noisy measurements $\vect{y}\in\reals^{\nmeas}$ of the signal on the sphere $\vect{x}\in\reals^{\N}$ are made. The measurement operator $\sensmat\in\reals^{\nmeas \times \N}$ represents a random masking of the signal. The noise $\vect{n}\in\reals^\nmeas$ is assumed to be independent and identically distributed Gaussian noise, with zero mean. The TV inpainting problem is first solved directly on the sphere: $$\label{eqn:recon_spatial} \vect{x}^\star = \underset{\vect{x}}{\arg \min} \: \| \vect{x} \|_{\rm TV} \:\: \mbox{such that} \:\: \| \vect{y} - \sensmat \vect{x}\|_2 \leq \epsilon \spcend ,$$ where the bound $\epsilon$ is related to a residual noise level estimator. By adopting the MW sampling theorem in place of the DH sampling theorem, the dimensionality and sparsity of the signal in the spatial domain are optimised. However, no sampling theorem on the sphere reaches the optimal number of samples in the spatial domain suggested by the $\elmax^2$ dimensionality of the signal in the harmonic domain. Consequently, the dimensionality of the problem is reduced by recovering the harmonic coefficients $\vect{\hat{x}}$ directly: $$\label{eqn:recon_harmonic} \vect{\hat{x}}^\star = \underset{\vect{\hat{x}}}{\arg \min} \: \| {\ensuremath{\Lambda}}\vect{\hat{x}} \|_{\rm TV} \:\: \mbox{such that} \:\: \| \vect{y} - \sensmat {\ensuremath{\Lambda}}\vect{\hat{x}}\|_2 \leq \epsilon \spcend ,$$ where ${\ensuremath{\Lambda}}\in\complex^{\N\times\elmax^2}$ represents the inverse spherical harmonic transform; the signal on the sphere is recovered by $\vect{x}^\star = {\ensuremath{\Lambda}}\vect{\hat{x}}^\star$. For this problem the dimensionality of the signal directly recovered $\vect{\hat{x}}$ is identical for both sampling theorems; however, sparsity in the spatial domain remains superior (fewer non-zero values) for the MW sampling theorem. Reconstruction performance is plotted in Fig. \[fig:snr\_vs\_m\] when solving the inpainting problem in the spatial and harmonic domains, through [Eqn. (\[eqn:recon\_spatial\])]{} and [Eqn. (\[eqn:recon\_harmonic\])]{} respectively, for both sampling theorems (averaged over ten simulations of random measurement operators and independent and identically distributed Gaussian noise). Strictly speaking, compressed sensing corresponds to the measurement ratio $\nmeas/\elmax^2<1$ when considering the harmonic representation of the signal. Nevertheless, experiments are extended to $\nmeas/\elmax^2 \sim 2$, corresponding to the equivalent of complete sampling on the MW grid. When solving the inpainting problem in the spatial domain we see a large improvement in reconstruction quality for the MW sampling theorem when compared to the DH sampling theorem. This is due to the enhancement in both dimensionality and sparsity afforded by the MW sampling theorem in this setting.
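To make the measurement model concrete, a minimal Python sketch of the inpainting setup is given below; the band-limited Earth map is replaced by a random placeholder signal, the noise bound is a common heuristic choice, and the TV-constrained solver itself (a convex optimisation routine) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 32                          # band-limit of the test signal
n_theta, n_phi = L, 2 * L - 1   # MW grid dimensions
N = n_theta * n_phi             # number of samples on the sphere
M = N // 4                      # number of noisy spatial measurements

x_true = rng.standard_normal(N)                  # placeholder for the smoothed Earth map
support = rng.choice(N, size=M, replace=False)   # random masking pattern

def Phi(x):                     # measurement operator: random masking
    return x[support]

def Phi_adjoint(y):             # adjoint: zero-fill unobserved samples
    out = np.zeros(N)
    out[support] = y
    return out

sigma = 0.01
y = Phi(x_true) + sigma * rng.standard_normal(M)
epsilon = sigma * np.sqrt(M)    # residual noise bound for the data-fidelity constraint

# The spatial-domain problem  min ||x||_TV  s.t.  ||y - Phi(x)||_2 <= epsilon
# (and its harmonic-domain analogue with x = Lambda x_hat) would now be passed
# to a proximal splitting solver, with ||.||_TV evaluated as in the sketch above.
```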
When solving the inpainting problem in the harmonic domain, we see a considerable improvement in reconstruction quality for each sampling theorem, since the dimensionality of the recovered signal is optimal in harmonic space. For harmonic reconstructions, the MW sampling theorem remains superior to the DH sampling theorem due to the enhancement in sparsity (but not dimensionality) that it affords in this setting. In all cases, the superior performance of the MW sampling theorem is evident. Example reconstructions again show clearly the superior quality of the MW reconstruction. ![Reconstruction performance for the DH and MW sampling theorems[]{data-label="fig:snr_vs_m"}](figures/SNRvsM){width="110mm"} CONCLUSIONS {#sec:conclusions} =========== Although compressive sensing states that sparse or compressible signals may be acquired with fewer samples than standard sampling theorems would suggest, the sampling theorem adopted nevertheless has an important influence on the performance of compressive sensing reconstruction. In Euclidean space, Shannon’s sampling theorem provides an optimal sampling for regular grids, leading to a unique sampling theorem. On the sphere, however, no sampling theorem is optimal, with different sampling theorems requiring a differing number of samples. The MW sampling theorem[@mcewen:fssht] has been developed only recently and achieves a more efficient sampling of the sphere than alternatives, requiring fewer than half as many samples as the canonical DH sampling theorem[@driscoll:1994], while still capturing all of the information content of a band-limited signal. A reduction by a factor of two in the number of samples between the DH and MW sampling theorems has important implications for compressive sensing on the sphere, both in terms of the dimensionality and sparsity of signals. The more efficient sampling of the MW sampling theorem has been shown to enhance the performance of compressed sensing reconstruction on the sphere, as illustrated with an inpainting problem.[@mcewen:css2] JDM is supported by the Swiss National Science Foundation (SNSF) under grant 200021-130359. YW is supported by the Center for Biomedical Imaging (CIBM) of the Geneva and Lausanne Universities, EPFL, and the Leenaards and Louis-Jeantet foundations, and by the SNSF under grant PP00P2-123438. [^1]: Fast algorithms have been developed to compute the forward and inverse transforms rapidly for both the DH [@driscoll:1994; @healy:2003] and MW [@mcewen:fssht] sampling theorems; these algorithms are essential to facilitate the application of these sampling theorems at high band-limits. [^2]: Gauss-Legendre (GL) quadrature can also be used to construct an efficient sampling theorem on the sphere, with a comparable number of samples [@mcewen:fssht]. The MW sampling theorem nevertheless requires fewer samples and so remains more efficient, especially at low band-limits. Furthermore, it is not so straightforward to define norms describing spatial priors on the GL grid since it is not equiangular. Finally, algorithms implementing the GL sampling theorem have been shown to be limited to lower band-limits and less accurate than the algorithms implementing the MW sampling theorem [@mcewen:fssht]. Consequently, we do not focus on the GL sampling theorem any further in this article. [^3]: The original DH sampling theorem has been revisited[@healy:2003], yielding an alternative formulation with only very minor differences and that also requires $\sim4\elmax^2$ samples.
[^4]: The original Earth topography data are taken from the Earth Gravitational Model (EGM2008) publicly released by the U.S. National Geospatial-Intelligence Agency (NGA) EGM Development Team. These data were downloaded and extracted using the tools available from Frederik Simons’ webpage: <http://www.princeton.edu/geosciences/people/simons/>.
{ "pile_set_name": "ArXiv" }
--- abstract: 'The article considers the generalized $k$-Bessel functions and represents them in terms of Wright functions. We then study the monotonicity properties of the ratio of two $k$-Bessel functions of different orders, and of the ratio of the $k$-Bessel and the $m$-Bessel functions. The log-convexity with respect to the order of the $k$-Bessel function is also given. An investigation regarding the monotonicity of the ratio of the $k$-Bessel and $k$-confluent hypergeometric functions is also discussed.' address: - | Department of Mathematics\ King Faisal University, Al Ahsa 31982, Saudi Arabia - | Department of Mathematics\ Prince Sattam bin Abdulaziz University, Saudi Arabia author: - 'Saiful R. Mondal' - 'Kottakkaran S. Nisar' title: 'Inequalities for the modified $k$-Bessel function' --- Introduction {#Intro} ============ One of the generalizations of the classical gamma function $\Gamma$, studied in [@Diaz], is defined by the limit formula $$\begin{aligned} \label{eqn-1} \Gamma_k(x) := \lim_{n \to \infty} \frac{n! \; k^n (n^k)^{\tfrac{x}{k}-1}}{(x)_{n, k}}, \quad k>0,\end{aligned}$$ where $(x)_{n, k}:=x(x+k) (x+2k)\ldots (x+(n-1)k)$ is called the $k$-Pochhammer symbol. The above $k$-gamma function also has an integral representation, $$\begin{aligned} \label{eqn-2} \Gamma_k(x)= \int_0^\infty t^{x-1} e^{-\frac{t^{k}}{k}} dt, \quad \operatorname{Re}(x)>0.\end{aligned}$$ Properties of the $k$-gamma function have been studied by many researchers [@CGK; @CGK2; @VK; @MM; @Mubeen13]. The following properties are required in the sequel: - $\Gamma _{k}\left( x+k\right) =x\Gamma _{k}\left( x\right)$ - $\Gamma _{k}\left( x\right) =k^{\frac{x}{k}-1}\Gamma \left( \frac{x}{k}\right)$ - $\Gamma _{k}\left( k\right)=1$ - $\Gamma _{k}\left( x+nk\right)=\Gamma _{k}(x) (x)_{n, k}$ Motivated by the above generalization of the $k$-gamma function, Romero et al. [@Romero-Cerutti] introduced the $k$-Bessel function of the first kind defined by the series $$\begin{aligned} \label{k1} J_{k,\nu }^{\gamma ,\lambda }\left( x\right) :=\sum_{n=0}^{\infty }\frac{\left( \gamma \right) _{n,\;k}}{\Gamma _{k}\left( \lambda n+\upsilon +1\right) }\frac{\left( -1\right) ^{n}\left( x/2\right) ^{n}}{\left( n!\right) ^{2}},\end{aligned}$$where $k\in \mathbb{R^{+}}$; $\alpha,\lambda,\gamma,\upsilon \in \mathbb{C}$; $\operatorname{Re}(\lambda)>0$ and $\operatorname{Re}(\upsilon) >0$. They also established two recurrence relations for $J_{k,\nu }^{\gamma ,\lambda }$. In this article, we consider the following function: $$\begin{aligned} \label{eqn-modfb} I_{k,\nu }^{\gamma ,\lambda }\left( x\right) :=\sum_{n=0}^{\infty }\frac{\left( \gamma \right) _{n,\;k}}{\Gamma _{k}\left( \lambda n+\upsilon +1\right) }\frac{\left( x/2\right) ^{n}}{\left( n!\right) ^{2}}.\end{aligned}$$ Since $$\lim_{k, \lambda, \gamma \to 1 }I_{k,\nu }^{\gamma ,\lambda }\left( x\right)= \sum_{n=0}^{\infty }\frac{1}{\Gamma\left( n+\upsilon +1\right) }\frac{\left( x/2\right) ^{n}}{n!}=\left(\frac{2}{x}\right)^{\frac{\nu}{2}} I_\nu(\sqrt{2x}),$$ where $I_\nu$ is the classical modified Bessel function of the first kind, we call $I_{k,\nu }^{\gamma ,\lambda }$ the modified $k$-Bessel function of the first kind.
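A brief numerical sketch (Python, assuming SciPy is available for the comparison; the function names are ours, not from the paper) illustrates these definitions and checks the limiting case against the classical modified Bessel function.

```python
import math
from scipy.special import iv   # classical modified Bessel function I_nu

def gamma_k(x, k):
    """k-gamma function via Gamma_k(x) = k**(x/k - 1) * Gamma(x/k)."""
    return k**(x / k - 1.0) * math.gamma(x / k)

def pochhammer_k(x, n, k):
    """k-Pochhammer symbol (x)_{n,k} = x (x + k) ... (x + (n-1) k)."""
    out = 1.0
    for j in range(n):
        out *= x + j * k
    return out

def modified_k_bessel(x, k, nu, gam, lam, terms=60):
    """Truncated series for the modified k-Bessel function I_{k,nu}^{gamma,lambda}(x)."""
    return sum(pochhammer_k(gam, n, k) / gamma_k(lam * n + nu + 1.0, k)
               * (x / 2.0)**n / math.factorial(n)**2 for n in range(terms))

# limiting case k = lambda = gamma = 1 reduces to the classical modified Bessel
# function: I_{1,nu}^{1,1}(x) = (2/x)**(nu/2) * I_nu(sqrt(2*x))
x, nu = 1.7, 0.5
lhs = modified_k_bessel(x, 1.0, nu, 1.0, 1.0)
rhs = (2.0 / x)**(nu / 2.0) * iv(nu, math.sqrt(2.0 * x))
print(abs(lhs - rhs) < 1e-12)
```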
In fact, we can express both $J_{k,\nu }^{\gamma ,\lambda }$ and $I_{k,\nu }^{\gamma ,\lambda }$ together in $$\label{eqn-genb} \mathtt{W}_{k,\nu, c }^{\gamma,\lambda }(x) :=\sum_{n=0}^{\infty}\frac{ ( \gamma )_{n,\;k}}{\Gamma_{k}(\lambda n+ \nu +1) }\frac{(-c)^n( x/2) ^{n}}{\left( n!\right) ^{2}}, \quad c \in \mathbb{R}.$$ We term $\mathtt{W}_{k,\nu }^{\gamma ,\lambda }$ the generalized $k$-Bessel function. First we study the representation formulas for $\mathtt{W}_{k,\nu }^{\gamma ,\lambda }$ in terms of the classical Wright functions. Then we study the monotonicity and log-convexity properties of $I_{k,\nu }^{\gamma ,\lambda }$. Representation formula for the generalized $k$-Bessel function {#sec1} ============================================================== The generalized hypergeometric function ${}_pF_q(a_1,\ldots, a_p;c_1,\ldots,c_q;x)$ is given by the power series $$\label{eqn:gen-hyp-func} {}_pF_q(a_1,\ldots, a_p;c_1,\ldots,c_q;z) = \sum_{k=0}^{\infty}\dfrac{(a_1)_{k} \cdots (a_p)_{k}}{(c_1)_{k}\cdots(c_q)_{k}(1)_{k}}z^k, \quad \quad |z|<1,$$ where the $c_{i}$ cannot be zero or a negative integer. Here $p$ or $q$ or both are allowed to be zero. The series $(\ref{eqn:gen-hyp-func})$ is absolutely convergent for all finite $z$ if $p\leq q$ and for $|z|<1$ if $p=q+1$. When $p>q+1$, the series diverges for $z \not=0$ unless it terminates. The generalized Wright hypergeometric function ${}_p\psi_q(z)$ is given by the series $$\label{eqn-9-bessel} {}_p\psi_q(z)={}_p\psi_q\left[\begin{array}{c} (a_i,\alpha_i)_{1,p} \\ (b_j,\beta_j)_{1,q} \end{array}\bigg|z\right]=\displaystyle\sum_{k=0}^{\infty}\frac{\prod_{i=1}^{p}\Gamma(a_{i}+\alpha_{i}k)} {\prod_{j=1}^{q}\Gamma(b_{j}+\beta_{j}k)} \frac{z^{k}}{k!},$$ where $a_i, b_j\in \mathbb{C}$, and real $\alpha_i, \beta_j\in \mathbb{R}$ ($i=1,2,\ldots,p; j =1,2,\ldots,q$). The asymptotic behavior of this function for large values of the argument $z\in \mathbb{C}$ was studied in [@CFox; @Kilbas], and under the condition $$\label{eqn-10-bessel} \displaystyle\sum_{j=1}^{q}\beta_{j}-\displaystyle\sum_{i=1}^{p}\alpha_{i}>-1$$ in [@Wright-2; @Wright-3]. Further properties of the Wright function are investigated in [@Kilbas; @Kilbas-itsf; @KST]. Now we will give the representation of the generalized $k$-Bessel function in terms of the Wright and generalized hypergeometric functions.
Let $k \in \mathbb{R}$ and $\lambda ,\gamma ,\nu \in \mathbb{C}$ such that $\operatorname{Re}( \lambda) >0,\operatorname{Re}( \nu ) >0.$ Then $$\mathtt{W}_{k,\nu,c }^{\gamma ,\lambda }(x) =\frac{1}{k^{\frac{\nu+1-k}{k}}\Gamma\left(\frac{\gamma}{k}\right)} {}_1\psi_2\left[\begin{array}{ccc} \left(\frac{\gamma}{k}, 1\right)& \\ \left(\frac{\nu+1}{k}, \frac{\lambda}{k}\right) & (1, 1) \end{array}\bigg|-\frac{c x}{2 k^{\frac{\lambda}{k}-1}}\right]$$ Using the relations $\Gamma _{k}\left( x\right) =k^{\frac{x}{k}-1}\Gamma \left( \frac{x}{k}\right)$ and $\Gamma _{k}\left( x+nk\right)=\Gamma _{k}(x) (x)_{n, k}$, the generalized $k$-Bessel function defined in $\eqref{eqn-genb}$ can be rewritten as $$\begin{aligned} \mathtt{W}_{k,\nu,c }^{\gamma ,\lambda }(x) &=\sum_{n=0}^\infty \frac{\Gamma_k(\gamma+n k)}{\Gamma_k(\lambda n+\nu+1) \Gamma_k(\gamma)} \frac{(-c)^n}{(n!)^2} \left(\frac{x}{2}\right)^n\\ &=\frac{1}{k^{\frac{\nu+1-k}{k}} \Gamma\left(\frac{\gamma}{k}\right)}\sum_{n=0}^\infty \frac{\Gamma\left(\frac{\gamma}{k}+n \right)}{\Gamma\left(\frac{\lambda}{k} n+\frac{\nu+1}{k}\right)} \frac{(-c)^n}{\Gamma(n+1)\Gamma(n+1)} \left(\frac{x}{2 k^{\frac{\lambda}{k}-1}}\right)^n\\ &=\frac{1}{k^{\frac{\nu+1-k}{k}}\Gamma\left(\frac{\gamma}{k}\right)} {}_1\psi_2\left[\begin{array}{ccc} \left(\frac{\gamma}{k}, 1\right)& \\ \left(\frac{\nu+1}{k}, \frac{\lambda}{k}\right) & (1, 1) \end{array}\bigg|-\frac{c x}{2 k^{\frac{\lambda}{k}-1}}\right]\end{aligned}$$ Hence the result follows. Monotonicity and log-convexity properties {#sec2} ======================================== This section discusses the monotonicity and log-convexity properties of the modified $k$-Bessel function $\mathtt{W}_{k,\nu,-1 }^{\gamma ,\lambda }(x)=\mathtt{I}_{k,\nu}^{\gamma ,\lambda }(x)$. The following lemma, due to Biernacki and Krzyż [@Biernacki-Krzy], will be required. \[lemma:1\][@Biernacki-Krzy] Consider the power series $f(x)=\sum_{k=0}^\infty a_k x^k$ and $g(x)=\sum_{k=0}^\infty b_k x^k$, where $a_k \in \mathbb{R}$ and $b_k > 0$ for all $k$. Further suppose that both series converge on $|x|<r$. If the sequence $\{a_k/b_k\}_{k\geq 0}$ is increasing (or decreasing), then the function $x \mapsto f(x)/g(x)$ is also increasing (or decreasing) on $(0,r)$. The above lemma still holds when both $f$ and $g$ are even, or both are odd functions. The following results hold true for the modified $k$-Bessel function. 1. For $\mu \geq \nu>-1$, the function $x \mapsto \mathtt{I}_{k,\mu}^{\gamma ,\lambda }(x)/\mathtt{I}_{k,\nu}^{\gamma ,\lambda }(x)$ is increasing on $(0, \infty)$ for some fixed $k >0$. 2. If $k\geq \lambda \geq m>0$, the function $x \mapsto \mathtt{I}_{k,\nu}^{\gamma ,\lambda }(x)/\mathtt{I}_{m,\nu}^{\gamma ,\lambda }(x)$ is increasing on $(0, \infty)$ for some fixed $\nu >-1$ and $\gamma \geq \nu+1$. 3. The function $\nu \mapsto \mathcal{I}_{k,\nu}^{\gamma ,\lambda }(x)$ is log-convex on $(0, \infty)$ for some fixed $k, \gamma>0$ and $x>0$. Here, $\mathcal{I}_{k,\nu}^{\gamma ,\lambda }(x):=\Gamma_k(\nu+1)\mathtt{I}_{k,\nu}^{\gamma ,\lambda }(x)$. 4. Suppose that $\lambda \geq k>0$ and $\nu>-1$. Then 1. The function $x \mapsto \mathtt{I}_{k,\nu}^{\gamma ,\lambda }(x)/\Phi _{k}\left( a,c;x\right)$ is decreasing on $(0, \infty)$ for $a \geq c >0$ and $0<\gamma \leq \nu+1$. Here, $\Phi _{k}\left( a; c; x\right)$ is the $k$-confluent hypergeometric function. 2.
The function $x \mapsto \mathtt{I}_{k,\nu}^{\gamma ,\lambda }(x)/\Phi _{k}\left( \gamma; \lambda; x/2\right)$ is decreasing on $(0, 1)$ for $\gamma >0$ and $0< k \leq \lambda \leq \nu+1$. 3. The function $x \mapsto \mathtt{I}_{k,\nu}^{\gamma ,\lambda }(x)/\Phi _{k}\left( \gamma; \lambda; x/2\right)$ is decreasing on $[1, \infty)$ for $\gamma >0$ and $0< k \leq \min\{\lambda,\nu+1\}$. ${\bf (1)}$ From the series representation $\eqref{eqn-modfb}$ it follows that $$\mathtt{I}_{ k, \nu}^{\gamma ,\lambda }(x)= \sum_{n=0}^\infty a_n(\nu) x^n\quad \text{and} \quad \mathtt{I}_{ k, \mu}^{\gamma ,\lambda }(x)= \sum_{n=0}^\infty a_n(\mu) x^n,$$ where $$a_n(\nu)= \frac{(\gamma)_{n,k}}{\Gamma_k(\lambda n+\nu+1) (n!)^2 2^n} \quad \text{and} \quad a_n(\mu)= \frac{(\gamma)_{n,k}}{\Gamma_k(\lambda n+\mu+1) (n!)^2 2^n}.$$ Consider the function $$f(t):= \frac{\Gamma_k(\lambda t+\mu+1)}{\Gamma_k(\lambda t+\nu+1)}.$$ Then logarithmic differentiation yields $$\begin{aligned} \frac{f'(t)}{f(t)}= \lambda( \Psi_k(\lambda t+\mu+1)-\Psi_k(\lambda t+\nu+1)).\end{aligned}$$ Here, $\Psi_k=\Gamma_k'/\Gamma_k $ is the $k$-digamma function studied in [@Kwara14] and defined by $$\begin{aligned} \label{def-digamma} \Psi_k(t)=\frac{\log(k)-\gamma_1}{k}-\frac{1}{t}+\sum_{n=1}^\infty \frac{t}{nk(nk+t)}\end{aligned}$$ where $\gamma_1$ is the Euler–Mascheroni constant. A calculation yields $$\begin{aligned} \label{def-digamma-2} \Psi_k'(t)=\sum_{n=0}^\infty \frac{1}{(nk+t)^2}, \quad k>0 \quad \text{and} \quad t>0.\end{aligned}$$ Clearly, $\Psi_k$ is increasing on $(0, \infty)$ and hence $f'(t)>0$ for all $t\geq0$ if $\mu \geq \nu>-1$. This, in particular, implies that the sequence $\{d_n\}_{n \geq 0}=\{a_n(\nu)/a_n(\mu)\}_{n \geq 0}$ is increasing and hence the conclusion follows from Lemma $\ref{lemma:1}$. [**(2)**]{}. This result also follows from Lemma $\ref{lemma:1}$ if the sequence $\{d_n\}_{n \geq 0}=\{a_n^k(\nu)/a_n^m(\nu)\}_{n \geq 0}$ is increasing for $k \geq m >0$. Here, $$a_{n}^{k}\left( \nu \right) =\frac{\left( \gamma \right) _{n,k}}{\Gamma _{k}\left( \lambda n+\nu+1\right) \left( n!\right) ^{2}} \quad \text{and} \quad a_{n}^{m}\left( \nu \right) =\frac{\left( \gamma \right) _{n,m}}{\Gamma _{m}\left( \lambda n+\nu+1\right) \left( n!\right) ^{2}},$$ which together with the identity $\Gamma _{k}\left( x+nk\right)=\Gamma _{k}(x) (x)_{n, k}$ gives $$\begin{aligned} d_n&=\frac{\left( \gamma \right) _{n,k}}{\left( \gamma \right) _{n,m}} \frac{ \Gamma _{m}\left( \lambda n+\nu+1\right) }{\Gamma _{k}\left( \lambda n+\nu+1\right)}\\ &= \frac{\Gamma _{m}\left( \gamma \right)}{\Gamma _{k}\left( \gamma \right)}\, \frac{ \Gamma _{k}\left( \gamma +nk\right)\Gamma _{m}\left( \lambda n+\nu+1\right) }{\Gamma _{m}\left( \gamma +nm\right)\Gamma _{k}\left( \lambda n+\nu+1\right)}.\end{aligned}$$ Now to show that $\{d_n\}$ is increasing, consider the function $$f(y):=\frac{ \Gamma _{k}\left( \gamma +yk\right)\Gamma _{m}\left( \lambda y+\nu+1\right) }{\Gamma _{m}\left( \gamma +ym\right)\Gamma _{k}\left( \lambda y+\nu+1\right)}.$$ The logarithmic differentiation of $f$ yields $$\begin{aligned} \label{3} \frac{f'(y)}{f(y)}= k \Psi_k(\gamma +yk)+ \lambda \Psi_m\left( \lambda y+\nu+1\right)-m \Psi_m(\gamma +ym )-\lambda \Psi_k\left( \lambda y+\nu+1\right).\end{aligned}$$ If $\gamma \geq \nu+1$ and $k \geq \lambda \geq m $, then $\eqref{3}$ can be rewritten as $$\begin{aligned} \label{44} \frac{f'(y)}{f(y)}\geq \lambda \big(\Psi_k(\nu+1 +yk)- \Psi_k\left( \lambda y+\nu+1\right)\big)+ m\big( \Psi_m\left( \lambda y+\nu+1\right)- \Psi_m(\nu+1 +ym )\big) \geq 0.\end{aligned}$$ This shows that $f$, and consequently the sequence $\{d_n\}_{n\geq 0}$, is increasing.
Finally, the result follows from Lemma \[lemma:1\]. [**(3).**]{} It is known that a sum of log-convex functions is log-convex. Thus, to prove the result it is enough to show that $$\nu \mapsto a_{n}^{k}\left( \nu \right) :=\frac{\left( \gamma \right) _{n,k}\Gamma _{k}\left( \nu+1\right)}{\Gamma _{k}\left( \lambda n+\nu+1\right) \left( n!\right) ^{2}}$$ is log-convex. Logarithmic differentiation of $a_{n}^{k}(\nu)$ with respect to $\nu$ yields $$\begin{aligned} \frac{\partial}{\partial \nu} \log\left(a_{n}^{k}\left( \nu \right)\right)=\Psi_k\left(\nu+1\right) - \Psi_k\left( \lambda n+\nu+1\right).\end{aligned}$$ This, together with $\eqref{def-digamma-2}$, gives $$\begin{aligned} \frac{\partial^2}{\partial\nu^2}\log\left(a_{n}^{k}\left( \nu \right)\right) &=\Psi'_k\left(\nu+1\right) - \Psi'_k\left( \lambda n+\nu+1\right)\\ &=\sum_{r=0}^\infty \frac{1}{(rk+\nu+1)^2} - \sum_{r=0}^\infty \frac{1}{(rk+\lambda n+\nu+1)^2}\\ &=\sum_{r=0}^\infty \frac{\lambda n(2 rk+\lambda n+2\nu+2)}{(rk+\nu+1)^2(rk+\lambda n+\nu+1)^2} >0,\end{aligned}$$ for all $n \geq 0$, $k >0$ and $\nu>-1$. Thus, $\nu \mapsto a_{n}^{k}\left( \nu \right)$ is log-convex and hence the conclusion.\ [**(4).**]{} Denote $\Phi _{k}\left( a,c;x\right)=\sum_{n=0}^\infty d_{n, k}( a, c) x^{n}$ and $\mathtt{I}_{ k, \nu}^{\gamma ,\lambda }(x)= \sum_{n=0}^\infty a_n(\nu) x^n,$ where $$a_n(\nu)= \frac{(\gamma)_{n,k}}{\Gamma_k(\lambda n+\nu+1) (n!)^2 2^n}\quad \text{and} \quad d_{n,k}\left( a,c\right) =\frac{\left( a\right) _{n,k}}{\left( c\right) _{n,k}n!},$$ with $\nu>-1$ and $a,c, \lambda, \gamma, k>0.$ To apply Lemma \[lemma:1\], consider the sequence $\left\{ w_{n}\right\} _{n\geq 0}$ defined by $$\begin{aligned} w_{n} =\frac{a_{n}\left( \nu \right) }{d_{n, k}\left( a,c\right) }&=\frac{\Gamma _{k}\left( \gamma +nk\right) }{2^{n}\Gamma _{k}\left( \gamma \right) \Gamma _{k}\left( \lambda n+\nu +1\right) \left( n!\right) ^{2}}\cdot\frac{\Gamma _{k}\left( a\right) \Gamma _{k}\left( c+nk\right) n!}{\Gamma _{k}\left( a+nk\right) \Gamma _{k}\left( c\right) } \\ &=\frac{\Gamma _{k}\left( a\right) }{\Gamma _{k}\left( \gamma \right) \Gamma _{k}\left( c\right) }\rho _{k}\left( n\right),\end{aligned}$$ where $$\rho _{k}\left( x\right) =\frac{\Gamma _{k}\left( \gamma +xk\right) \Gamma _{k}\left( c+xk\right) }{\Gamma _{k}\left( \lambda x+\nu +1\right) \Gamma _{k}\left( a+xk\right) 2^x \Gamma(x+1) }.$$ In view of the increasing properties of $\Psi_k$ on $(0, \infty)$, and $$\frac{\rho_{k} ^{\prime }\left( x\right) }{\rho_{k} \left( x\right) }= k \Psi _{k}\left( \gamma +xk\right) +k\Psi _{k}\left( c+xk\right) -\lambda \Psi _{k}\left( \lambda x+\nu +1\right) -k \Psi _{k}\left( a+xk\right) -\ln 2 -\Psi\left( x+1\right),$$ it follows that for $a\geq c>0$, $\lambda \geq k$ and $\nu+1\geq \gamma$, the function $\rho_{k}$ is decreasing on $ \left( 0,\infty \right) $ and thus the sequence $\left\{ w_{n}\right\} _{n\geq 0} $ is also decreasing. Finally, the conclusion for $(a)$ follows from Lemma \[lemma:1\]. In cases $(b)$ and $(c)$, the sequence $\{w_n\}$ reduces to $$w_{n} =\frac{a_{n}\left( \nu \right) }{d_{n, k}\left(\gamma, \lambda\right) }=\frac{\rho _{k}\left( n\right)}{\Gamma _{k}\left( \lambda \right) },$$ where $$\rho _{k}\left( x\right) =\frac{\Gamma _{k}\left( \lambda +xk\right) }{\Gamma _{k}(\nu +1+\lambda x) \Gamma(x+1) }.$$ Now, as in the proof of part (a), $$\frac{\rho _{k}'\left( x\right)}{\rho _{k}\left( x\right)} =k \Psi_k(\lambda +xk)-\lambda \Psi_k(\nu+1 +\lambda x)-\Psi(x+1)<0,$$ if $\nu+1+ \lambda x \geq \lambda + xk$.
Now for $x \in (0,1)$, this inequality holds if $0< k \leq \lambda \leq \nu+1$, while for $x \geq 1$, it is required that $k \leq \min\{\lambda, \nu+1\}.$ [999]{} L. G. Romero, G. A. Dorrego and R. A. Cerutti, The k-Bessel function of the first kind, International Mathematical Forum, 38(7) (2012), 1859–1854. G. N. Watson, A Treatise on the Theory of Bessel Functions, Cambridge Mathematical Library Edition, Cambridge University Press, Cambridge (1995). Reprinted (1996). A. Erdélyi, W. Magnus, F. Oberhettinger and F. G. Tricomi, Higher transcendental functions, I, II, McGraw-Hill Book Company, Inc., New York, Toronto, London, 1953. R. Diaz and E. Pariguan, On hypergeometric functions and k-Pochhammer symbol, Divulgaciones Matematicas 15(2) (2007), 179–192. K. Nantomah and E. Prempeh, Some inequalities for the k-digamma function, Mathematica Aeterna, 4(5) (2014), 521–525. S. Mubeen, M. Naz and G. Rahman, A note on k-hypergeometric differential equations, Journal of Inequalities and Special Functions, 4(3) (2013), 8–43. M. Biernacki and J. Krzyż, On the monotonity of certain functionals in the theory of analytic functions, Ann. Univ. Mariae Curie-Skłodowska, Sect. A 9 (1957), 135–147. C. G. Kokologiannaki, Properties and inequalities of generalized $k$-gamma, beta and zeta functions, Int. J. Contemp. Math. Sci. 5(13-16) (2010), 653–660. C. G. Kokologiannaki and V. Krasniqi, Some properties of the $k$-gamma function, Matematiche (Catania), 68(1) (2013), 13–22. V. Krasniqi, A limit for the $k$-gamma and $k$-beta function, Int. Math. Forum, 5(33-36) (2010), 1613–1617. M. Mansour, Determining the $k$-generalized gamma function $\Gamma_k(x)$ by functional equations, Int. J. Contemp. Math. Sci., 4(21-24) (2009), 1037–1042. G. E. Andrews, R. Askey and R. Roy, Special functions, Cambridge Univ. Press, Cambridge, 1999. C. Fox, The asymptotic expansion of generalized hypergeometric functions, Proc. London Math. Soc., 27(4) (1928), 389–400. A. A. Kilbas, M. Saigo and J. J. Trujillo, On the generalized Wright function, Fract. Calc. Appl. Anal. 5(4) (2002), 437–460. A. A. Kilbas and N. Sebastian, Generalized fractional integration of Bessel function of the first kind, Integral Transforms Spec. Funct. 19(11-12) (2008), 869–883. A. A. Kilbas, H. M. Srivastava and J. J. Trujillo, Theory and Applications of Fractional Differential Equations, North-Holland Mathematics Studies 204, Elsevier, Amsterdam, 2006. E. D. Rainville, Special functions, Macmillan, New York, 1960. E. M. Wright, The asymptotic expansion of integral functions defined by Taylor series, Philos. Trans. Roy. Soc. London, Ser. A 238 (1940), 423–451. E. M. Wright, The asymptotic expansion of the generalized hypergeometric function, Proc. London Math. Soc. (2) 46 (1940), 389–408.
{ "pile_set_name": "ArXiv" }
--- abstract: 'We propose a model-based deep learning architecture for the reconstruction of highly accelerated diffusion magnetic resonance imaging (MRI) that enables high resolution imaging. The proposed reconstruction jointly recovers all the diffusion weighted images in a single step from a joint k-q under-sampled acquisition in a parallel MRI setting. We propose the novel use of a pre-trained denoiser as a regularizer in a model-based reconstruction for the recovery of highly under-sampled data. Specifically, we designed the denoiser based on a general diffusion MRI tissue microstructure model for multi-compartmental modeling. By using a wide range of biologically plausible parameter values for the multi-compartmental microstructure model, we simulated diffusion signals that span the entire microstructure parameter space. A neural network was trained in an unsupervised manner using an autoencoder to learn the diffusion MRI signal subspace. We employed the autoencoder in a model-based reconstruction and show that the autoencoder provides a strong denoising prior to recover the q-space signal. We show reconstruction results on a simulated brain dataset that demonstrate the high acceleration capability of the proposed method.' address: 'University of Iowa, Iowa, USA' bibliography: - 'Dmri\_HD.bib' title: 'Model-Based Deep Learning for Reconstruction of Joint k-q Under-sampled High Resolution Diffusion MRI' --- K-q space deep learning, diffusion MRI, autoencoder, model-based deep learning Introduction {#sec:intro} ============ Diffusion weighted magnetic resonance imaging (DWI) is a widely used neuroimaging technique that can provide rich information about tissue microstructure, including brain connectivity and the density of neurons [@Novikov2019]. The acquisition of diffusion MRI (dMRI) at high spatial resolution and on a large number of q-space points is needed to probe the tissue microstructure and resolve the ambiguities in the parameters related to it [@Sotiropoulos2013; @Jbabdi2015]. Conventional single-shot echo-planar (EPI) techniques have limited ability to improve the spatial resolution of dMRI. The long readouts required for higher resolution often cause geometric distortions and blurring artifacts in the images. Several researchers have hence employed multi-shot EPI (msEPI) methods, where the k-space acquisition is segmented into multiple shots of shorter readout duration. However, a challenge with msEPI-based DWI acquisition schemes is the phase inconsistency between the shots. When the k-space data from the different shots are merged, these phase errors translate to ghosting artifacts in the images. Moreover, the multiple shots required to encode the images prolong the acquisition time. Several acceleration methods have been introduced in diffusion MRI to overcome the above challenges. These include (a) spatial (k-space) acceleration methods that rely on parallel MRI and compressed sensing [@Shi2015; @Liao2017], (b) q-space acceleration methods that acquire only a subset of the q-space data and rely on data priors to fill in the missing information [@Michailovich2011b; @Welsh2013], and (c) k-q acceleration methods that jointly under-sample both k- and q-spaces [@Mani2014; @Schwab2018].
While the joint k-q under-sampling schemes can afford higher acceleration factors, the main challenges include (i) the high computational complexity of such schemes, resulting from the need to perform joint optimization, and (ii) the inability to account for complex diffusion models that do not conform to sparsity-based models. We propose a deep-learning based joint reconstruction algorithm for multi-shot diffusion MRI. The proposed scheme relies on a model-based reconstruction that simultaneously performs phase correction and jointly recovers artifact-free DWIs from a highly under-sampled acquisition. Specifically, a data fidelity term performs phase correction using the generalized SENSE reconstruction with known phase maps, while a deep-learned prior exploits the redundancy in the q-space data. To achieve this, we trained a denoising auto-encoder (DAE) using training data generated by a generalized diffusion model. The non-linear network is shown to learn a projection to the data-manifold, thus denoising the images. We then use the residual error of the network as a prior in a model-based reconstruction scheme. The reconstructed DWIs can then be used for further analysis to estimate the diffusion microstructure model parameters. The proposed scheme has significant differences from deep-learning-based q-space acceleration techniques [@Golkov2016], which rely on supervised learning to learn the mapping from the diffusion signal to the parameters of a specific model (e.g. NODDI) from fully sampled q-space images. By contrast, our focus is to recover the DWI data with high spatial and q-space resolution, which allows the fitting of any desired diffusion model. Methods {#sec:format} ======= Standard Multi-compartmental Diffusion Model -------------------------------------------- The diffusion signal in the brain is often modeled by a multi-compartment model [@Novikov2019] that accounts for the intra- and extra-neurite tissue compartments in each voxel, in addition to an isotropic compartment. The signal model is given by $$ \rho(b, \mathbf g) = \rho_0\int_{\hat{ \bf{n}}} {\cal{P}}(\hat{ \bf{n}}) \circledast K(b,\hat{\bf{g}} \cdot \mathbf n) ~d\hat {\bf{n}} \label{model}$$ where $\mathcal P$ is the fiber orientation distribution function (ODF) and $\circledast$ denotes a spherical convolution operation with a kernel $K $. The kernel is specified by $$\hspace{0em} K(b,\zeta) = f_1e^{-bD_a\zeta^2} + f_2e^{-bD_e^{\perp} -b\left(D_e^{||} - D_e^{\perp}\right)\zeta^2}+f_{\rm iso} e^{-bD_{\rm iso}}, \notag$$ where $\zeta=\hat{\bf g}\cdot\hat{\bf n}$, the $f_{i}$'s are the volume fractions, the $D$'s are the compartmental diffusivities, $b$ is the diffusion gradient strength, and $\rho(b, \mathbf g) $ and $\rho_0$ are the diffusion weighted and the reference non-diffusion weighted signals. The above diffusion signal model is very rich, with several free model parameters. It is useful for detailed microstructural analysis and the estimation of several tissue microstructure parameters when high quality diffusion data are available. Image Formation for msDWI -------------------------- Let $\rho_{q}(\mathbf x); q=1,..,Q$ represent the diffusion weighted image for the $q^{\rm th}$ location in q-space (the 3D space spanned by $b-\mathbf g$), where $\mathbf x$ represents the spatial coordinates.
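As a simple illustration of the kernel defined above, the Python sketch below evaluates $K(b,\zeta)$ on a single shell; the parameter values are purely illustrative (chosen within the physiological ranges quoted later), the units assume $b$ in ms/$\mu$m$^2$ and diffusivities in $\mu$m$^2$/ms, and the spherical convolution with the fiber ODF needed for the full signal is not reproduced.

```python
import numpy as np

def sm_kernel(b, zeta, f_intra, f_extra, f_iso, Da, De_par, De_perp, D_iso):
    """Multi-compartment kernel K(b, zeta), with zeta = cos(angle between g and n)."""
    intra = f_intra * np.exp(-b * Da * zeta**2)
    extra = f_extra * np.exp(-b * De_perp - b * (De_par - De_perp) * zeta**2)
    iso = f_iso * np.exp(-b * D_iso)
    return intra + extra + iso

# illustrative evaluation on one shell (parameter values are assumptions,
# not values taken from the paper)
b = 1.0                              # ms/um^2
zeta = np.linspace(-1.0, 1.0, 9)     # g . n over a range of fiber angles
signal = sm_kernel(b, zeta, f_intra=0.50, f_extra=0.35, f_iso=0.15,
                   Da=2.0, De_par=1.5, De_perp=0.5, D_iso=3.0)
print(signal)
```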
Then, the image acquisition model for an $S$-shot sampling in the presence of Gaussian noise $\mathbf n$ can be represented as: $$\label{eq:model1} \mathbf {\hat y}_s = \mathcal{A}_s({\rho_{q}}) + \mathbf n, ~~ s=1:S $$ where $\mathbf {\hat y}_s$ is the measured k-space data from shot $s$, and $\mathcal{A}_s=\mathcal{S}_s\circ \mathcal{F} \circ \mathcal {C} $. Here, $\mathcal{F}$, $\mathcal{S}_s$, and $\mathcal{C}$ denote the Fourier transform, the selection of the acquired k-space samples for a specific shot $s$, and weighting by the coil sensitivities, respectively. For the phase-compensated reconstruction of msDW data, we absorb the shot-specific phase term into the coil sensitivity maps. In a fully sampled scenario, the sampling patterns for the different shots are complementary; the combination of the data from the different shots will result in a fully sampled k-space. However, such fully sampled acquisitions result in long acquisition times. To simultaneously achieve high spatial and angular resolution using multi-shot sequences in a reasonable scan time, we propose to under-sample the joint k-q space of dMRI. Figure \[fig:fig1\] represents the proposed joint k-q under-sampling that we pursue in the current work. This joint k-q acceleration scheme can be effectively achieved on MRI scanners by randomly under-sampling the shots for each of the q-space sampling points. We compactly denote the acquisition process as $$\label{eq:compact} \widehat {\mathbf Y} = \mathcal{A}\left(\mathbf P\right) + \mathbf N,$$ where $\mathbf P$ is the Casoratti matrix (of dimension $ N_1 \times N_2 \times Q$) of the DWIs corresponding to the different q-space points. Model-based Joint Reconstruction Algorithm ------------------------------------------ At high acceleration factors, the k-q under-sampled data needs to be jointly reconstructed. Denoting the k-space measurement matrix for the joint reconstruction as $\widehat{\mathbf Y}$, we propose to recover $\mathbf P$ by solving: $$\label{recon} \mathbf P = \operatorname*{arg\,min}_{\mathbf P} \norm{\mathcal {A}({\mathbf P})-\widehat{\mathbf Y }}_2^2 + \lambda ~~ \mathcal{R}({\mathbf P}). $$ Here, the joint reconstruction enforces data consistency (DC) with the measured data using the generalized SENSE encoding operator $\mathcal{A}$ in the first term. The second term is an arbitrary regularization prior $\mathcal{R}$. Priors including total variation spatial regularization and sparsity have been introduced by other researchers [@Michailovich2011b; @Welsh2013; @Mani2014; @Schwab2018]. In our previous work [@Mani2014], we employed sparsity priors, assuming a ball-and-stick diffusion dictionary model similar to MR fingerprinting. However, the extension of this idea to the recovery of the parameters directly from the acquired data using fingerprinting-like recovery is complicated for diffusion models such as the model in Eq. (\[model\]). Evidently, the main challenge is the large size of the dictionary, resulting from the large number of free parameters, as well as the high coherence between the atoms, which makes $\ell_{1}$ minimization challenging. Denoising Autoencoder Prior --------------------------- We introduce a self-learning DMRI framework based on denoising autoencoders (DAEs). DAEs were introduced as unsupervised schemes to learn the data manifold.
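Before turning to the details of the denoising prior, a minimal Python sketch of the per-shot forward operator $\mathcal{A}_s=\mathcal{S}_s\circ\mathcal{F}\circ\mathcal{C}$ described above is given below; the array layouts, FFT conventions, and toy interleaved shot masks are our own choices for illustration, not taken from the authors' implementation.

```python
import numpy as np

def forward_multishot(rho, coil_maps, shot_masks):
    """
    A_s(rho) = S_s . F . C : weight one DWI by the coil sensitivities, apply a
    2-D FFT, then keep only the k-space lines acquired by each shot.
    rho: (Ny, Nx) image; coil_maps: (Nc, Ny, Nx); shot_masks: (Ns, Ny, Nx) of 0/1.
    Shot-to-shot phase errors can be absorbed into coil_maps, as in the text.
    """
    coil_images = coil_maps * rho[None, :, :]
    kspace = np.fft.fftshift(
        np.fft.fft2(np.fft.ifftshift(coil_images, axes=(-2, -1)), axes=(-2, -1)),
        axes=(-2, -1))
    return shot_masks[:, None, :, :] * kspace[None, :, :, :]   # (Ns, Nc, Ny, Nx)

# toy example: 4 shots, 8 coils, 64 x 64 image, each shot acquiring every 4th ky line
Ny = Nx = 64
rng = np.random.default_rng(0)
rho = rng.standard_normal((Ny, Nx)) + 1j * rng.standard_normal((Ny, Nx))
coil_maps = rng.standard_normal((8, Ny, Nx)) + 1j * rng.standard_normal((8, Ny, Nx))
shot_masks = np.zeros((4, Ny, Nx))
for s in range(4):
    shot_masks[s, s::4, :] = 1.0
y = forward_multishot(rho, coil_maps, shot_masks)
print(y.shape)
```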
Theoretical results show that the DAE representation error is a measure of the derivative of the smoothed log density [@Vincent2008] of the data; the derivative is zero if the point is on the manifold, while it is high when the point moves away from the *data-manifold*. Instead of using a dictionary-based sparse prior, we propose to pre-learn a DAE from the dictionary $\mathbf Z$ such that: $$\label{daetraining} \Theta^* = \arg \min_{{\Theta}} \mathbb E_{I} \left(\mathbb E_{\mathbf S \sim \mathcal N(\mathbf 0,\sigma_i^2)}\|\mathcal D_{\Theta}\left(\mathbf Z+\mathbf S\right) - \mathbf Z\|_F^2\right)$$ Here, $\mathbb E$ denotes the expectation operator and $\mathbf S$ is a noise realization with a zero-mean complex Gaussian density with variance $\sigma_{i}^{2}$; the $\sigma_{i}$ are chosen from a set of variances, indexed within the set $I$. Once the parameters $\Theta$ are learned, we use the trained denoiser as a regularizer in a plug-and-play framework [@zhang2017learning] and rewrite Eq. (\[recon\]) as: $$\label{joint} \mathbf P^{*} = \arg \min_{\mathbf P} \|\mathcal A (\mathbf P)-\widehat{\mathbf Y}\|^{2}_{2}+ \lambda~ \|\mathbf P - \mathcal D_{\Theta}(\mathbf P)\|^{2},$$ where $\mathcal N_{\Theta}(\mathbf P) = \mathbf P-\mathcal D_{\Theta}(\mathbf P)$ is the DAE error. We solve the proposed joint recovery optimization using the alternating direction method of multipliers as follows: $$\begin{aligned} \label{jointsteps} \mathbf P_{n+1} &= \arg \min_{\mathbf P} \|\mathcal A (\mathbf P)-\widehat{\mathbf Y}\|^{2}+ \lambda~ \|\mathbf P-\mathbf Q_n\|^2\\ \mathbf Q_{n+1} &= \mathcal D_{\Theta}(\mathbf P_{n+1}).\end{aligned}$$ Experimental Setup ------------------ ### Dictionary generation To generate the dictionary $\mathbf Z$, we employ the DWI signal model in Eq. (\[model\]) and generate the diffusion signal $\rho(b, \mathbf g)$ for a range of model parameters. This model has 7 free parameters, all of which were varied within the physiological ranges to generate a dictionary that is a small subset of all possible diffusion signals. Specifically, we used the ranges $f_{i} \in [0,1]$ and $D \in [0.1,3]$ [@Novikov2019]. The fiber direction $\hat{ \bf{n}}$ was varied over 30 different unit vectors in 3D space, with crossing fibers simulated from linear combinations of these unit vectors. Since the reconstruction concerns the recovery of complex data, the generated signals were modulated with random phase terms, which counts as an additional parameter. ### DAE architecture and training The generated diffusion signals were corrupted with noise at levels of $0\%$, $20\%$, $40\%$, and $60\%$, and were used for training. The training data was fed to an autoencoder neural network. In this preliminary work, we employed an architecture with three fully connected layers, with ReLU activation functions. The dimension of the input layer was the dimension of the q-space. The bottleneck layer was constrained to one fourth of the input dimension. ### Testing data To test the joint reconstruction, we used synthesized brain MRI data. This ground truth data was generated as follows: a high-quality brain diffusion dataset was collected on a human volunteer using a variable density interleaved spiral acquisition with 22 spatial interleaves to achieve a high spatial resolution of 1.1 mm in-plane. The data was collected on a 3T MRI scanner with an 8-channel head coil. 60 DWIs were acquired using the fully sampled spiral acquisition, which were independently reconstructed using CG-NUFFT SENSE reconstruction.
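A minimal PyTorch sketch of such an autoencoder and its noise-augmented training loop is shown below; the layer widths, optimizer settings, and the random placeholder standing in for the simulated dictionary $\mathbf Z$ are assumptions for illustration, and the signals are kept real-valued for simplicity even though the actual training data are complex.

```python
import torch
import torch.nn as nn

Q = 60                      # q-space dimension; bottleneck is Q // 4

class DAE(nn.Module):
    """Three fully connected layers with ReLU activations and a Q//4 bottleneck."""
    def __init__(self, q=Q):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(q, q // 2), nn.ReLU(),
            nn.Linear(q // 2, q // 4), nn.ReLU(),
            nn.Linear(q // 4, q),
        )

    def forward(self, x):
        return self.net(x)

model = DAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
dictionary = torch.rand(4096, Q)           # placeholder for the simulated dictionary Z

for epoch in range(5):
    for sigma in (0.0, 0.2, 0.4, 0.6):     # noise levels of 0%, 20%, 40% and 60%
        noisy = dictionary + sigma * torch.randn_like(dictionary)
        loss = ((model(noisy) - dictionary) ** 2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```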
The fiber orientation distribution functions in each pixel of this data were estimated and stored. These fiber orientations were used to generate the synthetic brain data. Figure \[fig:fsim\] shows one DWI from this synthetic ground truth data, which displays crossing fibers in several voxels. The ground truth data was retrospectively under-sampled to generate the joint k-q under-sampled data for testing. Here, we assumed a Cartesian acquisition and the under-sampling was simulated using a multi-shot EPI scheme at different shot factors to study various acceleration factors. Acceleration factors of R = 4, 6, and 8 were considered. Random phase values were added to each of the shot images to simulate the phase errors of multi-shot imaging. Results {#sec:pagestyle} ======== The goal is to derive a regularization prior that can denoise the diffusion signal in the q-domain, which can be applied in a voxel-wise manner along the q-dimension during the joint reconstruction. Figure \[fig:denoise\] shows the successful learning of the q-space signal manifold by the DAE. The DAE was then used in the joint reconstruction in Eq. (\[joint\]) to recover all 60 DWIs simultaneously for various undersampling factors, following the alternating scheme discussed above. Figure \[fig:usfig\] shows the results of the proposed reconstruction for various acceleration factors. Here, the first row shows the 4-shot case, where only one shot per DWI was sampled; the shot was chosen randomly for each DWI. Similarly, the second and third rows show the 6-shot and 8-shot cases. In all cases, only one random shot per q-space point was sampled. The performance of the denoiser at the first iteration, as well as the DC updates at various stages of the reconstruction, is shown in Figure 4. The root-mean-square error (RMSE) and peak signal-to-noise ratio (PSNR) for various acceleration factors are reported in Table 1. It is clear from Figure \[fig:usfig\] and Table 1 that the proposed DAE regularizer is an efficient recovery prior for the reconstruction of highly under-sampled data.

  Acceleration    RMSE       PSNR
  -------------- ---------- ---------
  $R = 4$         $0.0176$   $35.04$
  $R = 6$         $0.0548$   $25.19$
  $R = 8$         $0.079$    $22.01$

  : Reconstruction error of the proposed scheme for various undersampling factors.\[tab:time\]

Discussion & Conclusion ======================= We introduced a model-based deep learning framework for the joint recovery of DWIs from joint k-q under-sampled data. In this preliminary work, we show the feasibility of employing a DAE to pre-learn the projection to the q-space signal manifold. The pre-learning was performed using simulated diffusion data from a general diffusion model with several degrees of freedom. We note that the accuracy of the DAE is determined by the training data; specifically, a wider range of simulation parameters will result in improved denoising. The need to account for multiple fiber orientations per voxel significantly inflates the parameter space. In the current study, we only considered 30 unique fiber directions, which may have contributed to reduced accuracy. In future work, we will explore a larger dictionary with more fiber directions. We also plan to extend this work to the recovery of multi-shell dMRI data.
{ "pile_set_name": "ArXiv" }
--- author: - 'D. Eckert[^1]' - 'S. Ettori' - 'E. Pointecouteau' - 'S. Molendi' - 'S. Paltani' - 'C. Tchernin' - 'The X-COP collaboration' bibliography: - 'eckert\_xmm.bib' title: 'The XMM Cluster Outskirts Project (X-COP)' --- Introduction ============ In the hierarchical structure formation paradigm, galaxy clusters are expected to form through the continuous merging and accretion of smaller structures [see @kravtsov12 for a review]. In the local Universe, such processes should be observable in the outer regions of massive clusters, where galaxies and galaxy groups are infalling for the first time and smooth material is continuously accreted from the surrounding cosmic web. The hot plasma in galaxy clusters is expected to be heated to high temperatures ($10^7-10^8$ K) through shocks and adiabatic compression at the boundary between the free-falling gas and the virialized intra-cluster medium [ICM, @tozzi00]. The thermodynamical properties of the gas retain information on the processes leading to the thermalization of the gas in the cluster’s potential well, which is encoded in the gas entropy $K=kTn_e^{-2/3}$. Gravitational collapse models predict that the entropy of stratified cluster atmospheres increases steadily with radius, following a power law with index $\sim1.1$ [@voit05; @borgani05; @sembolini15]. However, non-gravitational processes induce an additional injection of entropy and can therefore be traced through the departures from the theoretical predictions [@chaudhuri12]. Such departures have been observed for a long time in cluster cores, where gas cooling and feedback from supernovae and active galactic nuclei are important [e.g. @david96; @ponman99; @pratt10]. More recently, several works reported a deficit of entropy in massive clusters around the virial radius [see @reiprich13 for a review], which has been interpreted as a lack of thermalization of the ICM induced, e.g., by an incomplete virialization of the gas [e.g. @kawa; @bonamente13; @ichikawa13], non-equilibration between electrons and ions [@hoshino], non-equilibrium ionization [e.g. @fujita08], or weakening of the accretion shocks [@lapi]. However, these models have received little support from cosmological simulations so far [e.g. @va10kp; @lau15; @avestruz15]. The gas content of infalling dark-matter halos interacts with the ICM and is stripped from its parent halo through the influence of the ram pressure applied by the ICM of the main cluster. This process is expected to be the main mechanism through which the infalling gas is heated up and virialized into the main dark-matter halo [@gunn72; @vollmer01; @heinz03; @roediger15] and it is believed to be key to the evolution of the cluster galaxy population by quenching rapidly the star formation activity in clusters [@roediger08; @bahe15]. Recent observational evidence suggest that thermal conduction in the ICM is strongly inhibited [e.g. @gaspari13; @sanders14]. The long conduction timescale therefore delays the virialization of the stripped, low-entropy gas inside the potential well of the main cluster [@eckert14b; @degrandi16], which causes the ICM in the outer regions of massive clusters to be clumpy [@mathiesen; @nagai; @vazza12c]. Since the X-ray emissivity depends on the squared gas density, inhomogeneities in the gas distribution lead to an overestimation of the mean gas density [@nagai; @simionescu; @eckert15], which biases the measured entropy low. This effect needs to be taken into account when measuring the entropy associated with the bulk of the ICM. 
In addition, large-scale accretion patterns in the direction of the filaments of the cosmic web induce asymmetries in the gas distribution [e.g. @vazza11a; @e12; @roncarelli13]. Such filaments are expected to host the densest and hottest phase of the warm-hot intergalactic medium [e.g. @cen99; @dave01; @eckert15b], which are expected to account for most of the missing baryons in the local Universe. In this paper, we present the *XMM-Newton* cluster outskirts project (X-COP), a very large programme on *XMM-Newton* that aims at advancing significantly our knowledge of the physical conditions in the outer regions of galaxy clusters ($R>R_{500}$[^2]). X-COP targets a sample of 13 massive, nearby clusters selected on the basis of their high signal-to-noise ratio (SNR) in the *Planck* all-sky survey of Sunyaev-Zeldovich [SZ, @sz] sources [@planckpsz1; @planckpsz2]. In the recent years, the progress achieved in the sensitivity of SZ instruments allowed to extend the measurements of the pressure profile of galaxy clusters out to the virial radius and beyond [@planck5; @sayers13]. The high SNR in the *Planck* survey ensures a detection of the SZ effect from our targets well beyond $R_{500}$. X-COP provides a uniform 25 ks mapping of these clusters out to $R_{200}$ and beyond, with the aim of combining high-quality X-ray and SZ imaging throughout the entire volume of these systems. Sample selection ================ To implement the strategy presented above, we selected a list of the most suitable targets to conduct our study. The criteria used for the selection are the following: 1. **SNR $\mathbf{>12}$ in the PSZ1 catalog [@planckpsz1]:** This condition is necessary to target the most significant *Planck* detections and ensure that the SZ effect from all clusters be detected beyond $R_{500}$; 2. **Apparent size $\mathbf{\theta_{500}>10}$ arcmin:** Given the limited angular resolution of our reconstructed SZ maps ($\sim7$ arcmin), this condition ensures that all the clusters are well-resolved, such that the contamination of SZ flux from the core is low beyond $R_{500}$ ; 3. **Redshift in the range $\mathbf{0.04<z<0.1}$:** This criterion allows us to cover most of the azimuth out to $R_{200}$ with 5 *XMM-Newton* pointings (one central and four offset) whilst remaining resolved by *Planck*; 4. **Galactic $\mathbf{N_H<10^{21}}$ cm$\mathbf{^{-2}}$:** Since we are aiming at maximizing the sensitivity in the soft band, this condition makes sure that the soft X-ray signal is weakly absorbed. This selection yields a set of the 15 most suitable targets for our goals. We excluded three clusters (A2256, A754, and A3667) because of very complicated morphologies induced by violent merging events, which might hamper the analysis of the *Planck* data given the broad *Planck* beam. The remaining 12 clusters selected for our study are listed in Table \[tab:master\], together with their main properties. A uniform 25 ks mapping with *XMM-Newton* was performed for 10 of these systems in the framework of the X-COP very large programme (Proposal ID 074441, PI: Eckert), which was approved during *XMM-Newton* AO-13 for a total observing time of 1.2 Ms. The remaining 2 systems (A3266 and A2142) were mapped by *XMM-Newton* previously. Although the available observations of A3266 do not extend all the way out to $R_{200}$, they are still sufficient for some of our objectives and we include them in the present sample. Finally, we add Hydra A/A780 to the final sample. 
While the SZ signal from this less massive cluster is not strong enough to be detected beyond $R_{500}$, a deep, uniform *XMM-Newton* mapping exists for this system [see @degrandi16 for more details]. Our final sample therefore comprises 13 clusters in the mass range $2\times10^{14}<M_{500}<10^{15}M_\odot$ and X-ray temperature $3<kT<10$ keV. In Table \[tab:master\] we also provide the values of the central entropy $K_0$ from the ACCEPT catalog [@cavagnolo], which is an excellent indicator of a cluster’s dynamical state [@hudson10]. According to this indicator, five of our clusters are classified as relaxed, cool-core systems ($K_0<30$ keV cm$^2$), while the remaining eight systems are dynamically active, non-cool-core clusters.

  ----------------- -------- ---------------- ---------------------------- ------------------------ -------------------------- ----------------------- ----------- ---------------- -------------------- -----
  Name              $z$      SNR              $L_{X,500}$                  $kT_{\rm vir}$           $Y_{500}$                  $M_{500}$               $R_{500}$   $\theta_{500}$   $K_{0}$              Ref
                             *Planck*         \[$10^{44}$ergs s$^{-1}$\]   \[keV\]                  \[$10^{-3}$ arcmin$^2$\]   \[$10^{14} M_\odot$\]   \[kpc\]     \[arcmin\]       \[keV cm$^2$\]

  A2319             0.0557   49.0             $5.66\pm0.02$                $9.60_{-0.30}^{+0.30}$   43.17                      10.56                   1525        23.49            $270.23 \pm4.83$     2
  A3266$^{\star}$   0.0589   40.0             $3.35\pm0.01$                $9.45_{-0.36}^{+0.35}$   23.52                      10.30                   1510        22.09            $72.45 \pm 49.71$    1
  A2142$^\star$     0.090    28.4             $8.09\pm0.02$                $8.40_{-0.76}^{+1.01}$   18.54                      8.51                    1403        13.92            $68.06 \pm 2.48$     1
  A2255             0.0809   26.5             $2.08\pm0.02$                $5.81_{-0.20}^{+0.19}$   11.17                      4.94                    1172        12.80            $529.10 \pm 28.19$   1
  A2029             0.0766   23.2             $6.94\pm0.02$                $8.26_{-0.09}^{+0.09}$   12.66                      8.36                    1399        16.08            $10.50 \pm 0.67$     1
  A85               0.0555   22.8             $3.74\pm0.01$                $6.00_{-0.11}^{+0.11}$   16.97                      5.24                    1205        18.64            $12.50 \pm 0.53$     1
  A3158             0.059    19.8             $2.01\pm0.01$                $4.99_{-0.07}^{+0.07}$   10.62                      3.98                    1097        16.03            $166.01 \pm 11.74$   1
  A1795             0.0622   19.3             $4.43\pm0.01$                $6.08_{-0.07}^{+0.07}$   6.43                       5.33                    1209        16.82            $18.99 \pm 1.05$     1
  A644              0.0704   17.3             $3.40\pm0.01$                $7.70_{-0.10}^{+0.10}$   7.22                       7.55                    1356        16.82            $132.36 \pm 9.15$    3
  A1644             0.0473   16.1             $1.39\pm0.01$                $5.09_{-0.09}^{+0.09}$   13.96                      4.12                    1115        20.02            $19.03 \pm 1.16$     1
  RXC J1825         0.065    15.2             $1.38\pm0.01$                $5.13_{-0.04}^{+0.04}$   8.39                       4.13                    1109        14.81            $217.93\pm6.33$      4
  ZwCl 1215         0.0766   12.8$^\dagger$   $2.11\pm0.01$                $6.27_{-0.29}^{+0.32}$   -                          5.54                    1220        14.01            $163.23 \pm 35.62$   1
  A780$^\star$      0.0538   -$^{\ddagger}$   $2.25\pm0.01$                $3.45_{-0.09}^{+0.08}$   -                          2.75                    872         13.87            $13.31 \pm 0.66$     1
  ----------------- -------- ---------------- ---------------------------- ------------------------ -------------------------- ----------------------- ----------- ---------------- -------------------- -----

**Column description:** 1. Cluster name. The clusters identified with an asterisk were mapped prior to X-COP. Abbreviated names: RXC J1825.3+3026, ZwCl 1215.1+0400, A780/Hydra A; 2. Redshift (from NED); 3. Signal-to-noise ratio (SNR) in the *Planck* PSZ2 catalog [@planckpsz2]. $^\dagger$In PSZ1 [@planckpsz1], but not in PSZ2 as it falls into the PSZ2 point source mask (see Table E.4 in @planckpsz2). The SNR expected in PSZ2 from Eq. 6 and Table 3 in @planckpsz2 is about 16. $^{\ddagger}$Below both PSZ1 and PSZ2 detection threshold; 4. Luminosity in the \[0.5-2\] keV band (rest frame); 5. Virial temperature; 6. Integrated $Y$ parameter from the PSZ2 catalog; 7. Mass within an overdensity of 500, estimated using the $M-T$ relation of @arnaud05; 8. Corresponding value of $R_{500}$ (in kpc); 9. Apparent size of $R_{500}$ in arcmin; 10. Central entropy $K_0$, from @cavagnolo; 11.
Reference for the cluster temperature. 1: @hudson10; 2: @molendi99; 3: @cavagnolo ; 4: This work (in prep.) Abell 2142: a pilot study ========================= Abell 2142 [$z=0.09$, @owers11] is the first cluster for which the X-COP observing strategy was applied. In @tchernin16 we presented our analysis of this system out to the virial radius, highlighting the capabilities of X-COP. The results of this program are summarized here. In Fig. \[fig:xmm2142\] we show an adaptively smoothed, background subtracted *XMM-Newton* mosaic image of Abell 2142 in the \[0.7-1.2\] keV range, with *Planck* contours overlayed. *XMM-Newton* surface-brightness profile --------------------------------------- We developed a new technique to model the *XMM-Newton* background by calculating two-dimensional models for all the relevant background components: the non X-ray background (NXB), the quiescent soft protons (QSP), and the cosmic components. To validate our background subtraction technique, we analyzed a set of 21 blank fields totaling 1.3 Ms of data. The analysis of this large dataset yields a flat surface-brightness profile, with a scatter of 5% around the mean value. This analysis allows us to conclude that the background level can be recovered with a precision of 5% in the \[0.7-1.2\] keV band [see Appendix A and B of @tchernin16]. To measure the average surface-brightness profile free of the clumping effect, we applied the technique developed in @eckert15. Namely, in each annulus we computed the distribution of surface-brightness values by applying a Voronoi tessellation technique [@cappellari03] and estimated the median surface brightness from the resulting distribution. The median of the surface brightness distribution was found to be a robust estimator of the mean gas density [@zhuravleva13], unlike the mean of the distribution, which is biased high by the presence of accreting clumps. The ratio between the mean and the median can thus be used as an estimator of the clumping factor, $$C=\frac{\langle\rho^2\rangle}{\langle\rho\rangle^2},$$ where $\langle\cdot\rangle$ denotes the mean over radial shells [see @eckert15 for a validation of this technique using numerical simulations]. This technique allows us to excise all clumps down to the size of the Voronoi bins, which in the case of Abell 2142 corresponds to 20 kpc. In Fig. \[fig:sb\] [reproduced from @tchernin16] we show the mean and median surface-brightness profiles of Abell 2142. The median profile is clearly below the mean at large radii, which highlights the importance of clumping in cluster outskirts. A significant X-ray signal is measured out to 3 Mpc from the cluster core ($\sim2R_{500}$), beyond which the systematics dominate. Note the significant improvement over previous *XMM-Newton* studies, which were typically limited to the region inside $R_{500}$ [e.g. @lm08; @pratt07]. *Planck* SZ pressure profile ---------------------------- Abell 2142 is one of the strongest detections in the *Planck* survey, with an overall signal-to-noise ratio of 28.4 using the data from the full *Planck* mission (see Table \[tab:master\]. A significant SZ signal is observed as well out to 3 Mpc from the cluster core, and it can be readily transformed into a high-quality pressure profile. For the details of the *Planck* analysis procedure we refer to @planck5. In Fig. \[fig:pressure\] we show the *Planck* pressure profile obtained using two different deprojection methods [see @tchernin16]. 
The results are compared with the pressure profile measured from a spectral X-ray analysis. An excellent agreement between X-ray and SZ pressure profiles is found over the range of overlap. This confirms that X-ray and SZ techniques return a consistent picture of the gas properties in galaxy clusters and further validates the method that is put forward in X-COP. Joint X-ray/SZ analysis ----------------------- Once the gas density and pressure profiles are determined, the radial profiles of temperature $kT=P_{e}/n_{e}$ and entropy $K=P_{e}n_{e}^{-5/3}$ can be inferred. The gravitating mass profile can also be recovered by solving the hydrostatic equilibrium equation, $$\frac{dP}{dr}=-\rho\frac{GM(<r)}{r^2}.\label{eq:hydro}$$ By comparing the gravitating mass with the gas mass obtained by integrating the gas density profile, the profile of intracluster gas fraction can also be recovered. In Fig. \[fig:entropy\] we show the radial entropy profile of Abell 2142 obtained by combining X-ray and SZ data. Once again, the data are compared with the results of the spectroscopic X-ray analysis, which highlights the much broader radial range accessible by the joint X-ray/SZ technique. The observed entropy profiles are compared with the prediction of numerical simulations using gravitational collapse only [@voit05]. Interestingly, when combining the SZ data with the median (clumping corrected) gas density profile, the recovered entropy profile is consistent with the theoretical expectation within $1\sigma$, whereas if the mean (biased) density profile is used, at large radii the entropy falls significantly below the expectations. In the latter case, the behavior of the entropy profile is very similar to a number of recent *Suzaku* studies, which found a deficit of entropy beyond $R_{500}$. Our analysis thus highlights the importance of gas clumping when interpreting the results of *Suzaku* observations of cluster outskirts. In Fig. \[fig:mass\] we show the gravitating mass profile obtained by solving the hydrostatic equilibrium equation (Eq. \[eq:hydro\]). In this case, we find that the mass profiles calculated with the mean and median profiles are consistent. We measure $M_{200}=(1.41\pm0.03)\times10^{15}M_\odot$, in good agreement with the values calculated with weak gravitational lensing [$1.24_{-0.16}^{+0.18}\times10^{15}M_\odot$, @umetsu09] and galaxy kinematics [$1.31_{-0.23}^{+0.26}\times10^{15}M_\odot$, @munari14]. Thus, our data do not show any sign of hydrostatic bias even when extending our measurements out to $R_{200}$. Conclusion ========== In this paper, we presented an overview of the *XMM-Newton* cluster outskirts project (X-COP), a very large programme on *XMM-Newton* that aims at performing a deep X-ray and SZ mapping for a sample of 13 massive, nearby galaxy clusters. The clusters of the X-COP sample were selected on the basis of their strong SZ effect in *Planck* data. The combination of X-ray and SZ data over the entire volume of X-COP clusters will allow us to improve our knowledge of the intracluster gas out to $R_{200}$ and beyond, in order to reach the following goals: *i)* measure the radial distribution of the thermodynamic properties of the ICM; *ii)* estimate the global non-thermal energy budget in galaxy clusters; *iii)* detect infalling gas clumps to study the virialization of infalling halos within the potential well of the main structure. 
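As an illustrative aside (not part of the X-COP analysis itself), the joint X-ray/SZ reconstruction described above, namely the temperature $kT=P_{e}/n_{e}$, the entropy $K=P_{e}n_{e}^{-5/3}$, and the hydrostatic mass of Eq. \[eq:hydro\], can be sketched numerically as follows. The power-law profile shapes, normalizations, and gas composition in this sketch are assumed placeholders chosen only to make it runnable; they are not the measured Abell 2142 profiles.

```python
import numpy as np

# Minimal sketch of the joint X-ray/SZ reconstruction (illustrative only).
G = 6.674e-8                      # gravitational constant [cm^3 g^-1 s^-2]
kpc = 3.086e21                    # [cm]
mu, m_p = 0.6, 1.673e-24          # assumed mean molecular weight; proton mass [g]

r = np.logspace(np.log10(100), np.log10(3000), 200) * kpc     # 100 kpc to 3 Mpc
n_e = 1.0e-3 * (r / (100 * kpc)) ** -2.0    # electron density [cm^-3] (assumed shape)
P_e = 1.3e-11 * (r / (100 * kpc)) ** -2.2   # electron pressure [erg cm^-3] (assumed shape)

kT = P_e / n_e                    # temperature, kT = P_e / n_e        [erg]
K = P_e * n_e ** (-5.0 / 3.0)     # entropy,     K = P_e n_e^(-5/3)    [erg cm^2]

# Hydrostatic equilibrium: dP/dr = -rho G M(<r) / r^2, with total pressure
# P ~ 1.93 P_e and rho = mu m_p n_tot ~ 1.93 mu m_p n_e for an ionized plasma.
P_tot = 1.93 * P_e
rho = 1.93 * mu * m_p * n_e
M_hse = -np.gradient(P_tot, r) * r ** 2 / (G * rho)           # gravitating mass [g]

print(f"kT(3 Mpc) ~ {kT[-1] / 1.602e-9:.1f} keV, "
      f"M_HSE(<3 Mpc) ~ {M_hse[-1] / 1.989e33:.1e} Msun (illustrative)")
```

In the actual analysis the assumed power laws would be replaced by the clumping-corrected X-ray density profile and the *Planck* pressure profile, with measurement uncertainties propagated through the deprojection.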
We presented a pilot study on the galaxy cluster Abell 2142 [@tchernin16], demonstrating the full potential of X-COP for the study of cluster outskirts. The cluster is detected out to $2\times R_{500}\sim R_{100}$ in both X-ray and SZ. The two techniques provide a remarkably consistent picture of the gas properties, and they can be combined to recover the thermodynamic properties of the gas and the gravitating mass profile out to the cluster's boundary. Our results highlight the importance of taking the effect of gas clumping into account when measuring the properties of the gas at large radii, where accretion from smaller structures is important. In the near future, X-COP will deliver results of similar quality for a sizable sample of a dozen clusters, allowing us to determine universal profiles of the thermodynamic quantities and gas fraction out to the virial radius.

Based on observations obtained with *XMM-Newton*, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA. The development of *Planck* has been supported by: ESA; CNES and CNRS/INSU-IN2P3-INP (France); ASI, CNR, and INAF (Italy); NASA and DoE (USA); STFC and UKSA (UK); CSIC, MICINN, JA and RES (Spain); Tekes, AoF and CSC (Finland); DLR and MPG (Germany); CSA (Canada); DTU Space (Denmark); SER/SSO (Switzerland); RCN (Norway); SFI (Ireland); FCT/MCTES (Portugal); and PRACE (EU).

[^1]: Corresponding author:

[^2]: For a given overdensity $\Delta$, $R_\Delta$ is the radius for which $M_\Delta/(4/3\pi R_\Delta^3)=\Delta\rho_c$
--- abstract: 'Reverses of Schwarz, triangle and Bessel inequalities in inner product spaces that improve some earlier results are pointed out. They are applied to obtain new Grüss type inequalities in inner product spaces. Some natural applications for integral inequalities are also pointed out.' address: | School of Computer Science and Mathematics\ Victoria University of Technology\ PO Box 14428, MCMC 8001\ Victoria, Australia. author: - 'S.S. Dragomir' date: 'August 04, 2003.' title: 'Reverses of Schwarz, Triangle and Bessel Inequalities in Inner Product Spaces' --- Introduction\[s1\] ================== Let $\left( H;\left\langle \cdot ,\cdot \right\rangle \right) $ be an inner product over the real or complex number field $\mathbb{K}$. The following inequality is known in the literature as *Schwarz’s inequality*:$$\left\vert \left\langle x,y\right\rangle \right\vert ^{2}\leq \left\Vert x\right\Vert ^{2}\left\Vert y\right\Vert ^{2},\ \ \ \ x,y\in H; \label{1.1}$$where $\left\Vert z\right\Vert ^{2}=\left\langle z,z\right\rangle ,$ $z\in H. $ The equality occurs in (\[1.1\]) if and only if $x$ and $y$ are linearly dependent. In [@SSD1], the following *reverse* of Schwarz’s inequality has been obtained:$$0\leq \left\Vert x\right\Vert ^{2}\left\Vert y\right\Vert ^{2}-\left\vert \left\langle x,y\right\rangle \right\vert ^{2}\leq \frac{1}{4}\left\vert A-a\right\vert ^{2}\left\Vert y\right\Vert ^{4}, \label{1.2}$$provided $x,y\in H$ and $a,A\in \mathbb{K}$ are so that either$$\func{Re}\left\langle Ay-x,x-ay\right\rangle \geq 0, \label{1.3}$$or, equivalently,$$\left\Vert x-\frac{a+A}{2}\cdot y\right\Vert \leq \frac{1}{2}\left\vert A-a\right\vert \left\Vert y\right\Vert , \label{1.4}$$holds. The constant $\frac{1}{4}$ is best possible in (\[1.2\]) in the sense that it cannot be replaced by a smaller constant. If $x,y,A,a$ satisfy either (\[1.3\]) or (\[1.4\]), then the following reverse of Schwarz’s inequality also holds [@SSD2]$$\begin{aligned} \left\Vert x\right\Vert \left\Vert y\right\Vert & \leq \frac{1}{2}\cdot \frac{\func{Re}\left[ A\overline{\left\langle x,y\right\rangle }+\overline{a}% \left\langle x,y\right\rangle \right] }{\left[ \func{Re}\left( \overline{a}% A\right) \right] ^{\frac{1}{2}}} \label{1.5} \\ & \leq \frac{1}{2}\cdot \frac{\left\vert A\right\vert +\left\vert a\right\vert }{\left[ \func{Re}\left( \overline{a}A\right) \right] ^{\frac{1% }{2}}}\left\vert \left\langle x,y\right\rangle \right\vert , \notag\end{aligned}$$provided that, the complex numbers $a$ and $A$ satisfy the condition $\func{% Re}\left( \overline{a}A\right) >0.$ In both inequalities in (\[1.5\]), the constant $\frac{1}{2}$ is best possible. An additive version of (\[1.5\]) may be stated as well (see also [SSD3]{})$$0\leq \left\Vert x\right\Vert ^{2}\left\Vert y\right\Vert ^{2}-\left\vert \left\langle x,y\right\rangle \right\vert ^{2}\leq \frac{1}{4}\cdot \frac{% \left( \left\vert A\right\vert -\left\vert a\right\vert \right) ^{2}+4\left[ \left\vert Aa\right\vert -\func{Re}\left( \overline{a}A\right) \right] }{% \func{Re}\left( \overline{a}A\right) }\left\vert \left\langle x,y\right\rangle \right\vert ^{2}. \label{1.6}$$In this inequality, $\frac{1}{4}$ is the best possible constant. 
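As an informal numerical illustration (added for the reader and not part of the original argument), the reverse of Schwarz's inequality (\[1.2\]) under condition (\[1.4\]) can be probed by random sampling in $\mathbb{R}^n$. The finite-dimensional setting, the Gaussian sampling, and the function name below are assumptions made purely for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def check_reverse_schwarz(n=5, trials=20000):
    """Probe (1.2) under condition (1.4) in R^n; returns max of LHS - RHS."""
    worst = -np.inf
    for _ in range(trials):
        y = rng.normal(size=n)
        a, A = np.sort(rng.normal(size=2))
        u = rng.normal(size=n)
        # scale u so that ||x - (a+A)/2 y|| <= |A-a|/2 ||y||, i.e. condition (1.4)
        u *= rng.uniform() * 0.5 * abs(A - a) * np.linalg.norm(y) / np.linalg.norm(u)
        x = 0.5 * (a + A) * y + u
        gap = np.dot(x, x) * np.dot(y, y) - np.dot(x, y) ** 2
        bound = 0.25 * (A - a) ** 2 * np.dot(y, y) ** 2
        worst = max(worst, gap - bound)
    return worst

print(check_reverse_schwarz())   # expected to be <= 0 (up to rounding)
```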
It has been proven in [@SSD4], that$$0\leq \left\Vert x\right\Vert ^{2}-\left\vert \left\langle x,y\right\rangle \right\vert ^{2}\leq \frac{1}{4}\left\vert \phi -\varphi \right\vert ^{2}-\left\vert \frac{\phi +\varphi }{2}-\left\langle x,e\right\rangle \right\vert ^{2}; \label{1.7}$$provided, either $$\func{Re}\left\langle \phi e-x,x-\varphi e\right\rangle \geq 0, \label{1.8}$$or, equivalently,$$\left\Vert x-\frac{\phi +\varphi }{2}e\right\Vert \leq \frac{1}{2}\left\vert \phi -\varphi \right\vert , \label{1.9}$$where $e=H,$ $\left\Vert e\right\Vert =1.$ The constant $\frac{1}{4}$ in [1.7]{} is also best possible. If we choose $e=\frac{y}{\left\Vert y\right\Vert },$ $\phi =\Gamma \left\Vert y\right\Vert ,$ $\varphi =\gamma \left\Vert y\right\Vert $ $% \left( y\neq 0\right) ,$ $\Gamma ,\gamma \in \mathbb{K}$, then by (\[1.8\]), (\[1.9\]) we have,$$\func{Re}\left\langle \Gamma y-x,x-\gamma y\right\rangle \geq 0, \label{1.10}$$or, equivalently,$$\left\Vert x-\frac{\Gamma +\gamma }{2}y\right\Vert \leq \frac{1}{2}% \left\vert \Gamma -\gamma \right\vert \left\Vert y\right\Vert , \label{1.11}$$imply the following reverse of Schwarz’s inequality:$$0\leq \left\Vert x\right\Vert ^{2}\left\Vert y\right\Vert ^{2}-\left\vert \left\langle x,y\right\rangle \right\vert ^{2}\leq \frac{1}{4}\left\vert \Gamma -\gamma \right\vert ^{2}\left\Vert y\right\Vert ^{4}-\left\vert \frac{% \Gamma +\gamma }{2}\left\Vert y\right\Vert ^{2}-\left\langle x,y\right\rangle \right\vert ^{2}. \label{1.12}$$The constant $\frac{1}{4}$ in (\[1.12\]) is sharp. Note that this inequality is an improvement of (\[1.2\]), but it might not be very convenient for applications. Now, let $\left\{ e_{i}\right\} _{i\in I}$ be a finite or infinite family of orthornormal vectors in the inner product space $\left( H;\left\langle \cdot ,\cdot \right\rangle \right) ,$ i.e., we recall that $$\left\langle e_{i},e_{j}\right\rangle =\left\{ \begin{array}{ll} 0 & \text{if \ }i\neq j \\ & \\ 1 & \text{if \ }i=j% \end{array}% \right. ,\ \ \ i,j\in I.$$In [@SSD5], we proved that if $\left\{ e_{i}\right\} _{i\in I}$ is as above, $F\subset I$ is a finite part of $I$ such that either$$\func{Re}\left\langle \sum_{i\in F}\phi _{i}e_{i}-x,x-\sum_{i\in F}\varphi _{i}e_{i}\right\rangle \geq 0, \label{1.13}$$or, equivalently,$$\left\Vert x-\sum_{i\in F}\frac{\phi _{i}+\varphi _{i}}{2}e_{i}\right\Vert \leq \frac{1}{2}\left( \sum_{i\in F}\left\vert \phi _{i}-\varphi _{i}\right\vert ^{2}\right) ^{\frac{1}{2}}, \label{1.14}$$holds, where $\left( \phi _{i}\right) _{i\in I},$ $\left( \varphi _{i}\right) _{i\in I}$ are real or complex numbers, then we have the following reverse of *Bessel’s inequality:*$$\begin{aligned} 0& \leq \left\Vert x\right\Vert ^{2}-\sum_{i\in F}\left\vert \left\langle x,e_{i}\right\rangle \right\vert ^{2} \label{1.15} \\ & \leq \frac{1}{4}\cdot \sum_{i\in F}\left\vert \phi _{i}-\varphi _{i}\right\vert ^{2}-\func{Re}\left\langle \sum_{i\in F}\phi _{i}e_{i}-x,x-\sum_{i\in F}\varphi _{i}e_{i}\right\rangle \notag \\ & \leq \frac{1}{4}\cdot \sum_{i\in F}\left\vert \phi _{i}-\varphi _{i}\right\vert ^{2}. \notag\end{aligned}$$The constant $\frac{1}{4}$ in both inequalities is sharp. This result improves an earlier result by N. Ujević obtained only for real spaces [@NU]. 
In [@SSD4], by the use of a different technique, another reverse of Bessel’s inequality has been proven, namely:$$\begin{aligned} 0& \leq \left\Vert x\right\Vert ^{2}-\sum_{i\in F}\left\vert \left\langle x,e_{i}\right\rangle \right\vert ^{2} \label{1.16} \\ & \leq \frac{1}{4}\cdot \sum_{i\in F}\left\vert \phi _{i}-\varphi _{i}\right\vert ^{2}-\sum_{i\in F}\left\vert \frac{\phi _{i}+\varphi _{i}}{2}% -\left\langle x,e_{i}\right\rangle \right\vert ^{2} \notag \\ & \leq \frac{1}{4}\cdot \sum_{i\in F}\left\vert \phi _{i}-\varphi _{i}\right\vert ^{2}, \notag\end{aligned}$$provided that $\left( e_{i}\right) _{i\in I},$ $\left( \phi _{i}\right) _{i\in I},$ $\left( \varphi _{i}\right) _{i\in I},$ $x$ and $F$ are as above. Here the constant $\frac{1}{4}$ is sharp in both inequalities. It has also been shown that the bounds provided by (\[1.15\]) and ([1.16]{}) for the Bessel’s difference $\left\Vert x\right\Vert ^{2}-\sum_{i\in F}\left\vert \left\langle x,e_{i}\right\rangle \right\vert ^{2}$ cannot be compared in general, meaning that there are examples for which one is smaller than the other [@SSD4]. Finally, we recall another type of reverse for Bessel inequality that has been obtained in [@SSD6]:$$\left\Vert x\right\Vert ^{2}\leq \frac{1}{4}\cdot \frac{\sum_{i\in F}\left( \left\vert \phi _{i}\right\vert +\left\vert \varphi _{i}\right\vert \right) ^{2}}{\sum_{i\in F}\func{Re}\left( \phi _{i}\overline{\varphi _{i}}\right) }% \sum_{i\in F}\left\vert \left\langle x,e_{i}\right\rangle \right\vert ^{2}; \label{1.17}$$provided $\left( \phi _{i}\right) _{i\in I},$ $\left( \varphi _{i}\right) _{i\in I}$ satisfy (\[1.13\]) (or, equivalently (\[1.14\])) and $% \sum_{i\in F}\func{Re}\left( \phi _{i}\overline{\varphi _{i}}\right) >0.$ Here the constant $\frac{1}{4}$ is also best possible. An additive version of (\[1.17\]) is $$\begin{aligned} 0& \leq \left\Vert x\right\Vert ^{2}-\sum_{i\in F}\left\vert \left\langle x,e_{i}\right\rangle \right\vert ^{2} \label{1.18} \\ & \leq \frac{1}{4}\cdot \frac{\sum_{i\in F}\left\{ \left( \left\vert \phi _{i}\right\vert -\left\vert \varphi _{i}\right\vert \right) ^{2}+4\left[ \left\vert \phi _{i}\varphi _{i}\right\vert -\func{Re}\left( \phi _{i}% \overline{\varphi _{i}}\right) \right] \right\} }{\sum_{i\in F}\func{Re}% \left( \phi _{i}\overline{\varphi _{i}}\right) }. \notag\end{aligned}$$The constant $\frac{1}{4}$ is best possible. It is the main aim of the present paper to point out new reverse inequalities to Schwarz’s, triangle and Bessel’s inequalities. Some results related to Grüss’ inequality in inner product spaces are also pointed out. Natural applications for integrals are also provided. Some Reverses of Schwarz’s Inequality\[s2\] =========================================== The following result holds. \[t2.1\]Let $\left( H;\left\langle \cdot ,\cdot \right\rangle \right) $ be an inner product space over the real or complex number field $\mathbb{K}$ $\left( \mathbb{K}=\mathbb{R},\ \mathbb{K}=\mathbb{C}\right) $ and $x,a\in H, $ $r>0$ are such that$$x\in \overline{B}\left( a,r\right) :=\left\{ z\in H|\left\Vert z-a\right\Vert \leq r\right\} . \label{2.1}$$ 1. If $\left\Vert a\right\Vert >r,$ then we have the inequality$$0\leq \left\Vert x\right\Vert ^{2}\left\Vert a\right\Vert ^{2}-\left\vert \left\langle x,a\right\rangle \right\vert ^{2}\leq \left\Vert x\right\Vert ^{2}\left\Vert a\right\Vert ^{2}-\left[ \func{Re}\left\langle x,a\right\rangle \right] ^{2}\leq r^{2}\left\Vert x\right\Vert ^{2}. 
\label{2.2}$$The constant $C=1$ in front of $r^{2}$ is best possible in the sense that it cannot be replaced by a smaller one. 2. If $\left\Vert a\right\Vert =r,$ then$$\left\Vert x\right\Vert ^{2}\leq 2\func{Re}\left\langle x,a\right\rangle \leq 2\left\vert \left\langle x,a\right\rangle \right\vert . \label{2.3}$$The constant $2$ is best possible in both inequalities. 3. If $\left\Vert a\right\Vert <r,$ then$$\left\Vert x\right\Vert ^{2}\leq r^{2}-\left\Vert a\right\Vert ^{2}+2\func{Re% }\left\langle x,a\right\rangle \leq r^{2}-\left\Vert a\right\Vert ^{2}+2\left\vert \left\langle x,a\right\rangle \right\vert . \label{2.4}$$Here the constant $2$ is also best possible. Since $x\in \overline{B}\left( a,r\right) ,$ then obviously $\left\Vert x-a\right\Vert ^{2}\leq r^{2},$ which is equivalent to $$\left\Vert x\right\Vert ^{2}+\left\Vert a\right\Vert ^{2}-r^{2}\leq 2\func{Re% }\left\langle x,a\right\rangle . \label{2.5}$$ 1. If $\left\Vert a\right\Vert >r,$ then we may divide (\[2.5\]) by $\sqrt{\left\Vert a\right\Vert ^{2}-r^{2}}>0$ getting $$\frac{\left\Vert x\right\Vert ^{2}}{\sqrt{\left\Vert a\right\Vert ^{2}-r^{2}}% }+\sqrt{\left\Vert a\right\Vert ^{2}-r^{2}}\leq \frac{2\func{Re}\left\langle x,a\right\rangle }{\sqrt{\left\Vert a\right\Vert ^{2}-r^{2}}}. \label{2.6}$$Using the elementary inequality$$\alpha p+\frac{1}{\alpha }q\geq 2\sqrt{pq},\ \ \ \alpha >0,\ \ p,q\geq 0,$$we may state that$$2\left\Vert x\right\Vert \leq \frac{\left\Vert x\right\Vert ^{2}}{\sqrt{% \left\Vert a\right\Vert ^{2}-r^{2}}}+\sqrt{\left\Vert a\right\Vert ^{2}-r^{2}% }. \label{2.7}$$Making use of (\[2.6\]) and (\[2.7\]), we deduce$$\left\Vert x\right\Vert \sqrt{\left\Vert a\right\Vert ^{2}-r^{2}}\leq \func{% Re}\left\langle x,a\right\rangle . \label{2.8}$$Taking the square in (\[2.8\]) and re-arranging the terms, we deduce the third inequality in (\[2.2\]). The others are obvious. To prove the sharpness of the constant, assume, under the hypothesis of the theorem, that, there exists a constant $c>0$ such that$$\left\Vert x\right\Vert ^{2}\left\Vert a\right\Vert ^{2}-\left[ \func{Re}% \left\langle x,a\right\rangle \right] ^{2}\leq cr^{2}\left\Vert x\right\Vert ^{2}, \label{2.9}$$provided $x\in \overline{B}\left( a,r\right) $ and $\left\Vert a\right\Vert >r.$ Let $r=\sqrt{\varepsilon }>0,$ $\varepsilon \in \left( 0,1\right) ,$ $a,e\in H$ with $\left\Vert a\right\Vert =\left\Vert e\right\Vert =1$ and $a\perp e.$ Put $x=a+\sqrt{\varepsilon }e.$ Then obviously $x\in \overline{B}\left( a,r\right) ,$ $\left\Vert a\right\Vert >r$ and $\left\Vert x\right\Vert ^{2}=\left\Vert a\right\Vert ^{2}+\varepsilon \left\Vert e\right\Vert ^{2}=1+\varepsilon $, $\func{Re}\left\langle x,a\right\rangle =\left\Vert a\right\Vert ^{2}=1,$ and thus $\left\Vert x\right\Vert ^{2}\left\Vert a\right\Vert ^{2}-\left[ \func{Re}\left\langle x,a\right\rangle \right] ^{2}=\varepsilon .$ Using (\[2.9\]), we may write that$$\varepsilon \leq c\varepsilon \left( 1+\varepsilon \right) ,\ \ \varepsilon >0$$giving $$c+c\varepsilon \geq 1\text{ \ for any }\varepsilon >0 \label{2.10}$$Letting $\varepsilon \rightarrow 0+,$ we get from (\[2.10\]) that $c\geq 1, $ and the sharpness of the constant is proved. 2. The inequality (\[2.3\]) is obvious by (\[2.5\]) since $% \left\Vert a\right\Vert =r.$ The best constant follows in a similar way to the above. 3. The inequality (\[2.3\]) is obvious. The best constant may be proved in a similar way to the above. We omit the details. The following reverse of Schwarz’s inequality holds. 
\[t2.2\]Let $\left( H;\left\langle \cdot ,\cdot \right\rangle \right) $ be an inner product space over $\mathbb{K}$ and $x,y\in H,$ $\gamma ,\Gamma \in \mathbb{K}$ such that either$$\func{Re}\left\langle \Gamma y-x,x-\gamma y\right\rangle \geq 0, \label{2.11}$$or, equivalently,$$\left\Vert x-\frac{\Gamma +\gamma }{2}y\right\Vert \leq \frac{1}{2}% \left\vert \Gamma -\gamma \right\vert \left\Vert y\right\Vert , \label{2.12}$$holds. 1. If $\func{Re}\left( \Gamma \overline{\gamma }\right) >0,$ then we have the inequalities$$\begin{aligned} \left\Vert x\right\Vert ^{2}\left\Vert y\right\Vert ^{2}& \leq \frac{1}{4}% \cdot \frac{\left\{ \func{Re}\left[ \left( \overline{\Gamma }+\overline{% \gamma }\right) \left\langle x,y\right\rangle \right] \right\} ^{2}}{\func{Re% }\left( \Gamma \overline{\gamma }\right) } \label{2.13} \\ & \leq \frac{1}{4}\cdot \frac{\left\vert \Gamma +\gamma \right\vert ^{2}}{% \func{Re}\left( \Gamma \overline{\gamma }\right) }\left\vert \left\langle x,y\right\rangle \right\vert ^{2}. \notag\end{aligned}$$The constant $\frac{1}{4}$ is best possible in both inequalities. 2. If $\func{Re}\left( \Gamma \overline{\gamma }\right) =0,$ then $$\left\Vert x\right\Vert ^{2}\leq \func{Re}\left[ \left( \overline{\Gamma }+% \overline{\gamma }\right) \left\langle x,y\right\rangle \right] \leq \left\vert \Gamma +\gamma \right\vert \left\vert \left\langle x,y\right\rangle \right\vert . \label{2.14}$$ 3. If $\func{Re}\left( \Gamma \overline{\gamma }\right) <0,$ then $$\begin{aligned} \left\Vert x\right\Vert ^{2}& \leq -\func{Re}\left( \Gamma \overline{\gamma }% \right) \left\Vert y\right\Vert ^{2}+\func{Re}\left[ \left( \overline{\Gamma }+\overline{\gamma }\right) \left\langle x,y\right\rangle \right] \label{2.15} \\ & \leq -\func{Re}\left( \Gamma \overline{\gamma }\right) \left\Vert y\right\Vert ^{2}+\left\vert \Gamma +\gamma \right\vert \left\vert \left\langle x,y\right\rangle \right\vert . \notag\end{aligned}$$ The proof of the equivalence between the inequalities (\[2.11\]) and ([2.12]{}) follows by the fact that in an inner product space $\func{Re}% \left\langle Z-x,x-z\right\rangle \geq 0$ for $x,z,Z\in H$ is equivalent with $\left\Vert x-\frac{z+Z}{2}\right\Vert \leq \frac{1}{2}\left\Vert Z-z\right\Vert $ (see for example [@SSD3]). Consider, for $y\neq 0,$ $a=\frac{\gamma +\Gamma }{2}y$ and $r=\frac{1}{2}% \left\vert \Gamma -\gamma \right\vert \left\Vert y\right\Vert ^{2}.$ Then$$\left\Vert a\right\Vert ^{2}-r^{2}=\frac{\left\vert \Gamma +\gamma \right\vert ^{2}-\left\vert \Gamma -\gamma \right\vert ^{2}}{4}\left\Vert y\right\Vert ^{2}=\func{Re}\left( \Gamma \overline{\gamma }\right) \left\Vert y\right\Vert ^{2}.$$ 1. 
If $\func{Re}\left( \Gamma \overline{\gamma }\right) >0,$ then the hypothesis of (i) in Theorem \[t2.1\] is satisfied, and by the second inequality in (\[2.2\]) we have$$\left\Vert x\right\Vert ^{2}\frac{\left\vert \Gamma +\gamma \right\vert ^{2}% }{4}\left\Vert y\right\Vert ^{2}-\frac{1}{4}\left\{ \func{Re}\left[ \left( \overline{\Gamma }+\overline{\gamma }\right) \left\langle x,y\right\rangle % \right] \right\} ^{2}\leq \frac{1}{4}\left\vert \Gamma -\gamma \right\vert ^{2}\left\Vert x\right\Vert ^{2}\left\Vert y\right\Vert ^{2}$$from where we derive$$\frac{\left\vert \Gamma +\gamma \right\vert ^{2}-\left\vert \Gamma -\gamma \right\vert ^{2}}{4}\left\Vert x\right\Vert ^{2}\left\Vert y\right\Vert ^{2}\leq \frac{1}{4}\left\{ \func{Re}\left[ \left( \overline{\Gamma }+% \overline{\gamma }\right) \left\langle x,y\right\rangle \right] \right\} ^{2},$$giving the first inequality in (\[2.13\]). The second inequality is obvious. To prove the sharpness of the constant $\frac{1}{4},$ assume that the first inequality in (\[2.13\]) holds with a constant $c>0,$ i.e., $$\left\Vert x\right\Vert ^{2}\left\Vert y\right\Vert ^{2}\leq c\cdot \frac{% \left\{ \func{Re}\left[ \left( \overline{\Gamma }+\overline{\gamma }\right) \left\langle x,y\right\rangle \right] \right\} ^{2}}{\func{Re}\left( \Gamma \overline{\gamma }\right) }, \label{2.16}$$provided $\func{Re}\left( \Gamma \overline{\gamma }\right) >0$ and either (\[2.11\]) or (\[2.12\]) holds. Assume that $\Gamma ,\gamma >0,$ and let $x=\gamma y.$ Then (\[2.11\]) holds and by (\[2.16\]) we deduce$$\gamma ^{2}\left\Vert y\right\Vert ^{4}\leq c\cdot \frac{\left( \Gamma +\gamma \right) ^{2}\gamma ^{2}\left\Vert y\right\Vert ^{4}}{\Gamma \gamma }$$giving$$\Gamma \gamma \leq c\left( \Gamma +\gamma \right) ^{2}\text{ \ for any \ }% \Gamma ,\gamma >0. \label{2.17}$$Let $\varepsilon \in \left( 0,1\right) $ and choose in (\[2.17\]), $\Gamma =1+\varepsilon ,$ $\gamma =1-\varepsilon >0$ to get $1-\varepsilon ^{2}\leq 4c$ for any $\varepsilon \in \left( 0,1\right) .$ Letting $\varepsilon \rightarrow 0+,$ we deduce $c\geq \frac{1}{4},$ and the sharpness of the constant is proved. \(ii) and (iii) are obvious and we omit the details. We observe that the second bound in (\[2.13\]) for $\left\Vert x\right\Vert ^{2}\left\Vert y\right\Vert ^{2}$ is better than the second bound provided by (\[1.5\]). The following corollary provides a reverse inequality for the additive version of Schwarz’s inequality. \[c2.3\]With the assumptions of Theorem \[t2.2\] and if $\func{Re}% \left( \Gamma \overline{\gamma }\right) >0,$ then we have the inequality:$$0\leq \left\Vert x\right\Vert ^{2}\left\Vert y\right\Vert ^{2}-\left\vert \left\langle x,y\right\rangle \right\vert ^{2}\leq \frac{1}{4}\cdot \frac{% \left\vert \Gamma -\gamma \right\vert ^{2}}{\func{Re}\left( \Gamma \overline{% \gamma }\right) }\left\vert \left\langle x,y\right\rangle \right\vert ^{2}. \label{2.18}$$The constant $\frac{1}{4}$ is best possible in (\[2.18\]). The proof is obvious from (\[2.13\]) on subtracting in both sides the same quantity $\left\vert \left\langle x,y\right\rangle \right\vert ^{2}.$ The sharpness of the constant may be proven in a similar manner to the one incorporated in the proof of (i), Theorem \[t2.2\]. We omit the details. It is obvious that the inequality (\[2.18\]) is better than (\[1.6\]) obtained in [@SSD3]. For some recent results in connection to Schwarz’s inequality, see [@ADR], [@DM] and [@GH]. 
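The same kind of informal numerical check can be run for Corollary \[c2.3\]: the sketch below samples complex vectors satisfying (\[2.12\]) with $\func{Re}\left( \Gamma \overline{\gamma }\right) >0$ and verifies that the gap in Schwarz's inequality never exceeds the bound in (\[2.18\]). This is only a sanity check under illustrative assumptions, not a substitute for the proof.

```python
import numpy as np

rng = np.random.default_rng(1)

def check_corollary_2_3(n=4, trials=20000):
    """Probe (2.18) under condition (2.12) in C^n; returns max of LHS - RHS."""
    worst = -np.inf
    for _ in range(trials):
        y = rng.normal(size=n) + 1j * rng.normal(size=n)
        Gamma = complex(rng.normal(), rng.normal())
        gamma = complex(rng.normal(), rng.normal())
        if (Gamma * gamma.conjugate()).real <= 0:
            continue                      # the corollary assumes Re(Gamma conj(gamma)) > 0
        u = rng.normal(size=n) + 1j * rng.normal(size=n)
        u *= rng.uniform() * 0.5 * abs(Gamma - gamma) * np.linalg.norm(y) / np.linalg.norm(u)
        x = 0.5 * (Gamma + gamma) * y + u    # satisfies (2.12), hence (2.11)
        s2 = abs(np.vdot(y, x)) ** 2         # |<x,y>|^2 with <x,y> = sum x_i conj(y_i)
        gap = np.linalg.norm(x) ** 2 * np.linalg.norm(y) ** 2 - s2
        bound = 0.25 * abs(Gamma - gamma) ** 2 / (Gamma * gamma.conjugate()).real * s2
        worst = max(worst, gap - bound)
    return worst

print(check_corollary_2_3())   # expected to be <= 0 (up to rounding)
```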
Reverses of the Triangle Inequality\[s3\] ========================================= The following reverse of the triangle inequality holds. \[p2.4\]Let $\left( H;\left\langle \cdot ,\cdot \right\rangle \right) $ be an inner product space over the real or complex number field $\mathbb{K}$ $\left( \mathbb{K}=\mathbb{R},\mathbb{C}\right) $ and $x,a\in H,$ $r>0$ are such that $$\left\Vert x-a\right\Vert \leq r<\left\Vert a\right\Vert . \label{2.19}$$Then we have the inequality$$0\leq \left\Vert x\right\Vert +\left\Vert a\right\Vert -\left\Vert x+a\right\Vert \leq \sqrt{2}r\cdot \sqrt{\frac{\func{Re}\left\langle x,a\right\rangle }{\sqrt{\left\Vert a\right\Vert ^{2}-r^{2}}\left( \sqrt{% \left\Vert a\right\Vert ^{2}-r^{2}}+\left\Vert a\right\Vert \right) }}. \label{2.20}$$ Using the inequality (\[2.8\]), we may write that$$\left\Vert x\right\Vert \left\Vert a\right\Vert \leq \frac{\left\Vert a\right\Vert \func{Re}\left\langle x,a\right\rangle }{\sqrt{\left\Vert a\right\Vert ^{2}-r^{2}}},$$giving$$\begin{aligned} 0& \leq \left\Vert x\right\Vert \left\Vert a\right\Vert -\func{Re}% \left\langle x,a\right\rangle \label{2.21} \\ & \leq \func{Re}\left\langle x,a\right\rangle \frac{\left\Vert a\right\Vert -% \sqrt{\left\Vert a\right\Vert ^{2}-r^{2}}}{\sqrt{\left\Vert a\right\Vert ^{2}-r^{2}}} \notag \\ & =\frac{r^{2}\func{Re}\left\langle x,a\right\rangle }{\sqrt{\left\Vert a\right\Vert ^{2}-r^{2}}\left( \sqrt{\left\Vert a\right\Vert ^{2}-r^{2}}% +\left\Vert a\right\Vert \right) }. \notag\end{aligned}$$Since$$\left( \left\Vert x\right\Vert +\left\Vert a\right\Vert \right) ^{2}-\left\Vert x+a\right\Vert ^{2}=2\left( \left\Vert x\right\Vert \left\Vert a\right\Vert -\func{Re}\left\langle x,a\right\rangle \right) ,$$then by (\[2.21\]), we have$$\begin{aligned} \left\Vert x\right\Vert +\left\Vert a\right\Vert & \leq \sqrt{\left\Vert x+a\right\Vert ^{2}+\frac{2r^{2}\func{Re}\left\langle x,a\right\rangle }{% \sqrt{\left\Vert a\right\Vert ^{2}-r^{2}}\left( \sqrt{\left\Vert a\right\Vert ^{2}-r^{2}}+\left\Vert a\right\Vert \right) }} \\ & \leq \left\Vert x+a\right\Vert +\sqrt{2}r\cdot \sqrt{\frac{\func{Re}% \left\langle x,a\right\rangle }{\sqrt{\left\Vert a\right\Vert ^{2}-r^{2}}% \left( \sqrt{\left\Vert a\right\Vert ^{2}-r^{2}}+\left\Vert a\right\Vert \right) }},\end{aligned}$$giving the desired inequality (\[2.20\]). The following proposition providing a simpler reverse for the triangle inequality also holds. \[p2.5\]Let $\left( H;\left\langle \cdot ,\cdot \right\rangle \right) $ be an inner product space over $\mathbb{K}$ and $x,y\in H,$ $M>m>0$ such that either$$\func{Re}\left\langle My-x,x-my\right\rangle \geq 0, \label{2.22}$$or, equivalently,$$\left\Vert x-\frac{M+m}{2}\cdot y\right\Vert \leq \frac{1}{2}\left( M-m\right) \left\Vert y\right\Vert , \label{2.23}$$holds. Then we have the inequality$$0\leq \left\Vert x\right\Vert +\left\Vert y\right\Vert -\left\Vert x+y\right\Vert \leq \frac{\sqrt{M}-\sqrt{m}}{\sqrt[4]{mM}}\sqrt{\func{Re}% \left\langle x,y\right\rangle }. 
\label{2.24}$$ Choosing in (\[2.8\]), $a=\frac{M+m}{2}y,$ $r=\frac{1}{2}\left( M-m\right) \left\Vert y\right\Vert $ we get$$\left\Vert x\right\Vert \left\Vert y\right\Vert \sqrt{Mm}\leq \frac{M+m}{2}% \func{Re}\left\langle x,y\right\rangle$$giving $$0\leq \left\Vert x\right\Vert \left\Vert y\right\Vert -\func{Re}\left\langle x,y\right\rangle \leq \frac{\left( \sqrt{M}-\sqrt{m}\right) ^{2}}{2\sqrt{mM}}% \func{Re}\left\langle x,y\right\rangle .$$Following the same arguments as in the proof of Proposition \[p2.4\], we deduce the desired inequality (\[2.24\]). For some results related to triangle inequality in inner product spaces, see [@JBDFTM], [@SMK], [@PMM] and [@DKR]. Some Grüss Type Inequalities\[s4\] ================================== We may state the following result. \[t4.1\]Let $\left( H;\left\langle \cdot ,\cdot \right\rangle \right) $ be an inner product space over the real or complex number field $\mathbb{K}$ $\left( \mathbb{K}=\mathbb{R},\mathbb{K}=\mathbb{C}\right) $ and $x,y,e\in H$ with $\left\Vert e\right\Vert =1.$ If $r_{1},r_{2}\in \left( 0,1\right) $ and $$\left\Vert x-e\right\Vert \leq r_{1},\ \ \ \ \left\Vert y-e\right\Vert \leq r_{2}, \label{4.1}$$then we have the inequality$$\left\vert \left\langle x,y\right\rangle -\left\langle x,e\right\rangle \left\langle e,y\right\rangle \right\vert \leq r_{1}r_{2}\left\Vert x\right\Vert \left\Vert y\right\Vert . \label{4.2}$$The inequality (\[4.2\]) is sharp in the sense that the constant $c=1$ in front of $r_{1}r_{2}$ cannot be replaced by a smaller constant. Apply Schwarz’s inequality in $\left( H;\left\langle \cdot ,\cdot \right\rangle \right) $ for the vectors $x-\left\langle x,e\right\rangle e,$ $y-\left\langle y,e\right\rangle e,$ to get (see also [@SSD3])$$\left\vert \left\langle x,y\right\rangle -\left\langle x,e\right\rangle \left\langle e,y\right\rangle \right\vert ^{2}\leq \left( \left\Vert x\right\Vert ^{2}-\left\vert \left\langle x,e\right\rangle \right\vert ^{2}\right) \left( \left\Vert y\right\Vert ^{2}-\left\vert \left\langle y,e\right\rangle \right\vert ^{2}\right) . \label{4.3}$$Using Theorem \[t2.1\] for $a=e,$ we may state that$$\left\Vert x\right\Vert ^{2}-\left\vert \left\langle x,e\right\rangle \right\vert ^{2}\leq r_{1}^{2}\left\Vert x\right\Vert ^{2},\ \ \ \ \ \ \left\Vert y\right\Vert ^{2}-\left\vert \left\langle y,e\right\rangle \right\vert ^{2}\leq r_{2}^{2}\left\Vert y\right\Vert ^{2}. \label{4.4}$$Utilizing (\[4.3\]) and (\[4.4\]), we deduce$$\left\vert \left\langle x,y\right\rangle -\left\langle x,e\right\rangle \left\langle e,y\right\rangle \right\vert ^{2}\leq r_{1}^{2}r_{2}^{2}\left\Vert x\right\Vert ^{2}\left\Vert y\right\Vert ^{2}, \label{4.5}$$which is clearly equivalent to the desired inequality (\[4.2\]). The sharpness of the constant follows by the fact that for $x=y,$ $% r_{1}=r_{2}=r,$ we get from (\[4.2\])$$\left\Vert x\right\Vert ^{2}-\left\vert \left\langle x,e\right\rangle \right\vert ^{2}\leq r^{2}\left\Vert x\right\Vert ^{2} \label{4.6}$$provided $\left\Vert e\right\Vert =1$ and $\left\Vert x-e\right\Vert \leq r<1.$ The inequality (\[4.6\]) is sharp, as shown in Theorem \[t2.1\], and the theorem is thus proved. Another companion of the Grüss inequality may be stated as well. 
\[t4.2\]Let $\left( H;\left\langle \cdot ,\cdot \right\rangle \right) $ be an inner product space over $\mathbb{K}$ and $x,y,e\in H$ with $% \left\Vert e\right\Vert =1.$ Suppose also that $a,A,b,B\in \mathbb{K}$ $% \left( \mathbb{K}=\mathbb{R},\mathbb{C}\right) $ such that $\func{Re}\left( A% \overline{a}\right) ,$ $\func{Re}\left( B\overline{b}\right) >0.$ If either$$\func{Re}\left\langle Ae-x,x-ae\right\rangle \geq 0,\ \ \func{Re}% \left\langle Be-y,y-be\right\rangle \geq 0,\ \label{4.7}$$or, equivalently,$$\left\Vert x-\frac{a+A}{2}e\right\Vert \leq \frac{1}{2}\left\vert A-a\right\vert ,\ \ \left\Vert y-\frac{b+B}{2}e\right\Vert \leq \frac{1}{2}% \left\vert B-b\right\vert , \label{4.8}$$holds, then we have the inequality$$\left\vert \left\langle x,y\right\rangle -\left\langle x,e\right\rangle \left\langle e,y\right\rangle \right\vert \leq \frac{1}{4}\cdot \frac{% \left\vert A-a\right\vert \left\vert B-b\right\vert }{\sqrt{\func{Re}\left( A% \overline{a}\right) \func{Re}\left( B\overline{b}\right) }}\left\vert \left\langle x,e\right\rangle \left\langle e,y\right\rangle \right\vert . \label{4.9}$$The constant $\frac{1}{4}$ is best possible. We know, by (\[4.3\]), that$$\left\vert \left\langle x,y\right\rangle -\left\langle x,e\right\rangle \left\langle e,y\right\rangle \right\vert ^{2}\leq \left( \left\Vert x\right\Vert ^{2}-\left\vert \left\langle x,e\right\rangle \right\vert ^{2}\right) \left( \left\Vert y\right\Vert ^{2}-\left\vert \left\langle y,e\right\rangle \right\vert ^{2}\right) . \label{4.10}$$If we use Corollary \[c2.3\], then we may state that$$\left\Vert x\right\Vert ^{2}-\left\vert \left\langle x,e\right\rangle \right\vert ^{2}\leq \frac{1}{4}\cdot \frac{\left\vert A-a\right\vert ^{2}}{% \func{Re}\left( A\overline{a}\right) }\left\vert \left\langle x,e\right\rangle \right\vert ^{2} \label{4.11}$$and$$\left\Vert y\right\Vert ^{2}-\left\vert \left\langle y,e\right\rangle \right\vert ^{2}\leq \frac{1}{4}\cdot \frac{\left\vert B-b\right\vert ^{2}}{% \func{Re}\left( B\overline{b}\right) }\left\vert \left\langle y,e\right\rangle \right\vert ^{2}. \label{4.12}$$Utilizing (\[4.10\]) – (\[4.12\]), we deduce$$\left\vert \left\langle x,y\right\rangle -\left\langle x,e\right\rangle \left\langle e,y\right\rangle \right\vert ^{2}\leq \frac{1}{16}\cdot \frac{% \left\vert A-a\right\vert ^{2}\left\vert B-b\right\vert ^{2}}{\func{Re}% \left( A\overline{a}\right) \func{Re}\left( B\overline{b}\right) }\left\vert \left\langle x,e\right\rangle \left\langle e,y\right\rangle \right\vert ^{2},$$which is clearly equivalent to the desired inequality (\[4.9\]). The sharpness of the constant follows from Corollary \[c2.3\], and we omit the details. With the assumptions of Theorem \[t4.2\] and if $\left\langle x,e\right\rangle ,\left\langle y,e\right\rangle \neq 0$ (that is actually the interesting case), one has the inequality$$\left\vert \frac{\left\langle x,y\right\rangle }{\left\langle x,e\right\rangle \left\langle e,y\right\rangle }-1\right\vert \leq \frac{1}{4% }\cdot \frac{\left\vert A-a\right\vert \left\vert B-b\right\vert }{\sqrt{% \func{Re}\left( A\overline{a}\right) \func{Re}\left( B\overline{b}\right) }}. \label{4.13}$$The constant $\frac{1}{4}$ is best possible. The inequality (\[4.9\]) provides a better bound for the quantity $$\left\vert \left\langle x,y\right\rangle -\left\langle x,e\right\rangle \left\langle e,y\right\rangle \right\vert$$than (2.3) of [@SSD3]. For some recent results on Grüss type inequalities in inner product spaces, see [@SSD0], [@SSD00] and [@PFR]. 
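Before moving on, here is a brief numerical probe of the Grüss type inequality (\[4.2\]) of Theorem \[t4.1\]; the sampling scheme, with $r_{1},r_{2}\in \left( 0,1\right)$ and a random unit vector $e$ in $\mathbb{C}^{n}$, is an illustrative assumption only.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_unit(n):
    """A random unit vector in C^n (used only for this illustration)."""
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    return v / np.linalg.norm(v)

def check_theorem_4_1(n=6, trials=20000):
    """Probe (4.2): |<x,y> - <x,e><e,y>| <= r1 r2 ||x|| ||y|| under (4.1)."""
    worst = -np.inf
    for _ in range(trials):
        e = random_unit(n)
        r1, r2 = rng.uniform(0.05, 0.95, size=2)
        x = e + rng.uniform() * r1 * random_unit(n)   # ||x - e|| <= r1 < 1
        y = e + rng.uniform() * r2 * random_unit(n)   # ||y - e|| <= r2 < 1
        lhs = abs(np.vdot(y, x) - np.vdot(e, x) * np.vdot(y, e))
        rhs = r1 * r2 * np.linalg.norm(x) * np.linalg.norm(y)
        worst = max(worst, lhs - rhs)
    return worst

print(check_theorem_4_1())   # expected to be <= 0 (up to rounding)
```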
Reverses of Bessel’s Inequality\[s5\] ===================================== Let $\left( H;\left\langle \cdot ,\cdot \right\rangle \right) $ be a real or complex infinite dimensional Hilbert space and $\left( e_{i}\right) _{i\in \mathbb{N}}$ an orthornormal family in $H$, i.e., we recall that $% \left\langle e_{i},e_{j}\right\rangle =0$ if $i,j\in \mathbb{N}$, $i\neq j$ and $\left\Vert e_{i}\right\Vert =1$ for $i\in \mathbb{N}$. It is well known that, if $x\in H,$ then the sum $\sum_{i=1}^{\infty }\left\vert \left\langle x,e_{i}\right\rangle \right\vert ^{2}$ is convergent and the following inequality, called *Bessel’s inequality*$$\sum_{i=1}^{\infty }\left\vert \left\langle x,e_{i}\right\rangle \right\vert ^{2}\leq \left\Vert x\right\Vert ^{2}, \label{5.1}$$holds. If $\ell ^{2}\left( \mathbb{K}\right) :=\left\{ \mathbf{a}=\left( a_{i}\right) _{i\in \mathbb{N}}\subset \mathbb{K}\left\vert \sum_{i=1}^{\infty }\left\vert a_{i}\right\vert ^{2}\right. <\infty \right\} ,$ where $\mathbb{K}=\mathbb{C}$ or $\mathbb{K}=\mathbb{R}$, is the Hilbert space of all complex or real sequences that are $2$-summable and $\mathbf{% \lambda }=\left( \lambda _{i}\right) _{i\in \mathbb{N}}\in \ell ^{2}\left( \mathbb{K}\right) ,$ then the sum $\sum_{i=1}^{\infty }\lambda _{i}e_{i}$ is convergent in $H$ and if $y:=\sum_{i=1}^{\infty }\lambda _{i}e_{i}\in H,$ then $\left\Vert y\right\Vert =\left( \sum_{i=1}^{\infty }\left\vert \lambda _{i}\right\vert ^{2}\right) ^{\frac{1}{2}}.$ We may state the following result. \[t5.1\]Let $\left( H;\left\langle \cdot ,\cdot \right\rangle \right) $ be an infinite dimensional Hilbert space over the real or complex number field $\mathbb{K}$, $\left( e_{i}\right) _{i\in \mathbb{N}}$ an orthornormal family in $H,$ $\mathbf{\lambda }=\left( \lambda _{i}\right) _{i\in \mathbb{N% }}\in \ell ^{2}\left( \mathbb{K}\right) $ and $r>0$ with the property that$$\sum_{i=1}^{\infty }\left\vert \lambda _{i}\right\vert ^{2}>r^{2}. 
\label{5.2}$$If $x\in H$ is such that$$\left\Vert x-\sum_{i=1}^{\infty }\lambda _{i}e_{i}\right\Vert \leq r, \label{5.3}$$then we have the inequality$$\begin{aligned} \left\Vert x\right\Vert ^{2}& \leq \frac{\left( \sum_{i=1}^{\infty }\func{Re}% \left[ \overline{\lambda _{i}}\left\langle x,e_{i}\right\rangle \right] \right) ^{2}}{\sum_{i=1}^{\infty }\left\vert \lambda _{i}\right\vert ^{2}-r^{2}} \label{5.4} \\ & \leq \frac{\left\vert \sum_{i=1}^{\infty }\overline{\lambda _{i}}% \left\langle x,e_{i}\right\rangle \right\vert ^{2}}{\sum_{i=1}^{\infty }\left\vert \lambda _{i}\right\vert ^{2}-r^{2}} \notag \\ & \leq \frac{\sum_{i=1}^{\infty }\left\vert \lambda _{i}\right\vert ^{2}}{% \sum_{i=1}^{\infty }\left\vert \lambda _{i}\right\vert ^{2}-r^{2}}% \sum_{i=1}^{\infty }\left\vert \left\langle x,e_{i}\right\rangle \right\vert ^{2}; \notag\end{aligned}$$and$$\begin{aligned} 0 &\leq &\left\Vert x\right\Vert ^{2}-\sum_{i=1}^{\infty }\left\vert \left\langle x,e_{i}\right\rangle \right\vert ^{2} \label{5.5} \\ &\leq &\frac{r^{2}}{\sum_{i=1}^{\infty }\left\vert \lambda _{i}\right\vert ^{2}-r^{2}}\sum_{i=1}^{\infty }\left\vert \left\langle x,e_{i}\right\rangle \right\vert ^{2}.\end{aligned}$$ Applying the third inequality in (\[2.2\]) for $a=\sum_{i=1}^{\infty }\lambda _{i}e_{i}\in H,$ we have$$\left\Vert x\right\Vert ^{2}\left\Vert \sum_{i=1}^{\infty }\lambda _{i}e_{i}\right\Vert ^{2}-\left[ \func{Re}\left\langle x,\sum_{i=1}^{\infty }\lambda _{i}e_{i}\right\rangle \right] ^{2}\leq r^{2}\left\Vert x\right\Vert ^{2} \label{5.6}$$and since$$\begin{aligned} \left\Vert \sum_{i=1}^{\infty }\lambda _{i}e_{i}\right\Vert ^{2}& =\sum_{i=1}^{\infty }\left\vert \lambda _{i}\right\vert ^{2}, \\ \func{Re}\left\langle x,\sum_{i=1}^{\infty }\lambda _{i}e_{i}\right\rangle & =\sum_{i=1}^{\infty }\func{Re}\left[ \overline{\lambda _{i}}\left\langle x,e_{i}\right\rangle \right] ,\end{aligned}$$then by (\[5.6\]) we deduce$$\left\Vert x\right\Vert ^{2}\sum_{i=1}^{\infty }\left\vert \lambda _{i}\right\vert ^{2}-\left[ \func{Re}\left\langle x,\sum_{i=1}^{\infty }\lambda _{i}e_{i}\right\rangle \right] ^{2}\leq r^{2}\left\Vert x\right\Vert ^{2},$$giving the first inequality in (\[5.4\]). The second inequality is obvious by the modulus property. The last inequality follows by the Cauchy-Bunyakovsky-Schwarz inequality$$\left\vert \sum_{i=1}^{\infty }\overline{\lambda _{i}}\left\langle x,e_{i}\right\rangle \right\vert ^{2}\leq \sum_{i=1}^{\infty }\left\vert \lambda _{i}\right\vert ^{2}\sum_{i=1}^{\infty }\left\vert \left\langle x,e_{i}\right\rangle \right\vert ^{2}.$$The inequality (\[5.5\]) follows by the last inequality in (\[5.4\]) on subtracting in both sides the quantity $\sum_{i=1}^{\infty }\left\vert \left\langle x,e_{i}\right\rangle \right\vert ^{2}<\infty .$ The following result provides a generalization for the reverse of Bessel’s inequality obtained in [@SSD6]. \[t5.2\]Let $\left( H;\left\langle \cdot ,\cdot \right\rangle \right) $ and $\left( e_{i}\right) _{i\in \mathbb{N}}$ be as in Theorem \[t5.1\]. Suppose that $\mathbf{\Gamma }=\left( \Gamma _{i}\right) _{i\in \mathbb{N}% }\in \ell ^{2}\left( \mathbb{K}\right) ,$ $\mathbf{\gamma }=\left( \gamma _{i}\right) _{i\in \mathbb{N}}\in \ell ^{2}\left( \mathbb{K}\right) $ are sequences of real or complex numbers such that$$\sum_{i=1}^{\infty }\func{Re}\left( \Gamma _{i}\overline{\gamma _{i}}\right) >0. 
\label{5.7}$$If $x\in H$ is such that either$$\left\Vert x-\sum_{i=1}^{\infty }\frac{\Gamma _{i}+\gamma _{i}}{2}% e_{i}\right\Vert \leq \frac{1}{2}\left( \sum_{i=1}^{\infty }\left\vert \Gamma _{i}-\gamma _{i}\right\vert ^{2}\right) ^{\frac{1}{2}} \label{5.8}$$or, equivalently,$$\func{Re}\left\langle \sum_{i=1}^{\infty }\Gamma _{i}e_{i}-x,x-\sum_{i=1}^{\infty }\gamma _{i}e_{i}\right\rangle \geq 0 \label{5.9}$$holds, then we have the inequalities$$\begin{aligned} \left\Vert x\right\Vert ^{2}& \leq \frac{1}{4}\cdot \frac{\left( \sum_{i=1}^{\infty }\func{Re}\left[ \left( \overline{\Gamma _{i}}+\overline{% \gamma _{i}}\right) \left\langle x,e_{i}\right\rangle \right] \right) ^{2}}{% \sum_{i=1}^{\infty }\func{Re}\left( \Gamma _{i}\overline{\gamma _{i}}\right) } \label{5.10} \\ & \leq \frac{1}{4}\cdot \frac{\left\vert \sum_{i=1}^{\infty }\left( \overline{\Gamma _{i}}+\overline{\gamma _{i}}\right) \left\langle x,e_{i}\right\rangle \right\vert ^{2}}{\sum_{i=1}^{\infty }\func{Re}\left( \Gamma _{i}\overline{\gamma _{i}}\right) } \notag \\ & \leq \frac{1}{4}\cdot \frac{\sum_{i=1}^{\infty }\left\vert \Gamma _{i}+\gamma _{i}\right\vert ^{2}}{\sum_{i=1}^{\infty }\func{Re}\left( \Gamma _{i}\overline{\gamma _{i}}\right) }\sum_{i=1}^{\infty }\left\vert \left\langle x,e_{i}\right\rangle \right\vert ^{2}. \notag\end{aligned}$$The constant $\frac{1}{4}$ is best possible in all inequalities in ([5.10]{}). We also have the inequalities:$$0\leq \left\Vert x\right\Vert ^{2}-\sum_{i=1}^{\infty }\left\vert \left\langle x,e_{i}\right\rangle \right\vert ^{2}\leq \frac{1}{4}\cdot \frac{\sum_{i=1}^{\infty }\left\vert \Gamma _{i}-\gamma _{i}\right\vert ^{2}% }{\sum_{i=1}^{\infty }\func{Re}\left( \Gamma _{i}\overline{\gamma _{i}}% \right) }\sum_{i=1}^{\infty }\left\vert \left\langle x,e_{i}\right\rangle \right\vert ^{2}. \label{5.11}$$Here the constant $\frac{1}{4}$ is also best possible. Since $\mathbf{\Gamma }$, $\mathbf{\gamma }\in \ell ^{2}\left( \mathbb{K}% \right) ,$ then also $\frac{1}{2}\left( \mathbf{\Gamma }+\mathbf{\gamma }% \right) \in \ell ^{2}\left( \mathbb{K}\right) ,$ showing that the series$$\sum_{i=1}^{\infty }\left\vert \frac{\Gamma _{i}+\gamma _{i}}{2}\right\vert ^{2},\ \sum_{i=1}^{\infty }\left\vert \frac{\Gamma _{i}-\gamma _{i}}{2}% \right\vert ^{2}\text{ and}\ \sum_{i=1}^{\infty }\func{Re}\left( \Gamma _{i}% \overline{\gamma _{i}}\right)$$are convergent. Also, the series $$\sum_{i=1}^{\infty }\Gamma _{i}e_{i},\text{ }\sum_{i=1}^{\infty }\gamma _{i}e_{i}\text{ and }\sum_{i=1}^{\infty }\frac{\gamma _{i}+\Gamma _{i}}{2}% e_{i}$$ are convergent in the Hilbert space $H.$ The equivalence of the conditions (\[5.8\]) and (\[5.9\]) follows by the fact that in an inner product space we have, for $x,z,Z\in H,$ $\func{Re}% \left\langle Z-x,x-z\right\rangle \geq 0$ is equivalent to $\left\Vert x-% \frac{z+Z}{2}\right\Vert \leq \frac{1}{2}\left\Vert Z-z\right\Vert ,$ and we omit the details. Now, we observe that the inequalities (\[5.10\]) and (\[5.11\]) follow from Theorem \[t5.1\] on choosing $\lambda _{i}=\frac{\gamma _{i}+\Gamma _{i}}{2},$ $i\in \mathbb{N}$ and $r=\frac{1}{2}\left( \sum_{i=1}^{\infty }\left\vert \Gamma _{i}-\gamma _{i}\right\vert ^{2}\right) ^{\frac{1}{2}}.$ The fact that $\frac{1}{4}$ is the best constant in both (\[5.10\]) and (\[5.11\]) follows from Theorem \[t2.2\] and Corollary \[c2.3\], and we omit the details. Note that (\[5.10\]) improves (\[1.17\]) and (\[5.11\]) improves ([1.18]{}), that have been obtained in [@SSD6]. 
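A finite-dimensional numerical illustration of the reverse of Bessel's inequality (\[5.5\]) is sketched below; taking the first $m$ standard basis vectors of $\mathbb{C}^{N}$ as the orthonormal family is an assumption made only for this check.

```python
import numpy as np

rng = np.random.default_rng(3)

def check_5_5(N=8, m=4, trials=20000):
    """Probe (5.5) using the first m standard basis vectors of C^N as (e_i)."""
    worst = -np.inf
    for _ in range(trials):
        lam = rng.normal(size=m) + 1j * rng.normal(size=m)
        r = rng.uniform(0.1, 0.95) * np.linalg.norm(lam)   # guarantees (5.2)
        u = rng.normal(size=N) + 1j * rng.normal(size=N)
        u *= rng.uniform() * r / np.linalg.norm(u)
        x = np.concatenate([lam, np.zeros(N - m)]) + u      # satisfies (5.3)
        fourier = x[:m]                                     # here <x, e_i> = x_i
        bessel = np.sum(np.abs(fourier) ** 2)
        gap = np.linalg.norm(x) ** 2 - bessel
        bound = r ** 2 / (np.linalg.norm(lam) ** 2 - r ** 2) * bessel
        worst = max(worst, gap - bound)
    return worst

print(check_5_5())   # expected to be <= 0 (up to rounding)
```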
For some recent results related to Bessel inequality, see [@SSD01], [SSDJS]{}, [@HXC] and [@GH1]. Some Grüss Type Inequalities for Orthonormal Families\[s6\] =========================================================== The following result related to Grüss inequality in inner product spaces, holds. \[t6.1\]Let $\left( H;\left\langle \cdot ,\cdot \right\rangle \right) $ be an infinite dimensional Hilbert space over the real or complex number field $\mathbb{K}$, and $\left( e_{i}\right) _{i\in \mathbb{N}}$ an orthornormal family in $H.$ Assume that $\mathbf{\lambda }=\left( \lambda _{i}\right) _{i\in \mathbb{N}},\ \mathbf{\mu }=\left( \mu _{i}\right) _{i\in \mathbb{N}}\in \ell ^{2}\left( \mathbb{K}\right) $ and $r_{1},r_{2}>0$ with the properties that$$\sum_{i=1}^{\infty }\left\vert \lambda _{i}\right\vert ^{2}>r_{1}^{2},\ \ \ \sum_{i=1}^{\infty }\left\vert \mu _{i}\right\vert ^{2}>r_{2}^{2}. \label{6.1}$$If $x,y\in H$ are such that$$\left\Vert x-\sum_{i=1}^{\infty }\lambda _{i}e_{i}\right\Vert \leq r_{1},\ \ \ \ \ \ \left\Vert y-\sum_{i=1}^{\infty }\mu _{i}e_{i}\right\Vert \leq r_{2}, \label{6.2}$$then we have the inequalities$$\begin{aligned} &&\left\vert \left\langle x,y\right\rangle -\sum_{i=1}^{\infty }\left\langle x,e_{i}\right\rangle \left\langle e_{i},y\right\rangle \right\vert \label{6.3} \\ &\leq &\frac{r_{1}r_{2}}{\sqrt{\sum_{i=1}^{\infty }\left\vert \lambda _{i}\right\vert ^{2}-r_{1}^{2}}\sqrt{\sum_{i=1}^{\infty }\left\vert \mu _{i}\right\vert ^{2}-r_{2}^{2}}}\cdot \sqrt{\sum_{i=1}^{\infty }\left\vert \left\langle x,e_{i}\right\rangle \right\vert ^{2}\sum_{i=1}^{\infty }\left\vert \left\langle y,e_{i}\right\rangle \right\vert ^{2}} \notag \\ &\leq &\frac{r_{1}r_{2}\left\Vert x\right\Vert \left\Vert y\right\Vert }{% \sqrt{\sum_{i=1}^{\infty }\left\vert \lambda _{i}\right\vert ^{2}-r_{1}^{2}}% \sqrt{\sum_{i=1}^{\infty }\left\vert \mu _{i}\right\vert ^{2}-r_{2}^{2}}}. \notag\end{aligned}$$ Applying Schwarz’s inequality for the vectors $x-\sum_{i=1}^{\infty }\left\langle x,e_{i}\right\rangle e_{i},$ $y-\sum_{i=1}^{\infty }\left\langle y,e_{i}\right\rangle e_{i},$ we have$$\begin{gathered} \left\vert \left\langle x-\sum_{i=1}^{\infty }\left\langle x,e_{i}\right\rangle e_{i},y-\sum_{i=1}^{\infty }\left\langle y,e_{i}\right\rangle e_{i}\right\rangle \right\vert ^{2} \label{6.4} \\ \leq \left\Vert x-\sum_{i=1}^{\infty }\left\langle x,e_{i}\right\rangle e_{i}\right\Vert ^{2}\left\Vert y-\sum_{i=1}^{\infty }\left\langle y,e_{i}\right\rangle e_{i}\right\Vert ^{2}.\end{gathered}$$Since$$\left\langle x-\sum_{i=1}^{\infty }\left\langle x,e_{i}\right\rangle e_{i},y-\sum_{i=1}^{\infty }\left\langle y,e_{i}\right\rangle e_{i}\right\rangle =\left\langle x,y\right\rangle -\sum_{i=1}^{\infty }\left\langle x,e_{i}\right\rangle \left\langle e_{i},y\right\rangle$$and$$\left\Vert x-\sum_{i=1}^{\infty }\left\langle x,e_{i}\right\rangle e_{i}\right\Vert ^{2}=\left\Vert x\right\Vert ^{2}-\sum_{i=1}^{\infty }\left\vert \left\langle x,e_{i}\right\rangle \right\vert ^{2},$$then by (\[5.5\]) applied for $x$ and $y,$ and from (\[6.4\]), we deduce the first part of (\[6.3\]). The second part follows by Bessel’s inequality. The following Grüss type inequality may be stated as well. 
\[t6.2\]Let $\left( H;\left\langle \cdot ,\cdot \right\rangle \right) $ be an infinite dimensional Hilbert space and $\left( e_{i}\right) _{i\in \mathbb{N}}$ an orthornormal family in $H.$ Suppose that $\left( \Gamma _{i}\right) _{i\in \mathbb{N}},$ $\left( \gamma _{i}\right) _{i\in \mathbb{N}% },$ $\left( \phi _{i}\right) _{i\in \mathbb{N}},$ $\left( \Phi _{i}\right) _{i\in \mathbb{N}}\in \ell ^{2}\left( \mathbb{K}\right) $ are sequences of real and complex numbers such that$$\sum_{i=1}^{\infty }\func{Re}\left( \Gamma _{i}\overline{\gamma _{i}}\right) >0,\ \ \ \sum_{i=1}^{\infty }\func{Re}\left( \Phi _{i}\overline{\phi _{i}}% \right) >0. \label{6.5}$$If $x,y\in H$ are such that either$$\begin{aligned} \left\Vert x-\sum_{i=1}^{\infty }\frac{\Gamma _{i}+\gamma _{i}}{2}\cdot e_{i}\right\Vert & \leq \frac{1}{2}\left( \sum_{i=1}^{\infty }\left\vert \Gamma _{i}-\gamma _{i}\right\vert ^{2}\right) ^{\frac{1}{2}} \label{6.6} \\ \left\Vert y-\sum_{i=1}^{\infty }\frac{\Phi _{i}+\phi _{i}}{2}\cdot e_{i}\right\Vert & \leq \frac{1}{2}\left( \sum_{i=1}^{\infty }\left\vert \Phi _{i}-\phi _{i}\right\vert ^{2}\right) ^{\frac{1}{2}} \notag\end{aligned}$$or, equivalently,$$\begin{aligned} \func{Re}\left\langle \sum_{i=1}^{\infty }\Gamma _{i}e_{i}-x,x-\sum_{i=1}^{\infty }\gamma _{i}e_{i}\right\rangle & \geq 0, \label{6.7} \\ \func{Re}\left\langle \sum_{i=1}^{\infty }\Phi _{i}e_{i}-y,y-\sum_{i=1}^{\infty }\phi _{i}e_{i}\right\rangle & \geq 0, \notag\end{aligned}$$holds, then we have the inequality$$\begin{aligned} & \left\vert \left\langle x,y\right\rangle -\sum_{i=1}^{\infty }\left\langle x,e_{i}\right\rangle \left\langle e_{i},y\right\rangle \right\vert \label{6.8} \\ & \leq \frac{1}{4}\cdot \frac{\left( \sum_{i=1}^{\infty }\left\vert \Gamma _{i}-\gamma _{i}\right\vert ^{2}\right) ^{\frac{1}{2}}\left( \sum_{i=1}^{\infty }\left\vert \Phi _{i}-\phi _{i}\right\vert ^{2}\right) ^{% \frac{1}{2}}}{\left( \sum_{i=1}^{\infty }\func{Re}\left( \Gamma _{i}% \overline{\gamma _{i}}\right) \right) ^{\frac{1}{2}}\left( \sum_{i=1}^{\infty }\func{Re}\left( \Phi _{i}\overline{\phi _{i}}\right) \right) ^{\frac{1}{2}}} \notag \\ & \times \left( \sum_{i=1}^{\infty }\left\vert \left\langle x,e_{i}\right\rangle \right\vert ^{2}\right) ^{\frac{1}{2}}\left( \sum_{i=1}^{\infty }\left\vert \left\langle y,e_{i}\right\rangle \right\vert ^{2}\right) ^{\frac{1}{2}} \notag \\ & \leq \frac{1}{4}\cdot \frac{\left( \sum_{i=1}^{\infty }\left\vert \Gamma _{i}-\gamma _{i}\right\vert ^{2}\right) ^{\frac{1}{2}}\left( \sum_{i=1}^{\infty }\left\vert \Phi _{i}-\phi _{i}\right\vert ^{2}\right) ^{% \frac{1}{2}}}{\left[ \sum_{i=1}^{\infty }\func{Re}\left( \Gamma _{i}% \overline{\gamma _{i}}\right) \right] ^{\frac{1}{2}}\left[ \sum_{i=1}^{\infty }\func{Re}\left( \Phi _{i}\overline{\phi _{i}}\right) % \right] ^{\frac{1}{2}}}\left\Vert x\right\Vert \left\Vert y\right\Vert \notag\end{aligned}$$The constant $\frac{1}{4}$ is best possible in the first inequality. Follows by (\[5.11\]) and (\[6.4\]). The best constant follows from Theorem \[t4.2\], and we omit the details. We note that the inequality (\[6.8\]) is better than the inequality (3.3) in [@SSD6]. We omit the details. 
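The first inequality in (\[6.3\]) admits the same kind of finite-dimensional numerical check; as before, the choice of standard basis vectors of $\mathbb{C}^{N}$ and the random sampling are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(4)

def check_6_3(N=8, m=4, trials=20000):
    """Probe the first inequality in (6.3) with the first m basis vectors of C^N."""
    worst = -np.inf
    for _ in range(trials):
        lam = rng.normal(size=m) + 1j * rng.normal(size=m)
        mu = rng.normal(size=m) + 1j * rng.normal(size=m)
        r1 = rng.uniform(0.1, 0.95) * np.linalg.norm(lam)   # guarantees (6.1)
        r2 = rng.uniform(0.1, 0.95) * np.linalg.norm(mu)
        u = rng.normal(size=N) + 1j * rng.normal(size=N)
        v = rng.normal(size=N) + 1j * rng.normal(size=N)
        x = np.concatenate([lam, np.zeros(N - m)]) + rng.uniform() * r1 * u / np.linalg.norm(u)
        y = np.concatenate([mu, np.zeros(N - m)]) + rng.uniform() * r2 * v / np.linalg.norm(v)
        fx, fy = x[:m], y[:m]                               # <x, e_i> and <y, e_i>
        lhs = abs(np.vdot(y, x) - np.sum(fx * np.conj(fy)))
        rhs = (r1 * r2 * np.sqrt(np.sum(np.abs(fx) ** 2) * np.sum(np.abs(fy) ** 2))
               / np.sqrt((np.linalg.norm(lam) ** 2 - r1 ** 2)
                         * (np.linalg.norm(mu) ** 2 - r2 ** 2)))
        worst = max(worst, lhs - rhs)
    return worst

print(check_6_3())   # expected to be <= 0 (up to rounding)
```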
Integral Inequalities\[s7\] =========================== Let $\left( \Omega ,\Sigma ,\mu \right) $ be a measurable space consisting of a set $\Omega ,$ a $\sigma -$algebra of parts $\Sigma $ and a countably additive and positive measure $\mu $ on $\Sigma $ with values $\mathbb{R\cup }\left\{ \infty \right\} .$ Let $\rho \geq 0$ be a $g-$measurable function on $\Omega $ with $\int_{\Omega }\rho \left( s\right) d\mu \left( s\right) =1.$ Denote by $L_{\rho }^{2}\left( \Omega ,\mathbb{K}\right) $ the Hilbert space of all real or complex valued functions defined on $\Omega $ and $% 2-\rho -$integrable on $\Omega ,$ i.e.,$$\int_{\Omega }\rho \left( s\right) \left\vert f\left( s\right) \right\vert ^{2}d\mu \left( s\right) <\infty . \label{7.1}$$It is obvious that the following inner product$$\left\langle f,g\right\rangle _{\rho }:=\int_{\Omega }\rho \left( s\right) f\left( s\right) \overline{g\left( s\right) }d\mu \left( s\right) , \label{7.2}$$generates the norm $\left\Vert f\right\Vert _{\rho }:=\left( \int_{\Omega }\rho \left( s\right) \left\vert f\left( s\right) \right\vert ^{2}d\mu \left( s\right) \right) ^{\frac{1}{2}}$ of $L_{\rho }^{2}\left( \Omega ,% \mathbb{K}\right) ,$ and all the above results may be stated for integrals. It is important to observe that, if $$\func{Re}\left[ f\left( s\right) \overline{g\left( s\right) }\right] \geq 0% \text{ \ for }\mu -\text{a.e. }s\in \Omega , \label{7.3}$$then, obviously,$$\begin{aligned} \func{Re}\left\langle f,g\right\rangle _{\rho }& =\func{Re}\left[ \int_{\Omega }\rho \left( s\right) f\left( s\right) \overline{g\left( s\right) }d\mu \left( s\right) \right] \label{7.4} \\ & =\int_{\Omega }\rho \left( s\right) \func{Re}\left[ f\left( s\right) \overline{g\left( s\right) }\right] d\mu \left( s\right) \geq 0. \notag\end{aligned}$$The reverse is evidently not true in general. Moreover, if the space is real, i.e., $\mathbb{K=R}$, then a sufficient condition for (\[7.4\]) to hold is:$$f\left( s\right) \geq 0,\ \ g\left( s\right) \geq 0\text{ \ for }\mu -\text{% a.e. }s\in \Omega . \label{7.5}$$ We provide now, by the use of certain result obtained in Section \[s2\], some integral inequalities that may be used in practical applications. \[p7.1\]Let $f,g\in L_{\rho }^{2}\left( \Omega ,\mathbb{K}\right) $ and $% r>0$ with the properties that$$\left\vert f\left( s\right) -g\left( s\right) \right\vert \leq r\leq \left\vert g\left( s\right) \right\vert \ \text{\ for }\mu -\text{a.e. }s\in \Omega , \label{7.6}$$and $\int_{\Omega }\rho \left( s\right) \left\vert g\left( s\right) \right\vert ^{2}d\mu \left( s\right) \neq r.$ Then we have the inequalities$$\begin{aligned} 0& \leq \int_{\Omega }\rho \left( s\right) \left\vert f\left( s\right) \right\vert ^{2}d\mu \left( s\right) \int_{\Omega }\rho \left( s\right) \left\vert g\left( s\right) \right\vert ^{2}d\mu \left( s\right) -\left\vert \int_{\Omega }\rho \left( s\right) f\left( s\right) \overline{g\left( s\right) }d\mu \left( s\right) \right\vert ^{2} \label{7.7} \\ & \leq \int_{\Omega }\rho \left( s\right) \left\vert f\left( s\right) \right\vert ^{2}d\mu \left( s\right) \int_{\Omega }\rho \left( s\right) \left\vert g\left( s\right) \right\vert ^{2}d\mu \left( s\right) \notag \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\left[ \int_{\Omega }\rho \left( s\right) \func{Re}\left( f\left( s\right) \overline{g\left( s\right) }\right) d\mu \left( s\right) \right] ^{2} \notag \\ & \leq r^{2}\int_{\Omega }\rho \left( s\right) \left\vert g\left( s\right) \right\vert ^{2}d\mu \left( s\right) . 
\notag\end{aligned}$$The constant $c=1$ in front of $r^{2}$ is best possible. The proof follows by Theorem \[t2.1\] and we omit the details. \[p7.2\]Let $f,g\in L_{\rho }^{2}\left( \Omega ,\mathbb{K}\right) $ and $% \gamma ,\Gamma \in \mathbb{K}$ such that $\func{Re}\left( \Gamma \overline{% \gamma }\right) >0$ and$$\func{Re}\left[ \left( \Gamma g\left( s\right) -f\left( s\right) \right) \left( \overline{f\left( s\right) }-\overline{\gamma }\overline{g\left( s\right) }\right) \right] \geq 0\text{ \ for }\mu -\text{a.e. }s\in \Omega . \label{7.8}$$Then we have the inequalities$$\begin{aligned} & \int_{\Omega }\rho \left( s\right) \left\vert f\left( s\right) \right\vert ^{2}d\mu \left( s\right) \int_{\Omega }\rho \left( s\right) \left\vert g\left( s\right) \right\vert ^{2}d\mu \left( s\right) \label{7.9} \\ & \leq \frac{1}{4}\cdot \frac{\left\{ \func{Re}\left[ \left( \overline{% \Gamma }+\overline{\gamma }\right) \int_{\Omega }\rho \left( s\right) f\left( s\right) \overline{g\left( s\right) }d\mu \left( s\right) \right] \right\} ^{2}}{\func{Re}\left( \Gamma \overline{\gamma }\right) } \notag \\ & \leq \frac{1}{4}\cdot \frac{\left\vert \Gamma +\gamma \right\vert ^{2}}{% \func{Re}\left( \Gamma \overline{\gamma }\right) }\left\vert \int_{\Omega }\rho \left( s\right) f\left( s\right) \overline{g\left( s\right) }d\mu \left( s\right) \right\vert ^{2}. \notag\end{aligned}$$The constant $\frac{1}{4}$ is best possible in both inequalities. The proof follows by Theorem \[t2.2\] and we omit the details. \[c7.3\]With the assumptions of Proposition \[p7.2\], we have the inequality$$\begin{aligned} 0& \leq \int_{\Omega }\rho \left( s\right) \left\vert f\left( s\right) \right\vert ^{2}d\mu \left( s\right) \int_{\Omega }\rho \left( s\right) \left\vert g\left( s\right) \right\vert ^{2}d\mu \left( s\right) \label{7.10} \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\left\vert \int_{\Omega }\rho \left( s\right) f\left( s\right) \overline{g\left( s\right) }d\mu \left( s\right) \right\vert ^{2} \notag \\ & \leq \frac{1}{4}\cdot \frac{\left\vert \Gamma -\gamma \right\vert ^{2}}{% \func{Re}\left( \Gamma \overline{\gamma }\right) }\left\vert \int_{\Omega }\rho \left( s\right) f\left( s\right) \overline{g\left( s\right) }d\mu \left( s\right) \right\vert ^{2}. \notag\end{aligned}$$The constant $\frac{1}{4}$ is best possible. If the space is real and we assume, for $M>m>0,$ that$$mg\left( s\right) \leq f\left( s\right) \leq Mg\left( s\right) \text{ \ for }% \mu -\text{a.e. }s\in \Omega , \label{7.11}$$then, by (\[7.9\]) and (\[7.10\]), we deduce the inequalities$$\begin{gathered} \int_{\Omega }\rho \left( s\right) \left[ f\left( s\right) \right] ^{2}d\mu \left( s\right) \int_{\Omega }\rho \left( s\right) \left[ g\left( s\right) % \right] ^{2}d\mu \left( s\right) \label{7.12} \\ \leq \frac{1}{4}\cdot \frac{\left( M+m\right) ^{2}}{mM}\left[ \int_{\Omega }\rho \left( s\right) f\left( s\right) g\left( s\right) d\mu \left( s\right) % \right] ^{2}.\end{gathered}$$and $$\begin{aligned} 0& \leq \int_{\Omega }\rho \left( s\right) \left[ f\left( s\right) \right] ^{2}d\mu \left( s\right) \int_{\Omega }\rho \left( s\right) \left[ g\left( s\right) \right] ^{2}d\mu \left( s\right) \label{7.13} \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -\left[ \int_{\Omega }\rho \left( s\right) f\left( s\right) g\left( s\right) d\mu \left( s\right) \right] ^{2} \notag \\ & \leq \frac{1}{4}\cdot \frac{\left( M-m\right) ^{2}}{mM}\left[ \int_{\Omega }\rho \left( s\right) f\left( s\right) g\left( s\right) d\mu \left( s\right) % \right] ^{2}. 
\notag\end{aligned}$$The inequality (\[7.12\]) is known in the literature as Cassel’s inequality. The following Grüss type integral inequality for real or complex-valued functions also holds. \[p.7.3\]Let $f,g,h\in L_{\rho }^{2}\left( \Omega ,\mathbb{K}\right) $ with $\int_{\Omega }\rho \left( s\right) \left\vert h\left( s\right) \right\vert ^{2}d\mu \left( s\right) =1$ and $a,A,b,B\in \mathbb{K}$ such that $\func{Re}\left( A\overline{a}\right) ,\func{Re}\left( B\overline{b}% \right) >0$ and$$\begin{aligned} \func{Re}\left[ \left( Ah\left( s\right) -f\left( s\right) \right) \left( \overline{f\left( s\right) }-\overline{a}\overline{h\left( s\right) }\right) % \right] &\geq &0, \\ \func{Re}\left[ \left( Ah\left( s\right) -g\left( s\right) \right) \left( \overline{g\left( s\right) }-\overline{b}\overline{h\left( s\right) }\right) % \right] &\geq &0\text{,}\end{aligned}$$for $\mu -$a.e. $s\in \Omega .$ Then we have the inequalities$$\begin{aligned} & \left\vert \int_{\Omega }\rho \left( s\right) f\left( s\right) \overline{% g\left( s\right) }d\mu \left( s\right) -\int_{\Omega }\rho \left( s\right) f\left( s\right) \overline{h\left( s\right) }d\mu \left( s\right) \int_{\Omega }\rho \left( s\right) h\left( s\right) \overline{g\left( s\right) }d\mu \left( s\right) \right\vert \\ & \leq \frac{1}{4}\cdot \frac{\left\vert A-a\right\vert \left\vert B-b\right\vert }{\sqrt{\func{Re}\left( A\overline{a}\right) \func{Re}\left( B% \overline{b}\right) }}\left\vert \int_{\Omega }\rho \left( s\right) f\left( s\right) \overline{h\left( s\right) }d\mu \left( s\right) \int_{\Omega }\rho \left( s\right) h\left( s\right) \overline{g\left( s\right) }d\mu \left( s\right) \right\vert \notag\end{aligned}$$The constant $\frac{1}{4}$ is best possible. The proof follows by Theorem \[t4.2\]. All the other inequalities in Sections \[s3\] – \[s6\] may be used in a similar way to obtain the corresponding integral inequalities. We omit the details. [99]{} X.H. CAO, Bessel sequences in a Hilbert space. *Gongcheng Shuxue Xuebao* 1**7** (2000), no. 2, 92–98. A. De ROSSI, A strengthened Cauchy-Schwarz inequality for biorthogonal wavelets. *Math. Inequal. Appl.* **2** (1999), no. 2, 263–282. J. B. DIAZ and F. T. METCALF, A complementary triangle inequality in Hilbert and Banach spaces. *Proc. Amer. Math. Soc*. **17** (1966), 88–97. S.S. DRAGOMIR, A generalization of Grüss inequality in inner product spaces and applications. *J. Math. Anal. Appl.* **237** (1999), no. 1, 74–82. S.S. DRAGOMIR, A note on Bessel’s inequality, *Austral. Math. Soc. Gaz.* **28** (2001), no. 5, 246–248. S.S. DRAGOMIR, Some Grüss type inequalities in inner product spaces, *J. Inequal. Pure & Appl. Math.*, **4**(2003), No. 2, Article 42, \[`On line: http://jipam.vu.edu.au/v4n2/032_03.html`\] S.S. DRAGOMIR, A counterpart of Schwarz’s inequality in inner product spaces, *RGMIA Res. Rep. Coll.,* **6**(2003), *Supplement*, Article 18, `[On line http://rgmia.vu.edu.au/v6(E).html]` S.S. DRAGOMIR, A generalisation of the Cassels and Grueb-Reinboldt inequalities in inner product spaces, Preprint, *Mathematics Ar*$X$*iv*, math.CA/0307130, `[On line http://front.math.ucdavis.edu/]` S.S. DRAGOMIR, Some companions of the Grüss inequality in inner product spaces, *RGMIA Res. Rep. Coll.* **6**(2003), *Supplement*, Article 8, `[On line http://rgmia.vu.edu.au/v6(E).html]` S.S. DRAGOMIR, On Bessel and Grüss inequalities for orthornormal families in inner product spaces, *RGMIA Res. Rep. 
{ "pile_set_name": "ArXiv" }
--- abstract: 'We present the first results from the deep and wide 5 GHz radio observations of the Great Observatories Origins Deep Survey (GOODS)-North ($\sigma = 3.5 \, \mu$Jy beam$^{-1}$, synthesized beam size $\theta = 1.47\arcsec \times1.42\arcsec$, and 52 sources over 109 arcmin$^{2}$) and GOODS-South ($\sigma = 3.0\, \mu$Jy beam$^{-1}$, $\theta = 0.98\arcsec \times0.45\arcsec$, and 88 sources over 190 arcmin$^{2}$) fields using the Karl G. Jansky Very Large Array. We derive radio spectral indices $\alpha$ between 1.4 and 5 GHz using the beam-matched images and show that the overall spectral index distribution is broad even when the measured noise and flux bias are considered. We also find a clustering of faint radio sources around $\alpha=$0.8, but only within $S_{5GHz} < 150\, \mu$Jy. We demonstrate that the correct radio spectral index is important for deriving accurate rest frame radio power and analyzing the radio-FIR correlation, and adopting a single value of $\alpha=$0.8 leads to a significant scatter and a strong bias in the analysis of the radio-FIR correlation, resulting from the broad and asymmetric spectral index distribution. When characterized by specific star formation rates, the starburst population (58%) dominates the 5 GHz radio source population, and the quiescent galaxy population (30%) follows a distinct trend in spectral index distribution and the radio-FIR correlation. Lastly, we offer suggestions on sensitivity and angular resolution for future ultra-deep surveys designed to trace the cosmic history of star formation and AGN activity using radio continuum as a probe.' author: - 'Hansung B. Gim' - 'Min S. Yun' - 'Frazer N. Owen' - Emmanuel Momjian - 'Neal A. Miller' - Mauro Giavalisco - Grant Wilson - 'James D. Lowenthal' - Itziar Aretxaga - 'David H. Hughes' - 'Glenn E. Morrison' - Ryohei Kawabe title: 'Nature of Faint Radio Sources in GOODS-North and GOODS-South Fields – I. Spectral Index and Radio-FIR Correlation' --- Introduction ============ Stellar mass build-up and central massive black-hole growth are two key observational constraints for understanding galaxy evolution in modern astronomy. A significant fraction of these activities are heavily obscured by dust over the cosmic history [@lefloch05; @caputi07; @magnelli11a; @whitaker17], and we need another tracer that can penetrate deep into column densities exceeding $N_{HI} > 10^{24}$ cm$^{-2}$ ($A_{V} \gg 100$). The completion of the NSF’s Karl G. Jansky Very Large Array[^1] (VLA) with a more than 100 times larger spectral bandwidth and a new powerful digital correlator translates to more than an order of magnitude improvement in sensitivity to probe star formation and black hole activities at cosmological distances [@perley11]. The low-frequency ($\nu \lesssim 10$ GHz) radio sky is dominated by synchrotron emission [@condon92], which mainly comes from star-forming galaxies (SFGs) and active galactic nuclei (AGN). In SFGs, synchrotron emission is generated through cooling of cosmic rays accelerated by shocks associated with Type II supernovae. In AGN, synchrotron radiation is produced by relativistic charged particles in radio cores and jets. Different origins of the observed synchrotron radiation are encoded in radio spectral index $\alpha$, which is defined as $S \propto \nu^{-\alpha}$, where $S$ is the flux density and $\nu$ is the frequency. 
Star-forming regions are optically thin to synchrotron radiation, which yields a steep, characteristic radio spectral index of $\alpha \approx$ 0.8 [@condon92]. Synchrotron emission in AGN is produced in two different ways. Radio core AGN are optically thick enough to absorb synchrotron emission and re-emit, which makes the slope of the synchrotron radiation flatter (“synchrotron self-absorption”), $\alpha \ll 0.8$ [@debruyn76]. In jets, relativistic electrons lose their energy over time while traveling down the length of the jets, and the resulting radio spectral index is steeper (“synchrotron aging”), $\alpha > 0.8$ [e.g., @burch79]. Radio spectral indices have been used to study the nature of radio sources. In particular, the emergence of flat spectrum sources in the sub-mJy regime has been reported by several authors [e.g., @donnelly87; @prandoni06; @randall12], although others have reported no flattening in the mean spectral index [@fomalont91; @ibar09]. Deeper radio observations with $\mu$Jy sensitivity have shown that the fraction of steep spectrum sources increases with decreasing flux density, suggesting the emergence of SFGs at the sub-mJy level [@ibar09; @huynh15; @murphy17], in agreement with the interpretation of the normalized number counts [@owen08; @condon12] and the analysis of the polarization [@rudnick14]. A radio study of sub-millimeter galaxies (SMG) has showed that their radio spectral index distribution is a skewed Gaussian with a peak near $\alpha\sim0.7$ and a tail towards flatter spectrum [@ibar10]. These studies indicate a promising potential for the radio spectral index as a tracer of underlying physical activity in distant galaxies. We show here that obtaining [*correct*]{} measurements of radio spectral indices is critically important in calculating the rest-frame radio power and for understanding the cosmic evolution of the faint radio population. Any uncertainty in radio spectral index translates directly to the uncertainty in derived radio power, and this in turn affects the accuracy of the radio-far infrared (FIR) correlation analysis [@gim15; @delhaize17]. Radio AGNs with jets are often resolved by interferometric observations, and even normal SFGs show spatially resolved structures at arcsecond scales [e.g., @chapman04; @barger17]. In this paper, we present the analysis of radio spectral indices between 1.4 and 5 GHz derived with matched beams, for a large sample of faint radio sources identified from the deep and wide 5 GHz radio observations on the GOODS-North (GN) and -South (GS) fields. We examine the correlations among radio spectral index, radio-FIR correlation, and star formation properties. We also discuss the limitations of radio observations tracing normal SFGs, the importance of correct derivation of radio spectral index, and the constraints provided by radio spectral index to classifying radio SFGs. Throughout this paper we adopt the cosmological parameters, $H_{0}=$ 67.8 km s$^{-1}$ Mpc$^{-1}$, $\Omega_{m}=$ 0.308, and $\Omega_{\Lambda}=$ 0.692 [@pdg18]. Observations \[OBS\] ==================== Radio Observations \[RADIO\] ---------------------------- ### GOODS-North \[RADIO\_GN\] Our observations of the GN field were conducted in February and March of 2011, for a total of 22 hours at 5 GHz in the B-configuration of the VLA under the program code [**10C-225**]{}. 
As summarized in Table \[observations\], we observed two fields with the VLA's Wideband Interferometric Digital Architecture (WIDAR) correlator, which was configured to deliver two 128 MHz sub-bands in full polarization. The sub-bands were further split into 64$\times$2 MHz channels each, and centered at 4896 and 5024 MHz, respectively. The correlator integration time was 3 seconds.

| Field | R.A. (J2000) | Dec. (J2000) | Date | Duration |
|---|---|---|---|---|
| GOODS-N | | | 2011 Feb 28 | 5.5 hrs |
| GOODS-N | | | 2011 Mar 10 | 5.5 hrs |
| GOODS-N | | | 2011 Mar 15 | 5.5 hrs |
| GOODS-N | | | 2011 Mar 20 | 5.5 hrs |
| GOODS-S | 03$^{h}$ 32$^{m}$ 30$^{s}$.00 | $-27^{\circ}\,43'\,45.0''$ | 2012 Dec 16 | 2.5 hrs |
| GOODS-S | 03$^{h}$ 32$^{m}$ 13$^{s}$.33 | $-27^{\circ}\,45'\,52.5''$ | 2012 Dec 23 | 2.5 hrs |
| GOODS-S | 03$^{h}$ 32$^{m}$ 13$^{s}$.33 | $-27^{\circ}\,50'\,07.5''$ | 2012 Dec 31 | 2.5 hrs |
| GOODS-S | 03$^{h}$ 32$^{m}$ 30$^{s}$.00 | $-27^{\circ}\,52'\,15.0''$ | 2013 Jan 01 | 2.5 hrs |
| GOODS-S | 03$^{h}$ 32$^{m}$ 46$^{s}$.67 | $-27^{\circ}\,50'\,07.5''$ | 2013 Jan 05 (1) | 2.5 hrs |
| GOODS-S | 03$^{h}$ 32$^{m}$ 46$^{s}$.67 | $-27^{\circ}\,45'\,52.5''$ | 2013 Jan 05 (2) | 2.5 hrs |

: VLA 5 GHz observations of the GOODS-North and GOODS-South fields. \[observations\]

The calibration and reduction of the VLA data were carried out using the standard data reduction package Astronomical Image Processing System (AIPS). The flux calibrator 3C286 was used for the calibrations of delay, flux density scale, and polarization, while the gain calibrator J1400+6210 was used for the bandpass and gain calibration. The radio quasar J1400+6210 is bright enough (1.72 Jy at 5 GHz) to be used for the bandpass calibration. Imaging of the visibility data was performed using the Common Astronomy Software Applications [CASA, @mcmullin07]. The wide field imaging of each field was carried out using nine facets, each with 4096$\times$4096 pixels and a cell size of 0.35$\arcsec$, cleaning down to 3$\sigma$. The Clark point spread function (PSF) model is adopted, and the Briggs function is used to weight the data with a robust value of $R=1$. The Briggs weighting function is intermediate between natural (lowest noise, poorest resolution) and uniform (highest noise, best resolution) weighting functions, and the robust factor of $R=1$ gives an optimal compromise between sensitivity and resolution. The final mosaic and sensitivity images incorporating the primary beam correction are produced using the AIPS tasks LTESS and STESS, respectively. The final mosaic image has a size of 5120$\times$5120 pixels, centered at \[12$^{h}$ 36$^{m}$ 49$^{s}$.4, $+62^{\circ}\,12'\,50.5''$\] (J2000), with a synthesized beam of 1.47$\arcsec\times$1.42$\arcsec$. The effective central frequency of the image is 4.959 GHz (hereafter 5 GHz) with a total bandwidth of 240 MHz. The final noise is $\sigma=3.5$ $\mu$Jy beam$^{-1}$ in the image center. The survey coverage map for the GN field is shown in panel (A) of Figure \[coverage\].

![image](f1.pdf)

### GOODS-South \[RADIO\_GS\]

The GS field was observed at 5 GHz for a total of 15 hours in the A-configuration of the VLA under the project code [**12B-274**]{}. The coordinates of the six pointing centers and observation dates are listed in Table \[observations\]. The WIDAR correlator was configured to deliver sixteen 128 MHz sub-bands, each with 64$\times$2 MHz channels and full polarization products. The frequency span was from 4488 to 6512 MHz. The correlator integration time was 1 second to minimize the time smearing effect. The observations were executed in six different sessions, each 2.5 hrs long. Data reduction and imaging were performed using CASA.
The flux density scale calibrator 3C48 was used for the calibrations of delay, flux scale, and polarization, while the gain calibrator J0240$-$2309 (2.33Jy at 5 GHz) was used for calibrations of bandpass, phase, and delay. Severe radio-frequency interference (RFI) dominated the last four SPWs (12 to 15), and they are excluded in the analysis. Self-calibration was carried out successfully to improve the overall dynamic range of the image using bright sources ($>$ 1 mJy) in each field. Initial imaging was done in CASA for each field and each SPW exploiting the wide-field imaging with 36 facets that are each 10240$\times$10240 pixels in size and using a cell size of 0.1, down to 3$\sigma$. The Clark PSF and the Briggs weighting function with a robust value of $R=0.8$ are adopted for imaging. The synthesized beam depends on the frequency, and all images are convolved to match the largest beam at the lowest frequency SPW before the final mosaic image is constructed. Using the weights of $w_{i}=(beam\; area)_{new}/(beam\; area)_{old}$, all images were convolved to have beam sizes of 0.98$\times$0.45. The mosaic image of each SPW is produced first using the AIPS tasks LTESS and STESS with primary beam correction. The final band-merged image is produced by averaging the SPW mosaic images using the $1/\sigma^{2}$ weight, where $\sigma$ is an RMS noise of each mosaic. The final band-merged mosaic image is 16384$\times$16384 pixels in size with the central frequency of 5.245 GHz (hereafter 5 GHz) and a total bandwidth of 1.486 GHz. The RMS noise in the center of the mosaic is $\sigma = 3.0\, \mu$Jy beam$^{-1}$, and the coverage map centered on \[03$^{h}$ 32$^{m}$ 30$^{s}$.0, $-$27$^{\circ}$ 48 00\] is shown in panel (B) of Figure \[coverage\]. ### Source Catalogs \[CATALOG\] The 5 GHz sources are extracted from primary-beam corrected images using the AIPS task SAD. Since radio-frequency interference is time-dependent and the primary beam response is not uniform, the final noise distribution is not uniform or symmetric across the mosaic. Therefore, we limit the source search for generating the catalogs to the central regions with up to twice the RMS noise in primary-beam corrected maps, i.e., $7\, \mu$Jy beam$^{-1}$ for the GN field and $6\, \mu$Jy beam$^{-1}$ for the GS fields as shown with inner red contours in Figure \[coverage\]. We also minimized the impact of the effective frequency shift to lower frequency toward the edge of the final image. Since the coverage of the image is different at each SPW due to the frequency-dependent primary beam correction, the effective frequency moves to the lower frequency toward the edge of the frequency-stacked image. We have created a matching sensitivity map to track the frequency-dependent effects in the final mosaic. We also limited our catalog to the more central region reasonably far away from the edges. Sources detected with a peak signal-to-noise ratio (SNR) $>$ 5 are selected for the final catalogs, and the measured flux densities are corrected for bandwidth smearing by setting the AIPS adverb BWSMEAR as the fraction of channel width with respect to the central frequency in the SAD. However, the time averaging effect is not taken into account since its impact on the flux density is small ($<$0.1%) enough to be neglected within our catalog regions [@bridle99]. The final catalogs include 52 & 88 sources in the GN & GS fields covering 109 & 190 arcmin$^{2}$ areas, respectively. These catalogs are shown in Appendix \[CAT\]. 
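Two of the steps described in this subsection, the inverse-variance combination of the per-SPW mosaics and the $5\sigma$ peak-SNR selection against the spatially varying noise, are simple to express in code. The sketch below is purely illustrative: the array names are hypothetical, and it assumes the per-SPW mosaics and RMS maps have already been placed on a common pixel grid, a step not shown here.

```python
import numpy as np

def band_merge(spw_images, spw_rms):
    """Inverse-variance (1/sigma**2) average of per-SPW mosaics.

    spw_images, spw_rms : lists of 2-D arrays on a common pixel grid
    (hypothetical inputs; the real maps also carry WCS and primary-beam
    information that is ignored in this sketch)."""
    images = np.array(spw_images, dtype=float)
    weights = 1.0 / np.array(spw_rms, dtype=float) ** 2
    merged = np.sum(weights * images, axis=0) / np.sum(weights, axis=0)
    merged_rms = 1.0 / np.sqrt(np.sum(weights, axis=0))
    return merged, merged_rms

def select_sources(peak_flux, local_rms, snr_min=5.0):
    """Indices of catalog entries whose peak SNR against the local RMS
    exceeds snr_min, mimicking the 5-sigma cut used for the catalogs."""
    peak_flux = np.asarray(peak_flux, dtype=float)
    local_rms = np.asarray(local_rms, dtype=float)
    return np.where(peak_flux / local_rms > snr_min)[0]
```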
### Comparisons with previous results \[PREVOBS\] There are recent radio continuum observations of both GN & GS fields with comparable or higher sensitivity and at a higher angular resolution, and they offer an interesting and complementary view on the nature of the faint radio source population. @guidetti17 have studied the GN field at 5.5 GHz with an RMS noise of $3\, \mu$Jy beam$^{-1}$ and a synthesized beam size of 0.5, and they reported a total of 94 sources ($\ge5\sigma$) over their 154 arcmin$^{2}$ survey area. This is about 80% larger number of sources over a 50% larger area with a similar flux density sensitivity compared to our survey. At least part of this difference must be due to their 3 times smaller beam (9 times worse surface brightness sensitivity), which can fragment some of the resolved star-forming galaxies and jet sources into multiple components. @guidetti17 also suggested this surface brightness sensitivity effect as the root cause for their unexpectedly large (80%) AGN fraction. Earlier surveys of the GS field by @kellermann08 at 4.9 GHz using the VLA and by @huynh15 5.5 GHz using the Australia Telescope Compact Array were both about a factor of 2 shallower in sensitivity ($\sigma\approx8\, \mu$Jy) and 2-3 times lower in angular resolution ($\theta\approx 4\arcsec$) compared to our survey. @huynh15 reported finding 212 source components over their 0.34 deg$^2$ survey area down to a flux density of $\sim50$ $\mu$Jy ($\ge5\sigma$). @kellermann08 did not report the source count in their 4.9 GHz VLA survey, but @huynh15 reported their data to be consistent because of their similar resolution and sensitivity. The 5 GHz source density derived from these surveys with $\sim3$ times shallower depth is 2.6 times lower than our survey. More recently, @rujopakarn16 have observed the Hubble Ultra Deep Field (HUDF) within the GOODS-South at 6 GHz with an RMS noise of $0.32\, \mu$Jy beam$^{-1}$ at an angular resolution of 0.61$\times$0.31. A direct comparison of the source density is difficult in this case because these authors report two source counts that are not fully reflective of the true source density: (1) a total of 68 “bright" ($\ge8\sigma$) sources within the 61 arcmin$^2$ survey region extending beyond the primary beam; and (2) a total of 11 sources detected at $\ge5\sigma$ among the 13 sources detected by ALMA inside the 40.7 arcmin$^{2}$ ALMA survey area. The former number offers a more useful comparison, and corresponds to about 2.5 times higher source density at 6-8 times better sensitivity compared with our survey. The latter number is strictly a lower limit since it includes only ALMA-detected sources at $z=1-3$. The resulting source density is only 60% of the source density we derive, despite their 10 times better flux density sensitivity. In summary, the source density we derive is consistent with those of the past surveys. A striking trend seen is that the derived source density increases relatively slowly with improved sensitivity. There are potentially important systematic differences in how the catalogs are generated, and these source counts are not corrected for completeness in a consistent way. Nevertheless, the rise in source density with improving depth of the survey is far flatter than the Euclidean case. Along with the improving sensitivity, subsequent observations have also employed higher angular resolution, and this might play an important role in the derived source statistics, as discussed further below in § \[RESOLUTION\]. 
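For reference, the "Euclidean" expectation invoked here is easy to evaluate: in a static Euclidean universe the integral counts scale as $N(>S) \propto S^{-1.5}$, so a survey $k$ times deeper should yield roughly $k^{1.5}$ times more sources per unit area. The sketch below (with illustrative depth ratios) makes that comparison explicit.

```python
def euclidean_gain(depth_ratio, slope=1.5):
    """Expected factor increase in source surface density for a survey that
    is `depth_ratio` times deeper, if N(>S) scales as S**(-slope)."""
    return depth_ratio ** slope

# A survey ~3x deeper would show ~5.2x more sources per unit area in the
# Euclidean case, and ~10x deeper would show ~32x more.
print(euclidean_gain(3.0), euclidean_gain(10.0))
```

Both the survey depth and the observing beam therefore need to be considered together when interpreting the measured source densities.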
This also serves as one of our motivations for using beam-matched data for our spectral index analysis (see § \[ALPHA\]). Multi-wavelength data \[MULTI\] ------------------------------- ### VLA 1.4 GHz \[DATA14\] 1.4 GHz data are needed to calculate the radio spectral index with our 5 GHz data. For the GN field, we use the deep 1.525 GHz (hereafter 1.5 GHz) imaging data obtained by @owen18 with RMS noise of 2.2$\, \mu$Jy beam$^{-1}$ and an angular resolution of 1.6$\times$1.6(FWHM). @owen18 have used different beam sizes (2, 3, 6, and 12) to measure the flux densities of extended sources because those sources were resolved out with the original beam size, which resulted in the prevention of the loss of flux densities. All of our 5 GHz sources have a matching counterpart in the 1.5 GHz source catalog. For the GS field, we use the 1.4 GHz VLA data by @miller13, which has RMS noise of $\sim$6 $\mu$Jy beam$^{-1}$ at the image center with a beam size of 2.8$\times$1.6. Since the beam area of these 1.4 GHz data is about ten times larger than our 5 GHz data and the depth of the 1.4 GHz data is significantly shallower than in the GN field, matching the counterparts to the 5 GHz sources is more complicated. We convolve the 5 GHz images for each field and SPW to yield a beam size of 2.8$\times$1.6 using the AIPS task CONVL, and the final mosaic is produced by summing over all pointings and SPW using the AIPS tasks LTESS and STESS.[^2] The RMS noise of the convolved 5 GHz image is slightly higher, 6.4 $\mu$Jy beam$^{-1}$. We generated the 3$\sigma$ catalog from the convolved image using the AIPS task SAD. For the 38 sources that were not found in this 3$\sigma$ catalog due to increased noise and low completeness at low SNR, we manually performed aperture photometry on the convolved image centered on the source coordinates from the original, full resolution image. A total of 83 sources are identified in the final convolved 5 GHz mosaic image with a beam size of 2.8$\times$1.6, as eight of the sources in the original catalog are now blended into three sources. Matching the 1.4 GHz catalog with this beam-matched 5 GHz data yields 64 counterparts among the 83 sources. A total of 19 sources lack a 1.4 GHz counterpart because the 1.4 GHz data are too shallow (5$\sigma$ $\geq$ 30 $\mu$Jy beam$^{-1}$ at the image center) to detect 5 GHz sources with a flat or inverted spectrum which is a characteristic of some of the radio AGNs (see the panels (D) and (E) of Figure \[alpha\_flux\]). Throughout this paper, we analyze only the GS sources that have a unique 1.4 GHz counterpart to avoid the uncertainty introduced by the upper limits. ### Chandra X-ray Observatory \[XRAY\] We use X-ray data taken from the [*Chandra X-ray Observatory*]{} survey with full band (0.5-7 keV), soft band (0.5-2 keV), and hard band (2-7 keV) catalogs. We make use of 2 Ms observations for the GN field [@xue16] and 7 Ms observations for the GS field [@luo17]. The limiting fluxes for the GN field are $3.5 \times 10^{-17}$, $1.2 \times 10^{-17}$, $5.9 \times 10^{-17}$ erg cm$^{-2}$ s$^{-1}$ at full band, soft band, and hard band respectively. For the GS field, the limiting fluxes are $1.9 \times 10^{-17}$ at full band, $6.4 \times 10^{-18}$ at soft band, and $2.7 \times 10^{-17}$ erg cm$^{-2}$ s$^{-1}$ at hard band. To calculate the X-ray luminosity, we assume a photon index of $\Gamma=1.8$ for X-ray detected radio sources [@tozzi06] but $\Gamma=1.4$ for X-ray undetected radio sources [@luo17]. 
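A minimal sketch of the X-ray flux-to-luminosity conversion implied by these photon indices is given below. It assumes the standard $(1+z)^{\Gamma-2}$ band k-correction for a power-law spectrum and the cosmology adopted at the end of the Introduction; the example call uses the GS full-band limiting flux purely as an illustration.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Cosmology adopted in this paper (H0 = 67.8, Omega_m = 0.308)
cosmo = FlatLambdaCDM(H0=67.8, Om0=0.308)

def xray_luminosity(flux_cgs, z, gamma=1.8):
    """Rest-frame X-ray luminosity [erg/s] from an observed-band flux
    [erg/cm^2/s], using the usual (1+z)**(gamma-2) k-correction for a
    power-law spectrum with photon index `gamma`."""
    dl = cosmo.luminosity_distance(z).to(u.cm).value
    return 4.0 * np.pi * dl**2 * flux_cgs * (1.0 + z) ** (gamma - 2.0)

# Illustrative call: a source at the GS full-band limiting flux
print(xray_luminosity(1.9e-17, z=1.0, gamma=1.8))
```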
The full band X-ray luminosity at \[0.5-7 keV\] is converted to the luminosity at \[0.5-8 keV\] using the relation of $L_{[0.5-8 keV]} = 1.066 \times L_{[0.5-7keV]}$ for the assumed $\Gamma=1.8$ [@xue16]. ### Spitzer Space Telescope \[SPITZER\] We exploit publicly released $Spitzer$ $Space$ $Telescope$ ($Spitzer$) IRAC catalogs of the GN [@wang10] and GS [@damen11] fields. The GN field IRAC catalog has a sensitivity (1$\sigma$) of 0.15$\mu$Jy at 3.6$\mu$m, while the GS field IRAC catalog by the $Spitzer$ IRAC/MUSYC Public Legacy Survey in the Extended Chandra Deep Field-South (ECDFS) has a sensitivity (1$\sigma$) of 0.22$\mu$Jy at 3.6$\mu$m. We make use of the high angular resolutions of our radio observations to find counterparts within the beam sizes, i.e., 1.47 for the GN and 0.98 for the GS fields. ### Herschel Space Observatory \[HERSCHEL\] The comparison FIR data are constructed using the public archival data for the Photodetector Array Camera and Spectrometer (PACS) and the Spectral and Photometric Imaging REceiver (SPIRE) of the $Herschel$ $Space$ $Observatory$[^3]. The PACS photometry data at 70, 100, and 160 $\mu$m are taken from the combination of PACS Evolutionary Probe [@lutz11 PEP] and GOODS-Herschel [@elbaz11] programs described by @magnelli13. The SPIRE 250, 350, and 500 $\mu$m photometry data are taken from the Herschel Multi-tiered Extragalactic Survey (HerMES) DR 3 and 4 [@roseboom10; @magnelli11b; @roseboom12]. We adopt the catalogs extracted using the $Spitzer$ MIPS 24 $\mu$m position priors for the PACS bands by the GOODS-Herschel collaboration[^4]. As for the SPIRE bands, we used the catalogs extracted at the SPIRE 250 $\mu$m source positions (HerMES DR4)[^5]. To identify FIR counterparts to the radio sources, we apply the likelihood ratio technique [@sutherland92]. The search radius adopted is three times the combined positional uncertainties of the radio and Herschel sources. Sources with the reliability of $Rel_{i}>$0.8[^6] are accepted as formal counterparts. We consider an FIR source to be the counterpart to a radio source if it is detected in at least one band in both PACS and SPIRE, with a SNR$>4$ in at least one band. We have compiled the observed 24, 100, 160, 250, 350, and 500 $\mu$m band fluxes of 40 GN and 44 GS sources. The best-fit FIR SED models are identified using a widely used SED fitting code $Le$ $Phare$[^7] [@arnouts99; @ilbert06] with various SED templates for SFGs [@chary01; @dale01; @lagache03] and QSOs [@polletta07]. This analysis yielded a good SED model for 39 GN and 42 GS sources. For the radio sources undetected at FIR or with a poor-fit SED, we calculate IR luminosity with 4$\sigma$ flux limits adopting the average $z=1$ SFG SED template [@kirkpatrick12]. ### Spectroscopic redshifts \[SPECZ\] Spectroscopic redshifts are compiled from the published surveys: GN [@cowie04a; @donley07; @barger08; @wirth15] and GS [@szokoly04; @zheng04; @mignoli05; @ravikumar07; @vanzella08; @popesso09; @straughn09; @balestra10; @silverman10; @cooper12; @kurk13; @lefevre14; @skelton14; @morris15], respectively. From these compilations, we have 45 (out of 52) sources with spectroscopic redshifts for the GN and 55 (out of 64) for the GS field. In particular, all 55 GS sources with a spectroscopic redshift are in the subsample of 64 sources with both 1.4 GHz and 5 GHz photometry used for the spectral index analysis. 
Even though reliable photometric redshifts from well-sampled photometry data exist in both fields, we limit our analysis to only those with a spectroscopic redshift because errors in redshift translate directly to a large scatter and systematic biases in the derived quantities such as the rest frame radio power, radio-FIR correlation, and star formation rate (SFR). A detailed evaluation of the accuracy of the existing photometric redshifts and a quantitative analysis on the magnitude of error introduced by using photometric redshifts using this spectroscopic subsample are presented in Appendix \[ZPHOT\]. Adding those sources with only photometric redshifts to our statistical analysis can in principle expand our sample by up to 16%, but we have elected to remove this major source of scatter in our statistical analyses presented here for now. ### 3D-HST \[3DHST\] We adopt physical parameters such as stellar mass, SFR, and effective radius for our 5 GHz sources that also appear in the 3D-HST [^8] [@brammer12; @skelton14; @momcheva16] database. Stellar mass is estimated by the FAST code [@kriek09] with the @chabrier03 initial mass function, and the @bc03 stellar population synthesis library [@skelton14]. The SFR is computed through the conversion of UV+IR luminosity, where UV luminosity is derived from the rest-frame luminosity at 2800Å, and IR \[8-1000$\mu$m\] luminosity is derived from $Spitzer$ MIPS 24$\mu$m flux density by assuming the log average of @dh02 templates [see @whitaker14]. Effective radius (R$_{eff}$) is the semi-major axis of the ellipse containing one half of the total flux of the best Sérsic model given by GALFIT [@vanderwel12]. The spectroscopic redshifts given in the 3D-HST database are not as complete as our compilation, and we have to match our spectroscopic redshifts with the best redshifts in the 3D-HST database, which ranks them by spectroscopic, grism, or photometric redshift. A comparison of the best 3D-HST redshifts with our spectroscopic redshifts is shown in Figure \[specz\_3dhst\]. We choose the 3D-HST counterparts with best redshifts satisfying $|z_{spec}-z_{best,3D-HST}|/(1+z_{spec}) < 0.05$, which is shown with dashed lines in Figure \[specz\_3dhst\]. Spectroscopic redshifts of the best redshifts in 3D-HST are mostly the same as ours while there are some small to significant offsets in grism and photometric redshifts. Through matching the redshifts, we have 3D-HST counterparts for 39 GN and 45 GS radio sources. ![Comparison of our spectroscopic redshifts and the best redshifts in 3D-HST for the GN (square) and the GS (triangle) fields. The color represents the type of redshift measurements, e.g. spectroscopy (blue), grism (green), and photometry (red). The solid line is the one-to-one line and dashed lines show the selection limits of $\pm 0.05$ in $|z_{spec}-z_{best,3D-HST}|/(1+z_{spec}) < 0.05$. \[specz\_3dhst\]](f2.pdf) ![image](f3.pdf) Selection Function and Rest Frame 5 GHz Radio Power \[P5GHz\] ============================================================= In Figure \[lumz\], we show the selection function of our radio sources with rest-frame 5 GHz radio power as a function of redshift. 
The rest-frame radio power is calculated using the measured spectral index as $$P_{5GHz}=4 \pi d_{L}^{2} S_{5GHz} (1+z)^{\alpha-1} [{\rm W\, Hz}^{-1}],$$ where $d_L$ is the luminosity distance, $S_{5GHz}$ is the measured 5 GHz flux density of the original map, and $\alpha$ is the measured radio spectral index between 1.4 GHz and 5 GHz using the convolved map (see § \[ALPHA\]). The strong positive k-correction associated with radio sources translates to a significant selection bias in favor of flat spectrum sources ($\alpha=0$, dashed line) with lower intrinsic radio power, but such flat spectrum sources are rare in our sample, as shown in this plot (also see Fig. \[alpha\_flux\]). The selection functions of the two fields are similar with comparable mean and median values of 5 GHz radio power and redshifts, and a joint analysis of the combined sample is reasonable as long as the slight difference in the catalog depth is properly taken into account. The majority of the detected sources have rest frame 5 GHz radio power between $10^{22}$ and $10^{24}$ W Hz$^{-1}$, which is the range of radio power associated with intense starburst systems (LIRGs, ULIRGs) and Seyfert nuclei in the local universe. However, gas content and SFR of star forming main sequence (MS) galaxies are known to increase rapidly with increasing redshift by an order of magnitude to $z\ge1$ [e.g., @speagle14; @scoville17], and a large fraction of these galaxies at higher redshifts are likely powered by star formation, as discussed below. Only four sources (two in each field) have a radio power high enough to be classified as “radio-loud" with $P_{5GHz}\ge 10^{25}$ W Hz$^{-1}$ [@miller90]. Radio Spectral Index \[ALPHA\] ============================== Radio spectral index $\alpha$ is a measure of the shape of a radio spectrum characterized as a power-law, $S \sim \nu^{-\alpha}$. We compute the spectral index between 1.4 and 5 GHz using the flux densities derived from the 5 GHz images beam-matched to the 1.4 GHz images as described in Section \[DATA14\]. In principle, the radio spectral index can be estimated using only the 5 GHz data with its wide bandwidth of 1.5 GHz through the multi-frequency synthesis. The algorithm that can produce in-band spectral index calculation for mosaic observations was not available in CASA when the data were being analyzed. The significant changes in the size of both the primary beam and the synthesized beam across the bandwidth make this in-band spectral index calculation challenging, especially away from the pointing center. These difficulties result in the errors of the in-band spectral index which are not competitive with those using the full 1.4-5 GHz spectral baseline. It is empirically shown that the majority of radio sources in a wide range of redshifts show radio spectra that are fit well with a simple power-law [e.g., @klamer06]. In the frequency range between 1.4 and 5 GHz, the contribution by free-free emission is generally negligibly small [@condon92]. ![image](f4.pdf) The distributions of radio spectral index as a function of flux density are shown in Figure \[alpha\_flux\]. Panels (A), (B), and (C) are for the GN field while the panels (D), (E), and (F) are for the GS field. Since the sensitivity for radio spectral index (dotted line) depends on the flux density limit of the second band (dashed lines), it is not uniform as a function of flux density, and this is a common but important feature for all flux-limited surveys. 
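The quantities used throughout the remainder of the paper, the 1.4-5 GHz spectral index, the k-corrected rest-frame radio power, and the flattest index measurable for a given 5 GHz flux density and 1.4 GHz detection limit, can be written compactly. The sketch below adopts the cosmology quoted at the end of the Introduction; the flux densities in the example calls are hypothetical, and the 30 $\mu$Jy value is simply the approximate $5\sigma$ 1.4 GHz depth at the GS image center quoted earlier.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Cosmology adopted at the end of the Introduction
cosmo = FlatLambdaCDM(H0=67.8, Om0=0.308)

def spectral_index(s_low, s_high, nu_low=1.4, nu_high=5.0):
    """alpha defined through S ~ nu**(-alpha), from flux densities at two
    frequencies (same units for both)."""
    return -np.log(s_high / s_low) / np.log(nu_high / nu_low)

def radio_power(s_ujy, z, alpha):
    """Rest-frame radio power [W/Hz] at the observing frequency:
    P = 4 pi d_L^2 S (1+z)**(alpha-1)."""
    dl = cosmo.luminosity_distance(z).to(u.m).value
    s_si = s_ujy * 1e-32                      # uJy -> W m^-2 Hz^-1
    return 4.0 * np.pi * dl**2 * s_si * (1.0 + z) ** (alpha - 1.0)

def alpha_min(s_5ghz, s_lim_1p4, nu_low=1.4, nu_high=5.0):
    """Flattest spectral index still measurable for a 5 GHz source of flux
    density s_5ghz given a 1.4 GHz detection limit s_lim_1p4 (same units);
    flatter or inverted sources drop out of the matched sample."""
    return np.log(s_lim_1p4 / s_5ghz) / np.log(nu_high / nu_low)

alpha = spectral_index(s_low=60.0, s_high=30.0)        # ~0.55
print(alpha, radio_power(30.0, z=1.5, alpha=alpha))
print(alpha_min(30.0, 30.0), alpha_min(150.0, 30.0))   # 0.0 and ~-1.3
```

The last function is the origin of the non-uniform completeness in spectral index discussed next.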
Specifically, this non-uniform completeness limits our study to a narrower range of radio spectral indices at fainter flux densities. For our 5 GHz selected sample analyzed here, the depth of the existing 1.4 GHz data restricts the observable range of radio spectral index. We can see this effect clearly in panel (D), where the range of the radio spectral indices is limited to $\alpha > 0$ even at $S_{5GHz}=30\, \mu$Jy (10$\sigma$), and this can potentially lead to missing sources with inverted spectra at flux densities of $S_{5GHz} < 30\, \mu$Jy. In practice, however, few inverted spectrum sources with $S_{5GHz} < 35\, \mu$Jy ($10\sigma$) are found in the GN field (panel A), and the actual impact of this potential bias may be limited. The uncertainties in the derived radio spectral indices are mainly attributed to the larger uncertainties of flux densities at 5 GHz for the GN field and flux densities at 1.4 GHz for the GS field. The radio spectral index distribution in the GS field is broader and smoother than that in the GN field, and this can be attributed to the shallow depth of the 1.4 GHz data and the noisier 5 GHz photometry as a result of the convolution with a larger Gaussian kernel. Another source of the uncertainty is the wide bandwidth of the VLA. The effective frequency of each flux density measurement depends on the bandwidth and the spectral shape of the source, and this could lead to a significant offset of the effective frequency from the instrumental frequency. For the steepest spectrum source with $\alpha=$1.64 in the GS field, we estimate that this effect can lead to a maximum frequency offset of 0.1 GHz and a maximum deviation of 0.02 in the derived radio spectral index. Thus, we conclude that this effect has only a minor impact on our radio spectral index calculation. When these systematic effects are taken into account, the distributions of radio spectral indices in these two fields are consistent with each other. Panel (A) in Figure \[alpha\_flux\] shows a clustering of radio sources at $\alpha \sim$0.75 and $S_{5GHz} \le 150\, \mu$Jy, leading to a prominent peak in the histogram in panel (C). The peak of the radio spectral index histogram for the GS field (panel F) occurs at the same $\alpha$ value, but the clustering is not as pronounced, possibly diluted and broadened by the larger uncertainties in the measured radio spectral indices (see panels B & E). This peak in the $\alpha$ of steep spectrum radio sources at $S_{5GHz} \le 150\, \mu$Jy has not been reported by earlier studies [e.g., @donnelly87; @fomalont91], but their small sample size (30 in @donnelly87 and 41 in @fomalont91) likely contributed to their poor statistics. A more recent study of a larger sample by @huynh15, who measured radio spectral indices of 5.5 GHz selected sources above $S_{5.5GHz} \gtrsim 50\mu$Jy in the Extended Chandra Deep Field-South (ECDFS) using the 1.4 GHz catalog of @miller13, did report a spectral index distribution with a clear peak near $\alpha\sim0.7$, as long expected of the star forming galaxy population (see the discussion below). We note that @huynh15 computed their radio spectral index without matching the beam sizes (about a factor of 2.2 in diameter), and this might be a source of an important systematic error – see further discussions in § \[DISC\_SI\]. A natural explanation for the peak near $\alpha \sim 0.7$ is the contribution by the SFG population. 
Synchrotron emission is optically thin when it is produced by the shocks associated with supernovae in SFGs [@condon92; @seymour08]. The flattening or upturn in the number counts of radio sources seen around $S_{20cm} \leq 100-200\, \mu$Jy [@owen08; @condon12] is explained by the emergence of this population of SFGs at faint flux density levels, exceeding those of the radio-loud AGN population that is dominant at flux densities $\geq 1\,$mJy. The increase of fractional polarization and the change of slope in the polarized number count at polarized flux densities $\leq$1 mJy also imply the increasing contribution of SFGs [@rudnick14]. The broad radio spectral index distributions for the GS and GN fields shown in Figure \[alpha\_flux\] suggests the existence of both steep spectrum ($\alpha=0.5-1.0$) and flatter or inverted spectrum ($\alpha < 0.5$) sources at $S_{5GHz} < 150\, \mu$Jy, supporting the conclusions of the more recent analyses indicating that the faint $\mu$Jy radio population consist of both SFGs and radio-quiet AGN [@padovani09; @bonzini13; @rudnick14]. A detailed study of a small sample of 14 local SFGs by @klein18 has shown that there is also some scatter in the observed radio spectral index in the GHz range due to a varying degree of free-free emission and opacity effects. What our study further indicates is that a larger sample with higher quality radio spectral index measurements are needed to characterize the relative contribution by these two populations. Star Formation Properties of Radio Sources \[RADIO\_SFMS\] ========================================================== In the previous section, we have shown and discussed the distributions of radio spectral indices derived between 1.4 and 5 GHz from the beam matched images. In this section, we investigate how the radio spectral index correlates with star formation properties by utilizing the SFRs and stellar masses derived by the 3D-HST project [@skelton14; @momcheva16]. ![image](f5.pdf) $\Delta_{SFR}$ as a measure of SF activity \[DELTASFR\] ------------------------------------------------------- The distributions of SFR and stellar mass of radio sources in GN (squares) and GS (triangles) are shown in four redshift bins in Figure \[sfr\_mass\_z\]. The dashed lines indicate the SFR-stellar mass relation of the star forming MS at a mean redshift in each panel, and the shaded regions represent dispersions of SFR-stellar mass relation at the MS with $log_{10} SFR - log_{10} SFR(MS) = \pm 0.2$ [@speagle14]. As @speagle14 and others noted, the MS evolves strongly with redshift, and it is not clear whether the SFRs measured at different redshifts can be compared directly in a meaningful way. A more insightful measure might be the level of SF activity normalized by that of the MS at the same redshift. Therefore, we define “$\Delta_{SFR}$", the logarithm of the ratio of SFR with respect to that of the MS, as $$\Delta_{SFR} \equiv log_{10} SFR - log_{10} SFR(MS),$$ where $SFR(MS)$ is the $SFR$ for the star forming MS galaxy at a given stellar mass and redshift calculated using Equation (28) by @speagle14. ![Offset of SFR from the MS ($\Delta_{SFR}$) and stellar mass with a color code according to radio spectral index in GN (squares) and GS (triangles) fields. Small gray points are the 3D-HST galaxies without radio counterparts in GN and GS fields for a comparison. The dashed lines of $\Delta_{SFR}= \pm 0.2$ indicate the selection of SFGs. 
SBs are sources above a line of $\Delta_{SFR} > 0.2$ while quiescent galaxies are those below a line of $\Delta_{SFR} < -0.2$. The distribution of radio spectral index show that SFG+SB have mainly steep spectra while quiescent galaxies have flatter spectra even though both have wide distributions. \[sfms\_selection\]](f6.pdf) Following @speagle14, we define “SFGs" as galaxies with $-0.2 \le \Delta_{SFR} \le 0.2$, “starbursts (SBs)" as those with $\Delta_{SFR} > 0.2$, and “quiescent galaxies" as those with $\Delta_{SFR} < -0.2$. In total, we have 49 SBs (58%), 10 SFGs (12%), and 25 quiescent galaxies (30%). The dominance of the SB population among the $\mu$Jy radio population identified by one of the deepest surveys thus far is somewhat surprising, but this reflects the selection bias driven by the survey depth as discussed further below (also see § \[RADIO\_OBS\]). In Figure \[sfms\_selection\], we show the distribution of $\Delta_{SFR}$ as a function of stellar mass, color-coded by radio spectral index, $\alpha$. Quiescent galaxies detected in radio continuum are on average more massive than the SFG+SB while the SFG+SB show a wider range of stellar masses as shown in Figure \[sfms\_selection\]. The median stellar masses are 3.8$\times$10$^{10} M_{\bigodot}$ for SFG+SB and 9.3$\times$10$^{10} M_{\bigodot}$ for quiescent galaxies, respectively. The two-sided Kolmogorov-Smirnov test for two samples in R [@rcite] indicates that stellar mass distributions in both populations are substantially different with a p-value of $< 4.3 \times 10^{-5}$. This significant difference in mass distributions is consistent with the mass quenching scenario for quiescent galaxies [e.g., @kauffmann03]. The majority of our radio sources (58%) show intense star formation activity with $\Delta_{SFR} > 0.2$ while only 12% of radio sources fall within the range of MS SFGs with $-0.2 < \Delta_{SFR} < 0.2$. For comparison, we show the 3D-HST galaxies without radio counterparts (light gray) in Figure \[sfms\_selection\]. In the same stellar mass range as the radio sources (log $M_{*} \ge 9.08$), the 3D-HST galaxies undetected in radio are classified into SBs (25%), SFGs (44%), and quiescent galaxies (31%). The fraction of quiescent galaxies among source undetected in radio is the same as radio detected sources. Therefore, the main difference is in the fraction of SBs. In all cases, the radio detected galaxies trace the high stellar mass envelope for all types of galaxies, independent of $\Delta_{SFR}$, and this is a natural consequence of a flux-limited survey as demonstrated by our selection function shown in Figure \[lumz\]. Since our radio observations trace the synchrotron emission from star formation and AGN activities, these statistics imply that our radio survey is not deep enough to detect the star formation activity in the star forming MS galaxies in the full range of redshift probed, even with $\mu$Jy sensitivity. We discuss this finding in more detail in § \[RADIO\_OBS\]. Star Formation Activity and Radio Spectral Index \[SFRSI\] ---------------------------------------------------------- An apparent correlation between radio spectral index and star formation property ($\Delta_{SFR}$) is hinted in the color-coded data for radio spectral index in Figure \[sfms\_selection\]. 
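The $\Delta_{SFR}$-based classification used in this and the following subsections reduces to a few lines of code once a main-sequence parameterization is adopted. The sketch below uses the time-dependent fit of @speagle14 in the commonly quoted form of their Equation (28), $\log SFR_{MS} = (0.84 - 0.026\,t)\log M_{*} - (6.51 - 0.11\,t)$ with $t$ the age of the Universe in Gyr; the coefficients are quoted here from memory and should be checked against that paper, and the input values in the example are hypothetical. The two-sample Kolmogorov-Smirnov test quoted in the text is available in Python as `scipy.stats.ks_2samp`.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM

# Cosmology adopted in this paper
cosmo = FlatLambdaCDM(H0=67.8, Om0=0.308)

def delta_sfr(log_mstar, sfr, z):
    """Offset from the star-forming main sequence,
    Delta_SFR = log10(SFR) - log10(SFR_MS), with SFR_MS from the
    Speagle et al. (2014) parameterization assumed in the lead-in."""
    t = cosmo.age(z).value                      # age of the Universe [Gyr]
    log_sfr_ms = (0.84 - 0.026 * t) * log_mstar - (6.51 - 0.11 * t)
    return np.log10(sfr) - log_sfr_ms

def classify(dsfr):
    """Split at Delta_SFR = +/-0.2: starburst / main-sequence SFG / quiescent."""
    return np.where(dsfr > 0.2, "SB", np.where(dsfr < -0.2, "Q", "SFG"))

# Hypothetical sources (log M*, SFR [Msun/yr], z)
dsfr = delta_sfr(np.array([10.5, 11.0, 9.8]),
                 np.array([80.0, 5.0, 20.0]),
                 np.array([1.2, 0.7, 2.0]))
print(classify(dsfr))
# scipy.stats.ks_2samp(alpha_sfg_sb, alpha_quiescent) would then give the
# two-sample KS p-value quoted in the text for the spectral index split.
```

With $\Delta_{SFR}$ and $\alpha$ in hand for each source, the apparent correlation noted above can be examined directly.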
Steep spectrum sources with $\alpha > 0.5$ (green and blue) appear predominantly in the $\Delta_{SFR} > -0.2$ region while sources with a flat or inverted spectrum ($\alpha < 0.5$, yellow and orange) appear mostly in the region below $\Delta_{SFR} = -0.2$. This might be an indication that steep spectrum sources are abundant among SFG+SB galaxies with $\Delta_{SFR} > -0.2$ while few steep spectrum sources are in the quiescent galaxy region with $\Delta_{SFR} < -0.2$. This apparent trend is examined more directly in Figure \[alpha\_delsfr\] by plotting the radio spectral index as a function of $\Delta_{SFR}$. What is apparent now is that the SFG+SB galaxies are more tightly clustered around $\alpha \sim 0.8$, while the quiescent galaxies ($\Delta_{SFR} < -0.2$) are distributed more uniformly, spanning a nearly twice as large range in spectral index $\alpha$ – the SFG+SB galaxies have a tighter distribution with a higher mean (0.72$\pm$0.05) than the quiescent galaxies (0.22$\pm$0.11). The histograms in the panel (B) of Figure \[alpha\_delsfr\] show these trends clearly with different peak positions – the SFG+SB galaxies (blue) have a peak at $\alpha \approx 0.8$, but the quiescent galaxies (red) have a peak at $\alpha \approx 0.13$. The two-sided Kolmogorov-Smirnov test for the two samples in R indicates that the null hypothesis of their radio spectral index distributions drawn from the same parent population is rejected with a p-value of 0.0015. This result is consistent with the expectation that star formation yields steep radio spectra with $\alpha \sim 0.8$ through optically thin synchrotron emission produced by supernova shocks [@condon92] while AGN are associated with flat or inverted radio spectra with $\alpha \ll 0.8$ through synchrotron self-absorption [e.g., @debruyn76]. It is tempting to speculate that there is a weak trend of decreasing $\alpha$ with decreasing $\Delta_{SFR}$ if the handful of sources with $\alpha \ge1$ in the upper left corner of Figure \[alpha\_delsfr\] are ignored. These ultra-steep spectrum sources are generally jet-dominated AGNs, and one could separate them out morphologically, but that kind of handpicking is not generally possible for a study without the necessary spatial information.[^9] The large spread in $\alpha$ at a given value of $\Delta_{SFR}$ also makes such a generalization difficult to trust. What seems to be more certain is that this spread is real and essentially independent of star forming activity $\Delta_{SFR}$, and this has an important consequence for understanding and modeling the nature of faint radio population and their evolution, as we discuss further below. ![Radio spectral index distribution as a function of $\Delta_{SFR}$. The panel (A) shows that the radio spectral index distribution of SFG+SB ($\Delta_{SFR} > -0.2$) is more tightly clustered around $\alpha \sim 0.8$, in comparison with the quiescent galaxies ($\Delta_{SFR} < -0.2$), which are distributed more uniformly and widely in spectral index $\alpha$. These trends are easily seen in the histograms of SFG+SB (blue) and quiescent galaxies (red) in panel (B). The Kolmogorov-Smirnov test indicates that the radio spectral index distributions of the two populations are different from each other with a p-value of 0.0015. 
\[alpha\_delsfr\]](f7.pdf) Radio-FIR Correlation of Radio Sources \[QFIR\_RADIO\] ====================================================== The radio-FIR correlation is one of the robust indicators of star formation and black hole activities [@helou85; @condon92; @yun01; @bell03]. In particular, the radio-FIR correlation of SFGs is a tight correlation with a less than 0.3 dex scatter over five orders of magnitudes in luminosity [@yun01], and this obviously indicates that a strong coupling exists between dust-reprocessed emission of ultraviolet radiation from massive young stars and synchrotron radiation by cosmic rays accelerated in type II supernovae [@condon92]. In this section, we examine the radio-FIR correlation of the $\mu$Jy radio sources identified in the GN & GS fields as a function of their star formation properties and their measured radio spectral index. ![image](f8.pdf) The rest-frame radio-FIR correlation parameter, $q_{FIR}$ is defined as $$q_{FIR} = log_{10}\left( {{L_{FIR} [W] } \over {3.75 \times 10^{12} Hz}} \right) - log_{10} P_{1.4GHz} [W Hz^{-1} ] ,$$ where $L_{FIR}$ is a rest-frame FIR luminosity from 40 to 120 $\mu$m [@helou85; @yun01]. The radio-FIR correlations of radio sources as a function of redshift are shown in Figure \[rfc\_p5\] for GN (squares) and GS (triangles), color-coded by $\Delta_{SFR}$. The overwhelming majority of the SFG+SB population (86%) follow the local radio-FIR correlation for SFGs [@yun01], and galaxies near the star-forming MS ($-0.5\le \Delta_{SFR}\le +0.5$) nearly exclusively fall within the grey band shown in the left panel of Figure \[rfc\_p5\]. On the other hand, only $\sim$30% of the quiescent galaxies have $q_{FIR}$ of local SFGs, and their radio continuum emission likely has an origin other than star formation. Most of the quiescent galaxies (76%=19/25) are not detected in the far-IR, and they are marked with a down arrow in Figure \[rfc\_p5\]. A statistical analysis of the radio-FIR correlation for each subpopulation distinguished by its star formation properties shows a clear difference between the SFG+SB galaxies and the quiescent galaxies. We have applied the Kaplan-Meier estimator for $q_{FIR}$ of the two subpopulations with the subroutine [**cenfit**]{} of the statistical package NADA[^10] in R [@rcite]. This analysis shows that the SFG+SB galaxies have a median $q_{FIR}$ value of 2.26$\pm$0.09, in good agreement with the local canonical value $<q_{FIR}>\approx 2.3$ [@yun01], while the quiescent galaxies have a median value of 1.10$\pm$0.10. The difference in these median values is quite substantial with a significance of $\sim 8.8 \sigma_{c}$ (the combined uncertainty for both populations is $\sigma_{c}=0.13$). To quantify the difference of radio-FIR correlation distributions between SFG+SB and quiescent galaxies further, we perform the Log-rank test with left-censored data using the [**cendiff**]{} function in the NADA in R [@rcite]. This test indicates that the SFG+SB galaxies and the quiescent galaxies have entirely different distributions of $q_{FIR}$ with a p-value of $< 2 \times 10^{-6}$. These statistical tests confirm the results of previous studies that the radio-FIR correlation is a powerful tracer of star formation activity [@yun01; @bell03]. An obvious trend seen in the left panel of Figure \[rfc\_p5\] is the decreasing $q_{FIR}$ with increasing 5 GHz radio power. 
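A minimal sketch of the $q_{FIR}$ calculation defined above is given below; the rest-frame 1.4 GHz power is k-corrected with the measured spectral index, the cosmology is the one adopted earlier, and the source values in the example are hypothetical.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=67.8, Om0=0.308)

def p_1p4ghz(s_1p4_ujy, z, alpha):
    """Rest-frame 1.4 GHz power [W/Hz] from the observed 1.4 GHz flux
    density [uJy], k-corrected as (1+z)**(alpha-1)."""
    dl = cosmo.luminosity_distance(z).to(u.m).value
    return 4.0 * np.pi * dl**2 * s_1p4_ujy * 1e-32 * (1.0 + z) ** (alpha - 1.0)

def q_fir(l_fir_watt, p_1p4):
    """q_FIR = log10(L_FIR[W] / 3.75e12 Hz) - log10(P_1.4GHz [W/Hz])."""
    return np.log10(l_fir_watt / 3.75e12) - np.log10(p_1p4)

# Hypothetical source: L_FIR of roughly 1e12 L_sun, 60 uJy at 1.4 GHz, z = 1.5
print(q_fir(3.8e38, p_1p4ghz(60.0, 1.5, alpha=0.8)))
```

The trend of decreasing $q_{FIR}$ with increasing radio power noted above is then straightforward to examine with these two functions.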
A straightforward interpretation is that radio AGN contribution is increasing both fractionally and in absolute value for the most radio luminous objects at $P_{5GHz}\ge 10^{24}$ W Hz$^{-1}$. A somewhat surprising fact is that the majority of these “radio-excess" objects with $P_{5GHz}\ge 10^{24}$ W Hz$^{-1}$ are also intensely starbursting galaxies with $\Delta_{SFR}\gtrsim 1$. Similar objects found in the local Universe are mostly Seyfert AGNs associated with a nuclear starburst, but they are exceedingly rare, accounting for only 1% of the $IRAS$ 2 Jy Sample studied by @yun01. One might conclude a sharp increase (up to $\sim$5%) of such AGN+SB hybrid objects at $z>1$, but our sample size is too small to be highly quantitative. Furthermore, survey depth and sample definition might have a strong influence in such an inference as even our $\mu$Jy sensitivity is not sufficient to probe the MS star forming galaxies (see below § \[RADIO\_OBS\]). Indeed, both the AGN fraction and the radio-excess fraction reported by the deeper survey of the COSMOS field by @smolcic17b are much higher, $\sim$20%, at the $S_{1.4GHz}=50\, \mu$Jy and rising up to $\sim$50% at $S_{1.4GHz}=100\, \mu$Jy (see their Figure 12). A similar result was also reported by a study with a different AGN identification using the VLBA observations on the same field, where the AGN fraction is $>$40$-$55% at 100 $< S_{1.4GHz} < $ 500 $\mu$Jy [@herrera-ruiz18]. The dependence of radio-FIR correlation on radio spectral index is examined on the right panel of Figure \[rfc\_p5\], and the quiescent galaxies with $\Delta_{SFR}\le$ -0.2 show systematically lower $q_{FIR}$ (on average by 0.6-0.8) compared with the SFG+SB population, nearly independent of radio spectral index $\alpha$. An in-depth analysis of the similarities and differences among these different subpopulations is discussed in our next paper (Paper II), but this is another indication that quiescent galaxies are indeed a distinct population in their radio and IR properties as well. It is interesting that the extreme steep spectrum quiescent galaxies identified in Figure \[alpha\_delsfr\] and discussed in § \[SFRSI\] are [*not*]{} extreme outliers and instead nearly follow the normal radio-FIR correlation. A real outlier in the distribution is again the radio-excess SBs with $\Delta_{SFR}\gtrsim 1$ discussed above, and their radio spectral index is typically around $\alpha\sim +0.9$, indistinguishable from the bulk of the normal SFGs and SBs. Intense starbursts associated with massive galaxies in the local universe, such as luminous infrared galaxies (LIRGs) and ultraluminous infrared galaxies (ULIRGs), are associated with high free-free opacity, leading to the flattening of radio spectrum [e.g., @klein18] and even obscuring a radio AGN altogether at longer wavelengths (e.g., Mrk 231). Therefore, the distribution and geometry of starburst activity in these $z>1$ luminous radio-excess SBs are somehow different from local examples. And they certainly cannot be identified from their luminosity and radio spectral index alone. Future higher resolution observations that can resolve the star-forming structures and kinematics are required to yield deeper insight on these sources. 
Discussion \[DISCUSSION\]
=========================

Importance of Survey Sensitivity \[RADIO\_OBS\]
-----------------------------------------------

What makes deep radio continuum imaging attractive as a tool for studying galaxy evolution is the high angular resolution of an interferometer like the VLA, which delivers spatial information at much better than 1$\arcsec$, free from the fundamental limits of source confusion that restrict the usefulness of current infrared facilities such as [*Herschel*]{}. Advances in sensitivity through increased bandwidth and collecting area also enable us to probe the star forming galaxy and AGN populations at cosmological distances directly. One of the main goals of this VLA study of the GOODS cosmology fields is to analyze the nature of the faintest radio continuum sources detectable with the current technology and establish technical specifications for future surveys for galaxy evolution using facilities such as MeerKAT, ASKAP, and eventually the Square Kilometre Array.

The plot of rest-frame 5 GHz radio power versus spectroscopic redshift shown in Figure \[lumz\] and the analysis of their star formation properties discussed in § \[RADIO\_SFMS\] clearly demonstrate that our deep 5 GHz continuum data indeed probe star forming galaxies out to $z\sim 3$. On the other hand, our detailed examination of their specific star formation rate shown in Figure \[sfms\_selection\] finds that the fraction of SBs (58%) in our radio sources is more than twice the fraction among the parent general galaxy population in the 3D-HST survey. Since there are no reasons for radio-selected SFGs to be fundamentally different from optical or UV selected SFGs, this statistical difference is likely the result of the combined effects of our survey depth and the strong evolution of cosmic star formation rate density [see review by @madau14].

![Detectability of MS SFGs given the sensitivity of our radio observations. We show the expected 5 GHz flux density of galaxies with a given SFR and stellar mass as a function of redshift, for SFGs with the SFR of the MS (solid lines) and $5 \times$SFR of the MS (dashed lines), for stellar masses of $10^{10} M_{\odot}$ (blue), $10^{11} M_{\odot}$ (green), and $10^{12} M_{\odot}$ (red). The survey limits ($5\sigma$) of our radio observations are indicated by the horizontal lines, i.e., 15$\,\mu$Jy for the GS field. As examples, we mark the maximum redshifts for detecting M82-like (red diamond) and Arp220-like (red star) galaxies at the survey limits. \[detection\]](f9.pdf)

To explore this further, we show the calculated 5 GHz radio flux density of SFGs with the SFR of the MS (solid lines) and $5 \times$SFR (dashed lines) for stellar masses of $10^{10} M_{\odot}$ (blue), $10^{11} M_{\odot}$ (green), and $10^{12} M_{\odot}$ (red) in Figure \[detection\]. We assume that SFR scales with 1.4 GHz radio power following the radio-total IR correlation with $q_{TIR}=2.64$ [@murphy11] and a single average radio spectral index of +0.8 (but see the discussion on potential bias below). In general, angular resolution and source size are important considerations for survey sensitivity.
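The flux density curves in Figure \[detection\] follow from this chain of assumptions. A minimal sketch is given below; it additionally assumes the @murphy11 IR calibration $SFR = 3.88\times10^{-44}\,L_{TIR}\,[{\rm erg\,s^{-1}}]$ (quoted here from memory, so it should be verified before quantitative use) and a pure power-law radio spectrum, and all numerical inputs are illustrative.

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=67.8, Om0=0.308)

def s5ghz_from_sfr(sfr, z, q_tir=2.64, alpha=0.8):
    """Predicted observed-frame 5 GHz flux density [uJy] for a galaxy with
    star formation rate `sfr` [Msun/yr], via the radio-total IR correlation.

    Assumes SFR = 3.88e-44 * L_TIR[erg/s] (Murphy et al. 2011 calibration as
    recalled here) and a single power-law radio spectrum of index alpha."""
    l_tir = sfr / 3.88e-44 * 1e-7                    # L_TIR in W
    p_1p4 = l_tir / 3.75e12 / 10.0**q_tir            # rest-frame P_1.4 [W/Hz]
    dl = cosmo.luminosity_distance(z).to(u.m).value
    s = (p_1p4 * (5.0 / 1.4)**(-alpha) * (1.0 + z)**(1.0 - alpha)
         / (4.0 * np.pi * dl**2))
    return s / 1e-32                                  # W m^-2 Hz^-1 -> uJy

# An SFR = 10 Msun/yr main-sequence galaxy at z = 2.5 comes out near 0.14 uJy,
# i.e. a 5-sigma detection needs an rms depth of ~28 nJy, as quoted in the text.
print(s5ghz_from_sfr(10.0, 2.5))
```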
Here, we make a simplifying assumption that most sources detected in a deep survey like this are at high redshift and are unresolved or marginally resolved [@owen08; @murphy17; @owen18].[^11] At our 15$\, \mu$Jy ($5\sigma$) survey limit for the GS field (black horizontal line), the maximum observable redshifts for star forming MS galaxies (solid lines) are $z$=0.13 for $10^{10} M_{\odot}$ (solid blue), $z$=0.32 for $10^{11} M_{\odot}$ (solid green), and $z$=2.55 for $10^{12} M_{\odot}$ (solid red). SFGs with $5\times$SFR of the MS can be detected out to $z$=0.41 for $10^{10} M_{\odot}$ (dashed blue), $z$=2.19 for $10^{11} M_{\odot}$ (dashed green), and $z >$3 for $10^{12} M_{\odot}$ (dashed red). In terms of well-known local SFGs, we can detect an M82-like galaxy out to $z$=0.34 and an Arp220-like galaxy out to $z$=1.63. Therefore, even with the $\mu$Jy sensitivity we achieved in these two GOODS fields, we can probe a main sequence SFG with a stellar mass of $10^{11} M_{\odot}$ only out to $z\sim$0.3, and our survey is strongly biased to ULIRG-like starbursts and AGN-host galaxies at $z>1$.

This same plot also demonstrates that directly probing the evolution of the star forming MS galaxies will require a [*much*]{} deeper survey. To probe a MS SFG with $SFR=10\, M_{\odot}$ yr$^{-1}$ at the Cosmic Noon ($z=2.5$) at $5\sigma$, a 5 GHz radio survey needs to reach a survey sensitivity of 28 nJy with the Next Generation VLA or the Square Kilometre Array. This required sensitivity is $\sim$11.5 times deeper than the existing deepest 5 GHz continuum survey of the Hubble Ultra Deep Field by @rujopakarn16 and [*more than 100 times deeper*]{} than our own surveys presented here.

Importance of Angular Resolution \[RESOLUTION\]
-----------------------------------------------

In the previous section, we discussed the importance of sensitivity in probing star forming galaxies at cosmological distances and the requirement for future surveys to improve the sensitivity by more than an order of magnitude to probe the evolution of the main sequence SFGs. However, another surprising outcome of our deep VLA 5 GHz surveys is that simply obtaining deeper data does not by itself guarantee probing much deeper into the luminosity function. As discussed in § \[PREVOBS\], the comparison of the past and recent deep surveys seems to suggest that the rise in source density is [*apparently*]{} much flatter than the Euclidean case. Obviously this is not an entirely fair and rigorous comparison, and the situation is quite a bit more complex.

![Comparison of the measured flux densities of the radio sources in the GS field with those reported by previous studies at different angular resolution. Those by @kellermann08 and @huynh15 with $\sim3$ times larger beams are on average $\sim$30 percent larger. The higher resolution survey by @rujopakarn16 with a 0.61$\arcsec\times$0.31$\arcsec$ beam has only one detected source in common [the 6 GHz source flux densities are actually reported by @dunlop17], and it agrees well with ours. The dotted line is the unity ratio line to guide the eye. \[GSCompare\]](f10.pdf)

A potentially important experimental parameter here is angular resolution. Both statistical [e.g., @windhorst90; @morrison10] and direct imaging [e.g., @chapman04] studies have shown that faint radio sources have an intrinsic size of $1\arcsec-2\arcsec$.
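The surface-brightness penalty for over-resolving such sources can be estimated from the standard Gaussian relation: for a circular Gaussian source of FWHM $\theta_{s}$ observed with a Gaussian beam $\theta_{maj}\times\theta_{min}$, the fraction of the total flux density recovered at the peak is $\theta_{maj}\theta_{min}/\sqrt{(\theta_{maj}^{2}+\theta_{s}^{2})(\theta_{min}^{2}+\theta_{s}^{2})}$. A minimal sketch, using the beams of the surveys discussed here and an illustrative $1\arcsec$ source:

```python
import numpy as np

def peak_fraction(theta_maj, theta_min, theta_src):
    """Fraction of the total flux density recovered at the peak for a
    circular Gaussian source of FWHM theta_src observed with a Gaussian
    beam theta_maj x theta_min (all in arcsec)."""
    return (theta_maj * theta_min /
            np.sqrt((theta_maj**2 + theta_src**2) *
                    (theta_min**2 + theta_src**2)))

# Beams of the surveys discussed in this paper, applied to a 1 arcsec source
for beam in [(1.47, 1.42), (0.98, 0.45), (0.61, 0.31)]:
    print(beam, peak_fraction(beam[0], beam[1], 1.0))
```

These numbers make clear how quickly the peak response drops once the beam becomes smaller than the source.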
Resolving sources with an angular resolution higher than the intrinsic size can negatively impact deep surveys of star forming galaxies in two ways: (1) by fragmenting individual radio sources into multiple components, especially in the low SNR regime; and (2) by losing surface brightness sensitivity and hence extended emission. The former is a well-known phenomenon for nearly all deep radio surveys, and most previous studies have produced catalogs of “source components" as well as integrated source catalogs. In analyzing the $0.5\arcsec$ imaging data of the GN field, @guidetti17 identified the loss of surface brightness sensitivity and their bias toward compact sources as the primary cause for their extraordinarily high AGN fraction. The only modest (a factor of $\sim3$) increase in the source density reported by @rujopakarn16 in their ultra-deep imaging of the GS field, despite nearly 10 times better sensitivity than our survey, is likely driven by the loss of flux density and surface brightness sensitivity resulting from their use of a very high angular resolution (0.61$\arcsec\times$0.31$\arcsec$) beam. We explore the impact of angular resolution on flux recovery further by comparing the measured flux densities of the faint radio sources in the GS field reported by different surveys with varying angular resolution in Figure \[GSCompare\]. The flux densities reported by @kellermann08 at 4.85 GHz and by @huynh15 at 5.5 GHz were both measured using a $\theta\approx4\arcsec$ beam, and these flux densities are systematically higher when compared with our measurements obtained with a 1.5$\arcsec$ beam. The average ratio of the @kellermann08 flux density to our flux density is 1.34, with a median ratio of 1.14. Similarly, the average and median ratios of the @huynh15 flux density to our flux density are 1.26 and 1.18, respectively. A small correction for the intrinsic spectral index (both surveys were observed at slightly different frequencies from our effective center frequency of 5.25 GHz) is neglected here, as it is much smaller than the $\sim$30% excess of these low angular resolution measurements over ours. These measured differences are also much larger than the expected absolute calibration uncertainties ($\lesssim$10%) associated with the standard flux density bootstrapping calibration. The comparison with the higher resolution (0.61$\arcsec\times$0.31$\arcsec$) imaging by @rujopakarn16 does not provide much new insight as there is only one source in common. In summary, observing with an angular resolution smaller than the expected intrinsic radio source size of $1\arcsec-2\arcsec$ can lead to a significant systematic bias in deep radio surveys. Carefully accounting for this resolution effect and surface brightness sensitivity is an important consideration for all future ultra-deep surveys with nJy sensitivity. Importance of Accurate Radio Spectral Index \[DISC\_SI\] -------------------------------------------------------- Obtaining accurate radio spectral indices is an important step in studying the radio-FIR correlation and its evolution over cosmic time, because computing the rest-frame radio-FIR correlation requires a correction with a “$\log_{10} \left[(1+z)^{1-\alpha} \right]$" dependence on the radio spectral index, associated with the $k$-correction for the observed radio power. This has the largest impact at the highest redshifts, where the evidence for any evolution in the radio-FIR correlation is expected to be the most pronounced.
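The size of this $k$-correction term is easy to quantify: if the adopted spectral index is off by $\Delta\alpha$, the rest-frame radio power is off by $\Delta\alpha\,\log_{10}(1+z)$ dex and $q_{FIR}$ shifts by the same amount in the opposite direction. A minimal sketch (ours, using only the dependence quoted above):

```python
# Minimal sketch (ours): systematic shift in q_FIR caused by adopting a
# spectral index that is wrong by dalpha, using only the log10[(1+z)^(1-alpha)]
# k-correction dependence quoted in the text.
import numpy as np

def delta_q_fir(z, dalpha):
    """Shift in q_FIR (dex) when alpha is over-estimated by dalpha."""
    # Over-estimating alpha over-estimates the rest-frame radio power by
    # dalpha*log10(1+z) dex, which lowers q_FIR by the same amount.
    return -dalpha * np.log10(1.0 + z)

for z in (0.5, 1.0, 2.0, 3.0):
    print(z, round(delta_q_fir(z, 0.3), 3))
# z=0.5: -0.053, z=1: -0.090, z=2: -0.143, z=3: -0.181 dex
```

Since the measured $\alpha$ distribution spans roughly $-0.5$ to $1.4$ (§ \[ALPHA\]), individual sources assigned a single $\alpha=0.8$ can easily be off by several tenths, so systematic shifts of 0.1-0.2 dex at $z\gtrsim2$ are to be expected; this is quantified with the actual survey data below.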
Many previous studies of the faint radio source population have applied only a partial correction for this spectral index effect, largely because of practical constraints, but the magnitude of the resulting error may have been under-appreciated. Ideally, one should obtain observations at two different frequencies with matched beams and depths to derive a correct radio spectral index. However, conducting observations in [*two*]{} frequency bands can be prohibitively expensive in telescope time, especially for deep surveys that require tens to hundreds of hours of integration time in each band. Instead, a common practice is to take advantage of an existing survey at another frequency, as we have done using the existing 1.4 GHz surveys by @miller13 and by @owen18. If, however, the complementary archival data are not readily available in raw format, as is often the case, the radio spectral index has to be computed without the beam correction [e.g., @ivison10a; @bourne11; @magnelli15; @delhaize17]. Alternatively, a number of other studies have resorted to adopting a single average radio spectral index of 0.7-0.8 instead [e.g., @appleton04; @ibar08; @murphy09a; @sargent10; @ivison10b; @mao11]. Because even SFGs at $z\ge1$ are resolved at $\sim1\arcsec$ scales, ignoring this resolution effect can lead to significant systematic errors in computing the total radio power and the radio spectral index. Similarly, the radio spectral index distribution is intrinsically broad as discussed in § \[ALPHA\], and adopting a single value of $\alpha$ can introduce significant errors in the derived source properties. Here, we analyze both of these issues quantitatively using our GN and GS deep survey data with and without the appropriate corrections. ![image](f11.pdf) ### Importance of Beam-matching for the Radio Spectral Index Calculation \[NON\_ALPHA\] A measured radio spectral index is a direct indicator of the primary radiation mechanism for the observed radio power. In this section, we compare the radio spectral indices estimated without matching beam sizes ($\alpha_{non}$) with those estimated with matched beams ($\alpha_{beam}$), to quantify the importance of the beam effect. The ratio of beam areas is mostly between 1.2 and 1.9 for the GN sources, while the GS sources have an average beam area ratio of 10.2, requiring a much larger correction. The impact of ignoring the beam size difference is clearly shown in the plot of the deviation of the radio spectral index $\alpha_{non}$ from $\alpha_{beam}$ ($\Delta \alpha \equiv \alpha_{non} - \alpha_{beam}$) as a function of total 5 GHz flux density in Figure \[alpha\_diff\_flux\]. In the GN field (left panel), where the synthesized beams of the 5 GHz and 1.4 GHz data are closely matched, the change is small for most objects as expected. A few sources still show a large deviation with a large positive $\Delta \alpha$ value, indicating that extended or blended sources can lead to large errors in derived spectral indices even when the beam size difference is relatively small. Otherwise the observed scatter is consistent with the expected increase in the noise of the 5 GHz data from the larger photometry aperture. The scatter in the derived spectral index is much larger in the GS field (right panel), and this reflects the impact of a much larger beam difference. As in the GN field, the source distribution is biased to large positive $\Delta \alpha$ values, with a mean of $0.054$, especially among $S_{5GHz}\ge$ 1 mJy sources that are usually associated with extended radio jet sources.
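The sign of this bias follows directly from how the two-point spectral index is formed. If the higher-resolution 5 GHz image recovers only a fraction $f$ of the flux seen within the larger 1.4 GHz beam, the derived index steepens by $\Delta\alpha = \log_{10}(1/f)/\log_{10}(5/1.4)$. A toy illustration (ours):

```python
# Toy illustration (ours): bias in the 1.4-5 GHz spectral index when the
# higher-resolution 5 GHz image misses a fraction of the extended flux.
import numpy as np

def dalpha_from_missed_flux(f_recovered, nu1=1.4, nu2=5.0):
    """Spectral index bias (alpha_non - alpha_beam) if only a fraction
    f_recovered of the 5 GHz flux is recovered before beam matching."""
    return np.log10(1.0 / f_recovered) / np.log10(nu2 / nu1)

for f in (0.95, 0.90, 0.80):
    print(f, round(dalpha_from_missed_flux(f), 3))
# 0.95 -> 0.040, 0.90 -> 0.083, 0.80 -> 0.175
```

The mean offset of 0.054 in the GS field therefore corresponds to missing only $\sim$5-10% of the 5 GHz flux on average before beam matching, with much larger losses expected for the extended $\ge$1 mJy jet sources.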
This analysis clearly demonstrates that a small but non-negligible fraction of radio sources are resolved at 1 scale by our 5 GHz beam, and beam-matching is critically important in deriving a correct radio spectral index. This analysis also indicates that our deep 5 GHz data might suffer from loss of flux density due to spatial filtering, even after the beams are matched by smoothing. These combined effects lead to a systematic bias to a steeper (more positive) spectral index and smearing of the overall spectral index distribution, as seen in Figure \[alpha\_flux\] and discussed in section § \[ALPHA\]. Indeed, all interferometric observations are subject to loss of flux density, and matching the resolution to source size is the best that can be done without obtaining additional data. ![image](f12.pdf) ### Impact of Spectral Index on Radio-FIR Correlation The rest-frame radio-FIR correlation depends on the radio spectral index through the k-correction for the rest-frame radio power, and there are two common ways which incorrect radio spectral index has impacted the radio-FIR correlation analysis in the literature: (a) not matching beams; and (b) adopting a single value of $\alpha$. Here, we demonstrate how both of these errors in radio spectral index can lead to systematic deviations in the derived radio-FIR correlation parameters $q_{FIR}$ using our data. The deviation of radio-FIR correlation is defined as $\Delta q_{FIR} \equiv q_{FIR} (\alpha_{non}) - q_{FIR} (\alpha_{beam})$ \[for the unmatched beam case\], and they are shown as a function of redshift, color-coded by $\Delta_{SFR}$, in Figure \[qdiff\]. As discussed in the previous section, the net effect of not correcting for the beam size difference is over-estimating radio spectral indices (for this study, because of the higher angular resolution of the 5 GHz data), which in turn leads to a larger k-correction and an over-estimation of the rest frame radio power. As shown on the left panel of Figure \[qdiff\], the overall scatter in $\Delta q_{FIR}$ resulting from not matching the beams is not large, less than 0.1 in dex. However, all sources with a significant deviation in $q_{FIR}$ are [*nearly uniformly and systematically towards a lower value with a mean scatter of -0.019, and this bias is larger in magnitude at a higher redshift*]{} because of a larger k-correction. The common practice of adopting a single “average" value (e.g., $\alpha=0.8$) leads to an even greater scatter and a stronger bias in $q_{FIR}$ than the unmatched beam case, as shown on the right panel of Figure \[qdiff\]. The magnitude of the scatter in $\Delta q_{FIR}$ is now nearly 0.2 in dex, approaching the [*total*]{} intrinsic scatter in the observed radio-FIR correlation for the local SFGs [@yun01]. In addition, $\Delta q_{FIR}$ is heavily biased towards the negative values with a mean of -0.061 (and growing with redshift), as is the case for the unmatched beam. Both of these trends are the direct results of the large and asymmetric spread in the measured radio spectral index distribution shown in Figure \[alpha\_flux\]. The fact that both of these common errors in radio spectral index can lead to a significant scatter and a strong bias in the derived $q_{FIR}$ is a serious concern for the study of the faint radio source population in general and the study of radio-FIR correlation specifically. 
The magnitude of the error grows systematically with redshift and is increasingly biased to a lower value of $q_{FIR}$, and this has an important consequence for the evaluation of possible evolution of the radio-FIR correlation. We will discuss this effect in the context of radio-FIR correlation evolution in Paper II. Conclusions \[CONCLUSION\] ========================== We reported the first results from our deep and wide VLA 5 GHz surveys of the GN and GS fields with a resolution and sensitivity of $\theta=1.47\arcsec\times1.42\arcsec$ & $\sigma=3.5\, \mu$Jy beam$^{-1}$ and $\theta=0.98\arcsec \times0.45\arcsec$ & $\sigma=3.0\, \mu$Jy beam$^{-1}$, respectively. The central deep cosmology fields with HST and other multi-wavelength data are covered with a nearly uniform sensitivity and resolution, and a total of 52 & 88 sources are identified at $\ge5\sigma$ significance in the 109 & 190 arcmin$^{2}$ survey areas, respectively. We have carefully derived their radio spectral indices by utilizing the existing 1.4 GHz images and catalogs by @owen18 and by @miller13, and we have examined the radio spectral index distribution and the radio-FIR correlation using only a subset of 84 sources with a reliable spectroscopic redshift to minimize introducing additional scatter. Some of the main results from our analyses of these data include: 1. The radio spectral index is measured from beam-matched images at 1.4 & 5 GHz, and its distribution shows a clustering of faint radio sources with $S_{5GHz} \lesssim 150 \mu$Jy around the steep radio spectral index of $\alpha \sim$0.8, which has not been seen in previous studies. The associated peak in the GN field is more distinct than in the GS field, where the distribution is more smeared out by higher noise. The overall spectral index distribution derived is quite broad, ranging over $-0.5 \le \alpha \le 1.4$, as many earlier studies have reported. 2. The star formation activity is characterized by the distance from the “star formation main sequence" [@speagle14], taking into account the strong evolution of SFR with redshift. The majority of faint radio sources are identified as SBs (58%) while only 12% are identified as star forming MS galaxies with $|\Delta_{SFR}| \le 0.2$. The remaining 30% are quiescent galaxies with $\Delta_{SFR} \le -0.2$. This high frequency of SBs is traced to the relatively poor sensitivity of even this deep continuum survey to normal MS SFGs at $z\ge 0.5$, and [*future surveys with up to 100 times better sensitivity ($\sigma_{5GHz} \lesssim 30$ nJy) are needed in order to trace the evolution of the star forming MS at the Cosmic Noon ($z=2.5$).*]{} Our comparison of flux density measurements and source densities at different angular resolutions supports the $\sim1\arcsec$ extent of the intrinsic radio source size reported by previous studies [e.g., @windhorst90; @chapman04; @morrison10], and future ultra-deep surveys should carefully consider resolution effects, such as surface brightness sensitivity, as well. 3. The SFG+SB population shows a significantly tighter distribution of spectral index than the quiescent galaxies, as shown in Figure \[alpha\_delsfr\], suggesting a systematically different origin for their radio emission. The overwhelming majority of the SFG+SB population (86%) follow the local radio-FIR correlation for SFGs [@yun01] with a median $q_{FIR}$ value of $2.26\pm0.09$.
Only $\sim$30% of quiescent galaxies follow the same trend, with a median $q_{FIR}$ value of $1.10\pm0.10$ – most of the quiescent galaxies (76%) are not detected in any of the $Herschel$ far-IR bands. The fraction of radio-excess objects with $q_{FIR} \le 1.6$ increases with increasing 5 GHz radio power, especially for objects at $z\ge1$ with $P_{5GHz}\ge 10^{24}$ W Hz$^{-1}$, and the majority of these objects are intense starburst galaxies with $\Delta_{SFR}\gtrsim1$. This may indicate a sharp rise in the AGN+SB hybrid population at these redshifts, as suggested by previous studies. 4. Determining and applying correct radio spectral indices is important for deriving accurate radio powers and analyzing the radio-FIR correlation. Using our own survey data, we demonstrate that the common practice of not matching the beams carefully can lead to a significantly and systematically biased estimate of $\alpha$ and an over-estimation of the radio power for high redshift sources. More importantly, as shown in Figure \[qdiff\], the widely used practice of adopting a single “characteristic" value of the spectral index ($\alpha \approx 0.7-0.8$) leads to a much greater scatter, matching or exceeding the intrinsic scatter seen in the local population, and also a strong systematic bias towards lower $q_{FIR}$ values, resulting from the broad width and the asymmetry of the intrinsic radio spectral index distribution. Lastly, analyzing our data using the photometric redshifts from the 3D-HST project leads to an additional scatter of 0.112 dex in the derived radio-FIR correlation – see Appendix \[ZPHOT\]. The resulting scatter is nearly symmetric, unlike the errors in the spectral index discussed above, and analyzing a much larger sample with high quality photometric redshifts might be acceptable for future studies requiring much better statistics. We are grateful to Ryan Cybulski, Stéphane Arnouts, and Olivier Ilbert for the use of Le Phare, to Katherine E. Whitaker for help in using the 3D-HST data, and to Daniel Q. Wang for a discussion of X-ray AGN and HMXBs. We also thank Urvashi Rau for a discussion about the radio imaging and Ken Kellermann for a valuable discussion. Hansung B. Gim extends special thanks to the NRAO employees for their hospitality when he was visiting NRAO at Socorro, NM, and for the valuable help offered through the NRAO helpdesk. We thank the anonymous referees for helping us improve this paper.\ A. Spectroscopic Redshifts versus Photometric Redshifts \[ZPHOT\] ================================================================= As discussed in § \[SPECZ\], we limit our analysis to the subsample of GN and GS radio sources with a spectroscopic redshift because we aim to remove any additional and possibly systematic noise introduced by adopting photometric redshifts, at the expense of reducing the total sample size by up to 16%. As shown in Figure \[qdiff\_photoz\], the photometric redshifts reported by the 3D-HST project, derived using the well-sampled and deep UV-to-NIR photometry available in these fields, are quite good in general, with a few catastrophic outliers. When these redshift errors are propagated into the derivation of $q_{FIR}$ as shown on the right panel, the magnitude of the additional scatter introduced by using photometric redshifts is 0.112 dex. This is about 50% of the intrinsic scatter measured among the local sample of IR-selected SFGs by @yun01 and thus is substantial in magnitude.
Fortunately, the redshift errors and the resulting changes in $\Delta q_{FIR}$ appear random rather than systematic, and using photometric redshifts might be acceptable in future studies if the analysis requires a much larger sample size for improved statistics. ![image](f13.pdf) B. Catalog of 5 GHz flux densities and spectral indices of our radio sources \[CAT\] ==================================================================================== The final radio source catalog is presented in Table \[tab:catalog\]. It includes all 52 GN and 88 GS sources cataloged from the images with the original beam sizes. The 5 GHz flux densities listed in Table \[tab:catalog\] are measured with the original beam sizes, but the spectral index is derived from the beam-matched catalogs as described in § \[DATA14\]. Eight GS radio sources cataloged with the original beam sizes are merged into three sources in the image with the beam size matched to that of the 1.4 GHz image (refer to § \[DATA14\]). The positions of the three merged sources (GS-15, GS-44 and GS-73) are taken from the beam-matched catalog, but their 5 GHz flux densities are measured from the image with the original beam size. We also list the eight GS sources below the merged sources as GS-15a, -15b, -15c, GS-44a, -44b, -44c, GS-73a, and -73b. The merged sources do not have Gaussian-like shapes in the image with the original beam size, so their flux densities are poorly measured by the AIPS tasks SAD or JMFIT, which rely on 2D Gaussian fitting. For this reason, the flux densities of the three merged sources are measured with the AIPS task TVSTAT, which is appropriate for measuring the flux density of irregularly shaped sources. The flux densities measured with TVSTAT are in general larger than the sum of the flux densities of the individual components, because TVSTAT also includes the flux density of the regions between the individual components. The data columns of Table \[tab:catalog\] are summarized as follows: (1) Source ID (ID), (2) Right Ascension (RA J2000), in units of \[hour, minute, second\], (3) uncertainty of RA, in units of seconds, (4) Declination (DEC J2000), in units of \[degree, arcminute, arcsecond\], (5) uncertainty of DEC, in units of arcseconds, (6) peak flux density (S$_{peak}$) and its uncertainty, in units of $\mu$Jy beam$^{-1}$, (7) integrated flux density (S$_{int}$) and its uncertainty, in units of $\mu$Jy, and (8) radio spectral index ($\alpha$) and its uncertainty.
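For reference, the spectral index in column (8) is a two-point index between 1.4 and 5 GHz measured from beam-matched images (see the notes to Table \[tab:catalog\]). A minimal sketch of that computation follows, with a standard first-order error propagation; the flux densities in the example are purely illustrative, and the actual measurement procedure is the one described in § \[DATA14\].

```python
# Minimal sketch (ours): two-point radio spectral index between 1.4 and 5 GHz,
# with S_nu ~ nu^(-alpha) so that alpha ~ +0.8 corresponds to a steep spectrum,
# and a standard first-order error propagation (illustrative only).
import numpy as np

def two_point_alpha(s14, e14, s5, e5, nu1=1.4, nu2=5.0):
    """Return (alpha, sigma_alpha) from flux densities at 1.4 and 5 GHz."""
    lognu = np.log(nu2 / nu1)
    alpha = np.log(s14 / s5) / lognu
    sigma = np.sqrt((e14 / s14) ** 2 + (e5 / s5) ** 2) / lognu
    return alpha, sigma

# Example with illustrative (hypothetical) beam-matched flux densities in uJy:
print(two_point_alpha(s14=100.0, e14=8.0, s5=40.0, e5=4.0))   # ~ (0.72, 0.10)
```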
[rccccccc]{} \ & [**RA J2000**]{} & [**eRA**]{} & [**DEC J2000**]{} & [**eDEC**]{} & [**S$_{peak}$**]{} & [**S$_{int}$**]{}$^{10}$ & [**$\alpha$**]{}$^{11}$\ & \[h m s\] & \[s\] & \[$^{\circ}$   \] & \[\] & \[$\mu$Jy beam$^{-1}$ \] & \[$\mu$Jy\] &\ [[**  – continued from previous page**]{}]{}\ & [**RA J2000**]{} & [**eRA**]{} & [**DEC J2000**]{} & [**eDEC**]{} & [**S$_{peak}$**]{} & [**S$_{int}$**]{}$^{10}$ & [**$\alpha$**]{}$^{11}$\ & \[h m s\] & \[s\] & \[$^{\circ}$   \] & \[\] & \[$\mu$Jy beam$^{-1}$ \] & \[$\mu$Jy\] &\ \ GS-01 & 3 31 59.619 & 0.034 & -27 47 32.87 & 0.07 & 27.8 $\pm$4.9 & 27.8 $\pm$ 4.9 & 0.265 $\pm$ 0.138\ GS-02 & 3 31 59.843 & 0.011 & -27 45 40.88 & 0.02 & 96.2 $\pm$ 5.2 & 96.2 $\pm$ 5.2 & 0.727 $\pm$ 0.051\ GS-03 & 3 32 1.547 & 0.006 & -27 46 47.84 & 0.01 & 550.4 $\pm$ 4.0 & 9338.2 $\pm$ 78.7 & 0.903 $\pm$ 0.001\ GS-04 & 3 32 3.667 & 0.015 & -27 46 3.98 & 0.03 & 63.8 $\pm$ 4.1 & 66.3 $\pm$ 7.3 & 0.189 $\pm$ 0.061\ GS-05 & 3 32 6.446 & 0.054 & -27 47 28.96 & 0.08 & 18.2 $\pm$ 3.5 & 25.1 $\pm$ 7.4 & 0.901 $\pm$ 0.083\ GS-06 & 3 32 8.538 & 0.042 & -27 46 48.55 & 0.06 & 26.7 $\pm$ 3.2 & 55.8 $\pm$ 9.3 & 1.088 $\pm$ 0.044\ GS-07 & 3 32 8.673 & 0.000 & -27 47 34.68 & 0.00 & 4030.0 $\pm$ 3.0 & 4030.0 $\pm$ 3.0 & -0.521 $\pm$ 0.002\ GS-08 & 3 32 9.716 & 0.003 & -27 42 48.43 & 0.01 & 329.5 $\pm$ 4.8 & 329.5 $\pm$ 4.8 & -0.168 $\pm$ 0.019\ GS-09 & 3 32 10.734 & 0.060 & -27 48 7.49 & 0.08 & 19.0 $\pm$ 3.0 & 41.9 $\pm$ 9.0 & 0.408 $\pm$ 0.086\ GS-10 & 3 32 10.797 & 0.008 & -27 46 28.11 & 0.01 & 92.5 $\pm$ 3.2 & 99.2 $\pm$ 5.8 & 0.518 $\pm$ 0.028\ GS-11 & 3 32 10.923 & 0.001 & -27 44 15.26 & 0.00 & 1589.5 $\pm$ 4.0 & 1740.9 $\pm$ 7.0 & 0.449 $\pm$ 0.003\ GS-12 & 3 32 11.501 & 0.017 & -27 48 15.90 & 0.04 & 39.8 $\pm$ 3.1 & 51.4 $\pm$ 6.3 & 0.108 $\pm$ 0.081\ GS-13 & 3 32 11.532 & 0.014 & -27 47 13.31 & 0.02 & 57.5 $\pm$ 3.1 & 72.9 $\pm$ 6.3 & 0.889 $\pm$ 0.033\ GS-14 & 3 32 11.615 & 0.048 & -27 50 27.54 & 0.09 & 16.2 $\pm$ 3.2 & 16.2 $\pm$ 3.2 & $<$ 0.347\ GS-15 & 3 32 13.104 & 0.020 & -27 43 50.95 & 0.21 & & 368.1 $\pm$ 28.5 & 1.312 $\pm$ 0.022\ 15a & 3 32 13.047 & 0.095 & -27 43 50.60 & 0.09 & 25.6 $\pm$ 3.3 & 159.2 $\pm$ 23.2 &\ 15b & 3 32 13.115 & 0.056 & -27 43 51.63 & 0.05 & 32.5 $\pm$ 3.3 & 90.6 $\pm$ 12.2 &\ 15c & 3 32 13.139 & 0.029 & -27 43 50.62 & 0.04 & 42.7 $\pm$ 3.4 & 105.5 $\pm$ 11.1 &\ GS-16 & 3 32 13.247 & 0.033 & -27 42 41.31 & 0.06 & 30.0 $\pm$ 4.3 & 30.0 $\pm$ 4.3 & 0.751 $\pm$ 0.097\ GS-17 & 3 32 13.490 & 0.008 & -27 49 53.11 & 0.02 & 87.3 $\pm$ 3.0 & 103.4 $\pm$ 5.9 & -0.604 $\pm$ 0.052\ GS-18 & 3 32 13.898 & 0.013 & -27 50 0.88 & 0.02 & 56.4 $\pm$ 3.1 & 56.4 $\pm$ 3.1 & $<$ -0.483\ GS-19 & 3 32 14.164 & 0.051 & -27 49 10.53 & 0.08 & 17.9 $\pm$ 2.9 & 33.8 $\pm$ 7.8 & 0.959 $\pm$ 0.070\ GS-20 & 3 32 14.213 & 0.053 & -27 46 34.89 & 0.08 & 16.6 $\pm$ 3.0 & 24.4 $\pm$ 6.7 & $<$ 0.338\ GS-21 & 3 32 14.992 & 0.033 & -27 42 25.49 & 0.07 & 24.8 $\pm$ 4.2 & 24.8 $\pm$ 4.2 & $<$ 0.238\ GS-22 & 3 32 15.267 & 0.053 & -27 50 19.76 & 0.12 & 15.0 $\pm$ 2.9 & 32.0 $\pm$ 8.5 & $<$ -0.143\ GS-23 & 3 32 15.338 & 0.043 & -27 50 37.72 & 0.09 & 16.4 $\pm$ 3.0 & 20.9 $\pm$ 6.1 & 0.349 $\pm$ 0.114\ GS-24 & 3 32 17.157 & 0.019 & -27 43 3.70 & 0.04 & 40.2 $\pm$ 3.6 & 40.2 $\pm$ 3.6 & 0.461 $\pm$ 0.108\ GS-25 & 3 32 17.183 & 0.032 & -27 52 21.10 & 0.05 & 32.0 $\pm$ 3.3 & 54.1 $\pm$ 8.1 & 0.452 $\pm$ 0.059\ GS-26 & 3 32 18.023 & 0.002 & -27 47 18.77 & 0.00 & 375.9 $\pm$ 3.0 & 384.9 $\pm$ 5.2 & 0.220 $\pm$ 0.009\ GS-27 & 3 32 18.563 & 0.044 & -27 51 34.82 & 0.07 & 18.2 $\pm$ 3.1 & 22.6 
$\pm$ 6.1 & $<$ 0.048\ GS-28 & 3 32 19.052 & 0.048 & -27 52 14.99 & 0.09 & 18.2 $\pm$ 3.1 & 32.4 $\pm$ 8.0 & 0.737 $\pm$ 0.115\ GS-29 & 3 32 19.310 & 0.019 & -27 52 19.52 & 0.04 & 37.7 $\pm$ 3.2 & 44.4 $\pm$ 6.2 & -0.033 $\pm$ 0.103\ GS-30 & 3 32 19.316 & 0.003 & -27 54 6.58 & 0.00 & 352.4 $\pm$ 4.3 & 2432.7 $\pm$ 60.0 & 0.923 $\pm$ 0.007\ GS-31 & 3 32 19.514 & 0.012 & -27 52 17.87 & 0.02 & 63.0 $\pm$ 3.2 & 69.3 $\pm$ 6.0 & 0.693 $\pm$ 0.039\ GS-32 & 3 32 19.817 & 0.012 & -27 41 23.10 & 0.02 & 83.1 $\pm$ 4.6 & 83.1 $\pm$ 4.6 & 0.594 $\pm$ 0.047\ GS-33 & 3 32 21.285 & 0.016 & -27 44 35.90 & 0.03 & 43.6 $\pm$ 2.9 & 43.6 $\pm$ 2.9 & 1.102 $\pm$ 0.042\ GS-34 & 3 32 22.159 & 0.058 & -27 49 36.76 & 0.09 & 14.5 $\pm$ 2.9 & 23.3 $\pm$ 6.9 & 0.673 $\pm$ 0.114\ GS-35 & 3 32 22.281 & 0.032 & -27 48 4.83 & 0.10 & 15.5 $\pm$ 3.0 & 15.5 $\pm$ 3.0 & 0.713 $\pm$ 0.162\ GS-36 & 3 32 22.514 & 0.017 & -27 48 4.99 & 0.03 & 38.0 $\pm$ 3.0 & 38.0 $\pm$ 3.0 & 0.343 $\pm$ 0.095\ GS-37 & 3 32 22.597 & 0.028 & -27 44 26.11 & 0.04 & 30.3 $\pm$ 2.9 & 41.5 $\pm$ 6.1 & 0.809 $\pm$ 0.056\ GS-38 & 3 32 22.723 & 0.037 & -27 41 26.79 & 0.07 & 28.5 $\pm$ 4.1 & 44.8 $\pm$ 9.7 & 0.095 $\pm$ 0.112\ GS-39 & 3 32 24.262 & 0.039 & -27 41 26.81 & 0.06 & 31.9 $\pm$ 4.0 & 47.9 $\pm$ 9.1 & $<$ -0.859\ GS-40 & 3 32 24.670 & 0.045 & -27 53 34.37 & 0.09 & 19.5 $\pm$ 3.5 & 24.4 $\pm$ 7.1 & 0.895 $\pm$ 0.108\ GS-41 & 3 32 25.174 & 0.051 & -27 54 50.31 & 0.09 & 24.1 $\pm$ 4.6 & 30.1 $\pm$ 9.1 & 0.795 $\pm$ 0.086\ GS-42 & 3 32 25.180 & 0.035 & -27 42 19.15 & 0.06 & 23.1 $\pm$ 3.4 & 27.3 $\pm$ 6.6 & 1.347 $\pm$ 0.193\ GS-43 & 3 32 26.769 & 0.037 & -27 41 45.98 & 0.08 & 23.9 $\pm$ 3.6 & 36.9 $\pm$ 8.4 & $<$ 0.084\ GS-44 & 3 32 26.974 & 0.001 & -27 41 7.16 & 0.01 & & 5390.7 $\pm$ 33.0 & 0.958 $\pm$ 0.002\ 44a & 3 32 26.953 & 0.001 & -27 41 7.88 & 0.00 & 1069.0 $\pm$ 4.0 & 3613.0 $\pm$ 17.0 &\ 44b & 3 32 27.011 & 0.001 & -27 41 5.44 & 0.00 & 1079.0 $\pm$ 4.0 & 1290.0 $\pm$ 8.0 &\ 44c & 3 32 27.060 & 0.044 & -27 41 3.69 & 0.03 & 77.6 $\pm$ 3.9 & 463.6 $\pm$ 27.2 &\ GS-45 & 3 32 27.018 & 0.072 & -27 42 18.66 & 0.14 & 16.4 $\pm$ 3.2 & 30.1 $\pm$ 8.6 & $<$ -0.020\ GS-46 & 3 32 27.728 & 0.031 & -27 50 41.24 & 0.05 & 18.9 $\pm$ 2.9 & 18.9 $\pm$ 2.9 & 1.311 $\pm$ 0.177\ GS-47 & 3 32 28.002 & 0.024 & -27 46 39.65 & 0.04 & 30.0 $\pm$ 2.9 & 39.5 $\pm$ 6.1 & 0.592 $\pm$ 0.060\ GS-48 & 3 32 28.425 & 0.037 & -27 43 44.85 & 0.08 & 15.1 $\pm$ 2.9 & 15.1 $\pm$ 2.9 & $<$ 0.740\ GS-49 & 3 32 28.513 & 0.030 & -27 46 58.48 & 0.06 & 22.9 $\pm$ 3.0 & 22.9 $\pm$ 3.0 & 0.864 $\pm$ 0.098\ GS-50 & 3 32 28.742 & 0.008 & -27 46 20.60 & 0.01 & 94.7 $\pm$ 2.9 & 127.8 $\pm$ 6.1 & 0.534 $\pm$ 0.022\ GS-51 & 3 32 28.826 & 0.005 & -27 43 55.94 & 0.01 & 127.5 $\pm$ 2.8 & 244.2 $\pm$ 8.8 & 1.554 $\pm$ 0.027\ GS-52 & 3 32 28.886 & 0.026 & -27 41 29.76 & 0.04 & 38.6 $\pm$ 3.9 & 38.6 $\pm$ 3.9 & $<$ -0.464\ GS-53 & 3 32 29.876 & 0.036 & -27 44 25.26 & 0.14 & 28.2 $\pm$ 2.5 & 226.1 $\pm$ 22.7 & 1.099 $\pm$ 0.025\ GS-54 & 3 32 29.986 & 0.101 & -27 44 5.39 & 0.14 & 15.6 $\pm$ 2.6 & 71.7 $\pm$ 14.2 & 1.140 $\pm$ 0.056\ GS-55 & 3 32 31.489 & 0.055 & -27 46 23.51 & 0.09 & 15.4 $\pm$ 2.8 & 27.4 $\pm$ 7.3 & 1.067 $\pm$ 0.082\ GS-56 & 3 32 31.546 & 0.008 & -27 50 29.00 & 0.01 & 89.8 $\pm$ 2.9 & 110.9 $\pm$ 5.8 & -0.578 $\pm$ 0.075\ GS-57 & 3 32 33.007 & 0.033 & -27 46 6.64 & 0.07 & 16.1 $\pm$ 2.9 & 16.1 $\pm$ 2.9 & $<$ 0.597\ GS-58 & 3 32 33.446 & 0.057 & -27 52 28.55 & 0.07 & 19.0 $\pm$ 2.9 & 38.4 $\pm$ 8.3 & 0.981 $\pm$ 0.062\ GS-59 & 3 32 36.185 & 0.053 & -27 49 32.17 & 0.08 & 15.1 $\pm$ 
2.9 & 20.3 $\pm$ 6.2 & 1.105 $\pm$ 0.107\ GS-60 & 3 32 37.734 & 0.030 & -27 50 0.71 & 0.05 & 28.3 $\pm$ 2.9 & 38.0 $\pm$ 6.1 & 0.908 $\pm$ 0.084\ GS-61 & 3 32 37.768 & 0.027 & -27 52 12.63 & 0.05 & 29.5 $\pm$ 3.1 & 36.6 $\pm$ 6.2 & 0.631 $\pm$ 0.061\ GS-62 & 3 32 37.890 & 0.069 & -27 53 17.86 & 0.15 & 17.1 $\pm$ 3.4 & 30.5 $\pm$ 8.8 & $<$ 0.277\ GS-63 & 3 32 38.791 & 0.033 & -27 44 49.28 & 0.05 & 22.4 $\pm$ 2.9 & 26.4 $\pm$ 5.5 & 0.633 $\pm$ 0.103\ GS-64 & 3 32 38.838 & 0.076 & -27 49 56.60 & 0.07 & 15.0 $\pm$ 2.8 & 28.0 $\pm$ 7.5 & 0.136 $\pm$ 0.093\ GS-65 & 3 32 39.193 & 0.053 & -27 53 57.94 & 0.10 & 22.4 $\pm$ 3.8 & 48.5 $\pm$ 11.5 & 0.384 $\pm$ 0.081\ GS-66 & 3 32 39.488 & 0.024 & -27 53 1.87 & 0.04 & 40.7 $\pm$ 3.4 & 62.1 $\pm$ 7.8 & 0.607 $\pm$ 0.049\ GS-67 & 3 32 43.320 & 0.034 & -27 46 47.01 & 0.06 & 19.4 $\pm$ 2.9 & 19.4 $\pm$ 2.9 & $<$ 0.256\ GS-68 & 3 32 43.542 & 0.045 & -27 54 55.05 & 0.07 & 29.1 $\pm$ 5.8 & 29.1 $\pm$ 5.8 & $<$ 0.271\ GS-69 & 3 32 44.051 & 0.062 & -27 51 43.90 & 0.19 & 20.9 $\pm$ 2.9 & 105.2 $\pm$ 17.4 & 1.072 $\pm$ 0.039\ GS-70 & 3 32 44.275 & 0.009 & -27 51 41.31 & 0.02 & 85.1 $\pm$ 3.2 & 106.9 $\pm$ 6.4 & 0.741 $\pm$ 0.024\ GS-71 & 3 32 45.401 & 0.036 & -27 43 49.36 & 0.08 & 17.2 $\pm$ 3.4 & 17.2 $\pm$ 3.4 & $<$ 0.502\ GS-72 & 3 32 45.967 & 0.038 & -27 53 16.25 & 0.08 & 25.0 $\pm$ 4.2 & 25.0 $\pm$ 4.2 & 1.641 $\pm$ 0.146\ GS-73 & 3 32 46.802 & 0.008 & -27 42 14.40 & 0.14 & & 93.5 $\pm$ 13.0 & -0.265 $\pm$ 0.078\ 73a & 3 32 46.770 & 0.039 & -27 42 12.50 & 0.05 & 34.2 $\pm$ 4.6 & 42.8 $\pm$ 9.2 &\ 73b & 3 32 46.884 & 0.045 & -27 42 15.56 & 0.07 & 29.4 $\pm$ 4.6 & 38.8 $\pm$ 9.4 &\ GS-74 & 3 32 47.494 & 0.040 & -27 42 43.97 & 0.10 & 21.9 $\pm$ 4.3 & 21.9 $\pm$ 4.3 & $<$ 0.737\ GS-75 & 3 32 47.902 & 0.047 & -27 42 33.12 & 0.10 & 24.1 $\pm$ 4.3 & 45.2 $\pm$ 11.5 & 1.155 $\pm$ 0.074\ GS-76 & 3 32 48.185 & 0.031 & -27 52 57.02 & 0.06 & 31.7 $\pm$ 4.1 & 37.7 $\pm$ 8.0 & 0.066 $\pm$ 0.120\ GS-77 & 3 32 48.566 & 0.040 & -27 49 34.63 & 0.05 & 24.8 $\pm$ 3.0 & 39.4 $\pm$ 7.2 & 0.636 $\pm$ 0.086\ GS-78 & 3 32 49.440 & 0.002 & -27 42 35.54 & 0.00 & 599.6 $\pm$ 4.7 & 716.9 $\pm$ 9.1 & 1.159 $\pm$ 0.008\ GS-79 & 3 32 51.838 & 0.020 & -27 44 37.09 & 0.03 & 53.7 $\pm$ 3.7 & 72.2 $\pm$ 7.7 & 0.218 $\pm$ 0.059\ GS-80 & 3 32 52.077 & 0.008 & -27 44 25.57 & 0.01 & 151.8 $\pm$ 3.8 & 214.6 $\pm$ 8.2 & -0.279 $\pm$ 0.030\ GS-81 & 3 32 52.326 & 0.055 & -27 45 42.24 & 0.07 & 19.0 $\pm$ 3.4 & 26.1 $\pm$ 7.3 & 0.445 $\pm$ 0.133\ GS-82 & 3 32 53.863 & 0.045 & -27 51 36.91 & 0.10 & 21.4 $\pm$ 4.1 & 29.3 $\pm$ 8.6 & $<$ -0.035\ GS-83 & 3 32 59.386 & 0.050 & -27 47 58.50 & 0.08 & 22.7 $\pm$ 4.4 & 28.8 $\pm$ 8.8 & $<$ 0.040\ GN-01 & 12 36 0.117 & 0.144 & 62 10 46.92 & 0.16 & 29.0 $\pm$ 5.4 & 46.1 $\pm$ 13.0 & 0.796 $\pm$ 0.101\ GN-02 & 12 36 1.803 & 0.111 & 62 11 26.34 & 0.12 & 32.7 $\pm$ 5.4 & 32.7 $\pm$ 5.4 & 1.034 $\pm$ 0.064\ GN-03 & 12 36 3.238 & 0.070 & 62 11 10.67 & 0.07 & 43.9 $\pm$ 5.2 & 43.9 $\pm$ 5.2 & 1.042 $\pm$ 0.049\ GN-04 & 12 36 6.607 & 0.054 & 62 9 50.91 & 0.06 & 63.0 $\pm$ 4.7 & 90.8 $\pm$ 10.6 & 0.665 $\pm$ 0.044\ GN-05 & 12 36 8.122 & 0.018 & 62 10 35.70 & 0.02 & 158.2 $\pm$ 4.5 & 169.6 $\pm$ 8.2 & 0.205 $\pm$ 0.018\ GN-06 & 12 36 8.790 & 0.295 & 62 11 43.57 & 0.15 & 21.6 $\pm$ 4.2 & 60.7 $\pm$ 15.6 & -0.149 $\pm$ 0.098\ GN-07 & 12 36 12.513 & 0.158 & 62 11 40.22 & 0.16 & 21.4 $\pm$ 4.0 & 39.3 $\pm$ 10.7 & 0.626 $\pm$ 0.099\ GN-08 & 12 36 17.096 & 0.068 & 62 10 11.35 & 0.06 & 38.0 $\pm$ 3.9 & 38.0 $\pm$ 3.9 & 0.222 $\pm$ 0.052\ GN-09 & 12 36 19.453 & 0.078 & 62 12 52.47 
& 0.09 & 31.9 $\pm$ 4.1 & 31.9 $\pm$ 4.1 & 0.930 $\pm$ 0.054\ GN-10 & 12 36 20.284 & 0.022 & 62 8 44.12 & 0.02 & 122.9 $\pm$ 4.3 & 133.7 $\pm$ 7.9 & -0.054 $\pm$ 0.023\ GN-11 & 12 36 21.217 & 0.122 & 62 11 8.68 & 0.17 & 18.2 $\pm$ 3.5 & 25.8 $\pm$ 7.8 & 0.865 $\pm$ 0.112\ GN-12 & 12 36 22.536 & 0.012 & 62 6 53.70 & 0.01 & 325.8 $\pm$ 6.4 & 325.8 $\pm$ 6.4 & -0.158 $\pm$ 0.008\ GN-13 & 12 36 31.266 & 0.038 & 62 9 57.66 & 0.04 & 56.5 $\pm$ 3.5 & 56.5 $\pm$ 3.5 & 0.806 $\pm$ 0.028\ GN-14 & 12 36 32.480 & 0.063 & 62 11 5.19 & 0.07 & 30.2 $\pm$ 3.4 & 30.2 $\pm$ 3.4 & 0.100 $\pm$ 0.073\ GN-15 & 12 36 34.456 & 0.043 & 62 12 13.01 & 0.05 & 55.8 $\pm$ 3.3 & 85.0 $\pm$ 7.6 & 0.761 $\pm$ 0.036\ GN-16 & 12 36 34.505 & 0.040 & 62 12 41.00 & 0.04 & 59.8 $\pm$ 3.4 & 78.1 $\pm$ 7.1 & 0.726 $\pm$ 0.036\ GN-17 & 12 36 35.608 & 0.115 & 62 14 23.97 & 0.14 & 23.0 $\pm$ 3.9 & 33.0 $\pm$ 8.7 & 0.718 $\pm$ 0.104\ GN-18 & 12 36 37.042 & 0.074 & 62 8 52.16 & 0.09 & 31.3 $\pm$ 4.0 & 31.3 $\pm$ 4.0 & 0.946 $\pm$ 0.055\ GN-19 & 12 36 40.742 & 0.100 & 62 10 11.33 & 0.18 & 21.9 $\pm$ 3.4 & 44.1 $\pm$ 9.5 & 0.065 $\pm$ 0.116\ GN-20 & 12 36 41.563 & 0.077 & 62 9 48.16 & 0.08 & 29.7 $\pm$ 3.7 & 29.7 $\pm$ 3.7 & 0.967 $\pm$ 0.052\ GN-21 & 12 36 42.093 & 0.016 & 62 13 31.29 & 0.02 & 137.8 $\pm$ 3.5 & 147.3 $\pm$ 6.3 & 0.980 $\pm$ 0.020\ GN-22 & 12 36 42.187 & 0.057 & 62 15 45.22 & 0.07 & 46.3 $\pm$ 4.3 & 54.5 $\pm$ 8.4 & 1.018 $\pm$ 0.058\ GN-23 & 12 36 44.390 & 0.003 & 62 11 33.05 & 0.00 & 641.0 $\pm$ 3.4 & 963.0 $\pm$ 6.3 & 0.471 $\pm$ 0.018\ GN-24 & 12 36 46.074 & 0.100 & 62 14 48.58 & 0.09 & 28.3 $\pm$ 3.6 & 42.6 $\pm$ 8.3 & 0.726 $\pm$ 0.072\ GN-25 & 12 36 46.331 & 0.012 & 62 14 4.58 & 0.01 & 177.7 $\pm$ 3.5 & 177.7 $\pm$ 3.5 & 0.380 $\pm$ 0.014\ GN-26 & 12 36 46.334 & 0.082 & 62 16 29.25 & 0.08 & 47.2 $\pm$ 4.3 & 95.9 $\pm$ 12.4 & 1.196 $\pm$ 0.046\ GN-27 & 12 36 46.660 & 0.104 & 62 8 33.15 & 0.09 & 33.2 $\pm$ 4.6 & 41.7 $\pm$ 9.2 & 0.710 $\pm$ 0.083\ GN-28 & 12 36 49.663 & 0.027 & 62 7 37.97 & 0.03 & 130.6 $\pm$ 5.9 & 130.6 $\pm$ 5.9 & 0.723 $\pm$ 0.021\ GN-29 & 12 36 50.181 & 0.190 & 62 8 44.80 & 0.22 & 22.0 $\pm$ 4.4 & 59.6 $\pm$ 15.6 & 0.289 $\pm$ 0.092\ GN-30 & 12 36 51.091 & 0.082 & 62 10 30.91 & 0.08 & 32.3 $\pm$ 3.7 & 45.0 $\pm$ 8.0 & 0.568 $\pm$ 0.067\ GN-31 & 12 36 51.721 & 0.078 & 62 12 21.36 & 0.08 & 22.6 $\pm$ 3.4 & 22.6 $\pm$ 3.4 & 0.910 $\pm$ 0.066\ GN-32 & 12 36 52.814 & 0.088 & 62 18 7.95 & 0.10 & 44.9 $\pm$ 5.6 & 66.9 $\pm$ 12.7 & 0.670 $\pm$ 0.070\ GN-33 & 12 36 52.888 & 0.012 & 62 14 43.97 & 0.01 & 188.1 $\pm$ 3.5 & 205.8 $\pm$ 6.4 & 0.028 $\pm$ 0.018\ GN-34 & 12 36 53.372 & 0.089 & 62 11 39.33 & 0.16 & 19.7 $\pm$ 3.5 & 23.3 $\pm$ 6.8 & 0.806 $\pm$ 0.109\ GN-35 & 12 36 55.800 & 0.111 & 62 9 17.32 & 0.11 & 30.4 $\pm$ 4.6 & 45.0 $\pm$ 10.4 & 0.375 $\pm$ 0.087\ GN-36 & 12 36 59.317 & 0.003 & 62 18 32.46 & 0.00 & 1106.0 $\pm$ 6.0 & 1122.0 $\pm$ 10.0 & 1.202 $\pm$ 0.012\ GN-37 & 12 36 59.926 & 0.110 & 62 14 49.80 & 0.15 & 18.4 $\pm$ 3.4 & 18.4 $\pm$ 3.4 & 0.316 $\pm$ 0.117\ GN-38 & 12 37 0.260 & 0.030 & 62 9 9.76 & 0.03 & 114.2 $\pm$ 5.3 & 119.7 $\pm$ 9.5 & 0.766 $\pm$ 0.032\ GN-39 & 12 37 1.558 & 0.090 & 62 11 46.40 & 0.12 & 28.3 $\pm$ 3.6 & 47.4 $\pm$ 9.0 & 0.593 $\pm$ 0.071\ GN-40 & 12 37 2.106 & 0.115 & 62 17 34.32 & 0.16 & 26.7 $\pm$ 4.5 & 46.4 $\pm$ 11.4 & -0.286 $\pm$ 0.091\ GN-41 & 12 37 8.211 & 0.128 & 62 16 59.05 & 0.13 & 21.6 $\pm$ 4.1 & 21.6 $\pm$ 4.1 & 0.514 $\pm$ 0.129\ GN-42 & 12 37 8.287 & 0.144 & 62 10 56.17 & 0.18 & 23.4 $\pm$ 4.4 & 43.0 $\pm$ 11.7 & 0.348 $\pm$ 0.098\ GN-43 & 12 37 
11.327 & 0.106 & 62 13 30.91 & 0.07 & 30.5 $\pm$ 3.5 & 46.6 $\pm$ 8.1 & 0.769 $\pm$ 0.067\ GN-44 & 12 37 13.854 & 0.011 & 62 18 26.27 & 0.01 & 321.0 $\pm$ 5.8 & 321.0 $\pm$ 5.8 & 0.564 $\pm$ 0.013\ GN-45 & 12 37 16.375 & 0.015 & 62 15 12.32 & 0.01 & 153.0 $\pm$ 3.7 & 153.0 $\pm$ 3.7 & 0.126 $\pm$ 0.016\ GN-46 & 12 37 16.672 & 0.027 & 62 17 33.39 & 0.03 & 108.3 $\pm$ 4.8 & 118.4 $\pm$ 8.8 & 0.869 $\pm$ 0.030\ GN-47 & 12 37 21.271 & 0.008 & 62 11 29.91 & 0.01 & 416.1 $\pm$ 5.3 & 429.3 $\pm$ 9.4 & -0.129 $\pm$ 0.015\ GN-48 & 12 37 25.962 & 0.024 & 62 11 28.59 & 0.01 & 314.8 $\pm$ 5.6 & 1174.7 $\pm$ 26.8 & 1.270 $\pm$ 0.014\ GN-49 & 12 37 30.818 & 0.066 & 62 12 58.75 & 0.07 & 43.1 $\pm$ 5.2 & 43.1 $\pm$ 5.2 & 0.924 $\pm$ 0.050\ GN-50 & 12 37 34.503 & 0.173 & 62 17 23.45 & 0.14 & 32.3 $\pm$ 6.2 & 55.8 $\pm$ 15.6 & 0.442 $\pm$ 0.102\ GN-51 & 12 37 36.922 & 0.092 & 62 14 29.51 & 0.13 & 28.4 $\pm$ 5.4 & 28.4 $\pm$ 5.4 & 0.652 $\pm$ 0.076\ GN-52 & 12 37 42.331 & 0.091 & 62 15 18.19 & 0.11 & 46.6 $\pm$ 6.4 & 62.1 $\pm$ 13.4 & 0.397 $\pm$ 0.084\ The integrated flux density is the same as the peak flux density for a point source. $^{13}$The spectral index $\alpha$ is estimated between 1.4 and 5 GHz using 1.4 GHz images (@owen18 for the GN and @miller13 for the GS fields) and 5 GHz images with same beam sizes as those of 1.4 GHz images. natexlab\#1[\#1]{}\[1\][[\#1](#1)]{} \[1\][doi: [](http://doi.org/#1)]{} \[1\][[](http://ascl.net/#1)]{} \[1\][[](https://arxiv.org/abs/#1)]{} , P. N., [Fadda]{}, D. T., [Marleau]{}, F. R., [et al.]{} 2004, , 154, 147, , S., [Cristiani]{}, S., [Moscardini]{}, L., [et al.]{} 1999, , 310, 540, , I., [Mainieri]{}, V., [Popesso]{}, P., [et al.]{} 2010, , 512, A12, , A. J., [Cowie]{}, L. L., [Owen]{}, F. N., [Hsu]{}, L.-Y., & [Wang]{}, W.-H. 2017, , 835, 95, , A. J., [Cowie]{}, L. L., & [Wang]{}, W.-H. 2008, , 689, 687, , E. F. 2003, , 586, 794, , M., [Padovani]{}, P., [Mainieri]{}, V., [et al.]{} 2013, , 436, 3759, , N., [Dunne]{}, L., [Ivison]{}, R. J., [et al.]{} 2011, , 410, 1155, , G. B., [van Dokkum]{}, P. G., [Franx]{}, M., [et al.]{} 2012, , 200, 13, , A. H., & [Schwab]{}, F. R. 1999, in Astronomical Society of the Pacific Conference Series, Vol. 180, Synthesis Imaging in Radio Astronomy II, ed. G. B. [Taylor]{}, C. L. [Carilli]{}, & R. A. [Perley]{}, 371 , G., & [Charlot]{}, S. 2003, , 344, 1000, , S. F. 1979, , 186, 519, , K. I., [Lagache]{}, G., [Yan]{}, L., [et al.]{} 2007, , 660, 97, , G. 2003, , 115, 763, , S. C., [Smail]{}, I., [Windhorst]{}, R., [Muxlow]{}, T., & [Ivison]{}, R. J. 2004, , 611, 732, , R., & [Elbaz]{}, D. 2001, , 556, 562, , J. J. 1992, , 30, 575, , J. J., [Cotton]{}, W. D., [Fomalont]{}, E. B., [et al.]{} 2012, , 758, 14, , M. C., [Yan]{}, R., [Dickinson]{}, M., [et al.]{} 2012, , 425, 2116, , L. L., [Barger]{}, A. J., [Hu]{}, E. M., [Capak]{}, P., & [Songaila]{}, A. 2004, , 127, 3137, , D. A., & [Helou]{}, G. 2002, , 576, 159, , D. A., [Helou]{}, G., [Contursi]{}, A., [Silbermann]{}, N. A., & [Kolhatkar]{}, S. 2001, , 549, 215, , M., [Labb[é]{}]{}, I., [van Dokkum]{}, P. G., [et al.]{} 2011, , 727, 1, , A. G. 1976, , 52, 439 , J., [Smol[č]{}i[ć]{}]{}, V., [Delvecchio]{}, I., [et al.]{} 2017, , 602, A4, , J. L., [Rieke]{}, G. H., [P[é]{}rez-Gonz[á]{}lez]{}, P. G., [Rigby]{}, J. R., & [Alonso-Herrero]{}, A. 2007, , 660, 167, , R. H., [Partridge]{}, R. B., & [Windhorst]{}, R. A. 1987, , 321, 94, , J. S., [McLure]{}, R. J., [Biggs]{}, A. D., [et al.]{} 2017, , 466, 861, , D., [Dickinson]{}, M., [Hwang]{}, H. S., [et al.]{} 2011, , 533, A119, , E. 
B., [Windhorst]{}, R. A., [Kristian]{}, J. A., & [Kellerman]{}, K. I. 1991, , 102, 1258, , H. B., [Hales]{}, C. A., [Momjian]{}, E., & [Yun]{}, M. S. 2015, in American Astronomical Society Meeting Abstracts, Vol. 225, American Astronomical Society Meeting Abstracts, 143.40 , D., [Bondi]{}, M., [Prandoni]{}, I., [et al.]{} 2017, , 471, 210, , G., [Soifer]{}, B. T., & [Rowan-Robinson]{}, M. 1985, , 298, L7, , N., [Middelberg]{}, E., [Deller]{}, A., [et al.]{} 2018, , 616, A128, , M. T., [Bell]{}, M. E., [Hopkins]{}, A. M., [Norris]{}, R. P., & [Seymour]{}, N. 2015, , 454, 952, , E., [Ivison]{}, R. J., [Best]{}, P. N., [et al.]{} 2010, , 401, L53, , E., [Ivison]{}, R. J., [Biggs]{}, A. D., [et al.]{} 2009, , 397, 281, , E., [Cirasuolo]{}, M., [Ivison]{}, R., [et al.]{} 2008, , 386, 953, , O., [Arnouts]{}, S., [McCracken]{}, H. J., [et al.]{} 2006, , 457, 841, , R. J., [Alexander]{}, D. M., [Biggs]{}, A. D., [et al.]{} 2010, , 402, 245, , R. J., [Magnelli]{}, B., [Ibar]{}, E., [et al.]{} 2010, , 518, L31, , G., [Heckman]{}, T. M., [White]{}, S. D. M., [et al.]{} 2003, , 341, 54, , K. I., [Fomalont]{}, E. B., [Mainieri]{}, V., [et al.]{} 2008, , 179, 71, , A., [Pope]{}, A., [Alexander]{}, D. M., [et al.]{} 2012, , 759, 139, , I. J., [Ekers]{}, R. D., [Bryant]{}, J. J., [et al.]{} 2006, , 371, 852, , U., [Lisenfeld]{}, U., & [Verley]{}, S. 2018, , 611, A55, , M., [van Dokkum]{}, P. G., [Labb[é]{}]{}, I., [et al.]{} 2009, , 700, 221, , J., [Cimatti]{}, A., [Daddi]{}, E., [et al.]{} 2013, , 549, A63, , G., [Dole]{}, H., & [Puget]{}, J.-L. 2003, , 338, 555, , O., [Cassata]{}, P., [Cucciati]{}, O., [et al.]{} 2013, , 559, A14, , E., [et al.]{} 2005, , 632, 169, , B., [Brandt]{}, W. N., [Xue]{}, Y. Q., [et al.]{} 2017, , 228, 2, , D., [Poglitsch]{}, A., [Altieri]{}, B., [et al.]{} 2011, , 532, A90, , P., & [Dickinson]{}, M. 2014, , 52, 415, , B., [Elbaz]{}, D., [Chary]{}, R. R., [et al.]{} 2011, VizieR Online Data Catalog, 352 , B., [et al.]{} 2011, , 528, A35, , B., [Popesso]{}, P., [Berta]{}, S., [et al.]{} 2013, , 553, A132, , B., [Ivison]{}, R. J., [Lutz]{}, D., [et al.]{} 2015, , 573, A45, , M. Y., [Huynh]{}, M. T., [Norris]{}, R. P., [et al.]{} 2011, , 731, 79, , J. P., [Waters]{}, B., [Schiebel]{}, D., [Young]{}, W., & [Golap]{}, K. 2007, in Astronomical Society of the Pacific Conference Series, Vol. 376, Astronomical Data Analysis Software and Systems XVI, ed. R. A. [Shaw]{}, F. [Hill]{}, & D. J. [Bell]{}, 127 , M., [Cimatti]{}, A., [Zamorani]{}, G., [et al.]{} 2005, , 437, 883, , L., [Peacock]{}, J. A., & [Mead]{}, A. R. G. 1990, , 244, 207 , N. A., [Bonzini]{}, M., [Fomalont]{}, E. B., [et al.]{} 2013, , 205, 13, , I. G., [Brammer]{}, G. B., [van Dokkum]{}, P. G., [et al.]{} 2016, , 225, 27, , A. M., [Kocevski]{}, D. D., [Trump]{}, J. R., [et al.]{} 2015, , 149, 178, , G. E., [Owen]{}, F. N., [Dickinson]{}, M., [Ivison]{}, R. J., & [Ibar]{}, E. 2010, , 188, 178, , E. J., [Chary]{}, R.-R., [Alexander]{}, D. M., [et al.]{} 2009, , 698, 1380, , E. J., [Momjian]{}, E., [Condon]{}, J. J., [et al.]{} 2017, , 839, 35, , E. J., [Condon]{}, J. J., [Schinnerer]{}, E., [et al.]{} 2011, , 737, 67, , F. N. 2018, , 235, 34, , F. N., & [Morrison]{}, G. E. 2008, , 136, 1889, , P., [Mainieri]{}, V., [Tozzi]{}, P., [et al.]{} 2009, , 694, 235, , R. A., [Chandler]{}, C. J., [Butler]{}, B. J., & [Wrobel]{}, J. M. 2011, , 739, L1, , M., [Tajer]{}, M., [Maraschi]{}, L., [et al.]{} 2007, , 663, 81, , P., [Dickinson]{}, M., [Nonino]{}, M., [et al.]{} 2009, , 494, 443, , I., [Parma]{}, P., [Wieringa]{}, M. 
H., [et al.]{} 2006, , 457, 517, . 2013, R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria , K. E., [Hopkins]{}, A. M., [Norris]{}, R. P., [et al.]{} 2012, , 421, 1644, , C. D., [Puech]{}, M., [Flores]{}, H., [et al.]{} 2007, , 465, 1099, , I. G., [Oliver]{}, S. J., [Kunz]{}, M., [et al.]{} 2010, , 409, 48, , I. G., [Ivison]{}, R. J., [Greve]{}, T. R., [et al.]{} 2012, , 419, 2758, , L., & [Owen]{}, F. N. 2014, , 785, 45, , W., [Dunlop]{}, J. S., [Rieke]{}, G. H., [et al.]{} 2016, , 833, 12, , M. T., [Schinnerer]{}, E., [Murphy]{}, E., [et al.]{} 2010, , 186, 341, , N., [Lee]{}, N., [Vanden Bout]{}, P., [et al.]{} 2017, , 837, 150, , N., [Dwelly]{}, T., [Moss]{}, D., [et al.]{} 2008, , 386, 1695, , J. D., [Mainieri]{}, V., [Salvato]{}, M., [et al.]{} 2010, , 191, 124, , R. E., [Whitaker]{}, K. E., [Momcheva]{}, I. G., [et al.]{} 2014, , 214, 24, , V., [Delvecchio]{}, I., [Zamorani]{}, G., [et al.]{} 2017, , 602, A2, , J. S., [Steinhardt]{}, C. L., [Capak]{}, P. L., & [Silverman]{}, J. D. 2014, , 214, 15, , A. N., [Pirzkal]{}, N., [Meurer]{}, G. R., [et al.]{} 2009, , 138, 1022, , W., & [Saunders]{}, W. 1992, , 259, 413 , G. P., [Bergeron]{}, J., [Hasinger]{}, G., [et al.]{} 2004, , 155, 271, Tanabashi, M., Hagiwara, K., Hikasa, K., [et al.]{} 2018, Phys. Rev. D, 98, 030001, , P., [Gilli]{}, R., [Mainieri]{}, V., [et al.]{} 2006, , 451, 457, , A., [Bell]{}, E. F., [H[ä]{}ussler]{}, B., [et al.]{} 2012, , 203, 24, , E., [Cristiani]{}, S., [Dickinson]{}, M., [et al.]{} 2008, , 478, 83, , W.-H., [Cowie]{}, L. L., [Barger]{}, A. J., [Keenan]{}, R. C., & [Ting]{}, H.-C. 2010, , 187, 251, , K. E., [Pope]{}, A., [Cybulski]{}, R., [et al.]{} 2017, , submitted , K. E., [Franx]{}, M., [Leja]{}, J., [et al.]{} 2014, , 795, 104, , R., [Mathis]{}, D., & [Neuschaefer]{}, L. 1990, in Astronomical Society of the Pacific Conference Series, Vol. 10, Evolution of the Universe of Galaxies, ed. R. G. [Kron]{}, 389–403 , G. D., [Trump]{}, J. R., [Barro]{}, G., [et al.]{} 2015, , 150, 153, , Y. Q., [Luo]{}, B., [Brandt]{}, W. N., [et al.]{} 2016, , 224, 15, , M. S., [Reddy]{}, N. A., & [Condon]{}, J. J. 2001, , 554, 803, , W., [Mikles]{}, V. J., [Mainieri]{}, V., [et al.]{} 2004, , 155, 73, [^1]: The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. [^2]: Since the final radio image is a combination of cleaned components with flux density scaled by clean beam and residuals with flux densities weighted by dirty beam, the convolution of the radio image with the clean beam includes the convolution of the residuals scaled by the dirty beam in addition to the convolution of the clean components scaled by the clean beam. The former contributes on the uncertainty of the convolved images, but it is not easy to estimate its contributions because it involves many parameters such as clean thresholds, PSF shape, and the convolution kernel size. This is a subtle but notable systematic effect that we have decided to ignore for the moment. [^3]: Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. 
[^4]: Data are available at http://www.mpe.mpg.de/ir/Research/PEP/DR1 [^5]: Data are available at http://hedam.lam.fr/HerMES/index/dr4 [^6]: Reliability is defined as $Rel_{i} = LR_{i}/(\Sigma LR_{i} + (1-q_{0}))$ for the likelihood ratio $LR_{i}$ and the fraction of true counterparts above the detection limit, $q_{0}$. [^7]: $Le$ $Phare$ is available at http://www.cfht.hawaii.edu/~arnouts/lephare.html [^8]: This work is based on observations taken by the 3D-HST Treasury Program (GO 12177 and 12328) with the NASA/ESA HST, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. [^9]: The identification of AGN among the faint radio source population and their impact on the observed properties are presented exclusively in Paper II. [^10]: Nondetects and Data Analysis for Environmental Data. [^11]: The median radio source sizes reported at 1.4 GHz by @owen08 and @owen18 are 1.2$\arcsec$-1.5$\arcsec$, while the median source size at 10 GHz reported by @murphy17 is 0.17$\arcsec$ $\pm$ 0.03$\arcsec$. The apparent difference in these median radio source sizes is likely attributable to the structures present in these radio sources and the differences in the surface brightness sensitivity achieved.
--- abstract: 'We show that if in a classical accretion disk the thin disk approximation fails interior to a certain radius, a transition from Keplerian to radially infalling trajectories should occur. We show that this transition is actually expected to occur interior to a certain critical radius, provided surface density profiles are steeper than $\Sigma(R) \propto R^{-1/2}$, and further, that it probably corresponds to the observationally inferred phenomenon of thick hot walls internally limiting the extent of many stellar accretion disks. Once shears stop, the inner region of radially infalling orbits is naturally expected to be cold. This leads to the divergent focusing and concentration of matter towards the very central regions, most of which will simply be swallowed by the central star. However, if a warm minority component is present, we show through a perturbative hydrodynamical analysis that this will naturally develop into an extremely well collimated and very fast moving pair of polar jets. A first analytic treatment of the problem described is given, proving the feasibility of purely hydrodynamical mechanisms of astrophysical jet generation. The purely hydrodynamic jet generation mechanism explored here complements existing ideas focusing on the role of magnetic fields, which probably account for the large-scale collimation and stability of jets.' author: - | X. Hernandez$^1$, Pablo L. Rendón$^2$, Rosa G. Rodríguez-Mota$^2$, A. Capella$^3$\ $^1$Instituto de Astronomía, Universidad Nacional Autónoma de México, Apartado Postal 70–264 C.P. 04510 México D.F. México.\ $^2$Centro de Ciencias Aplicadas y Desarrollo Tecnológico, Universidad Nacional Autónoma de México, Apartado Postal 70–186,\ México D.F. 04510, México.\ $^3$Instituto de Matemáticas, Universidad Nacional Autónoma de México, México D.F. 04510, México date: 'Released 28/02/2011' title: 'A Hydrodynamical Mechanism for Generating Astrophysical Jets' --- \[firstpage\] accretion, accretion discs — hydrodynamics — protoplanetary disks — galaxies: jets — ISM: jets and outflows Introduction {#intro} ============ Astrophysical jets occur over a large range of scales, from the stellar Newtonian scales of HH objects (e.g. Reipurth & Bally 2001), to the relativistic cases of microquasars and gamma ray bursts (e.g. Mirabel & Rodriguez 1999, Mészáros 2002), to the Mega-parsec extragalactic scales of AGN jets (e.g. Marscher et al. 2002). Although the processes of propagation and collimation appear to be relatively well understood in terms of the interplay between the hydrodynamics of the problem (Scheuer 1997) and magnetic fields (e.g. Blandford 1990), the precise mechanism of jet generation probably remains the most uncertain part of the system. Existing jet generation mechanisms focus on the interaction between the rotating material in the inner regions of the accretion disk and the magnetic field of the central (rotating or otherwise) star, or perhaps an external magnetic field threading the disk. See for example Blandford & Payne (1982), Henriksen & Valls-Gabaud (1994), Meier et al. (2001) and Price et al. (2003). Although generically successful, detailed comparisons with observations do not always yield consistent results, e.g. Ferreira et al. (2006). Further, the details of the relative orientation of the stellar spin, the accretion disk and the magnetic field configuration have to be supplied in highly specific ways.
As an alternative, we explore the possibility of a hydrodynamical jet generation mechanism where magnetic fields play no rôle, and only the intrinsic hydrodynamical physics of the interplay between the accretion flow and the central star determines the characteristics of the jet. The main obstacle to such a scheme is the presence of a centrifugal barrier associated with the angular momentum content of the material in the accretion disk. We show however, that the thin disk approximation for accretion disks, which is equivalent to the assumption of quasi-circular orbits for the disk (e.g. Pringle 1981), is expected to break down interior to a certain critical radius, resulting in a transition to quasi-radial flow for the disk. Once this obstacle has been removed, an analytic first order perturbation treatment serves to demonstrate the feasibility of purely hydrodynamical jets. We show also that many generic features of observed astrophysical jets across a wide range of scales can be naturally accounted for in the general model presented. The presence of magnetic fields in jet phenomena is evident empirically; however, it is not impossible that their main contribution to the problem could be restricted to the radiation of the jet material and to its long range collimation, with purely hydrodynamical physics playing a part in the actual jet generation mechanisms. The problem of jet formation has been studied extensively in the context of classical hydrodynamics, most often regarding fluid-body interactions. The appearance of stable coaxial jets resulting from radially-symmetric velocity fields over thin fluid sheets has been established by, among others, Taylor (1960). The rôle played by both the magnitude and the direction of velocity in the formation of this type of jet is the subject of a theoretical study by Glauert (1956), where it is shown that at the point at which the jet forms, a large velocity gradient is observed, and the momentum flux is constant, with horizontal momentum being transformed into vertical momentum. Similar results were obtained by King & Needham (1994), who provide an asymptotic study of a jet formed by a vertical plate accelerating into a semi-infinite expanse of stationary fluid of finite depth with a free surface and a gravitational restoring force. It is found that as the fluid approaches the plate in a horizontal direction, a gradual rise in free-surface elevation occurs. Eventually, a thin region is reached where vertical velocity dominates horizontal velocity, as a consequence of the fluid finding it more difficult to overcome the inertia it would meet by continuing its horizontal motion than to escape vertically towards the low-pressure free surface. In essence, this same mechanism can allow for jet formation in the context of the problem we study here. In section 2 we develop the criterion for the transition to radial flow in a standard accretion disk. The perturbative solution to the resulting problem of a radially infalling disk is then developed in section 3. Section 4 presents trajectories for particular cases of the solution obtained in the previous section, in dimensionless units. Finally, we give our conclusions in section 5. The transition to Radial Flow ============================= We start from a standard thin accretion disk where material orbits on quasi-circular Keplerian orbits around a central star of mass M.
Assuming axial symmetry and cylindrical coordinates, the total angular momentum of a ring at radius $R$ of width $\Delta R$ will be: $$L=2 \pi R^{3} \Delta R \Sigma(R) \Omega(R)$$ where $\Sigma$ and $\Omega$ are the surface density and orbital frequency profiles of the disk, respectively. If the braking torque on a given ring of the disk due to its exterior ring is denoted by $\tau(R)$, the total braking torque on the ring at radius $R$ will be: $$\tau_{B} = \tau(R+ \Delta R) - \tau(R).$$ The rate of change of the angular momentum of the ring in question can now be calculated as: $$\dot{L}=~\tau_{B}=~\frac{d\tau}{dR} \Delta R.$$ If at any radius a substantial fraction of the angular momentum of the ring is lost through viscous torques over an orbital period, the assumption of quasi-circular orbits will break down, and the disk will make a transition to mostly radial orbits. This condition can be stated as: $$\frac{L}{2 \dot{L}}< \frac{2 \pi}{\Omega}.$$ Substitution of equations (1) and (3) into the above yields: $$R^{3} \Omega^{2} \Sigma < 2 \left( \frac{d \tau}{dR} \right),$$ as the condition for the onset of radial flow in the disk. We can now introduce a model for the torques in terms of the rate of shear and the effective viscosity $\nu$ (e.g. von Weizsäcker 1943, Pringle 1981) as: $$\tau=2 \pi R^{3} \nu \Sigma \frac{d\Omega}{dR}.$$ Taking $\Omega^{2} = GM/R^{3}$ to substitute the above equation into condition (5) gives $$(GM)^{1/2} \Sigma < -6 \pi \frac{d}{dR} \left(R^{1/2} \nu \Sigma \right).$$ To proceed further we can take, for example, $\nu=$ constant and a model for the disk surface density profile of the form: $$\Sigma=\Sigma_{0} \left(\frac{R}{R_{0}} \right)^{-n},$$ of the type often used in models of accretion disks when fitted to observations (e.g. Hughes et al. 2009). Use of the two above forms for $\nu$ and $\Sigma(R)$ reduces condition (7) to: $$6 \pi \nu (n-1/2) > (G M R)^{1/2}.$$ The above condition will always be met in any accretion disk with $n>1/2$, interior to a transition radius $R_{T}$ obtained by squaring both sides of condition (9) and solving for $R$: $$R_{T} = \frac{[6 \pi \nu (n-1/2)]^{2}}{GM}.$$ It is interesting that directly observed accretion disks have spectra which, when modelled, typically yield $n \sim 1$, and in general $0<n<1.5$, e.g. Hartmann et al. (1998), Lada (2006), Hughes et al. (2009). This leads one to expect the transition to radial flow to occur in many of the stellar accretion disks, interior to radii given by equation (10). Finally, we can explore the consequences of introducing an $\alpha$ prescription (Shakura & Sunyaev 1973) into condition (9), $\nu=\alpha c H$, where $c$ and $H$ are respectively the sound speed and height of the disk, with $\alpha$ a dimensionless number, yielding the dimensionless condition: $${\mathcal M} \left( \frac{R}{H} \right) < 6 \pi \alpha (n-1/2),$$ where ${\mathcal M}$ is the ratio of the Keplerian orbital velocity in the disk to the sound speed in the disk. Although the above condition is only valid for constant $\nu$, it illustrates the equivalence between the assumption of quasi-circular orbits for the material in the disk, and the assumption that the disk is thin. We see that the breakdown of the assumption of quasi-circular orbits (condition 9) corresponds to the point where the disk is no longer thin.
Writing ${\mathcal M}$ in astrophysical units as: $${\mathcal M} = 1.8 \left( \frac{200 K}{T} \right)^{1/2} \left( \frac{50 AU}{R} \right)^{1/2} \left( \frac{M}{0.5 M_{\odot}} \right)^{1/2},$$ we see that for values typical of what is observationally inferred for the stellar mass and for the position and temperature of accretion disk walls in T Tauri protoplanetary disks, those indicated in the above equation (e.g. D’Alessio et al. 2005, Espaillat et al. 2007, Hughes et al. 2009), one should expect the breakdown of the thin disk approximation (which requires ${\mathcal M} >>1$, e.g. Pringle 1981), and consequently, the transition to radial flow. Taking typical inferred values for $R/H$ at the wall of $\sim 5$ (e.g. D’Alessio et al. 2005, Espaillat et al. 2007, Hughes et al. 2009), we can now write the condition for the transition to radial flow, locally at the wall, as: $$10=6 \pi \alpha (n-1/2).$$ We see that for a standard value of $n\sim1$ the above equation implies a reasonable value of $\alpha \sim 1$ at the thick wall, significantly higher than the values of $\sim 0.01$ and lower which apply for the body of the thin accretion disk beyond this radius. A substantial increase of $\alpha$ as $R/H$ decreases is expected in any turbulence driven viscosity model for accretion disks, e.g. Firmani, Hernandez & Gallagher (1996), in the context of galactic disks. In terms of the debate surrounding the inference of inner holes in observed accretion disks, many solutions have been proposed in terms of disk clearing mechanisms: grain growth (e.g. Strom et al. 1989, Dullemond & Dominik 2005), photoevaporation (e.g. Clarke et al. 2001), magnetorotational instability inside-out clearing (e.g. Chiang & Murray-Clay 2007, Dutrey et al. 2008), binarity (e.g. Ireland & Kraus 2008) and planet-disk interactions (e.g. Rice et al. 2003). None of the above is entirely satisfactory, as noted by Hughes et al. (2009), mostly due to their incompatibility with a steady state solution. An alternative solution under the proposed scenario is that there is no actual clearing of disk material, only a transition to radial flow at the thick wall, and consequently a shear-free flow interior to this point. Once the disk heating mechanism is removed, as one should expect from the analysis presented in this section, the inner disk disappears from sight. Hydrodynamical Jet Solutions ============================ We shall now model the physical situation resulting from the scenario described above as an axially symmetrical distribution of cold matter in free fall towards a central star of mass $M$. Taking a spherical coordinate system with $\theta$ the angle between the positive vertical direction and the position vector $\vec{r}$, we have: $$\frac{1}{r^{2}} \frac{\partial(r^{2} \rho V)}{\partial r} + \frac{1}{r \sin(\theta)}\frac{\partial(\sin(\theta) \rho U)} {\partial \theta}=0,$$ $$V\frac{\partial V}{\partial r} +\frac{U}{r}\frac{\partial V}{\partial \theta} -\frac{U^{2}}{r}= -\frac{1}{\rho}\frac{\partial P}{\partial r} - \frac{G M}{r^{2}},$$ $$V\frac{\partial U}{\partial r} +\frac{U}{r}\frac{\partial U}{\partial \theta} +\frac{V U}{r}= -\frac{1}{r \rho}\frac{\partial P}{\partial \theta},$$ for the continuity equation, and the radial and polar components of Euler’s equation. In the above, $V$, $U$, $\rho$ and $P$ give the radial velocity, the polar velocity component, the matter density and the pressure, respectively. We have neglected temporal derivatives, as we are interested at this point in the characteristics of steady state solutions.
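Before specifying the background state, it is worth checking that cold radial free fall is indeed an exact steady solution of the system above. The short symbolic sketch below (ours) verifies that $V=-(2GM/r)^{1/2}$, $U=0$, $\nabla P=0$ and $\rho \propto r^{-3/2} g(\theta)$ satisfy the continuity and radial Euler equations identically; this anticipates the background state adopted in the next paragraph.

```python
# Verification sketch (ours): cold radial free fall is an exact steady solution
# of the continuity and Euler equations quoted above (with grad P = 0).
import sympy as sp

r, th, G, M, rbar = sp.symbols('r theta G M rbar', positive=True)
g = sp.Function('g')(th)                      # arbitrary polar profile g(theta)

V0 = -sp.sqrt(2 * G * M / r)                  # free-fall radial velocity
U0 = 0                                        # no polar velocity
rho0 = (rbar / r) ** sp.Rational(3, 2) * g    # density fixed by continuity

continuity = (sp.diff(r**2 * rho0 * V0, r) / r**2
              + sp.diff(sp.sin(th) * rho0 * U0, th) / (r * sp.sin(th)))
radial_euler = (V0 * sp.diff(V0, r) + (U0 / r) * sp.diff(V0, th)
                - U0**2 / r + G * M / r**2)   # with dP/dr = 0

print(sp.simplify(continuity))     # -> 0
print(sp.simplify(radial_euler))   # -> 0
```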
We take as a background state a free-falling axially symmetrical distribution of cold gas, described by $V_{0}=-(2GM/r)^{1/2}$, $U_{0}=0$ and $\nabla \vec{P_{0}}=0$, a consistent solution to eqs.(15) and (16). Ignoring, in the choice of $V_{0}$, the radius at which the transition to radial flow takes place limits the validity of the analysis to radial scales along the plane of the disk much smaller than $R_{T}$. This is justified by the fact that jets appear as phenomena extremely localised towards $R \rightarrow 0$. We now take a density profile given by: $$\rho_{0}(r,\theta)=f(r)g(\theta),$$ where $f(r)$ is a dimensionless function of $r$ and $g(\theta)$ describes the polar angle dependence of the infalling material; for example, one can ask for $g(\theta=\pi/2)=\bar{\rho}_{0}$, diminishing symmetrically towards the poles. The choice of this last function will determine the details of the problem, and can be thought of as something of the type $$g(\theta)=\bar{\rho}_{0}e^{-\left( \frac{\theta- \pi/2}{\sqrt{2} \theta_{0}} \right)^{2}}$$ with $\bar{\rho}_{0}$ a normalisation constant and $\theta_{0}$ a form constant describing the flattening of the disk of infalling material. However, we shall mostly leave results indicated in terms of $g(\theta)$. The continuity equation (14) now fixes $f(r)$ through: $$-\frac{g(\theta)}{r^{2}} \frac{d}{d r} \left[ (2 G M)^{1/2} r^{3/2} f(r) \right]=0,$$ and hence $f(r) r^{3/2} ={\rm const.}$, which completes the description of the background state through: $$\rho_{0}(r,\theta)=\left( \frac{\bar{r}}{r} \right)^{3/2} g(\theta),$$ with $\bar{r}$ a constant which determines the radius at which $\rho_{0}(\bar{r},\theta=\pi/2)=\bar{\rho}_{0}$. We shall now analyse the behaviour of a fraction of somewhat hotter material, through a first order perturbation analysis of the above solution. This small fraction could represent the last of the material to fully cool, or result from the irradiation of the central star onto the upper and lower surfaces of the radially infalling disk, as described by e.g. Hollenbach et al. (1994) or Alexander et al. (2006) for the standard case of Keplerian accretion disks, in connection with the problem of disk photoevaporation. In the above it is shown that ionizing radiation from the star generically creates an ionized layer on the surface of the disk. Regarding the evolution of this component, we shall also be interested in a steady state solution given by: $$V(r,\theta)=V_{0}(r) + V_{1}(r, \theta),$$ $$U(r,\theta)=0 + U_{1}(r,\theta),$$ $$\rho(r,\theta) = \rho_{0}(r,\theta) +\rho_{1}(r,\theta),$$ where quantities with subscript (1) denote the perturbation on the background solution. Writing eqs.(14), (15) and (16) to first order in the perturbation one obtains after rearranging terms: $$g(\theta) \frac{\partial \left(r^{1/2} V_{1}\right)}{\partial r} - B\frac{\partial \left( r^{3/2} \rho_{1} \right)}{\partial r}= \frac{-1}{r^{1/2} sin(\theta)} \frac{\partial \left( sin(\theta) g(\theta) U_{1} \right) }{\partial \theta},$$ $$\frac{\partial \left( V_{1}/r^{1/2} \right) }{\partial r} = \frac{A r^{3/2}}{g(\theta)} \frac{\partial \rho_{1}}{\partial r},$$ $$\frac{\partial \left( r U_{1} \right)}{\partial r} = \frac{A r^{2}}{g(\theta)} \frac{\partial \rho_{1}}{\partial \theta}.$$ In the above we have assumed an isothermal equation of state for the perturbation $P_{1}=c^{2}\rho_{1}$, an assumption often used in the modeling of astrophysical jets, e.g. the T Tauri jets observed and modelled by Hartigan et al.
(2004). This idealised case serves to illustrate clearly the consequences of the physical setup being considered, as it allows for an analytic solution. The generalisation to more realistic adiabatic, polytropic or otherwise equations of state can be performed numerically, and can be expected to yield qualitatively similar results, although interesting differences in the details can be expected to emerge, which will be considered later. In the above three equations we have introduced the constants $A=(c^{4}/2GM\bar{r}^{3})^{1/2}$ and $B=(2GM/\bar{r}^{3})^{1/2}$. To make further progress we can attempt a solution through the method of separation of variables, proposing a solution of the form: $V_{1}=V_{r} V_{\theta}$, $U_{1}=U_{r} U_{\theta}$, $\rho_{1}=\rho_{r} \rho_{\theta}$. This ansatz yields two independent systems of three equations each, one for the radial, and one for the angular dependences of the perturbations. The radial equations become: $$\frac{d \left( V_{r} /r^{1/2} \right)}{d r} = C_{r} r^{3/2} \frac{d \rho_{r}}{d r},$$ $$\frac{d(r U_{r})}{d r} = C_{\theta} r^{2} \rho_{r},$$ $$\frac{d \left( r^{1/2} V_{r} \right)}{d r} -\left( \frac{B C_{r}}{A}\right)\frac{d\left(r^{3/2} \rho_{r}\right)}{d r}= C_{c} \frac{U_{r}}{r^{1/2}}.$$ While the angular ones result in: $$V_{\theta} g(\theta) =\left( \frac{A}{C_{r}} \right) \rho_{\theta},$$ $$\frac{d \rho_{\theta}}{d \theta} = \left( \frac{C_{\theta}}{A} \right) g(\theta) U_{\theta},$$ $$\frac{d\left( sin(\theta) g(\theta) U_{\theta}\right)}{d \theta} =-C_{c} sin(\theta) g(\theta) V_{\theta}.$$ In splitting the radial and angular dependences of the perturbed continuity equation, eq.(24), we have used the result of the angular equation of the perturbed radial Euler equation, eq.(30). The constants $C_{r}$, $C_{\theta}$ and $C_{c}$ are the separation constants of the problem. Firstly we turn to the angular system, which allows an exact solution. We can take eq.(32) and substitute into it the product $g(\theta)U_{\theta}$ from eq.(31), and the product $g(\theta)V_{\theta}$ from eq.(30) to obtain an equation involving only $\rho_{\theta}$: $$\frac{d^{2} \rho_{\theta}}{d \theta^{2}} + cot(\theta)\frac{d \rho_{\theta}}{d \theta} + \left(\frac{C_{c}C_{\theta}}{C_{r}} \right)\rho_{\theta} =0,$$ having solution: $$\rho_{\theta} = c_{1} P_{m}(cos(\theta)) +c_{2} Q_{m}(cos(\theta)).$$ In the above equation $c_{1}$ and $c_{2}$ are normalisation constants, and $P_{m}$ and $Q_{m}$ are the Legendre functions of the first and second kind, respectively. The index of these functions is given by the relation $2m=(4C_{T}+1)^{1/2} -1$, where $C_{T}=(C_{c} C_{\theta}/C_{r})$. As is typical of separation-of-variables problems, we see the solution as an eigenvalue problem, with the separation constants determining the order of the solution function. The requirement of axial symmetry forces $m$ to be even. Any desired angular distribution for $\rho_{\theta}$ can now be constructed as an infinite series of the above functions. For simplicity we analyse the case of $m=2$ $(C_{T}=6)$, $c_{1}=\bar{\rho}_{\theta}$, $c_{2}=0$: $$\rho_{\theta} = \bar{\rho}_{\theta} (3 cos^{2}(\theta)-1 )$$ This will result in slightly less material along the plane of the disk, and slightly more along the poles, for the total density $\rho=\rho_{0}+\rho_{1}$, with respect to the background state $\rho_{0}$.
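The index relation above implies $C_{T}=m(m+1)$, so the $m=2$ mode just quoted can be checked directly against eq. (33); a minimal symbolic sketch (SymPy used purely for illustration):

```python
import sympy as sp

theta = sp.symbols('theta')
rho_theta = 3 * sp.cos(theta)**2 - 1       # proportional to P_2(cos(theta))
C_T = 6                                    # m = 2  =>  C_T = m(m + 1) = 6

# Left-hand side of the angular equation (33); it should vanish identically.
lhs = (sp.diff(rho_theta, theta, 2)
       + sp.cot(theta) * sp.diff(rho_theta, theta)
       + C_T * rho_theta)
print(sp.simplify(lhs))                    # -> 0
```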
At this point eqs.(30) and (31) can be used to obtain: $$V_{\theta} = \left( \frac{A \bar{\rho}_{\theta}}{C_{r}} \right) \left( \frac{3cos^{2}(\theta)-1}{g(\theta)} \right)$$ $$U_{\theta}=- \left( \frac{6 A \bar{\rho}_{\theta}}{C_{\theta}} \right) \left( \frac{cos(\theta)sin(\theta)}{g(\theta)} \right)$$ The case of the radial system is more cumbersome, as the linear operator which appears is of third order. A good approximation can be obtained by discarding the term on the right hand side of eq.(29), which can then be readily integrated to give: $$V_{r}=\left( \frac{B C_{r}}{A} \right) r \rho_{r}.$$ In the above we have taken the integration constant which appears to be equal to zero, from requiring $V_{r} \rightarrow 0$ for $\rho_{r} \rightarrow 0$. Substituting the above relation for $V_{r}$ into eq.(27) leads to: $$\frac{d \rho_{r}}{d r} \left[ \left(\frac{B}{A} \right)r -r^{2} \right] + \left(\frac{B}{A} \right)\frac{\rho_{r}}{2} =0.$$ The second term in the above equation can be dismissed for $r<2GM/c^{2}$, which is in any case the validity regime defined by having previously neglected the right hand side of eq.(29), as it is easy to check from the final complete solution. In this regime we now obtain: $$\rho_{r}=\bar{\rho}_{r} \left( \frac{\bar{r}_{\rho}}{r}\right)^{1/2},$$ $$V_{r}=\left( \frac{B C_{r}}{A} \right) \bar{\rho}_{r} \bar{r}_{\rho}^{1/2} r^{1/2},$$ where $\bar{r}_{\rho}$ is a characteristic radius at which $\rho_{r}=\bar{\rho}_{r}$. Now from eq.(28), $$U_{r} = \left( \frac{2 C_{\theta} \bar{\rho}_{r} \bar{r}_{\rho}^{1/2} }{5} \right) r^{3/2}.$$ In the above equation we have also taken the integration constant as zero, from requiring $U_{r} \rightarrow 0$ for $r \rightarrow 0$. Choosing without loss of generality the two characteristic radii $\bar{r}$ and $\bar{r}_{\rho}$ both equal to $GM/c^{2}$, we can now write the full solution as: $$\rho_{1}=\bar{\rho}_{J}\left( \frac{G M}{c^{2} r} \right)^{1/2} \left(3cos^{2}(\theta)-1 \right),$$ $$V_{1}=\left( \frac{\bar{\rho}_{J}}{\bar{\rho}_{0}} \right)\left( \frac{2c^{4}r}{GM} \right)^{1/2} \left( \frac{3cos^{2}(\theta)-1 }{g_{\theta}} \right),$$ $$U_{1}=-\left( \frac{12 \bar{\rho}_{J}}{5\bar{\rho}_{0}} \right) \left( \frac{r}{GM} \right)^{3/2} \left( \frac{c^{4}cos(\theta)sin(\theta)}{2^{1/2}g_{\theta}} \right),$$ where we have introduced $\bar{\rho}_{J}=\bar{\rho}_{\theta} \bar{\rho}_{r}$ and $g_{\theta}$ as the angular part of $g(\theta)$, $g(\theta)/\bar{\rho}_{0}$. The full solution can be seen to depend only on the two parameters $\bar{\rho}_{0}$ and $\bar{\rho}_{J}$, normalisation constants for the densities of the background state and the perturbation, with the velocities of the perturbation solution depending only, and linearly, on the ratio $Q=2^{1/2} \bar{\rho}_{J} / \bar{\rho}_{0}$, which is expected to be small, beyond natural dependences on the physical parameters of the problem, $M$ and $c$. We see from eq.(45) that the angular velocity will be zero only for $\theta=\pi/2$, $\theta=0$ and $\theta=\pi$. Thus, movement along the plane of the disk will remain along the plane, while along the poles the motion will be exclusively radial. This last point, together with the positive sign of the radial velocity along the poles, provides for a well collimated jet along the poles. From eq.(44) we see that one has only to ask for a background state where matter is concentrated on the plane of the disk with relatively empty poles, e.g.
$g(\theta) \rightarrow 0$ for $\theta \rightarrow 0$ and $\theta \rightarrow \pi$, in order to obtain extremely large ejection velocities along the poles. The axial symmetry condition imposed on eq.(34) guarantees both axial symmetry and symmetry above and below the plane of the disk for the full solution. We see also that if one takes higher orders for $m$, the index of the Legendre polynomial solution to eq.(33), one obtains an increasing number of critical angles at positions intermediate between $0$ and $\pi/2$ at which the angular velocity goes to zero. In fact, more complex geometries and asymmetric jets appear, as inferred observationally by e.g. Ferreira et al. (2006), if $m$ is taken as an arbitrary real number. However, modelling a situation where a polar jet dominates the ejection identifies $m=2$ as the leading order. Notice that the qualitative behaviour of the solution is guaranteed by the exactness of the angular solution; the approximation $r<2GM/c^{2}$ used for solving the radial problem will only introduce an error in the actual values of the radial velocities outside of $r<2GM/c^{2}$, but will not change the fact that velocities will be of radial infall along the plane of the disk, $\theta=\pi/2$, and of radial outflow along the poles (where the background solution becomes very small), the jet solution for $\theta=0,\pi$. The two constants of the problem, $\bar{\rho}_{0}$ and $\bar{\rho}_{J}$, can now be calculated once a choice of $g(\theta)$ is specified, from the two conditions: $$\dot{M}_{a} =2\pi \int_{0}^{\pi} \rho_{0}V_{0} sin(\theta) r^{2} d\theta,$$ $$\dot{M}_{j} =2\pi \int_{0}^{\theta_{J}} \rho_{1}V_{1} sin(\theta) r^{2} d\theta,$$ where $\theta_{J}$ is a suitable angle defining the opening of the jet, in all likelihood very small, as will be seen in the following section. In the above equations $\dot{M}_{a}$ gives the matter accretion rate onto the central star, and $\dot{M}_{j}$ the matter ejection rate due to the jet. Dimensionally, the two quantities above will scale as: $$\dot{M}_{a} =C_{a} \frac{ \left( \pi G M \right)^{2} }{c^{3}} \bar{\rho}_{0}$$ $$\dot{M}_{j} =C_{j} \frac{ \left( \pi G M \right)^{2} }{c^{3}} \frac{\bar{\rho}_{J}^{2}}{\bar{\rho}_{0}},$$ where $C_{a}$ and $C_{j}$ are two dimensionless constants which will depend on the choice of $g_{\theta}$, and which would be expected to be of order 1. Qualitatively, this type of model naturally furnishes a tight disk-jet connection (cf. eq. 44), as now firmly established in microquasars and AGN jets (see e.g. Marscher et al. 2002, Chatterjee et al. 2009). In the above systems bursts of enhanced jet activity are seen to follow temporal dips in disk luminosity output after small characteristic delay times. In the present model, such a situation would be expected if the critical radius for transition to radial flow in the disk made a sudden transition to higher values. Again, the drop in disk output might not reflect the disk material disappearing (in this case being swallowed by the central black hole, as sometimes proposed), but simply fading from view as heating mechanisms shut down, then naturally enhancing jet activity as the effective $\dot{M}_{a}$ increases.
Particular Solutions ==================== In order to present a sample of the trajectories expected in the model, we turn to the full solution to the problem, eqs.(21) and (22), but written in dimensionless form: $$\frac{d {\cal R} }{d {\cal T}} =\frac{-1}{ {\cal R}^{1/2}}+Q {\cal R}^{1/2} \left(\frac{3 cos^{2}(\theta)-1}{g_{\theta}} \right),$$ $$\frac{d \theta}{d {\cal T}} =- \frac{6}{5}Q {\cal R}^{1/2} \left(\frac{sin(\theta)cos(\theta)}{g_{\theta}} \right).$$ The above remain in spherical coordinates, where ${\cal R} = r c^{2}/GM$ and ${\cal T}=t c^{3}/GM$. A choice of $Q$ and $g_{\theta}$ now allows us to numerically integrate trajectories. We take $Q=5 \times 10^{-2}$, and $$g_{\theta}=e^{-\left( \frac{\theta- \pi/2}{\sqrt{2} \theta_{0}} \right)^{2}}$$ ![The figure gives trajectories for the complete solution, for a range of initial vertical height positions at $R=2$. The dimensionless parameter of the problem was chosen as $Q=5 \times 10^{-2}$. We see the lowermost two trajectories converging into the central potential, but the six upper ones shooting up into a polar jet, where velocities are over 4 orders of magnitude higher than what was originally present along the disk.](fig1.ps) with $\theta_{0} = 0.38$ radians, or about 22 degrees. This value is far from representing an extremely flattened disk, and hence one which does not force the jet solution, see eq.(44). With these parameters, and initial conditions specified in dimensionless cylindrical coordinates as $R=2$ and $Z$ ranging from 0.4 to 1.4, we solve eqs.(50) and (51) through a finite-difference scheme to plot figure 1. For this case, the two lowermost curves present trajectories which turn downwards to converge onto the central star. These are solutions which essentially follow the background state, infalling onto the bottom of the potential well. As one raises the initial value of $Z$ however, a threshold is crossed and curves of a very different type ensue, the six upper jet trajectories shown in figure 1. We see the large pressure gradients associated with the distribution of matter in the background solution acting to break the fall of the incoming material, turn it back, and then accelerate it vertically through the vertical density gradients. These jet trajectories present a relatively constant thickening at the base of the jet of order $R=0.5$, which then rapidly diminishes with height to eventually yield an extremely collimated structure which rapidly narrows to below the resolution limit of the solution; notice the horizontal displacement of the vertical axis. Note also that although the region beyond ${\cal R}=2$ lies outside the approximation of the radial solution, the qualitative form of the full solution will not deviate much from what is shown in figure 1, due to the exact character of the angular solution. The final well collimated jet ejecting material along the poles will be a generic feature. We can also measure the velocities along the jet, which scale linearly with the value of $Q$, and are sensitive to the choice of $g_{\theta}$ and in this case $\theta_{0}$. For the cases presented here values upwards of $10^{3} c$ commonly appear along the jet, much larger than the values of order $c$ present in the disk. These radial velocities are somewhat inexact, but will always present large values for background state configurations which empty towards the poles (cf. eq. 44), and will present larger values as the vertical density gradients increase.
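The integration just described is easy to reproduce in outline. The sketch below uses Python with a generic SciPy integrator rather than the finite-difference scheme used for figure 1; $Q$, $\theta_{0}$ and the range of initial conditions are the values quoted above, while the stopping radius, time span, step sizes and the coarse sampling of $Z_{0}$ are assumptions chosen purely for illustration, so the exact split between infalling and jet-like trajectories need not match figure 1 in detail:

```python
import numpy as np
from scipy.integrate import solve_ivp

Q, theta0 = 5e-2, 0.38                      # parameters quoted in the text

def g_theta(theta):
    return np.exp(-((theta - np.pi / 2) / (np.sqrt(2) * theta0))**2)

def rhs(T, y):
    """Dimensionless eqs. (50)-(51); y = (R, theta) in spherical coordinates."""
    R, th = y
    dR = -1.0 / np.sqrt(R) + Q * np.sqrt(R) * (3 * np.cos(th)**2 - 1) / g_theta(th)
    dth = -(6.0 / 5.0) * Q * np.sqrt(R) * np.sin(th) * np.cos(th) / g_theta(th)
    return [dR, dth]

def accreted(T, y):                         # stop before the R -> 0 singularity
    return y[0] - 0.05
accreted.terminal = True

# Initial conditions: cylindrical (R, Z) = (2, Z0), converted to (r, theta),
# with theta measured from the pole as in the text.
for Z0 in np.linspace(0.4, 1.4, 8):
    r0, th0 = np.hypot(2.0, Z0), np.arctan2(2.0, Z0)
    sol = solve_ivp(rhs, (0.0, 10.0), [r0, th0], events=accreted,
                    max_step=1e-3, rtol=1e-8)
    fate = "accretes" if sol.t_events[0].size else "joins the polar outflow"
    print(f"Z0 = {Z0:.2f}:  R = {sol.y[0, -1]:.3g}, theta = {sol.y[1, -1]:.3f}  ({fate})")
```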
Notice that the generic validity of the solution proposed does not require that all of the disk material should lose all of its angular momentum at the critical radius $R_{T}$, only that some of the disk material should lose most of its angular momentum at that point. Any minor, residual angular momentum remaining on the disk fraction which forms the jet will only establish a finite jet cross-sectional area through the appearance of a centrifugal barrier, which will present only a small correction to the scenario given here. Still, the empirical presence of spectroscopically studied accretion disks truncated interior to certain critical radii suggests the very substantial reduction of the shears in that region, as will be the case when a transition to radial flow takes place. In going back to eqs.(50) and (51) we see that one expects $$Q^{2} \sim \frac{\dot{M}_{j}}{\dot{M}_{a}}.$$ It is reassuring that for the jets associated with T Tauri stars for example, values of $\dot{M}_{j} / \dot{M}_{a}$ of between 0.1 and 0.01 on average are observationally inferred, and therefore values of $Q$ of order $0.1 \lesssim Q \lesssim 0.3$ (e.g. Hartigan et al. 1996, Gullbring et al. 1998, Hartigan et al. 2004, Ferreira et al. 2006). These are compatible with the value of $Q=5 \times 10^{-2}$ used to plot figure 1, and hence ones which will readily yield jet solutions. Given the substantial spread in the inferred values quoted above, we see that values of $Q$ of $10^{-3}$ or even $10^{-4}$ can occur. We note that for smaller values of $Q$ the jet becomes more centrally localised, making the numerical problem of integrating trajectories challenging. In general, thin disk jet solutions will appear for lower values of $Q$, or conversely, jet solutions for low values of $Q$ require thinner disks, i.e. small values of $\theta_{0}$ for the function $g_{\theta}$ used in the example. ![The figure gives trajectories for the complete solution, for a range of initial vertical height positions at $R=3.0$. The dimensionless parameter of the problem was chosen as $Q=10^{-4}$. We see a very thin disk developing a central bulge inwards of $R=1$, from which an extremely well collimated jet of very fast material exits along the poles.](fig2a.ps) Figure 2 presents an example with a much smaller value of $Q=10^{-4}$, requiring a slightly thinner disk with $\theta_{0}=0.24$, close to 14 degrees. The figure shows a contour plot for dimensionless momentum flow, the product of the dimensionless velocity, $d {\cal R} /d {\cal T} $, and density in units of $M/ {\cal R}^{3}$ from eqs. (43) and (44). Within the disk contours span from $-8$ to $-0.1$, in intervals of $0.05$, while in the jet contours span from $2\times 10^{-4}$ to $10^{-2}$ in intervals of $5 \times 10^{-5}$. The disk appears clearly identified as being made up of infalling material, while the jet shows up as a region of pure outflow carrying little mass, but at velocities of order $10^{5} c$. Again, we see the main features of the solution being well established in the region interior to $R<2$; the details of the radial part of the solution beyond this region will be somewhat off, within the qualitative solution shown. The long range stability and coherence of these structures lies outside the scope of this work, and is in all probability furnished by a series of mechanisms extensively explored in the literature including angular momentum, magnetic fields and pressure containment of the surrounding medium, e.g. Begelman et al. (1984), Blandford (1990), Falle (1991), Kaiser & Alexander (1997).
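As a quick numerical illustration of eq. (52) (a sketch only; the ratios are those quoted above, and the last column uses the definition $Q=2^{1/2}\bar{\rho}_{J}/\bar{\rho}_{0}$):

```python
import numpy as np

# Q ~ sqrt(Mdot_j / Mdot_a), from eq. (52).
for ratio in (0.1, 0.01, 1e-3, 1e-4):
    Q = np.sqrt(ratio)
    print(f"Mdot_j/Mdot_a = {ratio:g}  ->  Q ~ {Q:.3f},  rho_J/rho_0 ~ {Q / np.sqrt(2):.3f}")
```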
In going to the more extreme jet phenomena associated with stellar black holes (e.g. Mirabel & Rodriguez 1999), quasars (e.g. Marscher 2002) and gamma ray bursts (e.g. Mészáros 2002), it is natural to expect the ideas presented here to apply, but amplified to a much more extreme regime by the appearance of corresponding relativistic and general relativistic effects, to first order, the shift in the divergence in the potential from $r=0$ in the Newtonian case to $r=r_{s}$ in the general relativistic one. It is therefore natural to expect purely hydrodynamic jet generation mechanisms to apply across all classes of astrophysical objects, especially given the qualitative scalings and similarities which appear over all astrophysical jet classes (e.g. Mirabel & Rodriguez 1999, Mendoza, Hernandez & Lee 2005), in addition to the magnetically driven processes traditionally found in the literature. Notice from eq.(44) the intimate link between the jet velocity and the physical state of the infalling material, $c$ and $\dot{M}_{a}$ through $\bar{\rho}_{0}$. This implies that temporal variations in the parameters of the infalling disk will result in temporal variations in the density and velocity of the jet material, in a way described by eqs.(43) and (44). The above can serve as a physical description of the key processes relevant to the formation of internal shocks in astrophysical jets, the main ingredient behind phenomena such as HH objects and gamma ray bursts. Conclusions {#ccl} =========== We show that given a radially infalling accretion disk, a purely hydrodynamical jet ensues. We calculate the condition for the transition from quasi-circular to quasi-radial flow in a standard accretion disk, and show it will always occur for power law surface density profiles of the form $\Sigma \propto R^{-n} $, interior to a critical radius, provided $n>1/2$. Comparison with inferred inner holes in observed accretion disks yields results consistent with our estimates for the above transition radius, the point where shears in the flow (and hence heating) end. Well collimated jets readily appear, proving the existence of purely hydrodynamical mechanisms for the generation of astrophysical jets. acknowledgements {#acknowledgements .unnumbered} ================ Xavier Hernandez acknowledges the hospitality of the Observatoire de Paris for the duration of a sabbatical stay during which many of the ideas presented here were first developed. Pablo Rendon and Rosa Rodriguez acknowledge financial support from project PAPIIT IN110411 DGAPA UNAM. Antonio Capella acknowledges financial support from project PAPIIT IN101410 DGAPA UNAM. [99]{} Alexander, R. D., Clarke, C. J., Pringle, J. E., 2006, MNRAS, 369, 229 Begelman, M., Blandford, R. D., Rees, M., 1984, Rev. Mod. Phys., 56, 256 Blandford, R. D., Payne, D. G., 1982, MNRAS, 199, 883 Blandford, R. D., 1990, in “Active Galactic Nuclei", eds. Courvoisier, T. L., Mayor, M., Saas-Fee Advanced Course 20 (Les Diablerets: Springer-Verlag), 161-275 Chatterjee, R., et al., 2009, ApJ, 704, 1689 Chiang, E., Murray-Clay, R., 2007, Nat. Phys., 3, 604 Clarke, C. J., Gendrin, A., Sotomayor, M., 2001, MNRAS, 328, 485 D’Alessio, P., et al., 2005, ApJ, 621, 461 Dullemond, C. P., Dominik, C., 2005, A&A, 434, 971 Dutrey, A., et al., 2008, A&A, 490, L15 Falle, S. A. E. G., 1991, MNRAS, 250, 581 Ferreira, J., Dougados, C., Cabrit, S., 2006, A&A, 453, 785 Firmani, C., Hernandez, X., Gallagher, J., 1996, A&A, 308, 403 Glauert, M. B., 1956, J. Fluid Mech., 1, 625 Gullbring, E., Hartmann, L., Briceño, C., Calvet, N., 1998, ApJ, 492, 323 Hartmann, L., Calvet, N., Gullbring, E., D’Alessio, P., 1998, ApJ, 495, 385 Hartigan, P., Edwards, S., Gandhour, L., 1995, ApJ, 452, 736 Hartigan, P., Edwards, S., Pierson, R., 2004, ApJ, 609, 261 Henriksen, R. N., Valls-Gabaud, D., 1994, MNRAS, 266, 681 Hollenbach, D., Johnstone, D., Lizano, S., Shu, F., 1994, ApJ, 428, 654 Hughes, A. M., Wilner, D. J., Calvet, N., D’Alessio, P., Claussen, M. J., Hogerheijde, M. R., 2007, ApJ, 664, 536 Hughes, A. M., et al., 2009, ApJ, 698, 131 Ireland, M. J., Kraus, A. L., 2008, ApJ, 678, L59 Kaiser, C. R., Alexander, P., 1997, MNRAS, 286, 215 King, A. C., Needham, D. J., 1994, J. Fluid Mech., 268, 89 Lada, C., et al., 2006, AJ, 131, 1574 Marscher, A. P., Jorstad, S. G., Gomez, J., Aller, M. F., Teräsranta, H., Lister, M. L., Stirling, A. M., 2002, Nature, 417, 625 Meier, D. L., Koide, S., Uchida, Y., 2001, Science, 291, 84 Mendoza, S., Hernandez, X., Lee, W. H., 2005, Rev. Mex. Astron. Astrofis., 41, 61 Mészáros, P., 2002, ARA&A, 40, 137 Mirabel, I. F., Rodriguez, L. F., 1999, ARA&A, 37, 409 Price, D. J., Pringle, J. E., King, A. R., 2003, MNRAS, 339, 1223 Pringle, J. E., 1981, ARA&A, 19, 137 Reipurth, B., Bally, J., 2001, ARA&A, 39, 403 Rice, W. K. M., Wood, K., Armitage, P. J., Whitney, B. A., Bjorkman, J. E., 2003, MNRAS, 324, 79 Shakura, N. I., Sunyaev, R. A., 1973, A&A, 24, 337 Scheuer, P. A. G., 1974, MNRAS, 166, 513 Strom, K. M., Strom, S. E., Edwards, S., Cabrit, S., Skrutskie, M. F., 1989, AJ, 97, 1451 Taylor, G., 1960, Proc. R. Soc. Lond. A, 259, 1 von Weizsäcker, C. F., 1943, Z. Astrophys., 22, 319
{ "pile_set_name": "ArXiv" }
--- abstract: 'The opacity of typical objects in the world results in occlusion — an important property of natural scenes that makes inference of the full 3-dimensional structure of the world challenging. The relationship between occlusion and low-level image statistics has been hotly debated in the literature, and extensive simulations have been used to determine whether occlusion is responsible for the ubiquitously observed power-law power spectra of natural images. To deepen our understanding of this problem, we have analytically computed the 2- and 4-point functions of a generalized “dead leaves" model of natural images with parameterized object transparency. Surprisingly, transparency alters these functions only by a multiplicative constant, so long as object diameters follow a power law distribution. For other object size distributions, transparency more substantially affects the low-level image statistics. We propose that the universality of power law power spectra for both natural scenes and radiological medical images – formed by the transmission of x-rays through partially transparent tissue – stems from power law object size distributions, independent of object opacity.' author: - Joel Zylberberg - David Pfau - Michael Robert DeWeese title: 'Dead leaves and the dirty ground: low-level image statistics in transmissive and occlusive imaging environments' --- Introduction ============ Natural images are surprisingly statistically uniform. The autocorrelation function, a measure of how similar nearby pixels tend to be, is virtually universal for natural images [@stephens; @ruderman; @dong_atickB; @field; @olshausen_simoncelli; @torralba; @vandersschaff] (Fig. 1). This is typically quantified by measuring image power spectra (Fourier transform of the autocorrelation function), which are well-described by scale-invariant power law functions with power $\mathcal{P}$ and spatial frequency $k$ related by $\mathcal{P}(k)\propto k^{-\alpha}$, with exponents $\alpha \approx 2$. The exponents $\alpha$ vary slightly from image-to-image, and there are small differences in average exponent $\alpha$ between terrestrial [@field; @ruderman; @vandersschaff] and aquatic [@balboa] environments, and between natural and man-made ones [@torralba]. Intriguingly, even radiological images like mammograms have power law power spectra [@heine_velthuizen; @li], typically with larger $\alpha$ values, despite the fact that the physics of image formation are very different for radiological and natural images. In natural images, formed by reflection of light off of surfaces, objects tend to be opaque, and thus they occlude one another, whereas in mammograms, formed by the transmission of x-rays through breast tissue, objects are more transmissive and do not completely occlude one another. The statistics of radiological images have received less attention and are less well understood. Interestingly, however, the powers $\alpha$ typically vary between mammogram images of patients with low vs. high risk of developing breast cancer [@li], and vary as a function of the density of the breast tissue [@metheany], highlighting the potential clinical importance of these image statistics. The statistical regularity of natural scenes implies that engineers can design, and evolution might have selected for, coding schemes that exploit this structure [@barlow61; @olshausen_simoncelli]. 
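The rotation-averaged spectra of the kind shown in Fig. 1C are straightforward to estimate from any grayscale image array; the following is a minimal Python sketch, shown purely for illustration and not the code used to produce the figures:

```python
import numpy as np

def rotation_averaged_power_spectrum(img, n_bins=50):
    """Radially averaged power spectrum of a 2D grayscale image array."""
    img = img - img.mean()                       # remove the DC component
    power = np.abs(np.fft.fftshift(np.fft.fft2(img)))**2
    ny, nx = img.shape
    ky, kx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(ny)),
                         np.fft.fftshift(np.fft.fftfreq(nx)), indexing='ij')
    k = np.hypot(kx, ky).ravel()
    bins = np.logspace(np.log10(1.0 / max(nx, ny)), np.log10(0.5), n_bins)
    idx = np.digitize(k, bins)
    Pk = np.array([power.ravel()[idx == i].mean() if np.any(idx == i) else np.nan
                   for i in range(1, n_bins)])
    return bins[:-1], Pk

def spectral_exponent(k, Pk):
    """Least-squares fit of P(k) ~ k^(-alpha) in log-log coordinates."""
    good = np.isfinite(Pk) & (Pk > 0)
    slope, _ = np.polyfit(np.log(k[good]), np.log(Pk[good]), 1)
    return -slope
```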
Indeed, the peripheral mammalian visual system appears to exploit this homogeneity by using simple filters to decorrelate the incoming signal [@dong_atickA; @atick_redlich; @dan] and more complex feature dictionaries to efficiently encode the decorrelated signal [@jz_plos; @olshausen_simoncelli; @rehn_sommer]. ![ (Color online) **Natural images have nearly identical scale invariant power spectra.** Even very different natural images (**A**,**B**) have similar rotation-averaged power spectra that each follow a power law (**C**). Line colors in panel **C** match the borders of corresponding panels **A** and **B**. The upper curve in panel **C** corresponds to the image in panel **A** while the lower one corresponds to **B**. ](fig1){width="3.6in"} Using the intuition that the environment is composed of distinct objects, Ruderman studied a “dead leaves" model [@matherton_1968; @bordenave] for natural scenes, in which images are created by sequentially placing opaque, potentially overlapping circles of random brightness in random locations on a 2-dimensional image plane [@ruderman_scaling] (Fig. 2A). Ruderman modeled correlations between pixels by assuming a different correlation function for points falling within a visible circle than for points falling in different visible circles. Using analytical calculations he demonstrated that, so long as the diameters $s$ of the circles follow a power law distribution with probabilities $p(s) \propto s^{-(3 + \eta)}$, the images exhibit power law correlation functions, $C(q) \propto q^{-\eta}$, where $q$ is the separation between pixels, and power law power spectra, $\mathcal{P}(k) \propto k^{-(2-\eta)}$. If the circle sizes are drawn from other distributions, Ruderman’s analytical calculations suggest that the power spectra could be made to differ from a power law, contrary to the old notion [@carlson] that the $1/k^2$ power spectra result from the mere presence of edges, each of which has a $1/k^2$ 1-dimensional power spectrum (*cf.* Balboa et al. [@balboa_2001]). More recently, Balboa et al. [@balboa_2001] simulated the analytical examples presented by Ruderman [@ruderman_scaling], including images with the exponential distribution of object sizes that was claimed [@ruderman_scaling] to yield non-power-law power spectra. They found that these images had nearly power law power spectra, and subsequently reiterated the previous claim that occlusion, and not object size distributions, is the cause of power law power spectra in natural images. This “edges vs. size distributions" debate was subsequently resolved when Hsiao and Milane demonstrated, via numerical simulations, that dead leaf models with partially transparent objects (and thus only partial occlusion) whose sizes follow a power law distribution yield power law power spectra, and that dead leaf models with opaque objects from other size distributions can have non-power-law power spectra [@hsiao]. In other words, occlusion is neither necessary nor sufficient to yield power law image power spectra. In the same paper, Hsiao and Milane computed the power spectrum of a simplified ensemble of images formed by summing the intensities of different randomly placed disks. This model was simpler than the images with partially occluding leaves that they simulated. The linearity of this model makes it relatively straightforward to compute the Fourier transform of the model images, and thus to estimate the power spectra.
Thus, to date, the 2-point statistics of dead leaf image models have been analytically calculated for both fully opaque leaves [@ruderman_scaling], and for fully transmissive leaves [@hsiao]. What remains is to solve for the 2-point function of images with partial occlusion, which will deepen our understanding of how opacity and image statistics inter-relate along this continuum of object properties. Thus motivated, we studied a generalized dead leaves model, in which the leaves have variable transparency. While general feature probabilities have been solved exactly for the fully opaque dead leaves model [@pitkow], our transparent generalization requires other methods and has not previously been systematically explored. We show herein that, so long as leaf sizes follow a power-law distribution, transparency results in an overall multiplicative factor in the 2- and 4-point functions but does not change their functional (power-law) form. For other size distributions, transparency does change the form of the autocorrelation function, suggesting that power-law size distributions unify the observed power spectra of natural and radiological images. Analytical calculation of the 2-point function in the transmissive dead leaves model ==================================================================================== We begin by analytically computing the 2-point functions of images in our “transmissive dead leaves" environment. For image pixel values $I(\vec{x})$, the 2-point function is given by $C(\vec{x},\vec{x}') = \left< I(\vec{x}) I(\vec{x}') \right> = C(|\vec{x}-\vec{x}'|)$, where the angle brackets denote averaging over images drawn from this ensemble and the second step stems from the fact that, since our model world is invariant under both translations and rotations, the 2-point function depends only on the distance $|\vec{x}-\vec{x}'| = q$ between sample points. The image is formed by randomly placing a circle whose diameter $s$ is drawn from some distribution, with brightness value $b$, and transparency $a$, on a surface of diameter $L$. The brightnesses $b$ will be drawn from a zero-mean distribution, and the transparencies $a \in [0,1]$ can also be random. A value $a=1$ specifies a fully transparent (invisible) circle, while a value of $a=0$ specifies a fully opaque circle, as in Ruderman’s model [@ruderman_scaling]. When a new circle is added, the pixel value $I(\vec{x})$ at a point $\vec{x}$ that falls within the circle undergoes the transformation $$I(\vec{x}) \to (1-a)b + aI(\vec{x}).$$ Pixels not lying under the circle are unaffected by its addition. This process is continued ad infinitum to create model images (Fig. 2). ![ (Color online) **For power law object size distributions, the 2-point statistics of opaque and transmissive dead leaves images differ by a multiplicative constant.** (**A**) A representative image from the opaque ($a=0$) dead leaves model with circle diameters drawn from the distribution $p(s) \propto s^{-3.2}$ for $s>s_0 = 1$ pixel and circle brightnesses drawn uniformly within $b \in [-1,1]$. (**B**) When the circles are partially transparent ($a=0.25$ for all circles), but all other parameters are the same, previously occluded circles are partially visible. (**C**) A higher level of transparency ($a = 0.75$) results in an image that begins to approximate Gaussian pink noise, as expected from the central limit theorem [@pitkow].
(**D**) Autocorrelation functions of dead leaves image ensembles of different opacity levels differ only by a multiplicative constant for power-law object size distributions. The 2-point functions are power law functions of distance, with power $\sim-0.2$, in good agreement with our analytical calculation. (**E**) Similarly, the power spectra of these image ensembles are roughly power-law functions and are all the same up to a multiplicative constant. The ratio of the opaque and most transparent power spectra is nearly flat. At relatively high spatial frequencies (above $\sim 20$ cycles/image), corresponding to small length scales, the $q \gg s_0$ approximation in our analytical calculation fails, and slight deviations from power-law power spectra can be observed, as can deviations from constancy in the ratio.](fig2){width="3.6in"} We will compute $\left< I(\vec{x})^2 \right>$ and $C(q)$ recursively by noting that adding another leaf to an image creates a new image from the same transmissive dead leaves ensemble and thus the (average) statistical properties must remain unchanged by this transformation [@ruderman_scaling]. Using Eq. (1), we can compute the pixel variance $$\begin{aligned} \left< I^2(\vec{x}) \right> &=& \left( 1-P_{in} \right) \left< I^2(\vec{x}) \right> \\ &+& P_{in} \left< \left( aI(\vec{x}) + (1-a)b \right)^2 \right> \nonumber \\ \nonumber \Rightarrow \left< I^2(\vec{x}) \right> &=& \frac{ \left< b^2 \right> \left< (1-a)^2\right>} {1- \left< a^2 \right> } \nonumber,\end{aligned}$$ where $P_{in}$ is the probability that the point in question falls within the newly added circle. The quantity $P_{in}$, and thus the distribution of circle sizes, does not affect the pixel variance. It will however, affect the spatial properties of the image, including $C(q)$. To compute $C(q)$, consider how the pixel values of a pair of points with separation $q$ are affected by the addition of a new leaf. After adding the leaf, either one, both, or neither of the sample points lie under the leaf, resulting in three different possible modifications to the pixel values (Eq. 1). These outcomes occur with probabilities $P_1(q)$, $P_2(q)$, or $P_0(q)$, respectively, which we will later compute. Equating the 2-point functions before and after the addition of a new leaf, we obtain $$\begin{aligned} &C(q)& = P_0(q) C(q) + P_1(q) \left< \left[ a I(\vec{x}) + (1-a)b \right] I(\vec{x}') \right> \nonumber \\ &+& P_2 (q) \left< \left[ a I(\vec{x}) + (1-a)b \right] \left[ a I(\vec{x}') + (1-a)b \right] \right>. \end{aligned}$$ Recalling the definition of the autocorrelation function and the normalization $P_0 (q)+ P_1(q) + P_2(q) = 1$, we find $$C(q) = \frac{ \left< b^2 \right> \left< (1-a)^2 \right> P_2(q)}{P_1(q)\left< 1-a \right> + P_2(q) \left< 1-a^2 \right> }.$$ The quantities $\left< b^2 \right>$, $\left< a^2 \right>$, and $\left< a \right>$ depend on the distributions of circle brightnesses and opacities. To calculate $P_1(q)$, we first define $P^{\star} = \left< s^2 \right> / L^2$, which is the probability that any given point in the image falls within a newly-deposited leaf. Here $L$ is the diameter of the circular image area, $s$ is the diameter of the newly added circle, and we assume $\left<s^2 \right> \ll L^2$. The probability $P_1(q)$ that either point, but not both, falls within the circle is then $P_1(q) = 2 \left( P^{\star} - P_2(q) \right)$, where the factor of 2 comes in because there are two such points to consider. 
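Before computing $P_2(q)$, we note that the variance result in Eq. (2) is easy to check by direct Monte Carlo simulation of the update rule in Eq. (1) at a single pixel (leaves that miss the pixel leave it unchanged, so only covering leaves matter). The sketch below uses uniform brightness and opacity distributions chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
a_max = 0.75                       # opacities a ~ U(0, a_max); assumed for illustration
n_pixels, n_layers = 100_000, 400  # independent pixels, covering leaves per pixel

I = np.zeros(n_pixels)
for _ in range(n_layers):
    b = rng.uniform(-1.0, 1.0, n_pixels)     # brightnesses b ~ U(-1, 1)
    a = rng.uniform(0.0, a_max, n_pixels)
    I = (1.0 - a) * b + a * I                # Eq. (1) applied to covered pixels

# Eq. (2): <I^2> = <b^2><(1-a)^2> / (1 - <a^2>), with the moments of the
# uniform distributions used above.
b2 = 1.0 / 3.0
a2 = a_max**2 / 3.0
one_minus_a_sq = 1.0 - a_max + a_max**2 / 3.0
print("measured:", I.var(), " predicted:", b2 * one_minus_a_sq / (1.0 - a2))
```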
To determine the probability $P_2(q)$, note that, for a circle of diameter $s$, given that one particular point $\vec{x}$ is within the circle (which occurs with probability $s^2/L^2$), the probability that another point, a distance $q$ away, is also within the circle, is given by [@ruderman_scaling] $g(q/s \in [0,1]) = \frac{2}{\pi} \left[ \cos^{-1}(q/s) - (q/s) \sqrt{1-(q/s)^2} \right]$, and thus $$P_2(q) = \int_{0}^{\infty} \frac{s^2}{L^2} g(q/s)p(s) ds.$$ For a power law size distribution $p(s) = (A/s_0) (s/s_0)^{-\alpha}$, where $\alpha > 3$, $A$ is a unitless normalization constant, and $s_0$ is the small-size cutoff, the change of variables $u = s/q$ in the above integral yields $$P_2(q) = A \left( \frac{s_0}{L} \right)^2 \left( \frac{q}{s_0} \right)^{-(\alpha -3)} \int_{1}^{\infty} g(1/u) u^{2-\alpha} du.$$ Define the integral to be $B(\alpha)$. For pixel separations much larger than the small-size cutoff of our leaf diameter distribution, $q \gg s_0$ (in which case $P^{\star} = \frac{A}{\alpha-3} \left( \frac{s_0}{L} \right)^2 \gg P_2(q)$), Eq. (4) becomes $$C(q) = \frac{ B(\alpha) \left(\alpha-3 \right) \left< b^2 \right> \left< (1-a)^2 \right>}{2\left< 1-a \right>} \left( \frac{q}{s_0} \right)^{-(\alpha - 3)},$$ yielding an image power spectrum [@ruderman_scaling] $\mathcal{P}(k) \propto \frac{ \left< b^2 \right> \left< (1-a)^2 \right>}{\left< 1-a \right>} k^{-(5-\alpha)}$ in which the opacity affects the power spectrum only as a multiplicative prefactor. When $a=0$ for all circles (opaque limit), our result is equal to that of Ruderman [@ruderman_scaling], as it must be. Also note that, as one might expect, the 2-point function does not depend on the size $L$ of the image surface. To demonstrate that leaf opacity can affect the functional form of the 2-point function, we repeat the above calculations, but now have all leaves be the same size $s^{\star}$. The size distribution is thus $p(s) = \delta(s-s^{\star})$, in which case the correlation function is $$C_{\delta} (q) = \frac{ \left<b^2 \right> \left<\left( 1-a \right)^2 \right> g(q/s^{\star})}{2 \left<1-a \right> - \left< \left(1-a\right)^2 \right> g(q/s^{\star})},$$ which depends non-trivially on $a$: for $q>s^{\star}, g(q/s^{\star})=0$ and the correlation function vanishes, so the large-$q$ limit in which Eq. (7) was derived is irrelevant for delta-function size distributions. Furthermore, even for fully opaque leaves, it is clear that this correlation function, which is identically zero for $q>s^{\star}$, is not described by a power-law function of distance. A comparison of Eqs. (2) and (7) shows that the pixel variance, and the image autocorrelation function, are multiplied by different opacity-dependent pre-factors. For $q=0$, the variance and the 2-point function are equal, so the fact that for $q \gg s_0$ they scale differently with changing opacity highlights that there is a qualitative change in the 2-point function near the $q \sim s_0$ boundary. For natural images, the minimum object size is much smaller than our cameras can resolve, so this boundary is never encountered in practice. Furthermore, this comparison demonstrates that not all image statistics vary in the same way with changing leaf opacity.
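The only non-elementary ingredient in Eq. (7) is the constant $B(\alpha)$, which is easily evaluated by numerical quadrature; a minimal sketch (SciPy is used here purely for illustration):

```python
import numpy as np
from scipy.integrate import quad

def g(x):
    """Overlap kernel g(q/s) for q/s in [0, 1]."""
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x**2))

def B(alpha):
    """B(alpha) = int_1^inf g(1/u) u^(2 - alpha) du, from Eq. (6)."""
    value, _ = quad(lambda u: g(1.0 / u) * u**(2.0 - alpha), 1.0, np.inf)
    return value

alpha = 3.2
print("B(alpha) =", B(alpha))                            # ~4.0 for alpha = 3.2
print("C(q) exponent:", -(alpha - 3.0), "  P(k) exponent:", -(5.0 - alpha))
```

For the $\alpha = 3.2$ size distribution used in the figures, this gives a predicted correlation exponent of $-0.2$ and a spectral exponent of $-(5-\alpha)=-1.8$.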
Analytical calculation of the 4-point function for collinear points in the transmissive dead leaves model ========================================================= As we have seen, the form of the 2-point function is independent of leaf opacity for power-law object size distributions. At the same time, the images generated with different leaf opacities (Fig. 2) are visibly different, so there must be some difference in the image statistics (aside from the overall pixel variance) from ensembles with different object opacities. To understand this difference, we consider higher-order statistics beyond the 2-point function. If the leaf brightnesses $b$ are symmetrically distributed about zero, then the 3-point function will vanish, and so the next possible candidate beyond the 2-point function is the 4-point function. In this section, we will compute the 4-point function $C^{coll}_4(\vec{x}, \vec{x}',\vec{x}'',\vec{x}''') = \left< I(\vec{x}) I(\vec{x}') I(\vec{x}'') I(\vec{x}''') \right>$ for equidistant collinear points: $|\vec{x}-\vec{x}'| = |\vec{x}'-\vec{x}''| = |\vec{x}''-\vec{x}'''| = q$ and $|\vec{x}-\vec{x}''| = |\vec{x}'-\vec{x}'''| = 2q$, for the dead leaves model with power-law leaf size distribution. We chose this arrangement of points because it considerably simplifies the analysis of the 4-point function, for reasons that will become apparent during the calculation. Nevertheless, the calculation itself is still somewhat tedious, so some readers may wish to skip to the result at the end of this section. As in the case of the 2-point function described above, since our image ensemble is invariant under translations and rotations, the result depends only on the pixel spacing $q$: $C^{coll}_4(\vec{x}, \vec{x}',\vec{x}'',\vec{x}''') = C^{coll}_4(q)$. We apply the same recursive logic that we used for computing the 2-point function in order to infer the 4-point function, and start by enumerating all of the possible modifications to the 4-point function upon the addition of a new circle. We will number the points from left to right.
Thus, $$\begin{aligned} C^{coll}_4(q) &=& P^{coll}_{\o}(q) C^{coll}_4(q) \\ &+& P^{coll}_{1,\o} (q) \left< \left[ a I(\vec{x}) + (1-a)b \right] I(\vec{x}') I(\vec{x}'') I(\vec{x}''') \right> \nonumber \\ &+& P^{coll}_{2,\o}(q) \left< I(\vec{x}) \left[a I(\vec{x}') + (1-a)b \right] I(\vec{x}'') I(\vec{x}''') \right> \nonumber \\ &+& P^{coll}_{3,\o}(q) \left< I(\vec{x}) I(\vec{x}') \left[a I(\vec{x}'') + (1-a)b \right] I(\vec{x}''') \right> \nonumber \\ &+& P^{coll}_{4,\o}(q) \left< I(\vec{x}) I(\vec{x}') I(\vec{x}'') \left[a I(\vec{x}''') + (1-a)b \right] \right> \nonumber \\ &+& P^{coll}_{1,2}(q) \left< \left[a I(\vec{x}) + (1-a)b \right]\left[a I(\vec{x}') + (1-a)b \right] I(\vec{x}'') I(\vec{x}''') \right> \nonumber \\ &+& P^{coll}_{2,3}(q) \left< I(\vec{x}) \left[a I(\vec{x'}) + (1-a)b \right]\left[a I(\vec{x}'') + (1-a)b \right] I(\vec{x}''') \right> \nonumber \\ &+& P^{coll}_{3,4}(q) \left< I(\vec{x}) I(\vec{x}') \left[a I(\vec{x''}) + (1-a)b \right]\left[a I(\vec{x}''') + (1-a)b \right] \right> \nonumber \\ &+& P^{coll}_{1,2,3}(q) \left< \left[a I(\vec{x}) + (1-a)b \right] \left[a I(\vec{x'}) + (1-a)b \right] \left[a I(\vec{x''}) + (1-a)b \right] I(\vec{x}''' )\right> \nonumber \\ &+& P^{coll}_{2,3,4}(q) \left< I(\vec{x}) \left[a I(\vec{x'}) + (1-a)b \right] \left[a I(\vec{x''}) + (1-a)b \right] \left[a I(\vec{x'''}) + (1-a)b \right] \right> \nonumber \\ &+& P^{coll}_{1,2,3,4}(q) \left< \left[a I(\vec{x}) + (1-a)b \right] \left[a I(\vec{x'}) + (1-a)b \right] \left[a I(\vec{x''}) + (1-a)b \right] \left[a I(\vec{x'''}) + (1-a)b \right] \right> \nonumber, \end{aligned}$$ where $P^{coll}_{\o}$ is the probability that none of the four collinear points fall under the newly-deposited circle, $P^{coll}_{i,\o}$ is the probability that only the $i^{th}$ point falls under the newly-deposited circle, $P^{coll}_{i,j}$ is that probability that only the $i^{th}$ and $j^{th}$ collinear points fall under the newly-deposited circle, and so on. Because the points are collinear, it is impossible for non-neighboring pixels to fall under a given circle unless all of the pixels in between them also fall under that circle. Hence, there are no terms like $P^{coll}_{1,3}$ or $P^{coll}_{1,2,4}$ in the above equation, since they would require there to be “gaps" between neighboring pixels. Alternatively, one can include those terms but note that the probabilities associated with them are zero. To simplify Eq.(9) to the point that we can easily solve for $C^{coll}_4(q)$, we will first expand and simplify all of the average products $\left< \cdot \right>$, then compute all of the probabilities $P^{coll}_{\{ \cdot \}}$, and finally assemble all of these pieces. Expanding and simplifying the average pixel-value-products ---------------------------------------------------------- Since the circle brightnesses $b$ are zero-mean and independently drawn, each of the terms in which a single pixel is modified (the second through fifth terms in Eq. (9)) reduces to $\left<a\right> P^{coll}_{i,\o} C^{coll}_4(q)$. Similarly, expanding the terms in which 2 points fall under the circle (the sixth through eighth terms in Eq. (9)), recalling that $\left<b \right> = 0$, and performing a bit of algebra, each of those terms can be simplified to $$P^{coll}_{i,j}(q) \left[ \left< a^2 \right> C^{coll}_4(q) + \left< (1-a)^2 \right> \left< b^2 \right> C_2(|k-m|q) \right],$$ where $k \ne m$, $k,m \in \{1,2,3,4\} \backslash \{i,j\} $, $C_2(.)$ is the 2-point function that we calculated in the previous section (Eqs. 
(4) and (7) for power-law object size distributions), and we now denote it with a subscript 2 to avoid confusion with the 4-point function. Assuming that the circle brightnesses are symmetrically distributed about zero (and thus $\left<b^3 \right> = 0$), the $P^{coll}_{i,j,k}$ terms in which 3 points fall under the circle reduce to $$P^{coll}_{i,j,k}(q) \left[ \left< a^3\right> C^{coll}_4(q) + \left< a(1-a)^2 \right> \left< b^2 \right> \left( C_2(q) + C_2(2q) + C_2(3q) \right) \right].$$ Finally, the last term in Eq. (9), in which all 4 points fall under the new circle, simplifies to $$P^{coll}_{1,2,3,4}(q) \left[ \left< a^4\right> C^{coll}_4(q) + \left<a^2 (1-a)^2 \right> \left<b^2 \right> \left( 3C_2(q) + 2C_2(2q) + C_2(3q) \right) + \left< (1-a)^4 \right> \left<b^4 \right> \right].$$ Computing the probabilities $P^{coll}_{\{ \cdot \}}$ ---------------------------------------------------- We now require the probabilities $P^{coll}_{\o}, P^{coll}_{1,\o}, P^{coll}_{2,\o}, P^{coll}_{1,2}, P^{coll}_{2,3}, P^{coll}_{1,2,3}$, and $P^{coll}_{1,2,3,4}$. The remaining probabilities in Eq. (9) are equivalent to these because of the symmetry of the arrangement of points (and of the image ensemble). Because all intervening pixels must lie under the circle if the bounding ones do, $P^{coll}_{1,2,3,4}(q) = P_2(3q)$, where $P_2(.)$ is the probability that 2 pixels of a given separation lie under the same circle, and is calculated in the previous section (Eq. (5) for power-law distributions of circle sizes). We will use similar arguments to obtain the other 6 probability functions that we require. The “triplet" probability $P^{coll}_{1,2,3}(q)$ is thus given by the probability that 3 of the (adjoining) pixels fall under the circle, minus the probability that all four pixels fall under it: $P^{coll}_{1,2,3} = P_2(2q) - P_2(3q)$. And by the same logic, $P^{coll}_{1,2} = P_2(q) - P_2(2q)$. For the “inner" pairs, we compute the probability of the 2 “inner" points falling under the circle minus the probability that those two points *and any adjoining ones* all fall under the circle. Thus, $$\begin{aligned} P^{coll}_{2,3}(q) &=& P_2(q) - P^{coll}_{1,2,3} - P^{coll}_{2,3,4} - P^{coll}_{1,2,3,4} \\ \Rightarrow P^{coll}_{2,3}(q) &=& P_2(q) - 2P_2(2q) + P_2(3q). \nonumber\end{aligned}$$ Similarly, $P^{coll}_{1,\o}(q) = P^{\star} - P_2(q)$, where $P^{\star} = \left< s^2 \right> / L^2$ is the probability of any given point falling under the newly-deposited circle, and $$\begin{aligned} P^{coll}_{2,\o}(q) &=& P^{\star} - P^{coll}_{1,2}(q) - P^{coll}_{2,3}(q) - P^{coll}_{1,2,3}(q) - P^{coll}_{2,3,4}(q) - P^{coll}_{1,2,3,4}(q) \\ \Rightarrow P^{coll}_{2,\o}(q) &=& P^{\star}- 2P_2(q) + P_2(2q) \nonumber.\end{aligned}$$ Finally, $$\begin{aligned} P^{coll}_{\o}(q) &=& 1 - \sum_{i} P^{coll}_{i,\o}(q) - \sum_{i,j\ne i} P^{coll}_{i,j}(q) - \sum_{i,j\ne i,k\ne i,j}P^{coll}_{i,j,k}(q) - P^{coll}_{1,2,3,4}(q) \\ \Rightarrow P^{coll}_{\o}(q) &=& 1 - 4P^{\star} + 3P_2(q) \nonumber.\end{aligned}$$ Assembling the pieces to find $C^{coll}_4(q)$ --------------------------------------------- Before substituting all of our results into Eq. (9) and solving for $C^{coll}_4(q)$, it will be useful to first consider the $q \gg s_0$ limit, in which we derived the 2-point function. In that limit (Eq. 
(6)), $$P_2(q) = A B(\alpha) \left( \frac{s_0}{L} \right)^2 \left( \frac{q}{s_0} \right)^{-(\alpha-3)} \ll 1$$ and $$C_2(q) = \frac{ B(\alpha) \left(\alpha-3 \right) \left< b^2 \right> \left< (1-a)^2 \right>}{2\left< 1-a \right>} \left( \frac{q}{s_0} \right)^{-(\alpha - 3)} \ll 1,$$ so only the lowest-order terms in these quantities need to be considered. Because of the power-law nature of these functions, $C_2(2q)$ and $P_2(2q)$ have the same dependence on distance $q$ as do the $C_2(q)$ and $P_2(q)$ terms, but are smaller by a factor of $2^{-(\alpha-3)} $, and similarly for the $f(3q)$ type terms. Substituting all of the products and probabilities derived in the preceding subsections into Eq. (9), keeping only the lowest-order terms in $(q/s_0)^{-(\alpha-3)}$, which dominate for $q \gg s_0$, and solving for $C^{coll}_4(q)$, we find that $$\begin{aligned} C^{coll}_4(q) &\approx& \frac{ B(\alpha) \left(\alpha-3 \right) \left< b^4 \right> \left< (1-a)^4 \right>}{4\left< 1-a \right>} \left( \frac{3q}{s_0} \right)^{-(\alpha - 3)}.\end{aligned}$$ Thus, the 4-point function for this arrangement of points (in the $q \gg s_0$ limit) has the same power-law form as does the 2-point function (Eq. 7), and it also only depends on opacity by a multiplicative pre-factor. Given that this (collinear) arrangement of points is so similar to the arrangement of points in the 2-point function (two points will always be collinear), this result is perhaps unsurprising. To test the generality of this result, we will compute the 4-point function for a square arrangement of points in the next section. Analytical calculation of the 4-point function for a square arrangement of points in the transmissive dead leaves model ======================================================================================================================= In this section, we calculate the 4-point function for our transmissive dead leaves ensemble, for the case in which the 4 points lie on the vertices of a square with edge length $q$. Similar to the collinear arrangement of points, the symmetry in this arrangement will greatly simplify our calculations and, since it has non-trivial geometry when compared to the collinear arrangement, there is a possibility for interesting features to arise in this 4-point function that are not apparent in either the 2-point function, or the 4-point function for collinear points. We will label these points ${1,2,3,4}$, going clockwise, and beginning in the upper left-hand corner. Similar to the calculation for the collinear case, we first list all of the possible modifications to the 4-point function, and the probabilities with which they occur. We will then simplify this expression, calculate the relevant probabilities, and use recursion to solve for the 4-point function. Similar to the previous calculations, the translation and rotation invariance of our image ensemble means that this 4-point function will depend only on the edge length of the square: $C^{square}_4(\vec{x}_1,\vec{x}_2,\vec{x}_3,\vec{x}_4) = C^{square}_4(q)$. 
Enumerating all possible modifications caused by the addition of a new circle, we find that $$\begin{aligned} C^{square}_4(q) &=& P^{square}_{\o}(q) C^{square}_4(q) \\ &+& 4 P^{square}_{1,\o} (q) \left< \left[ a I(\vec{x}_1) + (1-a)b \right] I(\vec{x}_2) I(\vec{x}_3) I(\vec{x}_4) \right> \nonumber \\ %&+& P_{2,\o}(q) \left< I(\vec{x}_1) \left[a I(\vec{x}_2) + (1-a)b \right] I(\vec{x}_3) I(\vec{x}_4) \right> \nonumber \\ %&+& P_{3,\o}(q) \left< I(\vec{x}_1) I(\vec{x}_2) \left[a I(\vec{x}_3) + (1-a)b \right] I(\vec{x}_4) \right> \nonumber \\ %&+& P_{4,\o}(q) \left< I(\vec{x}_1) I(\vec{x}_2) I(\vec{x}_3) \left[a I(\vec{x}_4) + (1-a)b \right] \right> \nonumber \\ &+& 4 P^{square}_{1,2}(q) \left< \left[a I(\vec{x}_1) + (1-a)b \right]\left[a I(\vec{x}_2) + (1-a)b \right] I(\vec{x}_3) I(\vec{x}_4) \right> \nonumber \\ %&+& P_{2,3}(q) \left< I(\vec{x}_1) \left[a I(\vec{x}_2) + (1-a)b \right]\left[a I(\vec{x}_3) + (1-a)b \right] I(\vec{x}_4) \right> \nonumber \\ %&+& P_{3,4}(q) \left< I(\vec{x}_1) I(\vec{x}_2) \left[a I(\vec{x}_3) + (1-a)b \right]\left[a I(\vec{x}_4) + (1-a)b \right] \right> \nonumber \\ %&+& P_{1,4}(q) \left< \left[a I(\vec{x}_1) + (1-a)b \right] I(\vec{x}_2) I(\vec{x}_3) \left[a I(\vec{x}_4) + (1-a)b \right] \right> \nonumber \\ &+& 4 P^{square}_{1,2,3}(q) \left< \left[a I(\vec{x}_1) + (1-a)b \right] \left[a I(\vec{x}_2) + (1-a)b \right] \left[a I(\vec{x}_3) + (1-a)b \right] I(\vec{x}_4)\right> \nonumber \\ %&+& P_{2,3,4}(q) \left< I(\vec{x}_1) \left[a I(\vec{x}_2) + (1-a)b \right] \left[a I(\vec{x}_3) + (1-a)b \right] \left[a I(\vec{x}_4) + (1-a)b \right] \right> \nonumber \\ &+& P^{square}_{1,2,3,4}(q) \left< \left[a I(\vec{x}_1) + (1-a)b \right] \left[a I(\vec{x}_2) + (1-a)b \right] \left[a I(\vec{x}_3) + (1-a)b \right] \left[a I(\vec{x}_4) + (1-a)b \right] \right> \nonumber, \end{aligned}$$ where $P^{square}_{\o}(q)$ is the probability that none of the four corners of the square fall under the newly-deposited circle, $P^{square}_{i,\o}$ is the probability that only the $i^{th}$ corner falls under the newly-deposited circle, $P^{square}_{i,j}$ is that probability that only the $i^{th}$ and $j^{th}$ corners fall under the newly-deposited circle, and so on. The symmetries in the square configuration (all edges are equivalent, and all corners are equivalent) allow us to collapse the (equivalent) $P^{square}_{i,\o}$ terms, and similarly for the $P^{square}_{i,j}$ terms and the $P^{square}_{i,j,k}$ terms. We further note that terms like $P^{square}_{1,3}$ and $P^{square}_{2,4}$, which contain opposite corners of the circle, are omitted because it is impossible for a circle to cover diagonally opposite corners of the square without covering at least one other corner. The factors of $4$ in the above equation come in because there are 4 corners to a square, and 4 edges to a square, and ${ 4 \choose 3} = 4$ different ways to choose groupings of three of the four corners. 
We can expand and simplify the averages of the products of the pixel values, as in the previous section, to find $$\begin{aligned} C^{square}_4(q) &=& P^{square}_{\o}(q) C^{square}_4(q) \\ &+& 4 \left< a \right > P^{square}_{1,\o} (q) C^{square}_4(q) \nonumber \\ &+& 4 P^{square}_{1,2}(q) \left[ \left< a^2 \right> C^{square}_4(q) + \left< (1-a)^2 \right> \left< b^2 \right> C_2(q) \right] \nonumber \\ &+& 4 P^{square}_{1,2,3}(q) \left[ \left< a^3 \right> C^{square}_4(q) + 2 \left< a(1-a)^2 \right> \left< b^2 \right> C_2(q) + \left< a(1-a)^2 \right> \left< b^2 \right> C_2(\sqrt{2}q) \right] \nonumber \\ &+& P^{square}_{1,2,3,4}(q) \left[ \left< a^4\right> C^{square}_4(q) + \left<a^2 (1-a)^2 \right> \left<b^2 \right> \left( 4 C_2(q) + 2C_2(\sqrt{2} q) \right) + \left< (1-a)^4 \right> \left<b^4 \right> \right] \nonumber, \end{aligned}$$ where the function $C_2(\cdot)$ is the 2-point function discussed in Eq. 7. Computing the probabilities $P^{square}_{\{ \cdot \}}$ ------------------------------------------------------ To finish our calculation of the 4-point function for square geometries, we require the probabilities $P^{square}_{\o}(q)$, $P^{square}_{1,\o}(q)$, $P^{square}_{1,2}(q)$, $P^{square}_{1,2,3}(q)$, and $P^{square}_{1,2,3,4}(q)$. For the calculation of $P^{square}_{1,2,3,4}(q)$, we first note that, given that one of the corners of the square falls under a newly-deposited circle (with diameter $s$), the probability that all 4 points fall under it is $g_4(q/s \in [0,1/\sqrt{2}]) = \frac{4}{\pi} \left[ \cos^{-1}(q/s) - (\pi/4) + (q/s)^2 - (q/s) \sqrt{1 - (q/s)^2} \right]$. Using the same logic (and variable substitution) as in Eq. 6, we find that $$\begin{aligned} P^{square}_{1,2,3,4}(q) &=& \int_{0}^{\infty} \frac{s^2}{L^2} g_4(q/s)p(s) ds \\ &=& A \left(\frac{s_0}{L} \right)^2 \left( \frac{q}{s_0} \right)^{-(\alpha - 3)} B_4(\alpha), \nonumber \\\end{aligned}$$ where $B_4(\alpha) = \int_{\sqrt{2}}^{\infty} g_4(1/u) u^{2-\alpha} du$. To derive $P^{square}_{1,2,3}(q)$, we seek the probability that 3 of the points, but not all 4, lie under the newly-deposited circle. If the two diagonally opposite points lie under the circle, then at least one of the other two corners does as well, and thus $P^{square}_{1,2,3}(q) = (P_2(\sqrt{2} q) - P^{square}_{1,2,3,4}(q))/2$, where $P_2(x)$ is the probability that two points a distance $x$ apart lie under a newly-deposited circle, and is calculated in Eqs. 5 and 6 (above). The “doublet” probability $P^{square}_{1,2}(q)$ is the probability that 2, but not 3 or 4, of the points fall under the circle, and thus is given by $P^{square}_{1,2}(q) = P_2(q) - P^{square}_{1,2,3}(q) - P^{square}_{1,2,4}(q) - P^{square}_{1,2,3,4}(q) = P_2(q) - P_2(\sqrt{2}q)$. The “singlet” probability $P^{square}_{1,\o}(q)$ is the probability that 1, but no more, of the points falls under the circle, and is thus given by $P^{square}_{1,\o} (q)= P^{\star} - P^{square}_{1,2}(q) - P^{square}_{1,4}(q) - P^{square}_{1,2,3} (q) - P^{square}_{1,3,4}(q) - P^{square}_{1,2,4}(q) - P^{square}_{1,2,3,4}(q)$, where $P^{\star} = \left< s^2 \right> / L^2$ is the probability that a newly-deposited circle covers any given point, and is calculated in the previous sections. Simplifying this expression using our previously-derived results, we find that $P^{square}_{1,\o} (q)= P^{\star} - 2P_2(q) + \frac{1}{2} P_2(\sqrt{2} q) + \frac{1}{2} P^{square}_{1,2,3,4} (q)$. 
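As a numerical cross-check of the geometric integrals above, the short sketch below evaluates $B_4(\alpha)$ by direct quadrature (here adaptive quadrature rather than Simpson's rule), together with the analogous 2-point integral under the assumption that the latter uses the standard two-disk overlap kernel $g_2(x) = \frac{2}{\pi}\left[\cos^{-1}(x) - x\sqrt{1-x^2}\right]$ (that kernel is an assumption of this sketch; its exact form appears earlier in the text). For $\alpha = 3.2$ it should reproduce the values quoted in the next subsection.

```python
import numpy as np
from scipy.integrate import quad

def g4(x):
    # probability that all four corners of a square with edge q lie under a circle of
    # diameter s, given that one corner does (x = q/s <= 1/sqrt(2)), as defined above
    return (4.0 / np.pi) * (np.arccos(x) - np.pi / 4 + x**2 - x * np.sqrt(1.0 - x**2))

def g2(x):
    # assumed two-disk overlap kernel for the 2-point analogue (x = q/s <= 1)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x**2))

alpha = 3.2
B4, _ = quad(lambda u: g4(1.0 / u) * u ** (2.0 - alpha), np.sqrt(2.0), np.inf)
B2, _ = quad(lambda u: g2(1.0 / u) * u ** (2.0 - alpha), 1.0, np.inf)
print(B2, B4)   # roughly 4.01 and 3.58 for alpha = 3.2
```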
Finally, the probability that none of the points falls under a newly-deposited circle is given by $P^{square}_{\o}(q) = 1 - \sum_i P^{square}_{i,\o}(q) - \sum_{i,j\ne i} P^{square}_{i,j} - \sum_{i,j\ne i,k \ne i,j} P^{square}_{i,j,k} - P^{square}_{1,2,3,4} = 1 - 4 P^{\star} + 4 P_2(q) - P^{square}_{1,2,3,4}(q)$. Combining the pieces to find $C^{square}_4(q)$ ---------------------------------------------- As in our calculation of the 4-point function for collinear points, we again consider the $q/s_0 \gg 1$ limit, in which we need only consider the lowest-order terms in $(q/s_0)^{-(\alpha-3)}$. In that limit, we find that $$\begin{aligned} C^{square}_4(q) &\approx& \frac{ B_4(\alpha) \left(\alpha-3 \right) \left< b^4 \right> \left< (1-a)^4 \right>}{4\left< 1-a \right>} \left( \frac{q}{s_0} \right)^{-(\alpha - 3)}.\end{aligned}$$ Like the other n-point functions computed thus far, the 4-point function for square geometries is a power law with power $-(\alpha-3)$, and it depends on opacity only as a multiplicative pre-factor. We note that, for $\alpha = 3.2$, $B(\alpha) \approx 4.014$, while $B_4(\alpha) \approx 3.581$, where these values come from numerical integration using Simpson’s method. These values are similar in magnitude, and thus the 4-point function is not inherently much smaller than the 2-point function. Finally, we note that the 2- and 4-point functions depend differently on object opacity, and thus the visible difference in the different image ensembles likely arises from the relative amplitudes of these (power-law) functions, and not any difference in their functional forms. Numerical analysis of the transmissive fallen-leaf images ========================================================= To confirm our analytical calculations of the 2-point functions, we simulated 500-frame ensembles of $256 \times 256$ pixel images, using the procedure described in Eq. 1: circles of random size (following a power law distribution $p(s) \propto s^{-3.2}$ above the cutoff of $s_0 = 1$ pixel), brightness, and position were iteratively placed on the image frame to build up the images. For each frame, $10^6$ circles were deposited, which is the number required to cover the image surface $\sim 100$ times. To avoid edge effects, circle centers were allowed to fall up to $256 + s/2$ pixels away from the center of the image frame, where $s$ is the circle diameter in pixels. We used a large maximum circle size, $s_{max} = 10^8$ pixels, because prior work [@huang] on dead leaves models found that the functional form of the measured autocorrelation function approaches the analytically calculated curve only in the $ s_{\max} \to \infty $ limit. The heavy tail of the power-law distribution contains a non-negligible number of very large leaves, which contribute to the long-range correlations in the images. ![ (Color online) **For delta-function object size distributions, opaque and transmissive dead leaves images yield different 2-point statistics.** (**A,B**) Sample images in which the leaves are all the same size ($s^{\star} = 25$ pixels), from opaque (a=0, **A**) and transmissive (a=0.75, **B**) ensembles. (**C**) The autocorrelation functions of these image ensembles do not follow power laws, and they differ from one another. (**D**) Their power spectra also differ non-trivially: the ratio between the power spectra is not constant. 
The ripples are at multiples of the $256/25 \approx 10$ cycles/image frequency imposed by the uniform circle size.](fig3){width="3.6in"} We then measured the difference functions $D(q) = \left<| I(\vec{x}) - I(\vec{x'})|^2 \right> = 2\left<I(\vec{x})^2\right> -2C(q)$ for the image ensembles. $D(q)$ is clearly related to the autocorrelation function $C(q)$, but is easier to measure [@ruderman_scaling] as it is unaffected by the mean values of the individual images. We fit the measured difference functions to power law functions of the form $D(q) = \eta \times q^{\mu} + \nu$, as is suggested by our analytical calculations (Eq. 7). The best-fit parameters $(\eta,\mu,\nu)$ for the image ensembles with $a= \{0,0.25,0.5,0.75 \}$ were $(-0.48 \pm 0.01, -0.24 \pm 0.04, 0.69 \pm 0.03)$, $(-0.32 \pm 0.01, -0.23 \pm 0.03, 0.41 \pm 0.02)$, $(-0.191 \pm 0.004, -0.22 \pm 0.03, 0.23 \pm 0.01)$, $(-0.086 \pm 0.002, -0.21 \pm 0.02, 0.098 \pm 0.005)$, respectively, where the uncertainties represent $95\%$ confidence intervals. These values are in good agreement with the analytical calculations that predict $\mu = -0.2$ for all ensembles, and $\nu = \{ 0.66,0.396,0.22,0.094 \} $ for the ensembles with $a= \{0,0.25,0.5,0.75 \}$, respectively. The correlation functions shown (Fig. 2D) are the measured difference functions subtracted from the constants $\nu$ measured in the fit: $C(q) = [\nu - D(q)]/2$. These correlation functions are power-law functions of distance (linear on the log-log plot), and differ by a multiplicative constant. Similarly, the power spectra of the image ensembles (Fig. 2E), differ only by a multiplicative constant for low spatial frequencies, where the $q \gg s_0$ approximation holds. Fig. 3 demonstrates that the 2-point function is affected substantially by leaf opacity for delta-function size distributions. In particular, the modulation depth of the “ripples" in the power spectra depend on the leaf opacity, and thus the opacity does not modify the power spectra simply by a multiplicative factor. The procedures used to generate the data shown in Fig. 3 were the same as for the power-law object size distribution, except for the different distribution of object sizes. A more realistic model of radiological images ============================================= ![ **A shadowing dead leaves model with finite optical depth also exhibits scale invariant 2-point statistics for power law object size distributions.** (**A**) An example image from a dead leaves model with the same power law distributed leaf sizes as in Fig. 2, in which each leaf leaves a shadow by multiplying the brightness of the pixels it subtends by a factor no greater than one, drawn uniformly within $[0.5,1]$. Unlike the previous models, each pixel starts out at full brightness, and only a finite number of circles is added to generate the image. The autocorrelation function (**B**) and power spectrum (**C**) of this ensemble show scale invariance (for relatively low frequencies, which corresponds to $q \gg s_0$), just like the previous models.](fig4){width="3.6in"} Our transmissive dead leaves model is not a perfect model for radiological images. Image formation in mammograms and other projectional radiographs results from the partial blockage of a roughly uniform illumination of x-rays due to local regions of dense tissue, unlike our dead leaves model. 
Moreover, imaged tissue is typically much thinner than the path length required to fully block the x-rays throughout the image, unlike the effectively infinite optical depth of our “additive” transparent dead leaves model (Eq. (1)). It is thus natural to ask whether our conclusions about variable object opacity generalize to these types of images. Analytically computing the 2-point statistics for this radiographic model is more involved than for the infinite depth models, since recursion is more complex in this case. For this reason, we chose to verify via simulation that the qualitative results from our analytical calculations hold for these types of images. Fig. 4A shows a typical image from a shadowing dead leaves model with finite optical depth and the same power law leaf size distribution as in the previous models. To generate these model images, a uniform background illumination (of 1) was imposed across the whole image. Randomly sized and located circles were then deposited onto the image plane, with each leaf multiplying the brightness of the pixels it subtends by a factor drawn uniformly within $[0.5,1]$. The circle sizes were drawn from the same power law distribution as in the previous simulations, and the simulation code was thus very similar. For an ensemble of these “radiographic” images, the empirically measured 2-point function (Fig. 4B) and power spectrum (Fig. 4C) exhibit the same power laws as we found for our previous models (Figs. 2,3), suggesting that our calculation holds more generally than for the specific model for which we performed the analytical calculations. Intuitively, one might expect the same scale-invariant 2-point function for this model as for the previous one since no new length scale has been introduced. Conclusions =========== For the special case of power-law object size distributions, object opacity does not affect the form of either the 2- or 4-point functions, or the power spectrum of images: its effect appears only as a multiplicative constant in these power-law functions. Ours is the first analytic calculation that demonstrates these facts, and thus deepens our understanding of image statistics. For object size distributions other than power-law, object opacity can (potentially dramatically) alter the low-level image statistics. Occlusion is important for natural image formation, but we find that it does not change the form of the power spectrum. Since images formed by opaque leaves that are all the same size have oscillatory, non-power-law, power spectra (Fig. 3), and transmissive leaves can yield power law power spectra (Figs. 2 and 4), occlusion is likely not responsible for scale invariance of images. We propose that the universality of power law power spectra in both occlusive imaging environments, such as natural photographic images, and transmissive ones, such as mammography, is likely due to power-law object size distributions in both settings. JZ’s contribution to this work was supported by an international student research fellowship from the Howard Hughes Medical Institute (HHMI). This material is based upon work supported by a National Science Foundation Graduate Research Fellowship to DP under Grant No. DGE 11-44155. MRD thanks the Hellman Family Foundation, the James S. McDonnell Foundation, and the McKnight Foundation for support. [99]{} Stephens, G.J., Mora, T., Tkačik, G., and Bialek, W. (2008). arXiv:0806.2694. Ruderman, D.L. and Bialek, W. (1994). Phys. Rev. Lett. 73, 814-817. Dong, D.W. and Atick, J.J. (1995). Network: Comput. 
Neural Syst. 6, 345-358. Field, D.J. (1987). J. Opt. Soc. Am. A 4, 2379-2394. Simoncelli, E.P. and Olshausen, B.A. (2001). Annu. Rev. Neurosci. 24, 1193-1216. Torralba, A. and Oliva, A. (2003). Network: Comput. Neural Syst. 14, 391-412. van der Schaaf, A. and van Hateren, J. (1996). Vis. Res. 36, 2759-2770. Balboa, R.M. and Grzywacz, N.M. (2003). Vis. Res. 43, 2527-2537. Heine, J.J. and Velthuizen, R.P. (2002). Med. Phys. 29, 647-661. Li, H., Giger, M.L., Olopade, O.I., and Chinander, M.R. (2008). J. Digit. Imaging 21, 145-152. Metheany, K.G., Abbey, C.K., Packard, N., and Boone, J.M. (2008). Med. Phys. 35, 4685-4694. Barlow, H.B. (1961). In Sensory Communication, W.A. Rosenblith, ed. (Cambridge, MA: MIT Press), pp. 217-234. Dong, D.W. and Atick, J.J. (1995). Network: Comput. Neural Syst. 6, 159-178. Atick, J.J. and Redlich, A.N. (1992). Neural Comput. 4, 196-210. Dan, Y., Atick, J.J. and Reid, R.C. (1996). J. Neurosci. 16, 3351-3362. Zylberberg, J., Murphy, J.T. and DeWeese, M.R. (2011). PLoS Comput. Biol. 7, e1002250. Rehn, M. and Sommer, F.T. (2007). J. Comput. Neurosci. 22, 135-146. Matheron, G. (1968). Modèle séquentiel de partition aléatoire. Tech. Rep., Centre de Morphologie Mathématique, Fontainebleau. Bordenave, C., Gousseau, Y., and Roueff, F. (2006). Adv. in Appl. Probab. 38, 31-46. Ruderman, D.L. (1997). Vis. Res. 37, 3385-3398. Carlson, C.R. (1978). Photographic Science and Engineering 22, 69-71. Balboa, R.M., Tyler, C.W., and Grzywacz, N.M. (2001). Vis. Res. 41, 955-964. Hsiao, W.H. and Milane, R.P. (2005). J. Opt. Soc. Am. A 22, 1789-1797. Pitkow, X. (2010). J. Vis. 10, 42. Lee, A.B., Mumford, D. and Huang, J. (2001). Int. J. Comp. Vis. 41, 35-59.
{ "pile_set_name": "ArXiv" }
--- abstract: | We propose a statistical method for decomposing the contributions to iron production from its various sources: supernovae Type II and the two subpopulations of supernovae Type Ia – prompt (whose progenitors are short-lived stars with ages less than $\sim$100 Myr) and tardy (whose progenitors are long-lived stars with ages $>$100 Myr). To do that, we develop a theory of oxygen and iron synthesis which takes into account the influence of spiral arms on the amount of these elements synthesized by both supernovae Type II and prompt supernovae Ia. In the framework of the theory we statistically process new, more precise observational data on Cepheid abundances, which, as is well known, demonstrate nontrivial radial distributions of oxygen and iron in the Galactic disc with bends in the gradients. In our opinion, such fine structure in the distribution of the elements along the Galactic disc enables us to decompose unambiguously the amount of iron into 3 components produced by the above 3 sources. Moreover, our statistical method solves this task without any preliminary suppositions about the ratio among the portions of iron synthesized by the above sources. The total mass of iron supplied to the Galactic disc during its life by all Types of SNe happens to be $\sim (4.0 \pm 0.4)\cdot 10^7$ M$_{\odot}$, while the mass of iron in the present ISM is $\sim (1.20 \pm 0.05)\cdot 10^7$ M$_{\odot}$, i.e., about 2/3 of the iron is contained in stars and stellar remnants. The relative portion of iron synthesized by tardy supernovae Ia over the lifetime of the Galaxy is $\sim$35 per cent (in the present ISM this portion is $\sim$50 per cent). Correspondingly, the total portion of iron supplied to the disc by supernovae Type II and prompt supernovae Ia is $\sim$65 per cent (in the present ISM this portion is $\sim$50 per cent). The above result depends only weakly on the adopted masses of oxygen and iron synthesized during one supernova explosion and on the shape (bimodal or smooth) of the so-called Delay Time Distribution function. The portions of iron distributed between the short-lived supernovae are as follows: depending on the ejected masses of oxygen or iron during one supernova Type II event, the relative portion of iron supplied to the Galactic disc over its age varies in the range 12 - 32 per cent (in the present ISM 9 - 25 per cent); the portion supplied by prompt supernovae Ia to the Galactic disc is 33 - 53 per cent (in the ISM 26 - 42 per cent). Our method also confirms that the bend in the observed slope of the oxygen radial distribution and the minimum in \[O/Fe\] at $\sim$7 kpc form in the vicinity of the corotation resonance. date: 'Accepted 2011 xxxx. Received 2011 xxxx; in original form 2011 xxxx' title: Galactic restrictions on iron production by various Types of supernovae --- \[firstpage\] Galaxy: fundamental parameters – Galaxy: abundances – ISM: abundances – galaxies: spiral – galaxies: star formation – ([*stars:*]{}) supernovae general. Introduction ============ In the present paper, we extend the statistical method, proposed by Acharova et al. (2011) for the analysis of the radial distribution of oxygen in the Milky Way Galaxy, to explain the nontrivial distribution of iron along the Galactic disc, revealed in a series of papers by Andrievsky et al. (2002 a,b,c) and Luck et al. (2003; 2006; 2011). 
This problem is of great importance not only for the chemical evolution of the Galactic disc and the history of star formation, but also for the search for independent constraints on the models of supernovae, especially SNe Type Ia (SNe Ia), whose outstanding role in the discovery of the accelerated expansion of the Universe is well known (Riess et al. 1998; Perlmutter et al. 1999). According to the cited papers of Andrievsky, Luck and their collaborators, the spectroscopic study of heavy element abundances in Cepheids demonstrates that, in the Milky Way, the radial distributions of both oxygen and iron are described by a multi-slope function rather than by a linear one. For instance, the distribution of oxygen along the Galactic disc is characterized by a steep gradient in the inner part of the disc, for Galactocentric radii 5 $\le r \le$ 7 kpc, and a plateau-like distribution for $r >$ 7 kpc and up to 10 kpc (for the solar Galactocentric distance $r_0$ the value 7.9 kpc is adopted), so that at $r \sim$ 7 kpc there is an inflection in the slope of the distribution. This fine structure of the radial distribution of oxygen was first explained by Acharova et al. (2005 a,b). To do so, they took into account the influence of spiral arms, since oxygen is mainly synthesized during explosions of SNe Type II (SNe II), which are strongly concentrated in spiral arms. As shown in those papers, the combined effect of the corotation resonance and turbulent diffusion results in the formation of a radial distribution of oxygen in the Galactic disc with a bend in the slope. Until recently, however, it was difficult to explain the similar behavior of iron along the Galactic radius by means of the same mechanism, since it is generally agreed that $\sim$ 70 % of iron is synthesized during SNe Ia explosions and only $\sim$ 30 % is produced by SNe II (Matteucci 2004). The point is that the progenitors of SNe Ia were thought to have ages of the order of several billion years. After such a long period of time, before their outbursts they would have been dispersed over a very large portion of the Galactic disc (Mishurov & Acharova 2011). Hence, if all precursors of SNe Ia were old stars, they would retain no memory of having been born in spiral arms. So, we could not expect any noticeable influence of spiral arms on the radial distribution of iron, unless the output of iron per SN II event were increased significantly, which would entail serious consequences for the theory of pre-SN II evolution. The opportunity to solve simultaneously the problem of the formation of the above fine structure in the radial distributions of both oxygen and iron appeared when it became evident that there exist two subpopulations of SNe Ia precursors – short-lived and long-lived. They were called ‘prompt’ and ‘tardy’, respectively (see, e.g., Mannucci et al. 2005; 2006; Aubourg et al. 2008; Brandt et al. 2010; Maoz et al. 2011; Li et al. 2011). Acharova et al. (2010) incorporated the results of Mannucci et al. (2006) and Matteucci et al. (2006) in their theory and explained the formation of the above fine structure of the iron radial distribution in the Galactic disc. Besides, the discovery of the 2 subpopulations of SNe Ia enables us to understand their concentration in spiral arms, first revealed by Bartunov et al. (1994). [^1] Nevertheless, several questions arise. First, to decompose the contributions of the above 3 sources to iron synthesis, Acharova et al. 
(2010) assume that SNe II and the two subpopulations of SNe Ia supply approximately equal portions of iron to the present ISM. In a certain sense, this is a rather arbitrary supposition. However, the extension of the statistical method proposed by Acharova et al. (2011) enables us to estimate the contributions to iron production from the various Types of SNe independently of any preliminary suppositions. Second, the estimates of the ages of the prompt progenitors of SNe Ia (denote them as SNe Ia-P) vary from $\sim$ 100 Myr up to $\sim$ 400 Myr or even more, depending on the methods of data processing and the observational material used. Can the radial distribution of iron along the Galactic disc place a restriction on the age of SNe Ia-P precursors? Third, two different approximations have been proposed for the so-called ‘DTD’ (Delay Time Distribution) function [^2]: a bimodal one (e.g., Mannucci et al. 2006; Matteucci et al. 2006) and a smooth one peaked at early times (e.g., Maoz et al. 2010). How do such different representations of the DTD function affect the metallicity distribution along the Galactic disc and the amounts of iron synthesized by the various Types of SNe? The answer to this question may impose additional constraints on the models of SNe Ia, which have not been fully built as yet. At the same time, the discussed discovery poses a problem for cosmology as well. Indeed, if the population of SNe Ia is inhomogeneous, can we consider them as standard candles (Maeda et al. 2010)? In any case, to reach the accuracy necessary for constraining dark energy, one has to take into account corrections which may depend on various parameters (Aubourg et al. 2008). So, any additional constraints on metallicity production by the various types of heavy-element sources may prove very useful. It is also important to notice that, in the present paper, we use new, much more precise determinations of oxygen and iron abundances in Cepheids. 
Basic ideas and equations ========================= [*Equations for the formation of the fine structure of oxygen and iron radial distribution along the Galactic disc*]{} ---------------------------------------------------------------------------------------------------------------------- The chemical evolution of the Galactic disc is governed by the following equations: $$\begin{aligned} \dot{\mu}_O&=&\int\limits_{m_L}^{m_U}{(m-m_w)\,Z_O(t-\tau_m)\psi(t-\tau_m)\phi (m)\,dm}\nonumber\\ &&\nonumber\\ &&+E_O^{\rm II} + fZ_{Of} - Z_O\psi\nonumber\\ &&\nonumber\\ &&-\frac{1}{r}\frac{\partial {}}{\partial r}\left(r\mu_O u\right) +\frac {1}{r}\frac {\partial {}}{\partial r} \left(r\mu_g D\frac {\partial Z_O}{\partial r}\right), \end{aligned}$$ $$\begin{aligned} \dot{\mu}_{Fe}&=&\int\limits_{m_L}^{m_U}{(m-m_w)\,Z_{Fe}(t-\tau_m)\psi(t-\tau_m)\phi (m)\,dm} \nonumber\\ &&+E_{Fe}^{\rm Ia-P}+ E_{Fe}^{\rm Ia-T}+E_{Fe}^{\rm II} + fZ_{Fef} - Z_{Fe}\psi\nonumber\\ && \nonumber\\ &&-\frac{1}{r}\frac{\partial {}}{\partial r}\left(r\mu_{Fe} u\right) +\frac {1}{r}\frac {\partial {}}{\partial r}\left (r\mu_g D\frac {\partial Z_{Fe}}{\partial r}\right), \end{aligned}$$ where $\mu_O$ and $\mu_{Fe}$ are the surface mass densities for oxygen and iron, respectively, $\mu_g$ is the gaseous density, $Z_i=\mu_i/\mu_g$ is the fraction of the $i$th element (oxygen or iron) in ISM, $Z_{Of}$ and $Z_{Fef}$ are the oxygen and iron abundances in the infall gas (for the both elements we adopt $Z_{if} = 0.02Z_{i\odot}$: our experiments with various abundances of the infall gas from $Z_{if}=0.02\,Z_{i\odot}$ to $0.1\,Z_{i\odot}$ show that the final abundances weakly depend on the exact value of $Z_{if}$ if it is less than $0.1\,Z_{i\odot}$, see also Lacey & Fall 1985. Below we demonstrate the results for $Z_{if} = 0.02Z_{i\odot}$ which is slightly less than the mean content of heavy elements in halo stars, $\sim0.03\,Z_{i\odot}$, Prantzos 2008), $$\psi=\nu\mu_g^{1.5}$$ is the star formation rate (SFR), $\nu$ is a normalizing coefficient, $\phi(m)$ is Salpeter’s initial mass function with the exponent of - 2.35 (stellar masses $m$ are in solar units), $E^{\rm II}_i$ are the rates of the $i$th element synthesis by SNe II, $E^{Ia-P}_{Fe}$ and $E^{Ia-T}_{Fe}$ are the rates of iron synthesis by prompt and tardy SNe Ia, respectively, [^3] $f$ is the infall rate of the intergalactic gas on to the Galactic disc $$f=A\exp(-\frac{r}{r_d}-\frac{t}{t_f}),$$ $r_d = 3.5$ kpc is the radial scale, $t_f$ is a typical time-scale of gas fall on to the Galactic disc, or in other words, the time-scale of the Galactic disc formation, $u$ is the microscopic radial gas velocity (radial inflow) within the Galactic disc, $t$ is time (in Gyr), $\tau_m$ is the life-time of a star of mass $m$ on the main sequence: $\log(\tau_m)=0.9-3.8\log(m)+\log^{2}(m)$, $m_L=0.1$, $m_U$ is the upper stellar mass (usually we adopt $m_U$ = 70, see below), $m_w$ is the mass of stellar remnants (white dwarfs, neutron stars, black holes: for $m \le 10$ the mass of a remnant is $m_w = 0.65m^{0.333}$; in the range $10 < m < 30$ $m_w = 1.4$; if $30 \le m < m_U$ the remnant is a black hole with $m_w = 10$; finally for $m \ge m_U$ the stars are black holes right away from their birth and they are removed from the nucleosynthesis and returning the mass to ISM). The last terms on the right-hand sides of Eqs. (1,2) describe the turbulent diffusion of heavy elements with the diffusion coefficient $D$. 
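Before turning to the diffusion coefficient, a minimal sketch of the stellar-population ingredients just defined (the lifetime $\tau_m$, the remnant masses $m_w$ and the Salpeter IMF) may be useful; it is illustrative only, and in particular the mass normalization of $\phi(m)$ adopted here is an assumption of the sketch rather than a statement about the code used for the calculations in this paper.

```python
import numpy as np
from scipy.integrate import quad

def tau_m(m):
    """Main-sequence lifetime in Gyr: log(tau_m) = 0.9 - 3.8 log(m) + log^2(m)."""
    lg = np.log10(m)
    return 10.0 ** (0.9 - 3.8 * lg + lg * lg)

def m_w(m):
    """Remnant mass (solar units): white dwarf for m <= 10, neutron star for 10 < m < 30, 10-Msun black hole above."""
    if m <= 10.0:
        return 0.65 * m ** 0.333
    return 1.4 if m < 30.0 else 10.0

def phi(m, m_L=0.1, m_U=70.0):
    """Salpeter IMF, phi(m) ~ m^-2.35, here normalized so the mass integral over [m_L, m_U] is 1 (an assumption)."""
    norm = (m_U ** -0.35 - m_L ** -0.35) / -0.35
    return m ** -2.35 / norm

# Number of SN II progenitors (8 <= m <= m_U) formed per solar mass of stars -- the IMF
# integral that enters the SN II rate of Eq. (6) -- plus a few sample lifetimes and remnants.
n_snII, _ = quad(phi, 8.0, 70.0)
print(n_snII)                               # ~0.007 per solar mass with this normalization
print(tau_m(1.0), tau_m(8.0), m_w(8.0))     # ~7.9 Gyr, ~0.02 Gyr, ~1.3 solar masses
```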
To estimate the coefficient we model the turbulent ISM by a system of clouds and use the gas kinetic approach (for details see Mishurov et al. 2002; Acharova et al. 2010). The enrichment rates $E_i^{\rm II}$ of the ISM by the $i$th heavy element due to SNe II explosions are described by the same expressions: $$E_i^{\rm II}=\eta P_i^{\rm II} R^{\rm II},$$ where $P_i^{\rm II}$ is the mean mass (in solar units) of ejected oxygen or iron per one SN II explosion, $$R^{\rm II}(r,t)=0.9975\int\limits_{8}^{m_U}{\psi(r,\,t-\tau_m)\phi (m)\,dm},$$ is the rate of SNeII events. The factor $\eta$ was introduced in order to take into account the influence of spiral arms. Following the idea, first proposed by Oort (1974) and used by Portinari & Chiosi (1999) and Wyse & Silk (1989), we write $$\eta=\beta |\Omega(r)-\Omega_P|\Theta,$$ where $\Omega(r)$ is the angular rotation velocity of the galactic disc, $\Omega_P$ is the rotation velocity of the wave pattern responsible for the spiral arms, $\Theta$ is a cutoff factor ($\Theta=1$ in the wave zone, i.e. between the inner and outer Lindblad resonances, and $\Theta=0$ beyond them), $\beta$ is a normalizing coefficient which we call as the constant for the rate of oxygen synthesis (for details see Mishurov et al. 2002; Acharova et al. 2005; 2010; 2011). Let us now turn to the synthesis of iron. In equation (2) the rates of enrichment of the Galaxy by iron due to SNe Ia-P and SNe Ia-T events are explicitly separated ($E_{Fe}^{\rm Ia-P}$ and $E_{Fe}^{\rm Ia-T}$, respectively). Being young objects, SNe Ia-P are believed to be concentrated in spiral arms. Hence, in addition to SNe II, they represent a complementary channel by means of which spiral arms influence the formation of multi-slope gradient of iron distribution in the disc. Therefore, by analogy with the representation for the enrichment rates due to SNe II events, $E_{Fe}^{\rm Ia-P}$ has to contain the factor $\eta$. So, the contribution of SNe Ia-P to the enrichment rate of the Galaxy by iron is governed by the following expressions: $$E_{Fe}^{\rm Ia-P}=\eta\gamma P^{\rm Ia}_{Fe} R^{\rm Ia-P},$$ where $\gamma$ is a correction factor, $P^{\rm Ia}_{Fe}$ and $R^{\rm Ia-P}$ have the same sense as the corresponding quantities for the SNe II, $$R^{\rm Ia-P}(r,t)= 0.00711\int\limits_{\tau_8}^{\tau_S}{\psi(r,\,t-\tau)D_P(\tau)d\tau},$$ $D_P$ is the DTD function for prompt SNe Ia, $\tau_8$ is the life-time for a star of mass $m=8$. Unlike prompt SNe Ia, SNe Ia-T do not concentrate in spiral arms since their precursors are long-lived objects. That is why the $\eta$-like factor is absent in the expression for $E_{Fe}^{\rm Ia-T}$. Therefore, the contribution to iron enrichment of the ISM by tardy SNe Ia is described by the following formula: $$E_{Fe}^{Ia-T}=\zeta P^{\rm Ia}_{Fe}R^{\rm Ia-T}.$$ Here unlike the above, $\zeta$ is a constant since this type of subpopulation of SNe Ia is not concentrated in spiral arms. So, they do not keep in their memory that they were born in spiral arms. The corresponding rate for SNe Ia- T events is represented as follows: $$R^{\rm Ia-T}(r,t)= 0.00711\int\limits_{\tau_S}^{t}{\psi(r,\,t-\tau)D_T(\tau)d\tau},$$ where $D_T$ is the DTD function for SNe Ia-T. Let us specify, what we mean saying prompt SNe Ia. In Mannucci et al. (2006) and Matteucci et al. 
(2006), the prompt and tardy subpopulations of SNe Ia are clearly separated since their DTD function is [*bimodal*]{}: the first group of objects has delay times $\tau < \tau_S$, the second one corresponds to $\tau > \tau_S$. The critical time which serves as the boundary between the above subpopulations, $\tau_S$, happens to be $\sim 0.1$ Gyr \[more exactly $\tau_S = 10^{7.93}$ yr $\approx 0.085$ Gyr, Matteucci et al. 2006; see equations (7,8) and figure 2 therein\]. On the other hand, the above critical time $\tau_S \sim 0.1$ Gyr can also be considered as the boundary delay time which divides prompt SNe Ia from tardy ones in the case of a [*smooth*]{} DTD function, such as the one proposed by Maoz et al. (2010). Indeed, the typical time necessary for a star to cross the interarm distance is $\sim \pi /|\Omega(r)-\Omega_P| \,>$ 200 Myr (in the vicinity of the [*corotation resonance*]{}, where $\Omega \to \Omega_P$, the crossing time $\to \infty$). So, we may adopt that, if the age of an SN Ia progenitor (i.e., the delay time $\tau$) is less than $\tau_S \sim 0.1$ Gyr, the corresponding SNe Ia are concentrated in spiral arms. In other words, such objects belong to the subpopulation SNe Ia-P. The above division is very important: as will be shown below, the multi-slope gradient of the iron distribution along the Galactic disc may be explained by the influence of spiral arms only if a significant portion of SNe Ia is concentrated in spiral arms. Hence, we believe that SNe Ia-P have to be sufficiently young, no older than $\sim$100 Myr. Below we consider two types of approximating representations for the DTD function. 1\) [*Bimodal DTD*]{} function of Matteucci et al. (2006): $\log(D_P)=1.4-50{[\log(\tau)+1.3]}^2$ for $\tau \le \tau_S$ and $\log(D_T)=-0.8-0.9{[\log(\tau)+0.3]}^2$ for $\tau > \tau_S$, $\tau_S = 0.085$ Gyr. From the above representation it is seen that $D_P$ has a very sharp maximum at $\tau_{max} \approx 0.05$ Gyr due to the large parameter \[$=50\ln(10) \approx 115$\]. So, the main contribution to the integral for $R^{\rm Ia-P}$ comes from the region of $\tau$ close to $\tau_{max}$. Hence, the integral may be estimated asymptotically by means of Laplace's method. 2\) [*Smooth DTD*]{} function of Maoz et al. (2010). In this model we use the power-law DTD function ($DTD \propto \tau^{-1.2}$) proposed in the cited paper. However, we slightly modify it at small times: for $\tau < 0.045$ Gyr we assume $DTD$ to be proportional to $\exp \{-[(\tau - 0.04)/0.02]^2\}$ in order to avoid its step-like behavior at early times. Normalizing the DTD to 1 within the time range from 20 Myr to 18 Gyr [^4] and requiring the DTD to be a continuous function, we finally have: $DTD = 0.135\tau^{-1.2}$ for $\tau \ge 0.045$ Gyr and $DTD = 5.940 \exp \{-[(\tau - 0.04)/0.02]^2\}$ for $\tau < 0.045$ Gyr (these constants are cross-checked numerically in the short sketch below). In this model, as SNe Ia-P we consider the stars for which the delay time $\tau < 100$ Myr; otherwise we refer the stars to SNe Ia-T. As was noticed in the Introduction, the Cepheids in our sample are very young: their ages usually do not exceed 100 Myr (see Table 1 in Appendix). So, they trace the distribution of abundances in the ISM almost at the present epoch. This is important, since the above chemical equations describe the evolution of the abundances of heavy elements in the ISM. Therefore, by means of our chemical equations we have to compute the theoretical distributions of oxygen and iron for the present moment of time $t = T_D =10$ Gyr ($T_D$ is the age of the Galactic disc) and compare the theoretical distributions with the observed ones. 
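The constants of the smooth DTD quoted above (0.135 and 5.940), its continuity at $\tau = 0.045$ Gyr and its normalization to 1 between 20 Myr and 18 Gyr can be checked with a few lines of code; the sketch below is independent of, and much simpler than, the code actually used for the chemical-evolution calculations.

```python
import numpy as np
from scipy.integrate import quad

def dtd_smooth(tau):
    # Smooth DTD (tau in Gyr): power law ~ tau^-1.2 above 0.045 Gyr, Gaussian rise below it
    return 0.135 * tau ** -1.2 if tau >= 0.045 else 5.940 * np.exp(-((tau - 0.04) / 0.02) ** 2)

norm, _ = quad(dtd_smooth, 0.02, 18.0, points=[0.045])
print(norm)                                       # ~1: normalized between 20 Myr and 18 Gyr
print(dtd_smooth(0.0449), dtd_smooth(0.0451))     # ~5.58 on both sides: continuous at the matching point

# Fraction of SNe Ia that fall into the 'prompt' window (delay time < 0.1 Gyr) in this parametrization
prompt, _ = quad(dtd_smooth, 0.02, 0.1, points=[0.045])
print(prompt)
```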
To do that we have to transform our final $Z_i$ to metallicities: $[X_i/H]^{\rm th} = log(Z_i/Z_{i,\odot})$, where $Z_{i,\odot}$ is the abundance of oxygen or iron for the Sun. The corresponding values for $Z_{i,\odot}$ were adopted according to Asplund et al. (2009). The fundamental feature of the above equations for the chemical evolution of the Galactic disc is that they result in formation of the nontrivial radial distribution of the elements in the Galactic disc. Indeed, from the galactic density wave theory of Lin et al. (1969) it is known that the spiral wave pattern, responsible for spiral arms, rotates as a rigid body ($\Omega_P = const$) whereas the galactic matter rotates differentially (the rotation velocity of the Galactic disc $\Omega$ is a function of the Galactocentric distance $r$; to compute the rotation curve we use CO data of Clemens 1985, adjusted them for $r_{\odot} = 7.9$ kpc, see Acharova et al. 2010). The radius $r_c$, where both the velocities coincide \[$\Omega(r_c) = \Omega_P$\], is called the [*corotation*]{} radius. From the above expression for $\eta$ it is obvious that in the vicinity of the corotation radius the enrichment of ISM by SNe II and SNe Ia-P is depressed since here the difference $|\Omega-\Omega_P| \to 0$. The combined effect of the corotation resonance and turbulent diffusion results in formation of the radial distribution of heavy elements with the slope which varies along the galactocentric radius. For completeness we also include in our theory the radial gas inflow within the Galactic disc \[see the divergent terms in the last lines of equations (1,2)\]. For the radial velocity $u(r)$ we adopt the same model representations as in the paper by Acharova et al. (2011). Equations (1,2) are the ones in partial derivatives. So, besides the initial conditions (at $t=0$ the initial values $\mu_i = \mu_g = 0$) we adopt the so-called natural conditions of the finiteness of the solutions at the Galactic center and at the Galactic disc end, $r_G$ (for models with radial gas inflow we locate the Galactic end at $r_G = 35$ kpc, in the case $u = 0$ the value $r_G = 25$ kpc is adopted). Strictly speaking the full system of equations for the Galactic chemical evolution includes also the equations for the disc formation, which describe the exchange by mass among the intergalactic matter, gaseous and stellar components. However, we do not write them since they and their solutions are the same as in Acharova et al. (2011, see figure 1 therein). We only notice that according to the last paper, the short time-scale of the Galactic disc formation, $t_f \sim 2$ Gyr, fits the best both to the observations of low present rate of gas infall on to the Galactic disc, $\sim$ 0.1-0.2 M$_{\odot}$yr$^{-1}$, and the star formation rate which is expected to be of the order of magnitude higher (Sancisi & Fraternali 2008, Bregman 2009, Robitaille & Whitney 2010). So, for the Galactic disc formation we adopt the results of Acharova and collaborators \[the values of constants $A$ and $\nu$ for various models of inflow, i.e. $u(r)$, see in Table 1 of their paper\]. [*Statistical method*]{} ------------------------ The above system of equations for chemical evolution of the Galaxy has 4 free parameters: $\beta$, $\Omega_P$, $\gamma$ and $\zeta$. 
To derive them we fit the theory to the observations by minimizing the merit function (or discrepancy) $\Delta$, $$\Delta^2 = \frac{1}{n-p}\sum_{i=1}^n\{(\langle[X/H]^{ \rm ob}\rangle_i - [X/H]^{ \rm th}_i)w_i\}^2$$ over the above free parameters. Here $[X/H] = \log(N_X/N_H)_s - \log(N_X/N_H)_{\odot}$, $N_X$ and $N_H$ are the numbers of atoms of the element $X$ and of hydrogen in the object, respectively, the first term on the right-hand side of the last relation refers to a star and the second one to the Sun, the superscript ‘ob’ corresponds to the observational and ‘th’ to the theoretical data, the symbol $\langle ... \rangle$ means that we apply our theory to a group of stars which fall into a bin centered at the $i$th Galactocentric radius $r$, $w_i$ is the weight, $n$ is the number of bins, $p$ is the number of sought-for free parameters, and the summation is taken over all bins of the Galactocentric radius where the abundances of the elements were measured. To estimate the errors of the sought-for parameters we compute the confidence contour (at the 95 % level) $$\Delta_c^2 = \Delta_m^2 [1 + \frac{p}{n-p}F(p,n-p,0.95)],$$ where $\Delta_m$ is the minimal value of the discrepancy and $F$ is Fisher’s $F$ statistic (see Draper & Smith 1981). Below, the process of the statistical treatment of the observational data is divided into two steps. Indeed, as was noticed in Sec. 2.1, we can neglect the contribution of SNe Ia to oxygen synthesis. Hence, the values $\Omega_P$ and $\beta$ can be derived independently of the other two target parameters $\gamma$ and $\zeta$, since the last 2 quantities do not enter the corresponding equations describing oxygen production. So, at [*Step 1*]{} we analyze the oxygen distribution to evaluate $\Omega_P$ and $\beta$. For this, we solve equations (1,3-7) for a set of $\Omega_P$ and $\beta$. After that we construct the surface $\Delta$ as a function of $\Omega_P$ and $\beta$ ($p = 2$), find the minimum of $\Delta$, which determines the best values for the above parameters, and compute the corresponding confidence contour for them. At [*Step 2*]{} the radial distribution of iron is analyzed. Now we seek the last 2 free parameters, $\gamma$ and $\zeta$. The idea for evaluating them is similar to the method used at the previous step: we solve equations (2-11), describing iron synthesis, for a set of $\gamma$ and $\zeta$, with $\Omega_P$ and $\beta$ as derived at the previous step, then compute the discrepancy between the theoretical and observed distributions of iron as a function of $\gamma$ and $\zeta$, again construct the surface of $\Delta$, but now as a function of $\gamma$ and $\zeta$, and look for its minimum, which gives the best values for $\gamma$ and $\zeta$. However, at this step we have to take into account that the equations describing the synthesis of iron contain the parameters $\Omega_P$ and $\beta$, obtained independently at the previous step. Therefore we have to take into account the influence of their errors on the biases and errors in $\gamma$ and $\zeta$. For this, we propose a kind of numerical experiment (see Sec. 4). Above we discussed the methods of estimating the statistical errors of the free parameters. But there is a source of errors which has another nature, namely the uncertainties in the oxygen and iron yields. As starting values, in our computations, we adopt the masses of oxygen and iron, ejected per one SN event, from Tsujimoto et al. 
(1995): $P^{\rm II}_{O} = 2.47$, $P^{\rm II}_{Fe} = 0.084$, and $P^{\rm Ia}_{Fe} = 0.613$ (following Matteucci et al. 2006, for the both SNe Ia subpopulations here we use the same ejected masses of iron). On the other hand, in literature one can find other values for the ejected masses (see, e.g., Woosley & Weaver 1995, Thielemann 1996 and others). Besides prompt and tardy SNe Ia may have different outputs of iron (Howell et al. 2009). How do the changes in the ejected masses influence the final amounts of the elements supplied by various Types of SNe to the Galactic disc? To feel the answer this question, first, let us look at the structure of the rate for oxygen enrichment: from equation (5) it is seen that $P_O^{\rm II}$ enters the expression for $E^{\rm II}_{O}$ as a product $\beta P_{O}^{\rm II}$. Similarly, the enrichment rate by iron due to SNe II explosion is proportional to the product $\beta P_{Fe}^{\rm II}$. Hence, if we adopt another value for $P_{O}^{\rm II}$, the constant $\beta$ will change, so that the product $\beta P_{O}^{\rm II}$ to be kept the same in order the final amount of oxygen to be unalterable. But this will influence the enrichment rate by iron due to SNe II even if $P_{Fe}^{\rm II}$ is retained unchanged. In turn, the enrichment rates for iron by SNe Ia are proportional to products $\beta \gamma P_{Fe}^{\rm Ia-P}$ for prompt and $\zeta P_{Fe}^{\rm Ia-T}$ for tardy objects \[see equations (8,10)\]. So, it is obvious, the variation in mass of oxygen (!), ejected during SNe II explosions, influences the output of iron due to SNe Ia-P, since the corresponding enrichment rates by iron for SNe II and SNe Ia-P have close functional representations along the Galactic radius. But the amount of iron supplied by SNe Ia-T does not change. In other words, in this case, we should have a redistribution of amount of iron among the 3 sources of it. Consider the second possible case: $P_O^{\rm II}$ is equal to the starting value but $P_{Fe}^{\rm II}$ is changed. Hence, the enrichment rate $E_{Fe}^{\rm Ia-P}$ has to be inversely changed in order to compensate the variation in the rate of iron enrichment due to SNe II, but again we do not expect any significant variations in the amount of iron synthesized by SNe Ia-T. At last, in the third case, let us consider the result if $P_{Fe}^{\rm Ia}$ for SNe Ia-P and SNe Ia-T are different, but other ejected masses are equal to the starting values. It is easy to see that the final amounts of iron supplied by all Types of SNe will not change at all. Only constants $\gamma$ and $\zeta$ will alter in order the products $\gamma P_{Fe}^{\rm Ia-P}$ and $\zeta P_{Fe}^{\rm Ia-T}$ do not change relative to the starting case. In Sec. 4 we illustrate the discussion of this problem by some results of our numerical experiments. After evaluation of the free parameters we compute the amount of iron synthesized by each type of its sources. Observational data ================== In the present paper, we use the most extensive spectroscopic (only) data on oxygen and iron abundances derived for classical 283 Cepheids (872 spectra in total). A part of the data were previously published ([@lu11] and references therein). For completeness we give the data in Table 1 (see Appendix). Below we describe briefly our methods and analysis of spectra. Spectral material ----------------- The spectra of additional Cepheids were obtained using the facilities of the 1.93m telescope at the Haute-Provence Observatoire (France) equipped with échelle-spectrograph ELODIE. 
In the region of wavelengths 4400–6800 Å the resolving power was R=42000, the signal-to-noise ratio, S/N, being about 80–130. The initial processing of the spectra (image extraction, cosmic particles removal, flatfielding, etc.) was carried out following to Katz et al. (1998). Also we use échelle-spectrograph SOPHIE at this telescope, the spectra stretch from 3870 to 6940 Åin 39 orders with resolution R=75000. Some spectra of Cepheids were obtained with the fiber échelle-spectrograph HERMES mounted on the 1.2m Belgian telescope on La Palma. A high-resolution configuration with R= 85000 and wavelength coverage 3800–9000 Åis used. The spectra were reduced using a Python-based pipe-line, following a procedure of the order extraction, wavelength calibration using Thr-Ne-Ar arcs, division by the flat field, cosmic-ray clipping, and the order merging. For more details on the spectrograph and the pipe-line see Raskin et al. (2011). We also made use of spectra obtained with the Ultraviolet-Visual Echelle Spectrograph (UVES) instrument at the Very Large Telescope (VLT) Unit 2 Kueyen (Bagnulo et al. 2003). All supergiants were observed in two instrumental modes, $Dichroic \,1$ and $Dichroic \,2$, in order to provide almost complete coverage of the wavelength interval 3000-10000 Å. The spectral resolution is about 80000, and for most of the spectra the typical S/N ratio is 150–200. Further processing of the spectra (continuum level location, measurement of the equivalent widths, etc.) was performed using the software package DECH20 (Galazutdinov 1992). The equivalent widths were measured using the Gaussian fitting. [*Atmospheric parameters*]{} ---------------------------- Effective temperatures for our program stars were established from the processed spectra using the method developed by @ko07 that is based upon $T_{\rm eff}$–line depth relations. The technique can establish $T_{\rm eff}$ with exceptional precision. It relies upon the ratio of the central depths of two lines that have very different functional dependences on $T_{\rm eff}$, and uses tens of pairs of lines for each spectrum. The method is independent of interstellar reddening, and only marginally dependent on the individual characteristics of stars, such as rotation, microturbulence, metallicity, etc. The microturbulent velocities, $V_{\rm t}$, and surface gravities, $\log g$, were derived using a modification of the standard analysis proposed by @ka99. As described there, the microturbulence is determined from the Fe II lines rather than the Fe I lines, as in classical abundance analyses. The surface gravity is established by forcing equality between the total iron abundance obtained from both Fe I and Fe II lines. Typically with this method the iron abundance determined from Fe I lines shows a strong dependence on equivalent width (NLTE effects), so we take as the proper iron abundance the extrapolated total iron abundance at zero equivalent width. Kurucz’s WIDTH9 code was used with an atmospheric model for each star interpolated from a grid of models calculated with a microturbulent velocity of 4 km s$^{-1}$ Kurucz (1992). At some phases Cepheids can have microturbulent velocities deviating significantly from that value; however, our previous test calculations suggest that changes in the model microturbulence over a range of several km s$^{-1}$ has an insignificant impact on the resulting element abundances. 
The oscillator strengths used in this and all preceding Cepheid analyses of this series are based on an inverted solar analysis. [*Distances, masses and ages of the Cepheids*]{} ------------------------------------------------ The heliocentric distance, $d$, of a Cepheid is estimated in a usual way: $$d = 10^{-0.2 (M_{\rm v} - <V> -5 + A_{\rm v})},$$ where $M_{\rm v}$ is the absolute magnitude, $<V>$ is the mean visual magnitude, $A_{\rm v}$ is the line of sight extinction, $A_{\rm v} = 3.23 E(B-V)$ (pulsate periods, mean visual magnitudes, colors and $E(B-V)$ values are taken from @fernie95; $M_{\rm V} - P$ relation from @fouque07 and @ko08). To transform $d$ to the galactocentric distance, $r$, of the Cepheid we use the Galactocentric solar distance $r_0 = 7.9$ kpc. The masses and ages for our Cepheids are derived using $Period - Mass$ relation from @turner96 and $Period - Age$ relation from @Bono05. Our estimates show that the ages of the most portions of Cepheids from our sample are less than 100 Myr. Only several stars are older, but in any case their ages do not exceed 130 Myr. Hence they did not undergo significant radial scattering. So, all Cepheids demonstrate the distribution of the elements in the ISM, almost at the Galactocentric radius, where we observe them at the present moment of time. The table with the derived abundances and other parameters for all Cepheids is given in Appendix. Radial distributions of oxygen and iron along the Galactic disc --------------------------------------------------------------- For modeling, we divide the galactocentric radius in bins of some width and average the abundances within the bins over the stars which have fallen to the bin. As in our previous papers, in Fig.1 we show the radial distributions of the mean abundances for oxygen, $\langle [O/H]^{ob}\rangle$, and iron, $\langle [Fe/H]^{ob}\rangle$, and their relation along the Galactic disc at step of 0.25 kpc, the bin width being equal to 0.5 kpc. Bars in the figure describe the scatter of the above mean abundances within the bin. In our statistic analysis, we adopt the weight $w_i$ \[see equation (1)\] to be inversely equal to the length of the bar in the $i$th bin. ![Radial distributions of oxygen, iron and their relation along the Galactic disc, averaged over bins of 0.5 kpc width. Bars correspond to the scatter of the mean abundance (see text for details).[]{data-label="f1"}](fig1.eps) Let us discuss some features of the distributions in figure 1. First of all, notice that the scatter of the mean abundances happens to be much less than the one which was computed on the basis of previous observational data of Andrievsky et al. (2002 a,b,c) and Luck et al. (2003, 2006; see figure 1 in Acharova et al. 2010). Such decrease in the scatter is obviously a result of improvement of the abundances determinations and increase in number of objects. However, at large Galactocentric distances the scatter happens to be much greater than in the inner region. Further, the radial distribution of oxygen demonstrates sufficiently sharp inflection of the slope in the distribution at $r \sim 7$ kpc. But for iron there is no such sharp bend in the gradient at the same Galactocentric distance. Nevertheless, it is obvious that the radial distribution of iron cannot be satisfactorily described by a trivial linear function. At last, the distribution of iron is rather smooth up to $r \sim 13$ kpc. New data do not show any visible gaps or jumps for smaller radii. 
The increase followed by the decrease in its content between 13 and 15 kpc takes place at approximately the same distance that of for oxygen although it is not so prominent and the both radial patterns differ in details. For instance, there is no noticeable variation in iron distribution at $r \sim 10.5$ kpc where in oxygen distribution we see a rather sharp step-like decrease. In our opinion, the above peculiarities may be associated with some local effects, say, with a sudden fall of a pristine gas on to the Galactic disc at $r \sim 10-11$ kpc or due to the Magellanic Stream at $r \sim 10-15$ kpc. However, we will not try to explain them: in spite of our model is difficult from the mathematical point of view such local effects cannot be simply incorporated in our theory. That is why for the statistical analysis we restrict ourselves by the region $r \le 10$ kpc. So, the number $n$ in equations (1,2) is $n = 20$. Since at the both steps (oxygen and iron analysis) $p =2$, Fisher’s statistics $F(2,18,0.95) \approx 3.55$ (Draper & Smith 1981). Results and discussion ====================== *[Step 1: Oxygen]{}* -------------------- We performed calculations for the same models of the radial gas inflow \[i.e., for the dependences of $u(r)$ of Acharova et al. (2011)\] and the ejected mass of oxygen, $P_O^{\rm II}$ per one SNE II event of Tsujimoto et al. (1995). Unlike our previous paper, new observational data unambiguously lead to the least value for the discrepancy $\Delta$ (for oxygen we denote it as $\Delta^{\rm O}_m$) which corresponds to the model with no radial inflow (in notations of Acharova et al. 2011 it is the model ‘M20’ with $u = 0$): $\Delta^{\rm O}_m = 0.641$ [^5] which corresponds to the best values of $\Omega_P = 33.4$ km s$^{-1}$ kpc$^{-1}$ and $\beta = 0.0126$ Gyr. Comparing the above parameters with the ones from the last paper we see that the rotation velocity for the spiral density waves occurs to be the same (correspondingly, the corotation radius $r_c \approx 7$ kpc) whereas the coefficient $\beta$ has decreased by about 20 per cent. Besides, the confidence contour has changed distinctly (see figure 2): now the axes of the ellipse are not parallel to the axes of $\Omega_P$ and $\beta$. To indicate the confidence borders for the target parameters we adopt the following lower and upper values for them: $\Omega_P = 32.9\, -\, 34.2$ km s$^{-1}$ kpc$^{-1}$ (correspondingly $r_c = 7.1\, -\, 6.8$ kpc); $\beta = 0.0129\, -\, 0.0122$ Gyr (in figure 2 they are labeled by filled circles marked as ‘A’ and ‘B’). For simplicity we adopt the symmetrical errors in $\beta$, so finally $\beta = 0.0126 \pm 0.0004$ Gyr. ![The confidence contour for $\Omega_P$ and $\beta$. The cross corresponds to the best values of $\Omega_P$ and $\beta$.[]{data-label="f2"}](fig2.eps) In figure 3 we show the theoretical radial distributions of oxygen computed for the best above parameters and for their values, corresponding to the extreme points of the confidence contour from figure 2 (‘A’ and ‘B’), superimposed on the observational distribution. Within the radius range $5 \le r \le 10$ kpc the coincidence of the theory with observations is excellent. Notice the very good agreement of the theory with observations both at $r \sim 7$ kpc where there is the bend in the gradient slope and in the range of the flat (a plateau-like) oxygen distribution. ![The comparison of the theoretical radial distribution of oxygen with the observations. 
[*Solid*]{} line is for the best values of $\Omega_P,\, \beta$ which correspond to the cross in figure 1; [*dashed*]{} lines correspond to the parameters labeled by points ‘A’ and ‘B’ in the previous figure.[]{data-label="f3"}](fig3.eps) Our computer experiments confirm the statement, made in Sec. 2.1, that the radial distribution of oxygen, its full synthesized mass and $\Omega_P$ do not change if we adopt another value for $P_{O}^{\rm II}$. Only $\beta$ alters but so as the product $\beta P_{O}^{\rm II}$ has to be kept the same. *[Step 2: Iron]{}* ------------------ Oxygen is mainly synthesized during SNe II events. So, it is the most pure indicator of spiral arms influence on heavy elements synthesis in the Galactic disc. Besides, since we can neglect by the contribution of SNe Ia to its abundance, of the discussed two elements the kinetics of oxygen synthesis is simpler. Unlike oxygen, iron is synthesized by SNe II, SNe Ia-P and SNe Ia-T. To estimate the contributions of each type of the above sources to the production of iron is more difficult problem than the one for oxygen. Indeed, to solve the posed task we have to derive the constants for the rates of iron synthesis by means of fitting our theory to the observed fine structure of the radial distribution of iron in the Galactic disc. Now we set out, in short, our method for evaluation of the free parameters $\gamma$ and $\zeta$. For this, first of all, notice that the enrichment rate of ISM by iron due to SN II explosions is described by the same relation (5) with the only substitution: $P_{i}^{\rm II} \rightarrow P_{Fe}^{\rm II}$. Since the constant $\beta$ was derived at [*Step 1*]{} the contribution of SN II to iron synthesis is determined entirely. Hence, we only need to derive the constants $\gamma$ and $\zeta$ \[see equations (8,10)\] by means of fitting the theory to the observed radial distribution of iron. In a general way, the procedure of evaluation them is similar to the one for derivation of $\Omega_P$ and $\beta$, namely, for the fixed values of $\Omega_P$ and $\beta$ we solve numerically the equations of iron synthesis in the Galactic disc, varying the sought for parameters $\gamma$ and $\zeta$. Then, using the theoretical and observational data for the radial distribution of iron, we again compute the net of the merit function $\Delta^{\rm Fe}$ (the superscript ‘Fe’ means that the discrepancy refers to iron) as a function of $\gamma$ and $\zeta$, find out its minimum over $\gamma$ and $\zeta$ ($min \, \Delta^{\rm Fe} = \Delta^{\rm Fe}_m$) and derive the confidence contour for them. To control the process, we construct the surface $\Delta^{\rm Fe} (\gamma, \zeta)$, an example of which is given in figure 4 for the best values of $\Omega_P$ and $\beta$. In figure 5 we demonstrate the corresponding confidence contour for $\gamma$ and $\zeta$ computed for the best values of $\Omega_P$ and $\beta$. However, at this step we have to take into account that the quantities $\Omega_P$ and $\beta$, derived at the previous step, have errors which may result in biases of the best values of $\gamma$ and $\zeta$ and their errors. To solve this problem we made a numerical experiment, repeating the above procedure for various values of $\Omega_P$ and $\beta$ from their confidence contour and averaging $\gamma$ and $\zeta$ and their errors. Our computations show that $\gamma$ and $\zeta$ and the disposition of the confidence ellipse for them change slightly if we use $\Omega_P$ and $\beta$ from their confidence region. 
In practice, we therefore test only the 7 pairs of values of $\Omega_P$ and $\beta$ which are labeled by the filled circles and by the cross in figure 2. ![An example of the surface $\Delta^{\rm Fe} (\gamma, \zeta)$ for the best values of $\Omega_P$ and $\beta$. For better visualization we draw the surface 'bottom-up'.[]{data-label="f4"}](fig4.eps) ![An example of the confidence contour for $\gamma$ and $\zeta$ computed for the best values of $\Omega_P$ and $\beta$. The cross corresponds to $\Delta^{\rm Fe}_m$ for the above values of $\Omega_P$ and $\beta$.[]{data-label="f5"}](fig5.eps) As a result, we derive 7 pairs of values for $\gamma$ and $\zeta$ corresponding to the particular minima $\Delta^{\rm Fe}_m$ (we denote them as $\gamma_m$ and $\zeta_m$) and to the largest deviations of $\gamma$ and $\zeta$, corresponding to the points 'C' and 'D' in the last figure \[we denote them as ($\gamma_C$, $\zeta_C$) and ($\gamma_D$, $\zeta_D$)\]. Averaging ($\gamma_m$, $\zeta_m$) over the 7 computed results we find the best values of the sought-for parameters \[denoted ($\langle \gamma_m \rangle$, $\langle \zeta_m \rangle$)\]. By analogy we compute the mean extreme confidence values for them, namely the pairs ($\langle \gamma_C \rangle$, $\langle \zeta_C \rangle$) and ($\langle \gamma_D \rangle$, $\langle \zeta_D \rangle$). Figure 6 shows the comparison of the theoretical radial distribution of iron with the observations, computed for the ejected masses of oxygen and iron from Tsujimoto et al. (1995) and supposing that SNe Ia-P and SNe Ia-T eject the same mass of iron per event (i.e., $P_{Fe}^{Ia-P} = P_{Fe}^{Ia-T}$, see Sec. 2.2). Notice that in this figure we show the theoretical curve up to $r = 13$ kpc, although only the region within 10 kpc was used for the statistical analysis. Nevertheless, the agreement of our theory with the observations turns out to be very good even in this extended range of Galactocentric radius. ![Theoretical radial distribution of iron superimposed on the observational data. [*Solid line*]{} corresponds to the theoretical distribution computed for ($\langle \gamma_m \rangle$, $\langle \zeta_m \rangle$); [*dashed lines*]{} correspond to ($\langle \gamma_{C} \rangle$, $\langle \zeta_{C} \rangle$) and ($\langle \gamma_{D} \rangle$, $\langle \zeta_{D} \rangle$).[]{data-label="f6"}](fig6.eps) Figure 7 shows the theoretical relation of $[O/Fe]$ versus Galactocentric radius superimposed on the observations. As expected, the agreement of the theory with observations is again good. This result independently demonstrates that the corotation resonance is located at the minimum of the radial distribution of the oxygen-to-iron ratio. ![Comparison of the theoretical radial distribution of the ratio $[O / Fe]$ with observations for the best values of $\beta$, $\gamma$, $\zeta$ and $\Omega_P$. Notice that the minimum of the oxygen-to-iron ratio at $r \sim 7$ kpc coincides with the location of the corotation resonance.[]{data-label="f7"}](fig7.eps) Let us now discuss the effects of using the DTD function of Maoz et al. (2010) which, unlike the bimodal function of Mannucci et al. (2006) and Matteucci et al. (2006), is smooth and peaked at early times (see Sec. 2.2 for the corresponding approximation of the smooth DTD function). Figure 8 shows the radial distribution of iron computed for this smooth DTD function and the same ejected masses adopted from Tsujimoto et al. (1995).
It is seen that the distribution differs only slightly from the one derived for the bimodal DTD of Mannucci et al. (2006) and Matteucci et al. (2006). ![The same as in figure 6 but for the smooth DTD function of Maoz et al. (2010).[]{data-label="f8"}](fig8.eps)

Amount of iron synthesized by various sources
---------------------------------------------

We can now answer the question: how much iron was synthesized by each type of SNe during the lifetime of our Galaxy? Below we will see that the mass of iron synthesized by each type of SNe over the life of the Galactic disc differs from the corresponding mass retained in the present ISM. [^6] Hence, when we speak of the amount of iron supplied to the Galaxy by any type of SNe, we mean the mass synthesized by the corresponding SNe type over the age of the Galactic disc. To compute these quantities (denoted $M_{Fe}^{\rm II}$, $M_{Fe}^{\rm Ia-P}$ and $M_{Fe}^{\rm Ia-T}$) we simply integrated the corresponding enrichment rates \[see eqs. (5,8,10)\] over the surface of the Galactic disc and over time. However, the procedure for the evaluation of the above masses of iron that reside in the present ISM differs from the one described before. Indeed, equation (2) governs the evolution of the iron content in the ISM. So, to find $M_{Fe}^{\rm II}$, $M_{Fe}^{\rm Ia-P}$ and $M_{Fe}^{\rm Ia-T}$ in the present ISM, we have to solve the corresponding equations separately for each type of iron source, using the constants of the rates of iron synthesis evaluated at the fitting steps described above, and then integrate the derived $\mu_{Fe}(r,T_D)$ over the Galactic disc. In all the experiments we considered, the radial distributions of iron along the Galactic disc are very close to each other and to the distribution shown in figures 6 and 7. That is why we do not show the distributions derived for other input parameters and restrict our discussion to the numerical values given in Table 2. Let us consider them in some detail.

Table 2. Input parameters and the masses of iron (in units of $10^7$ M$_{\odot}$) synthesized by the three types of SNe over the age of the Galactic disc and contained in the present ISM; for each Case the percentages give the corresponding fractions of the total.

                $\beta$        $\gamma$     $\zeta$        Synthesized over the disc age                                 In the present ISM
                                                           $M_{\rm Fe}^{II}$  $M_{\rm Fe}^{Ia-P}$  $M_{\rm Fe}^{Ia-T}$    $M_{\rm Fe}^{II}$  $M_{\rm Fe}^{Ia-P}$  $M_{\rm Fe}^{Ia-T}$
  Case 1:       0.0126         0.67         0.24           0.75               1.80                 1.43                   0.18               0.43                 0.59
                ($\pm$0.0004)  ($\pm$0.2)   ($\pm$0.03)    ($\pm$0.02)        ($\pm$0.60)          ($\pm$0.18)            ($\pm$0.01)        ($\pm$0.14)          ($\pm$0.08)
                                                           19 %               45 %                 36 %                   15 %               36 %                 49 %
  Case 2:       0.0174         0.44         0.24           1.03               1.63                 1.43                   0.25               0.39                 0.59
                                                           25 %               40 %                 35 %                   20 %               31 %                 49 %
  Case 3:       0.0084         1.2          0.24           0.50               2.16                 1.43                   0.12               0.51                 0.59
                                                           12 %               53 %                 35 %                   9 %                42 %                 49 %
  Case 4:       0.0126         0.49         0.24           1.25               1.32                 1.43                   0.30               0.31                 0.59
                                                           32 %               33 %                 35 %                   25 %               26 %                 49 %
  Case 5:       0.0126         0.54         0.32           0.75               1.81                 1.43                   0.18               0.43                 0.59
                                                           18 %               44 %                 38 %                   15 %               36 %                 49 %
  Case 6:       0.0126         1.64         0.15           0.75               1.81                 1.60                   0.18               0.43                 0.59
                                                           18 %               44 %                 38 %                   15 %               36 %                 49 %

### [*Bimodal DTD function of Matteucci et al. (2006)*]{}

In [*Cases*]{} 1 - 5 we examine the bimodal DTD function of Matteucci et al. (2006), varying the ejected masses of oxygen or iron per SNe event and analyzing the results of such changes. The input parameters of [*Case 1*]{} are taken as the starting ones. For them we adopt the ejected masses per SNe event from Tsujimoto et al.
(1995), and the iron outputs of SNe Ia-P and SNe Ia-T are assumed to be the same (Matteucci et al. 2006). In the following 4 [*Cases*]{} we estimate the effects of variations of $P_{O}^{\rm II}$ or $P_{Fe}^{\rm II}$ and of the supposition that $P_{Fe}^{\rm Ia-P} \ne P_{Fe}^{\rm Ia-T}$. Thus, in [*Case*]{} 2 we make an experiment with $P_O^{II}$ = 1.8: this value was proposed by Tsujimoto et al. (1995) for the upper stellar mass $m_U$ = 50 M$_{\odot}$ (in the other [*Cases*]{} we use $m_U$ = 70 M$_{\odot}$). To illustrate the effect of an increase of the mass of oxygen ejected by one SNe II on the iron output, in [*Case*]{} 3 we compute the amounts of iron for $P_O^{II}$ = 3.7. This value is about 1.5 times greater than the starting one (notice that the ejected masses derived by Woosley & Weaver 1995 are systematically greater than the corresponding values of Tsujimoto et al. 1995 by just about this factor of 1.5). Finally, in [*Case*]{} 5 we compute the final masses of iron adopting $P_{Fe}^{\rm Ia-P} \ne P_{Fe}^{\rm Ia-T}$. However, since a completed theory for the two subpopulations of SNe Ia has not been built, we use for illustration the largest and the smallest values from Nomoto et al. (1997). In the second row of [*Case*]{} 1 we show, in parentheses, the random errors of $\beta$, $\gamma$ and $\zeta$ evaluated by means of our statistical method, together with the errors in the masses of iron that follow from the above random errors in the constants for the rates of enrichment of the Galaxy by iron and oxygen. In the other [*Cases*]{} the errors are of the same order of magnitude and we do not show them. Moreover, as is seen from Table 2, the variations in the synthesized masses of iron due to uncertainties in the ejected masses, especially in $P_O^{\rm II}$ and $P_{Fe}^{\rm II}$, sometimes lead to rather large variations in $M_{Fe}^{\rm II}$ and $M_{Fe}^{\rm Ia-P}$, although for the long-lived SNe Ia progenitors the final value of $M_{Fe}^{\rm Ia-T}$ is quite stable. The fractions of the iron mass supplied by the various SNe to the Galactic disc and to the ISM are shown in per cent. It is interesting to notice that in all [*Cases*]{} the fraction of the iron mass synthesized by SNe Ia-T does not vary and is equal to about 35 per cent. Correspondingly, the total fraction of iron produced by SNe II and SNe Ia-P is $\sim$65 per cent. The only effect of the changes in the ejected masses consists in a redistribution of iron between SNe II and SNe Ia-P. This result confirms the suppositions made in Sec. 2.2. The same situation holds for the abundance of iron in the present ISM: about 49 per cent of it was supplied by SNe Ia-T, while the other 51 per cent came from SNe II and SNe Ia-P. And again, these 51 per cent of iron are redistributed between SNe II and SNe Ia-P depending on the input parameters.

### [*Smooth DTD function of Maoz et al. (2010)*]{}

The radial distribution of iron computed for the DTD function of Maoz et al. (2010) is shown in figure 8; the corresponding values, computed for the starting ejected masses, are presented as [*Case*]{} 6 of Table 2. Comparing these results with the ones of [*Case*]{} 1 we see that the constants $\gamma$ and $\zeta$ change significantly: $\gamma$ has increased by a factor of about 2.4, and $\zeta$ has decreased by a factor of about 1.6. Nevertheless, the masses of iron synthesized over the age of the Galactic disc in the framework of the smooth representation of the DTD function turn out to be close to the ones corresponding to [*Case*]{} 1.
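As a quick sanity check of Table 2, the percentages listed for each Case and the total iron masses quoted in the Conclusions can be reproduced directly from the tabulated masses; a minimal sketch for Case 1 (assuming, as inferred above, that the masses are given in units of $10^7$ M$_{\odot}$):

```python
# Simple arithmetic cross-check of Table 2 (Case 1), in units of 10^7 M_sun.
disc = {"SNe II": 0.75, "SNe Ia-P": 1.80, "SNe Ia-T": 1.43}   # synthesized over the disc age
ism  = {"SNe II": 0.18, "SNe Ia-P": 0.43, "SNe Ia-T": 0.59}   # contained in the present ISM

for label, masses in (("disc", disc), ("ISM", ism)):
    total = sum(masses.values())
    fractions = {k: round(100.0 * v / total) for k, v in masses.items()}
    print(label, round(total, 2), fractions)

# Output:
# disc 3.98 {'SNe II': 19, 'SNe Ia-P': 45, 'SNe Ia-T': 36}
# ISM 1.2 {'SNe II': 15, 'SNe Ia-P': 36, 'SNe Ia-T': 49}
# i.e., the percentages quoted in Table 2 and the totals of ~4.0x10^7 and
# ~1.20x10^7 M_sun quoted in the Conclusions follow directly from the
# tabulated masses.
```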
This closeness to [*Case*]{} 1 also holds for the mass of iron confined in the present ISM.

Conclusions
===========

On the basis of new observational data on the abundances of Cepheids we have studied the problem of how much iron was synthesized by the various types of SNe – SNe II, prompt and tardy SNe Ia – over the age of the Galactic disc. For this, we have developed a statistical method which enables us to evaluate the constants $\beta$, $\gamma$ and $\zeta$ for the rates of synthesis of oxygen and iron without any a priori assumptions such as equipartition among the above 3 types of SNe. To do that, we have developed a theory of iron and oxygen synthesis in the Galactic disc. This theory explains the nontrivial distribution of oxygen along the Galactic disc, which demonstrates a bend in the radial gradient at $r \sim 7$ kpc with a rather steep gradient for $5 < r < 7$ kpc and a plateau-like distribution in the region $7 < r < 10$ kpc, as well as the multi-slope radial distribution of iron in the same range of Galactocentric radius. In order to understand the mechanism of formation of such a fine structure in the radial distributions of oxygen and iron we use two main ideas. First, there are 2 types of SNe Ia – [*prompt*]{} SNe Ia, whose progenitors are short-lived stars no older than 100 Myr, and [*tardy*]{} SNe Ia, whose progenitors may have ages in the range from 100 Myr to 10 Gyr. For the [*Delay Time Distribution*]{} function we study both the bimodal approximation of Matteucci et al. (2006) and the smooth representation of Maoz et al. (2010). Second, we take into account the influence of spiral arms on the formation of the fine structure in the radial distributions of oxygen and iron in the Galactic disc. To realize that, we use the representations for the explosion rate of the short-lived SNe progenitors – SNe II and SNe Ia-P – proposed by Oort (1974), Wyse & Silk (1989) and Portinari & Chiosi (1999) (see also Mishurov et al. 2002; Acharova et al. 2005; 2010; 2011). Our statistical method of treating the observational data enables us to derive simultaneously the location of the corotation resonance, which turns out to be at $r_c \approx 7$ kpc, close to the bend in the slope of the oxygen distribution or, equivalently, the minimum in $[O/Fe]$. Besides, by means of the proposed statistical method we can estimate the contributions of the 3 types of SNe to iron synthesis without any a priori assumptions. The results are as follows. Over the age of the Galactic disc about 35 - 38 per cent of the iron was produced by SNe Ia-T, and this fraction does not vary with the input parameters. The total fraction of iron produced by SNe II and SNe Ia-P is of the order of 65 per cent. However, the partition of iron between SNe II and SNe Ia-P may change depending on the ejected mass of oxygen (!) or iron per SNe II event. Nevertheless, the amounts of iron synthesized by the 3 types of SNe do not differ significantly from the ones adopted by Matteucci (2004). For the present ISM, however, the situation is different. About 50 per cent of the iron in the ISM was supplied by SNe Ia-T. The fraction of it produced by SNe Ia-P varies from 26 to 42 per cent. Correspondingly, about 9 - 25 per cent of the iron in the ISM was injected by SNe II.
At last, the total mass of iron supplied to the Galactic disc during its life by all Types of SNe is $\sim (4.0 \pm 0.4)\cdot 10^7$ M$_{\odot}$, the mass of iron in the present ISM is $\sim (1.20 \pm 0.05)\cdot 10^7 $ M$_{\odot}$ i.e., about 2/3 of iron is contained in stars and stellar remnants. Our computations show that the result weakly depend on the exact shape of the DTD function - bimodal (Matteucci et al. 2006) or smooth (Maoz et al. 2010). We only need that there have to be a subpopulation of SNe Ia which progenitors are young, i.e. their ages are not more than 100 Myr in order we can to use the idea that spiral arms influence the formation of radial distribution of iron. Our infer may be considered as an argument in favour of the above estimate for the prompt SNe Ia progenitors. The result of Bartunov et al. (1994), that a significant portion of SNe Ia is concentrated in spiral arms, supports this idea. Acknowledgments {#acknowledgments .unnumbered} =============== We are gratefull to the anonymos referee for very important comments and suggestions. Authors also thank to Profs. A.Zasov and S.Blinnikov for helpful discussions. The work was supported in part by grants No. 02.740.0247 and P685 of Federal agency for science and innovations. IAA thanks to the Russian funds for basic research, grant No. 11-02-90702. The spectra were collected with the 1.93-m telescope of the OHP (France), the ESO Telescopes at the Paranal Observatory under program ID266.D-5655, and the Mercator Telescope, operated on the island of La Palma by the Flemish Community, at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. Drs. C. Soubiran, B. Lemasle, A. Fry and B. Carney are acknowledged for their help with spectral material. [99]{} Acharova I., Lépine J., & Mishurov Yu., 2005 a, MNRAS, 359, 819 Acharova I., Lépine J., Mishurov Yu. [*et al.*]{}, 2010, MNRAS, 402, 1155 Acharova I., Mishurov Yu., & Lépine J., 2005 b, AstRep, 49, 361 Acharova I., Mishurov Yu., & Rasulova M., 2011, MNRAS Lett., 415, 11L Andrievsky S., Kovtyukh V., Luck R. [*et al.*]{}, 2002 a, A&A, 381, 32 Andrievsky S., Bersier D., Kovtyukh V., [*et al.*]{}, 2002 b, A&A, 384, 140 Andrievsky S., Kovtyukh V., Luck R., [*et al.*]{}, 2002 c, A&A, 392, 491 Asplund, M., Grevesse, N., & Sauval, A.J., and Scott, P. 2009, ARAA 47, 481 Aubourg É., Tojeiro R., Jimenez R., [*et al.*]{} 2008, A&A, 492, 631 Bagnulo S., Jehin E., Ledoux C., et al., 2003, ESO Messenger, 114, 10 Bartunov O., Tsvetkov D., Filimonova I., 1994, PASP, 106, 1276 Bono G., Marconi M., Cassisi S., Caputo F., Gieren W., Pietrzynski G., 2005, ApJ, 621, 966 Brandt T.D., Tojeiro R., Aubourg É., [*et al.*]{} 2010, AJ, 140, 804 Bregman J. 2009, arXiv:0907.3494 Clemens D., 1985, ApJ, 295, 422 Draper N., Smith H. 1981, Applied Regression Analysis. Wiley, New York Fernie J., Evans N., Beattie B., & Seager S. 1995, IBVS 4148, 1 Fouqué P., Arriagada P., Storm J., Barnes T. , Nardetto N., Merand A., Kervella P., Gieren W., Bersier D., Benedict G. , McArthur B., 2007, A&A, 476, 73 Galazutdinov G., 1992, Prep. SAO RAS, 92 Greggio L. 2005, A&A, 441, 1055 Howell D.A., Sullivan M., Brown E.F. [*et al.*]{}, 2009, ApJ, 691, 661 Katz D., Soubiran C., Cayrel R., Adda M., Cautain R., 1998, A&A, 338, 151 Kovtyukh V., 2007, MNRAS, 378, 617 Kovtyukh V., Andrievsky S., 1999, A&A, 351, 597 Kovtyukh V. , Soubiran C., Luck R. , Turner D. , Belik S. , Andrievsky S. , Chekhonadskikh F. 2008, MNRAS, 389, 1336 Kurucz R. L. 
1992, in The Stellar Populations of Galaxies, ed. B. Barbuy, & A. Renzini, IAU Symp. 149, 225 Lacey C.G. & Fall S.M. 1985, ApJ, 290, 154 Lin C.C., Yuan C., Shu F.H, 1969, ApJ, 155, 721 Li W., Chornock R., Leaman J., [*et al.*]{},2011, MNRAS Lett., 412, 1473L Luck R., Gieren W., Andrievsky S. [*et al.*]{}, 2003, A&A, 401, 939 Luck R., Kovtyukh V., Andrievsky S., 2006, AJ, 132, 902 Luck R. E., Andrievsky S. M., Kovtyukh V. V., Gieren W., Graczyk D. 2011, AJ, 142, 51 Maeda K., Benetti S., Stritzinger M., [*et al.*]{} 2010, Nature, 466, 82 Mannucci F., Della Valle, Panagia N., [*et al.*]{},2005, A& A, 433, 807 Mannucci F., Della Valle M., Panagia N. 2006, MNRAS, 370, 773 Maoz., Mannucci F., Li W., [*et al.*]{},2011, MNRAS, 412, 1508 Maoz.D., Keren S., Gal-Yam G. 2010, ApJ 722, 1879 Matteucci F., 2004, in Dettmar R., Klein U., Salucci P., eds, Baryons in Dark Matter Halos. Proceedings of Science, SISSA, Italy, p. 72.1 Matteucci F., Panagia N., Pipino A., Mannucci F., Recchi S., Della Valle M., 2006, MNRAS, 372, 265 Mishurov Yu., & Acharova I.,, 2011, MNRAS, 412, 1771 Mishurov Yu., Lépine, J. & Acharova I., 2002, ApJL, 571, L113 Nomoto K., Iwamoto K., Nakasato N., [*et al.*]{}, 1997, NPh A, 621, 467 Oort J., 1974, in Shakeshaft J. R., ed., Proc. IAU Symp. 58, The Formation and Dynamics of Galaxies. Reidel, Dordrecht, p. 375 Perlmutter S., Aldering G., Goldhaber G., [*et al.*]{}, 1999, ApJ, 517, 565 Portinari L., & Chiosi C., 1999, A&A, 350, 827 Prantzos N., 2008, in Charbonnel C., Zhan J., eds, Stellar Nuclear Synthesis: 50 Years after BrSH, p. 311 Raskin G., van Winckel H., Hensberge H. [*et al.*]{}, 2011, A&A 526, 69 Riess A.G., Filippenko A.V., Challis P. [*et al.*]{}, 1998, AJ, 116, 1009 Robitaille T., Whitney B., 2010, ApJ, 710, L11 Sancisi R., Fraternali F., 2008, A & A, 15, 189 Thielemann F.-K., Nomoto K., Hashimoto M. 1996, ApJ, 460, 408 Tsujimoto T., Nomoto K., Yoshii Y. [*et al.*]{}, 1995, MNRAS, 277, 945 Turner D., 1996, JRASC, 90, 82 Woosley S.E. & Weaver T.A.,1995, ApJ Suppl., 101,181 Wyse R. & Silk J.,1989, ApJ. 339, 700 Name No. 
Spectra $P$, days $V$ $(B-V)$ E([*B–V*]{}) r, kpc Mv \[O/H\] \[Fe/H\] age, Myr Mass ----------- ------------- ------------ -------- --------- -------------- -------- ------- --------- ---------- ---------- ------ T Ant 1 5.8977098 9.337 0.750 0.300 8.38 –3.34 –0.43 –0.24 62 6.2 U Aql 1 7.0239582 6.446 1.024 0.399 7.45 –3.54 0.01 0.01 55 6.8 SZ Aql 11 17.1408482 8.599 1.389 0.537 6.42 –4.58 –0.03 0.17 30 10.6 TT Aql 8 13.7547073 7.141 1.292 0.438 7.10 –4.32 0.02 0.10 35 9.5 FF Aql 14 4.4709158 5.372 0.756 0.196 7.63 –3.40 –0.09 0.04 60 6.4 FM Aql 2 6.1142302 8.270 1.277 0.589 7.29 –3.38 –0.19 0.08 61 6.4 FN Aql 4 9.4816027 8.382 1.214 0.483 6.68 –3.89 –0.08 –0.02 45 7.9 V496 Aql 2 6.8070550 7.751 1.146 0.397 6.88 –3.89 –0.15 0.05 45 7.9 V600 Aql 1 7.2387481 10.037 1.462 0.798 6.83 –3.58 0.11 0.03 54 6.9 V733 Aql 1 6.1789999 9.970 0.960 0.106 6.19 –3.39 0.04 0.08 60 6.4 V1162 Aql 2 5.3761001 7.798 1.366 0.195 6.74 –3.61 –0.19 0.01 53 7.0 V1359 Aql 1 3.7320000 9.059 1.350 0.661 7.26 –2.81 0.29 0.09 84 5.0 Eta Aql 14 7.1767349 3.897 0.789 0.130 7.71 –3.57 –0.06 0.08 55 6.9 V340 Ara 1 20.8090000 10.164 1.539 0.546 4.34 –4.81 0.07 0.31 27 11.7 Y Aur 2 3.8595021 9.607 0.911 0.375 9.63 –2.85 –0.30 –0.20 83 5.0 RT Aur 10 3.7281899 5.446 0.595 0.059 8.30 –2.81 –0.07 0.06 85 5.0 RX Aur 16 11.6235371 7.655 1.009 0.263 9.40 –4.13 0.07 –0.01 39 8.8 SY Aur 2 10.1446981 9.074 1.000 0.432 9.98 –3.97 –0.10 –0.05 43 8.2 YZ Aur 5 18.1932125 10.332 1.375 0.538 12.28 –4.65 –0.13 –0.35 29 11.0 AN Aur 4 10.2905598 10.455 1.218 0.565 11.16 –3.99 –0.25 –0.15 43 8.2 AO Aur 2 6.7630062 10.860 1.060 0.431 11.81 –3.50 –0.19 –0.26 57 6.7 AX Aur 1 3.0466399 12.412 1.155 0.598 11.95 –2.57 –0.02 –0.09 97 4.5 BK Aur 2 8.0024319 9.427 1.062 0.425 10.01 –3.69 0.08 0.06 51 7.3 CY Aur 1 13.8476496 11.851 1.600 0.768 13.21 –4.33 –0.41 –0.40 35 9.6 ER Aur 2 15.6907301 11.520 1.124 0.494 15.36 –4.48 –0.63 –0.34 32 10.2 V335 Aur 1 3.4132500 12.461 1.137 0.626 12.11 –2.70 ... –0.27 90 4.7 RW Cam 16 16.4148121 8.691 1.351 0.633 9.35 –4.53 0.02 0.09 31 10.4 RX Cam 9 7.9120240 7.682 1.193 0.532 8.61 –3.68 –0.10 0.04 51 7.2 TV Cam 1 5.2949700 11.659 1.198 0.613 11.20 –3.21 –0.30 –0.08 67 5.9 AB Cam 1 5.7876401 11.849 1.235 0.656 11.44 –3.32 –0.29 –0.09 63 6.2 AD Cam 1 11.2609911 12.564 1.588 0.864 13.05 –4.09 0.00 –0.22 40 8.6 RY CMa 3 4.6782498 8.110 0.847 0.239 8.78 –3.07 –0.13 –0.00 73 5.6 RZ CMa 3 4.2548318 9.697 1.004 0.443 9.11 –2.96 –0.03 –0.03 77 5.3 TW CMa 2 6.9950700 9.561 0.970 0.329 9.76 –3.54 –0.20 –0.17 55 6.8 VZ CMa 1 3.1262300 9.383 0.957 0.461 8.75 –2.98 –0.39 –0.06 76 5.4 AO CMa 1 5.8154202 12.603 1.316 0.738 11.30 –3.32 ... –0.14 63 6.2 U Car 1 38.7681007 6.288 1.183 0.265 7.54 –5.53 ... 0.01 18 16.0 V Car 2 6.6966720 7.362 0.872 0.169 7.88 –3.49 –0.15 0.00 57 6.7 SX Car 1 4.8600001 9.089 0.887 0.318 7.59 –3.11 –0.30 –0.09 71 5.7 UW Car 1 5.3457732 9.426 0.971 0.435 7.62 –3.22 –0.28 –0.06 66 5.9 UX Car 2 3.6822460 8.308 0.627 0.112 7.66 –2.79 –0.05 0.02 85 4.9 UY Car 1 5.5437260 8.967 0.818 0.180 7.55 –3.27 –0.15 0.03 65 6.1 UZ Car 1 5.2046599 9.323 0.875 0.178 7.54 –3.19 –0.10 0.07 68 5.9 VY Car 1 18.9137611 7.443 1.171 0.237 7.58 –4.69 –0.05 0.12 28 11.2 WW Car 1 4.6768098 9.743 0.890 0.379 7.52 –3.07 –0.55 –0.07 73 5.6 WZ Car 1 23.0132008 9.247 1.142 0.370 7.57 –4.92 ... 0.03 25 12.3 XX Car 1 15.7162399 9.322 1.054 0.347 7.38 –4.48 –0.06 0.11 32 10.2 XY Car 1 12.4348297 9.295 1.214 0.411 7.33 –4.21 –0.29 0.04 38 9.1 XZ Car 1 16.6499004 8.601 1.266 0.365 7.41 –4.55 ... 
0.14 31 10.5 YZ Car 1 18.1655731 8.714 1.124 0.381 7.63 –4.65 –0.15 0.02 29 11.0 AQ Car 2 9.7689600 8.851 0.928 0.165 7.63 –3.93 –0.10 0.00 44 8.0 CN Car 1 4.9326100 10.700 1.089 0.399 7.80 –3.13 –0.11 0.06 70 5.7 CY Car 1 4.2659302 9.782 0.953 0.370 7.47 –2.96 –0.08 0.10 77 5.3 DY Car 1 4.6746101 11.314 1.003 0.372 7.69 –3.07 –0.28 –0.07 73 5.6 ER Car 2 7.7185502 6.824 0.867 0.096 7.60 –3.65 0.00 0.01 52 7.1 FI Car 1 13.4582005 11.610 1.514 0.691 8.11 –4.30 –0.25 0.06 36 9.4 FR Car 1 10.7169704 9.661 1.121 0.334 7.39 –4.03 –0.18 0.02 42 8.4 GH Car 1 5.7255702 9.177 0.932 0.394 7.41 –3.69 –0.17 –0.01 51 7.2 GX Car 1 7.1967301 9.364 1.043 0.386 7.76 –3.57 –0.07 0.01 54 6.9 HW Car 1 9.2002001 9.163 1.055 0.184 7.56 –3.86 0.02 0.04 46 7.8 IO Car 1 13.5970000 11.101 1.221 0.502 8.08 –4.31 –0.35 –0.05 36 9.5 IT Car 1 7.5331998 8.097 0.990 0.184 7.45 –3.62 –0.15 0.06 53 7.1 V397 Car 2 2.0634999 8.320 0.754 0.266 7.67 –2.50 –0.14 0.03 101 4.4 Name No. Spectra $P$ (days) V $(B-V)$ E([*B–V*]{}) r, kpc Mv \[O/H\] \[Fe/H\] age, Myr Mass ---------- ------------- ------------ -------- --------- -------------- -------- ------- --------- ---------- ---------- ------ l Car 5 35.5513420 3.724 1.299 0.147 7.79 –5.43 0.12 0.02 19 15.3 SZ Cas 1 13.6377468 9.853 1.419 0.794 9.40 –4.31 0.06 0.04 35 9.5 RY Cas 1 12.1388798 9.927 1.384 0.618 9.34 –4.18 0.13 0.10 38 9.0 RW Cas 2 14.7915478 9.117 1.096 0.380 9.96 –4.41 –0.07 0.06 34 9.9 SU Cas 13 1.9493220 5.970 0.703 0.259 8.13 –2.43 –0.02 0.06 105 4.2 SW Cas 1 5.4409499 9.705 1.081 0.467 8.75 –3.25 0.27 0.02 66 6.0 SY Cas 1 4.0710979 9.868 0.992 0.442 8.93 –2.91 0.31 0.04 80 5.2 TU Cas 12 2.1392980 7.733 0.582 0.109 8.31 –2.16 –0.03 0.03 123 3.8 XY Cas 1 4.5016971 9.935 1.147 0.533 8.98 –3.02 –0.09 0.03 75 5.5 BD Cas 3 3.6508999 11.000 1.565 1.006 8.57 –2.78 –0.09 –0.07 86 4.9 CE CasA 1 5.1409001 10.922 1.171 0.556 9.55 –3.18 –0.04 –0.16 68 5.8 CE CasB 1 4.4793000 11.062 1.042 0.527 9.62 –3.02 –0.04 –0.03 75 5.4 CF Cas 5 4.8752198 11.136 1.174 0.553 9.70 –3.12 0.06 –0.01 71 5.7 CH Cas 1 15.0861902 10.973 1.650 0.894 9.60 –4.43 ... –0.08 33 10.0 CY Cas 1 14.3768597 11.641 1.738 0.963 10.06 –4.38 –0.04 0.06 34 9.7 DD Cas 1 9.8120270 9.876 1.188 0.450 9.60 –3.93 0.07 0.10 44 8.1 DF Cas 1 3.8324721 10.848 1.181 0.570 9.72 –2.84 ... 0.13 83 5.0 DL Cas 3 8.0006685 8.969 1.154 0.488 8.85 –3.69 –0.01 –0.01 51 7.3 FM Cas 1 5.8092842 9.127 0.989 0.325 8.94 –3.32 –0.21 –0.09 63 6.2 V379 Cas 2 4.3057499 9.053 1.139 0.600 8.59 –3.36 0.07 0.06 62 6.3 V636 Cas 8 8.3769999 7.199 1.365 0.666 8.24 –3.75 –0.18 0.07 49 7.4 V Cen 3 5.4938612 6.836 0.875 0.292 7.43 –3.26 –0.16 –0.01 65 6.0 XX Cen 1 10.9533701 7.818 0.983 0.266 7.00 –4.06 –0.03 0.16 41 8.5 AY Cen 1 5.3097501 8.830 1.009 0.295 7.42 –3.22 –0.15 0.01 67 5.9 AZ Cen 1 3.2119811 8.636 0.653 0.168 7.41 –3.01 –0.10 –0.05 75 5.4 BB Cen 1 3.9976599 10.073 0.953 0.377 7.12 –3.27 0.06 0.13 65 6.1 KK Cen 1 12.1802998 11.480 1.282 0.611 7.54 –4.18 0.01 0.12 38 9.0 KN Cen 1 34.0296402 9.870 1.582 0.797 6.40 –5.38 ... 
0.35 19 15.0 MZ Cen 1 10.3529997 11.531 1.570 0.869 6.53 –3.99 –0.13 0.20 43 8.3 QY Cen 1 17.7523994 11.762 2.150 1.447 6.64 –4.62 –0.09 0.16 30 10.8 V339 Cen 1 9.4659996 8.753 1.191 0.413 6.77 –3.89 –0.20 0.04 45 7.9 V378 Cen 1 6.4593000 8.460 1.035 0.376 7.05 –3.83 –0.09 –0.02 47 7.7 V381 Cen 1 5.0787802 7.653 0.792 0.195 7.24 –3.17 –0.09 0.02 69 5.8 V419 Cen 1 5.5069098 8.186 0.758 0.168 7.41 –3.64 –0.12 0.07 52 7.1 V496 Cen 1 4.4241900 9.966 1.172 0.541 7.06 –3.00 –0.16 0.00 75 5.4 V659 Cen 1 5.6217999 6.598 0.758 0.128 7.45 –3.28 0.00 0.07 64 6.1 V737 Cen 1 7.0658498 6.719 0.999 0.206 7.34 –3.55 –0.09 0.13 55 6.8 CP Cep 1 17.8589993 10.590 1.668 0.649 9.60 –4.63 –0.16 –0.01 30 10.9 CR Cep 1 6.2329640 9.656 1.396 0.709 8.44 –3.79 –0.09 –0.06 48 7.6 IR Cep 2 2.1141241 7.784 0.888 0.413 8.06 –2.53 0.04 0.05 99 4.4 V351 Cep 3 2.8060000 9.440 0.942 0.436 8.49 –2.86 0.07 0.02 82 5.1 Del Cep 18 5.3662701 3.954 0.657 0.075 7.97 –3.23 0.01 0.09 66 6.0 AV Cir 1 3.0651000 7.439 0.910 0.378 7.46 –2.96 –0.08 0.10 77 5.3 AX Cir 2 5.2733059 5.880 0.741 0.146 7.53 –3.21 –0.07 –0.06 67 5.9 BP Cir 1 2.3984001 7.560 0.702 0.224 7.35 –2.67 –0.18 –0.06 91 4.7 R Cru 1 5.8257499 6.766 0.772 0.183 7.54 –3.32 –0.11 0.08 63 6.2 S Cru 1 4.6895962 6.600 0.761 0.166 7.55 –3.07 –0.06 –0.12 73 5.6 T Cru 1 6.7332001 6.566 0.922 0.184 7.55 –3.49 –0.03 0.09 57 6.7 X Cru 1 6.2199702 8.404 1.001 0.272 7.20 –3.40 0.08 0.14 60 6.4 VW Cru 1 5.2652202 9.622 1.309 0.643 7.28 –3.21 –0.07 0.10 67 5.9 AD Cru 1 6.3978901 11.051 1.279 0.647 6.99 –3.43 –0.06 0.06 59 6.5 AG Cru 1 3.8372540 8.225 0.738 0.212 7.35 –2.84 –0.16 –0.13 83 5.0 BG Cru 2 3.3427200 5.487 0.606 0.132 7.69 –3.06 0.01 0.04 73 5.5 X Cyg 26 16.3863316 6.391 1.130 0.228 7.73 –4.53 0.07 0.10 31 10.4 SU Cyg 12 3.8454919 6.859 0.575 0.080 7.60 –2.84 –0.29 –0.03 83 5.0 SZ Cyg 1 15.1096420 9.432 1.477 0.571 8.06 –4.43 0.10 0.09 33 10.0 TX Cyg 2 14.7081566 9.511 1.784 1.130 7.87 –4.40 –0.25 0.07 34 9.9 VX Cyg 1 20.1334076 10.069 1.704 0.753 8.06 –4.77 0.17 0.09 27 11.5 VY Cyg 1 7.8569822 9.593 1.215 0.606 7.88 –3.67 0.06 0.00 51 7.2 VZ Cyg 1 4.8644528 8.959 0.876 0.266 8.13 –3.11 0.18 0.05 71 5.7 BZ Cyg 1 10.1419315 10.213 1.573 0.888 7.95 –3.97 ... 0.07 43 8.2 CD Cyg 16 17.0739670 8.947 1.266 0.493 7.47 –4.58 –0.03 0.11 31 10.6 DT Cyg 14 2.4990821 5.774 0.538 0.042 7.80 –2.72 0.01 0.10 89 4.8 Name No. Spectra $P$ (days) V $(B-V)$ E([*B–V*]{}) r, kpc Mv \[O/H\] \[Fe/H\] age, Myr Mass ----------- ------------- ------------ -------- --------- -------------- -------- ------- --------- ---------- ---------- ------ MW Cyg 1 5.9545860 9.489 1.316 0.635 7.55 –3.35 0.14 0.09 62 6.3 V386 Cyg 1 5.2576060 9.635 1.491 0.841 7.89 –3.21 –0.06 0.11 67 5.9 V402 Cyg 1 4.3648362 9.873 1.008 0.391 7.60 –2.99 0.11 0.02 76 5.4 V532 Cyg 1 3.2836120 9.086 1.036 0.494 7.98 –3.04 0.03 –0.04 74 5.5 V924 Cygs 1 5.5714722 10.710 0.847 0.261 7.53 –3.65 ... –0.09 52 7.2 V1154 Cyg 1 4.9254599 9.190 0.925 0.319 7.70 –3.13 0.00 –0.10 70 5.7 V1334 Cyg 11 3.3330200 5.871 0.504 0.025 7.86 –3.06 –0.13 0.03 73 5.5 V1726 Cyg 1 4.2370601 9.009 0.885 0.339 8.18 –3.34 ... 
–0.02 62 6.2 TX Dels 1 6.1659999 9.147 0.766 0.222 6.82 –3.39 0.16 0.24 60 6.4 Beta Dor 1 9.8424253 3.731 0.807 0.052 7.90 –3.93 –0.08 –0.01 44 8.1 W Gem 8 7.9137788 6.950 0.889 0.255 8.78 –3.68 –0.15 –0.01 51 7.2 RZ Gem 2 5.5292859 10.007 1.025 0.563 9.84 –3.26 –0.14 –0.19 65 6.0 AA Gem 2 11.3023281 9.721 1.061 0.309 11.55 –4.10 0.07 –0.27 40 8.6 AD Gem 2 3.7879801 9.857 0.694 0.173 10.48 –2.82 –0.31 –0.16 84 5.0 BB Gem 1 2.3080001 11.364 0.881 0.430 10.64 –2.25 –0.45 –0.09 117 3.9 DX Gem 1 3.1374860 10.746 0.936 0.430 10.76 –2.99 –0.28 –0.02 76 5.4 Zeta Gem 11 10.1507301 3.918 0.798 0.014 8.25 –3.97 –0.05 0.00 43 8.2 BB Her 4 7.5079999 10.090 1.100 0.392 6.05 –3.62 0.04 0.15 53 7.0 V Lac 1 4.9834681 8.936 0.873 0.335 8.48 –3.14 0.17 0.00 70 5.7 X Lac 1 5.4449902 8.407 0.901 0.336 8.48 –3.63 0.10 –0.02 53 7.1 Y Lac 9 4.3237758 9.146 0.731 0.207 8.42 –2.98 –0.26 –0.04 77 5.3 Z Lac 9 10.8856134 8.415 1.095 0.370 8.56 –4.05 –0.11 0.01 41 8.5 RR Lac 1 6.4162431 8.848 0.885 0.319 8.55 –3.44 0.09 0.00 59 6.5 BG Lac 3 5.3319321 8.883 0.949 0.300 8.16 –3.22 0.10 –0.01 67 5.9 GH Lup 1 9.2779484 7.635 1.210 0.335 6.94 –3.87 –0.04 0.08 46 7.8 V473 Lyr 2 1.4907800 6.182 0.632 0.025 7.72 –2.12 –0.24 –0.06 125 3.7 T Mon 20 27.0246487 6.124 1.166 0.181 9.15 –5.11 0.04 0.13 22 13.4 SV Mon 10 15.2327805 8.219 1.048 0.234 10.14 –4.44 –0.17 –0.02 33 10.0 TW Mon 2 7.0969000 12.575 1.339 0.663 13.61 –3.55 –0.35 –0.18 55 6.8 TX Mon 2 8.7017307 10.960 1.096 0.485 11.74 –3.79 –0.11 –0.08 48 7.6 TY Mon 1 4.0226951 11.740 1.158 0.572 11.07 –2.89 0.01 –0.06 80 5.2 TZ Mon 3 7.4280138 10.761 1.116 0.420 11.44 –3.61 0.20 –0.04 53 7.0 UY Mon 2 2.3979700 9.391 0.527 0.064 10.08 –2.67 –0.16 –0.13 91 4.7 WW Mon 2 4.6623101 12.505 1.128 0.605 12.94 –3.07 –0.41 –0.36 73 5.6 XX Mon 3 5.4564729 11.898 1.139 0.567 11.95 –3.25 –0.04 –0.09 66 6.0 AA Mon 1 3.9381640 12.707 1.409 0.792 11.36 –2.87 –0.19 –0.21 81 5.1 AC Mon 2 8.0142498 10.067 1.165 0.484 10.12 –3.70 –0.25 –0.22 51 7.3 BE Mon 1 2.7055099 10.578 1.134 0.622 9.36 –2.43 0.03 0.00 105 4.2 BV Mon 1 3.0149601 11.431 1.109 0.612 10.17 –2.56 ... –0.14 97 4.5 CU Mon 1 4.7078729 13.607 1.393 0.751 14.45 –3.08 –0.06 –0.26 72 5.6 CV Mon 2 5.3788981 10.299 1.297 0.722 9.46 –3.23 –0.12 –0.06 66 6.0 EE Mon 1 4.8089600 12.941 0.966 0.465 15.028 –3.10 ... 
–0.51 71 5.6 EK Mon 2 3.9579411 11.048 1.195 0.556 10.19 –2.88 –0.28 –0.06 81 5.1 FG Mon 1 4.4965901 13.310 1.209 0.651 13.94 –3.02 –0.46 –0.20 75 5.5 FI Mon 1 3.2878220 12.924 1.068 0.513 13.11 –2.66 –0.41 –0.18 92 4.7 V465 Mon 1 2.7131760 10.379 0.762 0.244 10.09 –2.44 0.22 0.03 105 4.2 V495 Mon 2 4.0965829 12.427 1.241 0.609 12.10 –2.92 –0.09 –0.20 79 5.2 V504 Mon 1 2.7740500 11.814 1.036 0.538 10.67 –2.84 –0.35 –0.31 83 5.0 V508 Mon 2 4.1336079 10.518 0.898 0.307 10.71 –2.93 –0.22 –0.21 79 5.2 V510 Mon 2 7.3071752 12.681 1.527 0.802 12.96 –3.59 –0.20 –0.17 54 6.9 V526 Mon 1 2.6749849 8.597 0.593 0.089 9.32 –2.80 –0.52 –0.13 85 5.0 R Mus 1 7.5104671 6.298 0.757 0.114 7.50 –3.62 –0.02 0.10 53 7.0 S Mus 1 9.6598749 6.118 0.833 0.212 7.56 –3.91 –0.19 –0.02 45 8.0 RT Mus 1 3.0861700 9.022 0.834 0.344 7.43 –2.59 0.02 0.02 96 4.5 TZ Mus 1 4.9448848 11.702 1.287 0.664 7.06 –3.13 –0.16 –0.01 70 5.7 UU Mus 1 11.6364098 9.781 1.150 0.399 7.05 –4.13 –0.06 0.05 39 8.8 S Nor 3 9.7542439 6.394 0.941 0.179 7.17 –3.92 –0.13 0.06 44 8.0 U Nor 1 12.6437101 9.238 1.576 0.862 6.82 –4.23 0.04 0.15 37 9.1 SY Nor 1 12.6456871 9.513 1.340 0.756 6.44 –4.23 0.21 0.31 37 9.1 TW Nor 1 10.7861795 11.704 1.930 1.157 5.84 –4.04 0.28 0.28 41 8.4 GU Nor 1 3.4528770 10.411 1.273 0.651 6.55 –2.72 0.15 0.15 89 4.8 V340 Nor 2 11.2869997 8.375 1.149 0.321 6.31 –4.09 0.07 0.05 40 8.6 Y Oph 14 17.1269073 6.169 1.377 0.645 7.42 –4.58 0.00 0.06 30 10.6 Name No. Spectra $P$ (days) V $(B-V)$ E([*B–V*]{}) r, kpc Mv \[O/H\] \[Fe/H\] age, Myr Mass ---------- ------------- ------------ -------- --------- -------------- -------- ------- --------- ---------- ---------- ------ BF Oph 2 4.0675101 7.337 0.868 0.235 7.12 –2.91 –0.08 0.03 80 5.2 RS Ori 6 7.5668812 8.412 0.945 0.352 9.36 –3.63 –0.11 –0.06 53 7.1 CS Ori 2 3.8893900 11.381 0.924 0.383 11.74 –2.85 –0.61 –0.28 82 5.1 GQ Ori 2 8.6160679 8.965 0.976 0.249 10.23 –3.78 –0.03 0.01 48 7.5 SV Per 1 11.1293182 9.020 1.029 0.408 10.09 –4.08 0.15 0.01 41 8.6 UX Per 1 4.5658150 11.664 1.027 0.512 11.10 –3.04 –0.42 –0.21 74 5.5 VX Per 9 10.8890400 9.312 1.158 0.475 9.63 –4.05 –0.18 –0.04 41 8.5 AS Per 1 4.9725161 9.723 1.302 0.674 9.15 –3.14 –0.02 0.10 70 5.7 AW Per 4 6.4635892 7.492 1.055 0.489 8.62 –3.45 –0.03 0.01 58 6.5 BM Per 4 22.9519005 10.388 1.793 0.871 10.85 –4.92 –0.21 0.00 25 12.3 HQ Per 2 8.6379299 11.595 1.234 0.564 12.90 –3.78 0.13 –0.31 48 7.6 MM Per 1 4.1184149 10.802 1.062 0.490 10.30 –2.92 –0.08 –0.01 79 5.2 V440 Per 10 7.5700002 6.282 0.873 0.260 8.47 –3.63 –0.12 –0.04 53 7.1 X Pup 7 25.9610004 8.460 1.127 0.402 9.73 –5.06 –0.22 –0.03 23 13.1 RS Pup 3 41.3875999 6.947 1.393 0.457 8.54 –5.60 0.10 0.17 17 16.5 VW Pup 1 4.2853699 11.365 1.065 0.489 10.34 –2.97 0.19 –0.19 77 5.3 VX Pup 1 3.0108700 8.328 0.610 0.129 8.64 –2.56 –0.05 –0.08 98 4.5 VZ Pup 2 23.1709995 9.621 1.162 0.459 10.40 –4.93 –0.10 –0.12 25 12.4 WW Pup 1 5.5167241 10.554 0.874 0.379 10.07 –3.26 ... 
–0.18 65 6.0 WX Pup 1 8.9370499 9.063 0.968 0.319 9.25 –3.82 –0.01 0.06 47 7.7 AD Pup 2 13.5939999 9.863 1.049 0.314 10.61 –4.31 0.03 –0.17 36 9.5 AP Pup 5 5.0842738 7.371 0.838 0.198 8.19 –3.17 –0.10 0.00 69 5.8 AQ Pup 3 30.1040001 8.791 1.423 0.518 9.49 –5.23 0.04 –0.09 21 14.1 AT Pup 1 6.6648788 7.957 0.783 0.191 8.41 –3.48 –0.31 –0.14 57 6.6 BC Pup 2 3.5443399 13.841 - 0.800 12.44 –2.75 –0.55 –0.23 87 4.8 BN Pup 2 13.6731005 9.882 1.186 0.416 9.92 –4.32 0.00 0.02 35 9.5 CE Pup 1 49.5299988 11.959 1.745 0.740 14.74 –5.81 –0.32 –0.05 15 18.1 HW Pup 3 13.4540005 12.050 1.237 0.688 12.33 –4.30 –0.23 –0.23 36 9.4 MY Pup 2 5.6953092 5.677 0.631 0.061 8.03 –3.68 –0.15 –0.11 51 7.2 NT Pup 1 15.5649996 12.144 1.389 0.670 12.42 –4.47 –0.39 –0.15 32 10.1 V335 Pup 1 4.8609848 8.717 0.759 0.154 9.19 –3.50 0.13 –0.01 57 6.7 S Sge 9 8.3820858 5.622 0.805 0.100 7.55 –3.75 –0.06 0.08 49 7.4 U Sgr 12 6.7452288 6.695 1.087 0.403 7.32 –3.50 0.03 0.08 57 6.7 W Sgr 8 7.5949039 4.668 0.746 0.111 7.51 –3.63 –0.14 0.02 52 7.1 Y Sgr 12 5.7733798 5.744 0.856 0.191 7.42 –3.31 –0.18 0.05 63 6.2 VY Sgr 1 13.5572004 11.511 1.941 1.221 5.58 –4.31 0.18 0.26 36 9.5 WZ Sgr 12 21.8497887 8.030 1.392 0.431 5.96 –4.86 0.00 0.19 26 12.0 XX Sgr 1 6.4241400 8.852 1.107 0.521 6.63 –3.44 0.07 0.10 59 6.5 YZ Sgr 8 9.5536871 7.358 1.032 0.281 6.80 –3.90 –0.12 0.06 45 7.9 AP Sgr 1 5.0579162 6.955 0.807 0.178 7.10 –3.16 –0.23 0.10 69 5.8 AV Sgr 1 15.4150000 11.391 1.999 1.206 5.48 –4.46 0.36 0.34 33 10.1 BB Sgr 1 6.6371021 6.947 0.987 0.281 7.14 –3.48 –0.13 0.08 57 6.6 V350 Sgr 1 5.1541781 7.483 0.905 0.299 7.06 –3.18 0.23 0.18 68 5.8 RV Sco 2 6.0613060 7.040 0.955 0.349 7.20 –3.37 –0.03 0.05 61 6.3 RY Sco 1 20.3201447 8.004 1.426 0.718 6.67 –4.78 0.06 0.09 27 11.6 KQ Sco 1 28.6896000 9.807 1.934 0.869 5.41 –5.18 0.21 0.16 22 13.8 V482 Sco 1 4.5278072 7.965 0.975 0.336 6.94 –3.03 –0.05 0.07 74 5.5 V500 Sco 5 9.3168392 8.729 1.276 0.593 6.53 –3.87 –0.12 0.01 46 7.8 V636 Sco 1 6.7968588 6.654 0.936 0.207 7.15 –3.50 –0.08 0.07 57 6.7 V950 Sco 1 3.3804500 7.302 0.775 0.254 7.10 –3.07 –0.05 0.11 72 5.6 Z Sct 1 12.9013252 9.600 1.330 0.492 5.52 –4.25 0.16 0.29 37 9.2 SS Sct 1 3.6712530 8.211 0.944 0.325 7.03 –2.79 –0.04 0.06 85 4.9 UZ Sct 1 14.7441998 11.305 1.784 1.020 5.12 –4.40 0.49 0.33 34 9.9 EV Sct 1 3.0909901 10.137 1.160 0.679 6.54 –2.97 ... –0.02 77 5.3 EW Sct 3 5.8232999 7.979 1.725 1.074 7.57 –3.32 –0.04 0.04 63 6.2 V367 Sct 1 6.2930698 11.596 1.769 1.231 6.43 –3.41 0.53 –0.01 60 6.4 BQ Ser 3 4.2708998 9.501 1.399 0.815 7.17 –2.96 –0.13 –0.04 77 5.3 ST Tau 4 4.0342989 8.217 0.847 0.368 8.83 –2.90 –0.12 –0.05 80 5.2 SZ Tau 16 3.1483800 6.531 0.844 0.295 8.39 –2.99 –0.03 0.07 76 5.4 AE Tau 1 3.8964500 11.679 1.129 0.575 11.33 –2.86 –0.17 –0.19 82 5.1 AV Tau 1 3.6158099 12.338 1.376 0.892 10.67 –2.77 ... –0.09 86 4.9 EF Tau 1 3.4481499 13.113 0.931 0.360 16.32 –2.71 –0.23 –0.74 89 4.8 EU Tau 2 2.1024799 8.093 0.664 0.164 8.93 –2.52 –0.05 –0.06 100 4.4 Name No. 
Spectra $P$ (days) V $(B-V)$ E([*B–V*]{}) r, kpc Mv \[O/H\] \[Fe/H\] age, Myr Mass --------- ------------- ------------ -------- --------- -------------- -------- ------- --------- ---------- ---------- ------ R TrA 1 3.3892870 6.660 0.722 0.142 7.48 –2.69 –0.08 0.06 90 4.7 S TrA 1 6.3234649 6.397 0.752 0.084 7.28 –3.42 –0.13 0.12 59 6.5 LR TrA 1 2.4549999 7.808 0.781 0.268 7.29 –2.70 0.03 0.25 90 4.7 Alp UMi 1 3.9696000 1.982 0.598 0.000 7.96 –3.26 0.12 0.10 65 6.0 T Vel 4 4.6398191 8.024 0.922 0.289 8.05 –3.06 –0.05 –0.02 73 5.5 V Vel 2 4.3710432 7.589 0.788 0.186 7.85 –2.99 –0.25 –0.21 76 5.4 RY Vel 4 28.1357002 8.397 1.352 0.547 7.73 –5.16 –0.03 0.05 22 13.6 RZ Vel 4 20.3982391 7.079 1.120 0.299 8.22 –4.78 –0.03 0.04 27 11.6 ST Vel 2 5.8584251 9.704 1.195 0.479 8.18 –3.33 –0.26 0.02 62 6.2 SV Vel 1 14.0970697 8.524 1.054 0.373 7.59 –4.35 –0.16 0.08 35 9.7 SW Vel 5 23.4410000 8.120 1.162 0.344 8.43 –4.94 –0.11 –0.10 25 12.4 SX Vel 4 9.5499296 8.251 0.888 0.263 8.24 –3.90 –0.03 –0.02 45 7.9 XX Vel 1 6.9845700 10.654 1.162 0.545 7.71 –3.54 –0.29 –0.05 56 6.8 AE Vel 1 7.1335702 10.262 1.243 0.635 7.98 –3.56 –0.03 0.05 55 6.9 AH Vel 3 4.2272310 5.695 0.579 0.070 8.00 –3.33 0.00 0.05 62 6.2 AX Vel 1 3.6731000 8.197 0.691 0.224 8.11 –2.79 ... –0.08 85 4.9 BG Vel 2 6.9236550 7.635 1.175 0.426 7.92 –3.53 0.01 –0.02 56 6.8 CS Vel 1 5.9047399 11.681 1.448 0.737 8.20 –3.34 –0.01 0.08 62 6.2 CX Vel 1 6.2554250 11.374 1.413 0.723 8.36 –3.41 –0.30 0.06 60 6.4 DK Vel 1 2.4816401 10.614 0.774 0.287 8.13 –2.33 0.03 –0.02 111 4.0 DR Vel 2 11.1992998 9.520 1.518 0.656 8.04 –4.08 –0.02 0.08 40 8.6 EX Vel 1 13.2341003 11.562 1.561 0.775 8.87 –4.28 –0.11 0.05 36 9.4 EZ Vel 2 34.5345993 12.448 1.716 0.822 12.51 –5.39 –0.01 –0.08 19 15.1 FG Vel 1 6.4531999 11.814 1.493 0.810 8.29 –3.44 –0.06 –0.05 59 6.5 FN Vel 1 5.3242202 10.292 1.186 0.588 7.85 –3.22 –0.17 0.06 67 5.9 S Vul 4 68.4639969 8.962 1.892 0.727 7.07 –6.19 –0.20 –0.01 12 21.3 T Vul 12 4.4354620 5.754 0.635 0.064 7.76 –3.01 –0.09 0.01 75 5.4 U Vul 7 7.9906292 7.128 1.275 0.603 7.58 –3.69 –0.04 0.09 51 7.3 X Vul 6 6.3195429 8.849 1.389 0.742 7.53 –3.42 –0.03 0.07 59 6.5 SV Vul 23 44.9947739 7.220 1.442 0.461 7.26 –5.70 –0.01 0.05 16 17.2 \ \[lastpage\] [^1]: By the way, they also suggested that, in spiral and elliptical galaxies, SNe Ia appear to have different origin. [^2]: DTD function is the probability distribution of the time period between the SNe progenitor birth and explosion. [^3]: Acharova et al. (2010) showed that SNe Ia produce only about 2 per cent of oxygen (see also Matteucci 2004 and Tsujimoto et al. 1995). That is why we neglect by the contribution of SNe Ia to synthesis of oxygen. [^4]: The upper limit for $\tau$ is dictated by the least mass ($\sim$0.8 M$_{\odot}$) of a white dwarf companion in order the binary system results in SNe Ia outburst (see Greggio 2005; Matteucci et al. 2006). So, there is no any contradiction with the shorter age of the Universe. [^5]: The increase of $\Delta^{\rm O}_m$ relative to the corresponding value of Acharova et al. (2011) is associated with that the weight $w_i > 1$. [^6]: We neglect by the mass of iron which have fallen on to the Galactic disc with the infall gas during the life of the Galaxy since this mass happens to be $\sim~2.2\cdot 10^5$ M$_{\odot}$ and is much less than the mass of iron produced by SNe.
{ "pile_set_name": "ArXiv" }
--- abstract: 'We study the linear polarization of the radio cores of eight blazars simultaneously at 22, 43, and 86 GHz with observations obtained by the Korean VLBI Network (KVN) in three epochs between late 2016 and early 2017 in the frame of the Plasma-physics of Active Galactic Nuclei (PAGaN) project. We investigate the Faraday rotation measure (RM) of the cores; the RM is expected to increase with observing frequency if core positions depend on frequency due to synchrotron self-absorption. We find a systematic increase of RMs at higher observing frequencies in our targets. The RM–$\nu$ relations follow power-laws with indices distributed around 2, indicating conically expanding outflows serving as Faraday rotating media. Comparing our KVN data with contemporaneous optical polarization data from the Steward Observatory for a few sources, we find indication that the increase of RM with frequency saturates at frequencies of a few hundreds GHz. This suggests that blazar cores are physical structures rather than simple $\tau=1$ surfaces. A single region, e.g. a recollimation shock, might dominate the jet emission downstream of the jet launching region. We detect a sign change in the observed RMs of CTA 102 on a time scale of $\approx$1 month, which might be related to new superluminal components emerging from its core undergoing acceleration/deceleration and/or bending. We see indication for quasars having higher core RMs than BL Lac objects, which could be due to denser inflows/outflows in quasars.' author: - 'Jongho Park, Minchul Kam, Sascha Trippe, Sincheol Kang, Do-Young Byun, Dae-Won Kim, Juan-Carlos Algaba, Sang-Sung Lee, Guang-Yao Zhao, Motoki Kino, Naeun Shin, Kazuhiro Hada, Taeseok Lee, Junghwan Oh, Jeffrey A. Hodgson, and Bong Won Sohn' title: 'Revealing the Nature of Blazar Radio Cores through Multi-Frequency Polarization Observations with the Korean VLBI Network' --- =1 Introduction \[sect1\] ====================== Blazars, characterized by violent flux variability across the entire electromagnetic spectrum, are a sub-class of active galactic nuclei (AGNs) which show highly collimated, one-sided relativistic jets (see @UP1995 for a review). Large-scale magnetic fields which are strongly twisted in the inner part of the accretion disc or the black hole’s ergosphere play a crucial role in launching and powering of relativistic jets [@BZ1977; @BP1982]. Jets appear to be gradually accelerated and collimated magneto-hydrodynamically [@VK2004; @komissarov2007; @komissarov2009; @AN2012; @TT2013; @hada2013; @asada2014; @mertens2016; @hada2017; @walker2018] and they are directly linked to accretion process onto supermassive black holes [@marscher2002; @chatterjee2009; @chatterjee2011; @ghisellini2014; @PT2017]. Their parsec-scale radio morphology is characterized by (a) the ‘VLBI core’, a (radio) bright, optically thick, compact feature, and (b) an extended, optically thin, jet (e.g., @fromm2013). The nature of the core is a matter of ongoing debate. The standard Blandford & Königl jet model describes the core as the upstream region where the conical jet becomes optically thin, i.e., at unity optical depth (e.g., @BK1979). In this scenario, the observed core position shifts closer to the physical location of the jet base at higher observing frequencies – the well-known ‘core shift effect’ [@lobanov1998]. 
Core shift has been observed in blazars (e.g., @OG2009b [@sokolovsky2011; @algaba2012; @pushkarev2012; @fromm2013; @hovatta2014]) as well as in nearby radio galaxies (e.g., @hada2011 [@martividal2011]), supporting the idea that the radio core marks the transition between the optically thick and thin jet regimes. However, the plain ‘optical depth interpretation’ of the radio core ignores the physical structure of AGN jets. Especially, a standing conical shock, located at the end of the jet acceleration and collimation zone (e.g., @marscher2008), is expected (see also @PC2013a [@PC2013b] for a discussion of the transition region from a parabolic to a conical jet shape that is dominating jet synchrotron emission of blazars). Such a (quasi-)stationary feature – a ‘recollimation shock’ – may appear when there is a mismatch between the gas pressures in the jet and the confining medium (e.g., @DM1988 [@gomez1995; @gomez1997; @agudo2001; @mizuno2015; @marti2016]). Observations of the nearby radio galaxy M87 indeed reveal a stationary feature (known as HST-1) at the end of the jet collimation region [@AN2012], showing blazar-like activity such as rapid variability and high energy emission [@cheung2007]. In addition, recent studies have discovered that most $\gamma$-ray flares in blazars occur when new (apparently) superluminal jet components pass through the core (@JM2016, see also @ramakrishnan2014 [@casadio2015; @rani2015] for the case of individual sources and @jorstad2001 [@leontavares2011] for investigation of statistical significance between the two phenomena). This indicates that the core supplies the jet plasma electrons with large amounts of energy, with the possible formation of a shock as the source of high energy emission. At first glance, these two models and corresponding observational support seem to be in contradiction. This conflict is resolved if the core consists of (a) a standing shock which is optically thin only at (sub)-mm wavelengths, plus (b) extended jet flows downstream of the shock. In this case, there is no core shift expected at millimeter wavelengths where the core becomes transparent. Interestingly, a recent study which used a bona-fide astrometric technique showed that the core shift between 22 and 43 GHz for BL Lacertae is significantly smaller than the expected one from lower frequency data, indicating that the core at these frequencies might be identified with a recollimation shock [@dodson2017]. However, a number of previous studies did not find such a trend at the same frequencies (e.g., @OG2009b [@algaba2012; @fromm2013]). This might be because the core position accuracy of previous VLBI observations is comparable to the expected amount of core shift at those frequencies. 
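To make the last point concrete, the following sketch evaluates the expected core shift at the KVN frequencies under the simplest assumptions: the $r_{\rm core}\propto\nu^{-1}$ scaling corresponds to the equipartition case of [@lobanov1998], and the 0.1 mas normalization between 8.1 and 15.4 GHz is a purely illustrative value of the order of the cm-wavelength shifts reported in the studies cited above.

```python
# Illustrative only: how quickly the expected core shift shrinks with frequency.
# We assume r_core(nu) = A / nu (i.e. r_core ∝ nu^(-1/k_r) with k_r = 1, the
# synchrotron self-absorption / equipartition case) and normalize A with a
# *hypothetical* shift of 0.1 mas between 8.1 and 15.4 GHz.
def core_shift(nu1_ghz, nu2_ghz, a_mas_ghz):
    """Core shift (mas) between two frequencies for r_core = A / nu."""
    return a_mas_ghz * (1.0 / nu1_ghz - 1.0 / nu2_ghz)

A = 0.1 / (1.0 / 8.1 - 1.0 / 15.4)   # ~1.7 mas GHz for the assumed normalization

for nu1, nu2 in [(8.1, 15.4), (22.0, 43.0), (43.0, 86.0)]:
    print(f"{nu1:5.1f} -> {nu2:5.1f} GHz : {1e3 * core_shift(nu1, nu2, A):6.1f} micro-arcsec")

# Output (approximately):
#   8.1 -> 15.4 GHz : 100 micro-arcsec (the assumed normalization)
#  22.0 -> 43.0 GHz :  ~38 micro-arcsec
#  43.0 -> 86.0 GHz :  ~20 micro-arcsec
```

Shifts of a few tens of micro-arcseconds are indeed comparable to the core-position accuracy of typical (non-astrometric) VLBI observations at these frequencies, which is the point made above.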
[cccccccc]{}\[t!\] & & 21.650 GHz & OJ 287 & 1.22 & 3C 279, OJ 287 & -28.79 & 1.04\ & & 86.600 GHz & 3C 84 & 2.25 & 3C 279, OJ 287 & 112.76 & 0.90\ & & 43.300 GHz & OJ 287 & 0.70 & 3C 279, OJ 287 & -19.52 & 2.37\ & & 86.600 GHz & OJ 287 & 1.86 & 3C 279, OJ 287 & 125.86 & 2.65\ & & 21.650 GHz & 3C 84 & 0.57 & 3C 279, OJ 287 & 11.53 & 0.94\ & & 86.600 GHz & OJ 287 & 1.25 & 3C 279, OJ 287 & 5.55 & 1.80\ & & 43.100 GHz & 3C 84 & 1.07 & 3C 279, OJ 287 & 54.29 & 0.33\ & & 129.300 GHz & $-$ & $-$ & $-$ & $-$ & $-$\ & & 21.650 GHz & 3C 84 & 1.04 & 3C 279, OJ 287 & -28.72 & 0.87\ & & 86.600 GHz & OJ 287 & 1.59 & 3C 279, OJ 287, 3C 454.3 & 32.42 & 1.04\ & & 43.100 GHz & OJ 287 & 0.90 & 3C 279, OJ 287, BL LAC & 9.07 & 1.37\ & & 129.300 GHz & $-$ & $-$ & $-$ & $-$ & $-$ \[Information\] An alternative route is provided by multi-frequency polarimetric observations of the core that provide RMs, defined as $\rm EVPA_{obs} = EVPA_{int} + RM\lambda^2$, where $\rm EVPA_{obs}$ and $\rm EVPA_{int}$ are observed and intrinsic electric vector position angles (EVPAs) of linearly polarized emission and $\lambda$ is observing wavelength. If the core is the $\tau = 1$ surface of a continuous conical jet and the jet is in a state of energy equipartition, then the core RM obeys the relation $|{\rm RM_{\rm core, \nu}}| \propto \nu^a$, where $a$ is the power-law index of the electron density distribution given by $N_e \propto d^{-a}$, with $d$ being the distance from the jet base [@jorstad2007]. In this scenario, we observe polarized emission from regions closer to the jet base at higher frequencies due to the core shift effect, where one may expect higher particle densities and stronger magnetic fields. Looking at this argument the other way around, we would expect *no increase* in RM as a function of frequency at millimeter wavelengths *if* the core is indeed a standing recollimation shock. This provides the opportunity to uncover the nature of blazar VLBI cores, and thus the intrinsic structure of blazar jets, through multi-frequency polarimetric observations at millimeter wavelengths. At centimeter wavelengths, many studies showed that the power law index $a$ is usually distributed around $a = 2$ (e.g., @OG2009a [@algaba2013; @kravchenko2017]), corresponding to a spherical or conical outflow [@jorstad2007]. A conical outflow is more likely than a spherical one because @pushkarev2017 showed that conical jet geometries are common in blazars. $a\approx2$ found in many blazars is in agreement with the fact that many blazars show core shift at these wavelengths. However, to the best of our knowledge, there are only a few studies of the core RM of blazars at (sub-)mm frequencies. [@jorstad2007] analyzed 7, 3, and 1 mm polarization data and obtained an average $\langle a \rangle = 1.8 \pm 0.5$ by comparing with other studies done at cm wavelengths. This result indicates that the dependence of RM on observing frequency might continue up to mm wavelengths. Some of their sources are not fitted well by $\lambda^2$ laws even at the highest frequencies, indicating that a frequency dependence of RM exists even at around 1 mm. Another study using the IRAM 30-m telescope at 3 and 1 mm found RMs (a few times $10^4 \rm\ rad/m^2$) that are much larger than those at cm wavelengths (a few hundred $\rm rad/m^2$, @hovatta2012), albeit within large errors (@agudo2014, see also @agudo2018a [@agudo2018b; @thum2018]). 
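To make the expected frequency dependence concrete, the following toy calculation propagates $|{\rm RM_{\rm core, \nu}}| \propto \nu^{a}$ with $a = 2$ — an index typical of the cm-wavelength studies cited above — to the KVN bands and to a representative mm frequency; the normalization of $2\times10^{3}\rm\ rad\ m^{-2}$ at 43 GHz is purely hypothetical.

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def core_rm(nu_hz, rm_ref, nu_ref_hz=43.3e9, a=2.0):
    """|RM_core(nu)| = rm_ref * (nu / nu_ref)^a  (toy model of the scaling above)."""
    return rm_ref * (nu_hz / nu_ref_hz) ** a

freqs = np.array([21.65e9, 43.30e9, 86.60e9, 229.0e9])   # KVN bands plus ~1.3 mm
lam2 = (C / freqs) ** 2
rm = core_rm(freqs, rm_ref=2e3)                          # hypothetical normalization
rotation_deg = np.rad2deg(rm * lam2)                     # Faraday rotation accumulated per band

for nu, r, chi in zip(freqs, rm, rotation_deg):
    print(f"{nu/1e9:6.1f} GHz : |RM| = {r:9.2e} rad/m^2, EVPA rotation = {chi:4.1f} deg")
# -> |RM| grows from 5e2 to ~6e4 rad/m^2, while the rotation stays at ~5.5 deg.
```

Note that for $a = 2$ the accumulated rotation ${\rm RM}\,\lambda^{2}$ is the same in every band; only a departure from $a = 2$ changes the net EVPA rotation from band to band.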
A recent observation with the Atacama Large Millimeter Array (ALMA) at 1 mm has revealed a very high rotation measure of $(3.6\pm0.3)\times10^5\rm \ rad/m^2$ in 3C 273 with the core RM scaling with frequency like $\rm|RM|\propto\nu^{1.9\pm0.2}$ from cm to mm wavelengths. [@martividal2015] observed even larger RMs ($\approx 10^8 \rm\ rad/m^2$ in the rest frame) in the gravitationally lensed quasar PKS 1830–211 through ALMA observations at up to 300 GHz (about 1 THz in the rest frame). These results may suggest that (i) blazar core RMs rapidly increase as a function of frequency, as predicted by [@jorstad2007]; (ii) polarized (sub-)mm radiation might originate near the jet base, not from a recollimation shock (which presumably is located quite far from the jet base). However, it is uncertain whether this is a common behaviour of blazars or if these quasars are special. Therefore, a systematic study of blazar core RMs with multi-frequency polarimetric observations at (sub)mm wavelengths is necessary. The Korean VLBI Network (KVN) has the unique capability of observing simultaneously at four frequencies, 22, 43, 86, and 129 GHz, or at two of these frequencies in dual polarization mode [@lee2011; @lee2014]. Thanks to the simultaneous observation at multiple frequencies, one can overcome rapid phase variations at high frequencies caused by tropospheric delay that reduce the coherence time by applying the fringe solutions obtained at lower frequencies to higher ones, i.e., frequency phase transfer (FPT) [@rioja2011; @rioja2014; @algaba2015; @zhao2018]. This technique increases the fringe detection rate to values larger than 80% even at 129 GHz for sources brighter than $\approx0.5$ Jy, making the KVN a powerful instrument for multi-frequency mm polarimetry of AGNs. In early 2017, we launched a KVN large program, the Plasma-physics of Active Galactic Nuclei (PAGaN) project (see @kim2015 [@oh2015] for related studies), for monitoring about 14 AGNs at the four KVN frequencies in dual polarization mode almost every month.[^1] One of the main scientific goals of the project is a systematic study of RMs of blazars at mm wavelengths and their evolution in time. In this paper, we present the results from three observation epochs located between late 2016 and early 2017, which were performed as test observations for the initiation of the large program. We describe observations and data calibration in Section \[sect2\]. Results are shown and discussed in Section \[sect3\] and \[sect4\], respectively. In Section \[sect5\], we summarize our findings. 
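The FPT technique mentioned above can be illustrated in a few lines; this is only a schematic of the underlying idea (in practice the transfer is applied to fringe-fit solutions within AIPS, as described in Section \[sect2\]), and the phase time series below are invented for demonstration.

```python
import numpy as np

def frequency_phase_transfer(phase_low, nu_low_hz, nu_high_hz, phase_high):
    """
    Schematic frequency phase transfer (FPT): the rapid tropospheric phase is
    non-dispersive, i.e. proportional to frequency, so the low-frequency fringe
    solutions scaled by R = nu_high / nu_low remove it from the high-frequency
    phases, leaving only slowly varying (ionospheric/instrumental) residuals.
    """
    ratio = nu_high_hz / nu_low_hz
    return np.angle(np.exp(1j * (phase_high - ratio * phase_low)))  # wrapped residual

# Toy example for the KVN 21.65/86.6 GHz pair (R = 4):
t = np.linspace(0.0, 600.0, 601)                       # seconds
tropo_22 = 0.8 * np.sin(2.0 * np.pi * t / 45.0)        # fake rapid tropospheric phase (rad) at 22 GHz
phase_86 = 4.0 * tropo_22 + 0.05                       # same troposphere at 86 GHz plus a slow 0.05 rad term
residual = frequency_phase_transfer(tropo_22, 21.65e9, 86.6e9, phase_86)
print(residual.std())   # ~0: the 86 GHz phases are coherent over the full 10 minutes
```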
-100mm [&gt;R &gt;R &gt;R &gt;R &gt;S&gt;S &gt;S &gt;S &gt;S &gt;S &gt;S &gt;S&gt;S&gt;S&gt;S&gt;S &gt;S ]{} & & & & & & & & & & & & &\ & & & & & & & $m [\%]$ & $\chi$ \[$^{\circ}$\] & $m$ & $\chi$ & $m$ & $\chi$ & & & (1) & (2) & (3) & & (4) & (5) & & & & & & & (6) & (7) & (8) & (9) &&&&12/09-10/16&$-0.4\pm0.2$&$-0.3\pm0.2$&9.9$\pm$0.1&26$\pm$1&10.3$\pm$0.5&33$\pm$3&10.5$\pm$2.2&37$\pm$2&-0.05&(-2.0$\pm$1.0)$\times10^3$&(-4.0$\pm$4.5)$\times10^3$&1.0$\pm$1.8&&&&01/16-17/17&$-0.3\pm0.2$&$-0.7\pm0.2$&8.8$\pm$0.1&28$\pm$1&8.8$\pm$0.4&31$\pm$2&8.2$\pm$0.9&38$\pm$3&0.03&(-6.5$\pm$6.5)$\times10^2$&(-8.4$\pm$4.1)$\times10^3$&3.7$\pm$1.6&&&&03/22-24/17&$-0.1\pm0.2$&$-0.4\pm0.2$&8.6$\pm$0.1&30$\pm$1&11.2$\pm$0.2&36$\pm$2&12.4$\pm$0.4&49$\pm$3&-0.31&(-1.7$\pm$0.6)$\times10^3$&(-1.4$\pm$0.4)$\times10^4$&3.1$\pm$0.6&&&&12/09-10/16&$-0.2\pm0.2$&$-0.4\pm0.2$&4.6$\pm$0.1&-2$\pm$1&6.1$\pm$0.1&-1$\pm$2&7.4$\pm$0.5&-2$\pm$2&-0.38&(-3.8$\pm$5.6)$\times10^2$&(1.2$\pm$2.5)$\times10^3$&1.7$\pm$3.7&&&&01/16-17/17&$0.0\pm0.2$&$-0.7\pm0.2$&6.6$\pm$0.1&-26$\pm$1&8.3$\pm$0.1&-33$\pm$1&10.9$\pm$0.1&-36$\pm$2&-0.73&(1.5$\pm$0.2)$\times10^3$&(2.8$\pm$1.6)$\times10^3$&1.0$\pm$0.7[^2]&&&&03/22-24/17&$-0.2\pm0.2$&$-0.3\pm0.2$&5.0$\pm$0.1&-63$\pm$1&7.6$\pm$0.1&-63$\pm$1&8.3$\pm$0.2&-57$\pm$1&-0.43&(-0.3$\pm$3.6)$\times10^2$&(-4.8$\pm$1.5)$\times10^3$&7.1$\pm$14.6&&&&12/09-10/16&$0.3\pm0.2$&$0.3\pm0.2$&1.6$\pm$0.1&-83$\pm$3&1.5$\pm$0.2&-91$\pm$5&1.8$\pm$0.4&-117$\pm$11&0.02&(4.3$\pm$2.6)$\times10^3$&(5.2$\pm$2.4)$\times10^4$&3.6$\pm$1.1&&&&01/16-17/17&$0.4\pm0.2$&$0.1\pm0.2$&1.1$\pm$0.1&-63$\pm$2&1.4$\pm$0.2&0$\pm$3&1.4$\pm$0.1&36$\pm$8&-0.13&(-3.2$\pm$0.2)$\times10^4$&(-7.2$\pm$1.6)$\times10^4$&1.2$\pm$0.3&&&&12/09-10/16&$-0.3\pm0.2$&$-0.6\pm0.2$&2.6$\pm$0.2&-111$\pm$2&3.9$\pm$0.2&-74$\pm$3&6.6$\pm$0.8&-62$\pm$3&-0.65&(-1.2$\pm$0.1)$\times10^4$&(-1.4$\pm$0.5)$\times10^4$&0.3$\pm$0.5&&&&01/16-17/17&$-0.2\pm0.2$&$-0.7\pm0.2$&3.7$\pm$0.1&-88$\pm$1&5.8$\pm$0.2&-74$\pm$1&9.0$\pm$0.7&-61$\pm$2&-0.65&(-4.4$\pm$0.5)$\times10^3$&(-1.6$\pm$0.3)$\times10^4$&1.9$\pm$0.3&&&&12/09-10/16&$0.0\pm0.2$&$-0.2\pm0.2$&4.2$\pm$0.1&2$\pm$1&3.2$\pm$0.1&-1$\pm$3&2.2$\pm$0.5&-11$\pm$3&0.42&(4.5$\pm$6.2)$\times10^2$&(8.8$\pm$3.6)$\times10^3$&4.3$\pm$2.1&&&&03/22-24/17&$-0.1\pm0.2$&$-0.4\pm0.2$&4.6$\pm$0.2&-10$\pm$2&2.9$\pm$0.1&-22$\pm$2&2.5$\pm$0.8&-30$\pm$6&0.63&(2.6$\pm$0.5)$\times10^3$&(6.5$\pm$5.1)$\times10^3$&1.3$\pm$1.20235+164&FSRQ&0.94&25&03/22-24/17&$0.1\pm0.2$&$0.0\pm0.2$&2.8$\pm$0.1&48$\pm$1&3.3$\pm$0.1&51$\pm$2&2.4$\pm$0.3&59$\pm$3&-0.09&(-1.4$\pm$1.0)$\times10^3$&(-1.5$\pm$0.6)$\times10^4$&3.5$\pm$1.2BLLAC&BLO&0.07&4&03/22-24/17&$0.0\pm0.2$&$-0.9\pm0.2$&3.5$\pm$0.1&38$\pm$1&4.1$\pm$0.3&31$\pm$2&3.8$\pm$0.4&19$\pm$3&-0.11&(1.0$\pm$0.3)$\times10^3$&(6.4$\pm$1.9)$\times10^3$&2.7$\pm$0.61633+38&FSRQ&1.81&27&03/22-24/17&$-0.2\pm0.2$&$-0.2\pm0.2$&1.5$\pm$0.2&31$\pm$3&1.4$\pm$0.2&24$\pm$4&2.2$\pm$0.7&18$\pm$6&-0.09&(6.3$\pm$4.8)$\times10^3$&(2.4$\pm$2.7)$\times10^4$&1.9$\pm$2.0 Observations and Data Reduction {#sect2} =============================== We observed a total of 11 sources in the 22, 43, and 86 GHz bands with the KVN on 2016 December 9–10 and in the four bands including the 129 GHz band on 2017 January 16–17 and 2017 March 22–24 with observation time of $\approx48$ hours for each epoch. Since KVN can observe at two frequencies simultaneously in dual polarization mode, we allocated the first half of the observing time to 22/86 GHz observations and the other half to 43/129 GHz. 
Although we obtained the data at 129 GHz in the two epochs observations, we had a difficulty in polarization calibration of the data and thus we did not include them in this paper. More sophisticated investigation of the 129 GHz data will be presented in a forthcoming paper (Kam et al. in preparation). All sources were observed in 6–15 scans of 5–20 minutes in length, depending on source declination and brightness. We performed cross-scan observations at least twice per hour to correct antenna pointing offsets that might lead to inaccurate correlated amplitudes. The received signals were 2-bit quantized and divided into 4 sub-bands (IFs) of 16 MHz bandwidth for each polarization and each frequency. Mark 5B recorders were used at recording rates of 1024 Mbps. The data were correlated with the DiFX software correlator in the Korea-Japan Correlator Center [@lee2015a]. Table \[Information\] summarizes our observations. A standard data post-correlation process was performed with the NRAO Astronomical Image Processing System (AIPS). Potential effects of digital sampling on the amplitudes of cross-correlation spectra were estimated by the AIPS task ACCOR. Amplitude calibration was done by using the antennas gain curves and opacity corrected system temperatures provided by the observatory. The fringe amplitudes were re-normalized by taking into account potential amplitude distortion due to quantization, and the quantization and re-quantization losses [@lee2015b]. The instrumental delay residuals were removed by using the data in a short time range of bright calibrators, either 3C 279 or 3C 454.3. To apply the FPT technique, a global fringe fitting was performed with a solution interval of 10 seconds for the lower frequency first (22 or 43 GHz), which led us to very high fringe detection rates $\gtrsim 95\%$ in most cases. Then, we transferred the obtained fringe solutions to the simultaneously observed higher frequency (86 GHz). This process corrects rapidly varying tropospheric errors in the visibility phases at high frequencies (though not the ionospheric errors that vary more slowly). Then, the residual phases have much longer coherence times, typically larger than a few minutes. Thus, we performed a global fringe fitting with a much longer solution interval of $\approx3$ minutes for the high frequency data, which resulted in quite high fringe detection rates – usually larger than 95% at 86 GHz for our sources. Bandpass calibration was performed by using scans on bright sources such as 3C 279. The cross-hand R-L phase and delay offsets were calibrated by using the data for bright sources, such as OJ 287, 3C 84, 3C 279 and 3C 454.3, located within short time ranges, with the task RLDLY. We used the Caltech Difmap package for imaging and phase self-calibration [@shepherd1997]. Typical beam sizes are $5.6\times3.2$, $2.8\times1.6$, and $1.4\times0.8$ mas at 22, 43, and 86 GHz, respectively. We determined the feed polarization leakage (D-terms) for each antenna by using the task LPCAL [@leppanen1995] with a total intensity model of the D-Term calibrators. 3C 84 usually serves as a good D-Term calibrator thanks to its high flux density and very low degree of linear polarization ($\lesssim$0.5%) at lower frequencies but less so at high frequency (86 GHz) where its linear polarization becomes non-negligible (Kam et al., in preparation). Thus, we also used a compact, bright, and polarized source OJ 287 at 86 GHz. 
We chose the D-term calibrator for each epoch and for each frequency by comparing the behaviour of the observed visibility ratios on the complex plane with the D-term models of different calibrators (see Appendix \[appendixa\]). The EVPA calibration was performed by comparing the integrated EVPAs of the VLBI maps of the EVPA calibrators, after the instrumental polarization calibration, with contemporaneous KVN single-dish polarization observations. We performed KVN single-dish observations within two days of each VLBI observation, as described in [@kang2015]. For the 2016 data, we have two 86 GHz data sets separated by one day. We note that the maps for all sources after the calibration are almost identical to each other and we used the average of the Stokes I, Q, and U maps of the two data sets for our further analysis.

Estimating errors for the degree of linear polarization ($m$) and EVPA is important but not straightforward. Errors for each polarization quantity can be derived from the following relations:

$$\sigma_p = \frac{\sigma_Q + \sigma_U}{2}$$

$$\sigma_{\rm EVPA} = \frac{\sigma_p}{2p}$$

$$\sigma_{m} = \frac{\sigma_p}{I}$$

where $\sigma_Q$ and $\sigma_U$ denote the rms noise in the Stokes Q and U images, respectively, $p = \sqrt{Q^2 + U^2}$, and $m = p/I$ [@hovatta2012]. In most cases, random errors are quite small and systematic errors are much more dominant in the above quantities. Imperfect D-term calibration is usually the most dominant source of errors in $m$. For EVPAs, both the D-term uncertainty and the EVPA correction error are important. Following [@roberts1994], the errors of $m$ and EVPA caused by residual D-terms can be expressed as:

$$\label{eq4} \sigma_{m, {\rm D}} = \sigma_D(N_aN_{\rm IF}N_s)^{-1/2}$$

$$\label{eq5} \sigma_{\rm EVPA, {\rm D}} \approx \frac{\sigma_{m, {\rm D}}}{2m}$$

where $\sigma_D$ is the D-term error, $N_a$ and $N_{\rm IF}$ are the number of antennas and IFs, respectively, and $N_s$ is the number of scans having independent parallactic angles, which depends on the source declination. We estimated the D-term errors by comparing the D-terms obtained from different D-term calibrators (see (5) in Table \[Information\] and Appendix \[appendixa\]) and estimated $\sigma_{m, {\rm D}}$ and $\sigma_{\rm EVPA, {\rm D}}$ using Equations \[eq4\] and \[eq5\]. Thanks to the number of IFs being four and the large parallactic angle coverage of our sources, we could achieve D-term–induced errors in $m$ (typically $0.1-0.3\%$) much smaller than the D-term errors themselves (typically $1-2\%$). We also assessed the EVPA correction error, $\sigma_{\Delta\chi}$, by comparing the amount of EVPA rotation calculated from different EVPA calibrators (see (6) and (8) in Table \[Information\]). Then, we added $\sigma_m$ and $\sigma_{m, {\rm D}}$ quadratically for $m$, and $\sigma_{\rm EVPA}$, $\sigma_{\rm EVPA, D}$, and $\sigma_{\Delta\chi}$ quadratically for EVPA.

In the Appendix, we show the results of the D-term calibration and the temporal evolution of the D-terms. The overall D-terms are usually less than $\approx10\%$, except for the Ulsan station at 86 GHz, which showed D-terms as large as $\approx20\%$. The D-terms obtained from different calibrators are quite consistent with each other, showing standard deviations of $\lesssim2\%$ (see (5) in Table \[Information\]). The D-terms are more or less stable over $\approx3$ months, showing standard deviations of $\lesssim2\%$. We also compare our KVN 22/43 and 86 GHz data of 3C 273 observed in 2016 December with contemporaneous Very Long Baseline Array (VLBA) 15/43 GHz data, respectively.
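As a compact illustration of the error budget described above, the following minimal Python sketch combines the random map-noise terms with the systematic residual D-term terms in quadrature; all input numbers (flux densities, noise levels, D-term uncertainty, scan counts) are hypothetical placeholders, not values from our observations.

```python
import numpy as np

def map_errors(I, Q, U, sigma_Q, sigma_U):
    """Random errors on m and EVPA from the rms noise of the Stokes Q/U maps."""
    p = np.hypot(Q, U)                     # linearly polarized intensity
    m = p / I                              # fractional polarization
    sigma_p = 0.5 * (sigma_Q + sigma_U)
    sigma_m = sigma_p / I
    sigma_evpa = np.degrees(sigma_p / (2.0 * p))
    return m, sigma_m, sigma_evpa

def dterm_errors(sigma_D, N_ant, N_if, N_scan, m):
    """Systematic errors from residual D-terms, following Roberts et al. (1994)."""
    sigma_m_D = sigma_D / np.sqrt(N_ant * N_if * N_scan)
    sigma_evpa_D = np.degrees(sigma_m_D / (2.0 * m))
    return sigma_m_D, sigma_evpa_D

# hypothetical example: a ~5% polarized core in a 1 Jy source
m, s_m, s_chi = map_errors(I=1.0, Q=0.03, U=0.04, sigma_Q=1e-3, sigma_U=1e-3)
s_mD, s_chiD = dterm_errors(sigma_D=0.02, N_ant=3, N_if=4, N_scan=10, m=m)

# total uncertainties: quadrature sums (an EVPA-calibration term sigma_dchi
# would be added to the EVPA error in the same way)
sigma_m_tot = np.hypot(s_m, s_mD)
sigma_chi_tot = np.hypot(s_chi, s_chiD)
print(sigma_m_tot, sigma_chi_tot)
```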
In this comparison with the VLBA, both the fractional polarization and the EVPAs at a few different locations in the jet of 3C 273 agree within errors between the data of the different instruments, considering the non-negligible time gaps between the observations and the expected RM of a few hundred $\rm rad/m^2$ in the jet (e.g., @hovatta2012).

Results {#sect3}
=======

RM at radio wavelengths
-----------------------

In Figure \[result\], we present polarization maps for multiple frequencies convolved with the KVN 22 GHz beam size (left panels), EVPAs at the core as a function of $\lambda^2$ (central panels), and RMs as a function of geometric mean observing frequency (right panels). We obtained one RM value from each adjacent data pair in the EVPA–$\lambda^2$ plots because we could not obtain good $\lambda^2$ fits across the three bands in most cases. We would have to rotate the 22 and 43 GHz EVPAs by more than $720^{\circ}$ and $180^{\circ}$, respectively, to explain this behavior with an $n\pi$ ambiguity, which translates into $\rm RM \gtrsim 10^5\ rad/m^2$. This RM value is implausibly large, especially at 22 GHz, because almost all detected core RMs of blazars are less than $1000\rm\ rad/m^2$ at $\lesssim15$ GHz [@hovatta2012]. Alternatively, different optical depths of the cores at different frequencies might be responsible for the non-$\lambda^2$ fits. In particular, EVPA rotations by $90^\circ$ are expected in the case of a transition of the core from optically thick to optically thin [@pacholzcyzk1970]. We provide the spectral index, $\alpha$ in $S_{\nu} \propto \nu^{\alpha}$, between adjacent frequencies in columns (3) and (4) in Table \[result\] and in the center panels of Figure \[result\] (blue asterisks). We found that the values of $\alpha$ measured at different frequency pairs differ from each other by more than $1\sigma$ in only four cases: 3C 279 in 2017 January, OJ 287 in 2017 January, 3C 345 in 2017 January, and BL Lac in 2017 March. When we rotate the EVPAs at the lowest frequency by $90^\circ$ for these cases, we are left with even worse $\lambda^2$ fits (higher $\chi^2$ values) compared to the case of no rotation. In addition, the degree of polarization is not much different, less than a factor of $\approx2$, at different frequencies (Section \[fracpol\]), while it should decrease by a factor of $\approx7$ if there were a $90^{\circ}$ flip [@pacholzcyzk1970]. Therefore, the progressively steeper EVPA rotations at higher frequencies are more likely to be due to the core shift effect, as shown in the numerical simulations of special relativistic magnetohydrodynamic jets [@porth2011]. We identified the origin (0,0) of the maps with the location of the cores. This might not always be exactly the case; however, the beam size is quite large and thus the effect of an offset of the core position from the origin would be insignificant. We summarize our sources’ basic information and their observed polarization quantities in Table \[table:source\]. We note that we did not consider the effect of the integrated (Galactic) RM because it is a few hundred $\rm rad/m^2$ at most (e.g., @taylor2009); this is much smaller than the typical RM error we obtain, and at the KVN frequencies the EVPA rotation due to the integrated RM is negligible. Since the beam sizes are quite different at different frequencies, we need to quantify the errors in polarization quantities introduced by the convolution of all maps with the 22 GHz beam.
We compared $m$ and EVPA at the core found with and without using convolution for each source at each frequency and added the differences quadratically to the errors of $m$ and EVPA found from the convolved maps, respectively. Using contemporaneous BU VLBA maps, we identified and excluded sources which have complex polarization structure near the core that cannot be resolved with the KVN; this leaves us with eight sources. We briefly describe the results for individual sources below.

### 3C 279

This source is characterized by longitudinal (i.e., parallel to the jet direction) EVPAs that show a smooth distribution from the core to the inner jet[^3] (e.g., @jorstad2005). Similarly, our KVN maps show basically longitudinal EVPAs but rotated by up to $\approx20^{\circ}$ as a function of frequency. RMs between adjacent frequency pairs range from $\approx10^3$ to $\approx10^4\rm\ rad/m^2$. We fitted a power-law function to the RMs as a function of geometric mean observing frequency and obtained the power-law index $a$ in the relation ${\rm|RM|} \propto \nu^a$. Since we calculate each RM value from only two data points, the RM errors are relatively large, which results in relatively large errors in $a$. However, the $a$ values in all three epochs show a good agreement with $a=2$–3, which is quite consistent with the results of previous KVN single-dish polarization monitoring of 3C 279 [@kang2015]. We note that the core EVPAs of this source might be contaminated by polarization from the inner jet. However, the EVPA rotation of the inner jet region is expected to be very small at $\gtrsim 22\rm\ GHz$ because of a relatively small RM in that region ($\lesssim250\rm\ rad/m^2$; @hovatta2012). Therefore, we conclude that the observed EVPA rotation over frequency of this source is dominated by the core polarization.

### OJ 287\[oj287\]

For this source, we could find contemporaneous VLBA data at 15 GHz from the MOJAVE program[^4], observed on 2017 January 28 and March 11, with our KVN data being obtained on 2017 January 17–18 and March 22–24, respectively. We included those data in our analysis after convolving the 15 GHz maps with the KVN 22 GHz beam (because the KVN beam is larger than the VLBA one even though the KVN observes at a higher frequency). This source has shown slow and gradual EVPA variation in time, and thus a potential variability during the time gap between the MOJAVE and our KVN observations ($\lesssim2$ weeks) would not be significant (see the AGN monitoring database of the 26-meter University of Michigan Radio Astronomy Observatory[^5]). In addition, OJ 287 has been known for relatively small RMs at cm wavelengths (e.g., @hovatta2012). Our KVN maps are consistent with zero RM (within errors) in 2016 December. However, in the 2017 January data, EVPA rotations from 15 to 86 GHz are observed in the same direction, being steeper at higher frequency pairs, which results in $a = 0.98\pm0.66$. In the 2017 March data, the EVPA rotations between 15 and 43 GHz were almost zero within errors, but a relatively large rotation with $\rm|RM|\approx5\times10^3\rm\ rad/m^2$ was detected at 43/86 GHz.

### CTA 102\[cta102\]

This source shows a high degree of polarization, up to $\approx 40\%$, in its jet at 22 GHz but becomes compact at higher frequencies. The EVPAs rotate rapidly as a function of frequency, with different slopes in the EVPA–$\lambda^2$ diagram between 22/43 GHz and 43/86 GHz. RMs at 43/86 GHz are a few times $10^4\rm\ rad/m^2$ in the source rest frame in both epochs.
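For reference, each RM value quoted in these subsections comes from a single EVPA pair, the rest-frame value follows from the standard $(1+z)^2$ redshift correction, and $a$ follows from two such RM values at their geometric-mean frequencies. A minimal Python sketch of these steps (all input EVPAs and the redshift are hypothetical placeholders) is:

```python
import numpy as np

C = 299792458.0  # speed of light [m/s]

def rm_from_pair(chi1_deg, chi2_deg, nu1_GHz, nu2_GHz):
    """Observed RM [rad/m^2] from one EVPA pair, assuming no n*pi wrap between them."""
    lam2_1 = (C / (nu1_GHz * 1e9)) ** 2
    lam2_2 = (C / (nu2_GHz * 1e9)) ** 2
    dchi = np.radians(chi2_deg - chi1_deg)
    return dchi / (lam2_2 - lam2_1)

def rest_frame_rm(rm_obs, z):
    """Standard redshift correction: RM_rest = RM_obs * (1 + z)^2."""
    return rm_obs * (1.0 + z) ** 2

def powerlaw_index(rm_lo, rm_hi, nu_lo_GHz, nu_hi_GHz):
    """Index a in |RM| ~ nu^a from two RMs at their geometric-mean frequencies."""
    return np.log(abs(rm_hi) / abs(rm_lo)) / np.log(nu_hi_GHz / nu_lo_GHz)

# hypothetical core EVPAs of 30, 35, and 45 deg at 22, 43, and 86 GHz
rm_2243 = rm_from_pair(30.0, 35.0, 22.0, 43.0)
rm_4386 = rm_from_pair(35.0, 45.0, 43.0, 86.0)
a = powerlaw_index(rm_2243, rm_4386, np.sqrt(22.0 * 43.0), np.sqrt(43.0 * 86.0))
print(rm_2243, rest_frame_rm(rm_2243, z=1.0), a)
```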
However, the $a$ value of CTA 102 in the 2016 December data is much larger than that in the 2017 January data. The signs of the RMs are different in our two epochs, while their absolute values are of the same order of magnitude. This behavior suggests that the line-of-sight component of the jet magnetic field changed its direction within $\approx1$ month, while the magnetic field strength and electron density (or at least their product) did not vary substantially. This sign change might be related to a strong flare that occurred during our KVN observations [@raiteri2017]. We discuss possible reasons for the sign flip in CTA 102 in Section \[signchange\].

### 3C 345 {#3c345}

This source shows almost the same RMs at 22/43 and 43/86 GHz, $\rm |RM|\approx10^4\ rad/m^2$, in the 2016 December data, with $a$ being consistent with zero within errors. However, the RM at 43/86 GHz is much larger than that at 22/43 GHz about one month later, resulting in $a=1.86\pm0.3$. These results indicate that there is substantial time variability in this source. Similarly to the case of CTA 102, this source shows a flare during our KVN observations and the flux density at 1 mm in early 2017 is almost three times higher than that in mid 2016.[^6]

### 1749+096

This source displays a rather compact jet geometry at all frequencies and in both epochs (2016 December and 2017 March). Interestingly, the degree of linear polarization is larger at lower frequencies, which is not usually seen in other sources (Section \[fracpol\]). This might suggest that we are looking at a mixture of different polarization components with different EVPAs and/or RMs, or that internal Faraday rotation occurs in this source (Section \[sectscreen\]). The values of RM range from $\approx10^3$ to $\approx10^4\rm\ rad/m^2$ in both epochs. The values of $a$ are consistent within errors. Therefore, the sign, absolute value, and frequency dependence of RM appear to be stable over $\approx3$ months.

### 0235+164

This source shows a rather compact jet geometry. A systematic rotation of EVPAs in the same sense as a function of frequency can be seen from 22 to 86 GHz. RMs range from $\approx10^3$ to $\approx2\times10^4\rm\ rad/m^2$, with a substantially larger RM at the higher frequency pair, resulting in $a = 3.47\pm1.18$.

### BL Lac

The EVPA–$\lambda^2$ diagram shows a steeper slope between 43 and 86 GHz than between 22 and 43 GHz, providing $a = 2.65\pm 0.61$. RM values range from $\approx10^3$ to $\approx6\times10^3\rm\ rad/m^2$. This is consistent with previous measurements of the core RM between 15 and 43 GHz [@OG2009a; @gomez2016]. However, the high resolution *RadioAstron* space VLBI image shows a complex RM structure in the core region, including a sign change, indicating the presence of helical magnetic fields there [@gomez2016]. Therefore, we may be looking at a blend of those structures in our KVN images (see Section \[multipleshocks\]).

### 1633+38

This source is relatively faint and shows a low degree of linear polarization ($\approx2\%$), which leads to relatively large EVPA errors. Thus, the RM at 43/86 GHz is comparable to its error and the obtained $a=1.91\pm1.96$ also has a large error. When fitting a single linear function to the EVPAs at the three frequencies available, we obtained $\rm RM = 974\pm509\ rad/m^2$. This is surprisingly low because a very high RM, $\approx2.2\times10^4\rm\ rad/m^2$, was reported previously for the core of this source using six different frequencies from 12 GHz to optical wavelengths [@algaba2012].
We found that the EVPAs at the core of this source in the MOJAVE program are $44^\circ$ and $32^\circ$ in 2016 November and 2017 April, respectively. If there is no substantial EVPA variability in between these two epochs, then a simple $\lambda^2$ fit can explain the data from 15 to 86 GHz, suggesting that there is no $n\pi$ ambiguity in our data and that the core RM of this source is indeed quite small. The observations in [@algaba2012] were performed in 2008 November, which indicates that there is substantial temporal variability of the EVPA rotation in 1633+38. Four years before their observations, the core EVPAs could not be fitted with a single $\lambda^2$ law and the obtained RM value was much smaller, $\rm RM = -570\pm430\ rad/m^2$ [@algaba2011]. This also suggests substantial temporal RM variability in 1633+38.

Optical EVPAs from the Steward Observatory\[optdata\]
-----------------------------------------------------

For a few sources, we obtained quasi-contemporaneous (taken within $\lesssim$1 week) optical polarization data from the Steward Observatory blazar monitoring program[^7] (see @smith2009 for details) for some epochs. We summarize the optical data we used in Table \[Steward\]. (We excluded some additional datasets due to their large errors.) The optical polarimetry errors are usually quite small, $\lesssim1^{\circ}$, unless sources are very weakly polarized. However, the optical polarization of blazars often shows rapid variability on short time scales (e.g., @jorstad2013), presumably due to a smaller size of the emission region at higher frequencies [@marscher1996]. In order to take into account the uncertainty arising from the time gap between optical and radio observations, we estimated errors from source variability as follows. The Steward Observatory blazar monitoring program usually observes each source multiple times for a specified period spanning a few days in a broad optical band from 500 to 700 nm. We selected all data in the periods that are close to our KVN observations and used the mean and standard deviation of the data points as representatives of the optical EVPA and its typical error, respectively. We assumed that the optical EVPAs show random-walk type variations with time. (In addition to statistical variability, many blazars occasionally show smooth, systematic optical EVPA rotations that might be associated with high-energy flares; e.g., @blinov2015.) We multiplied the observed standard deviation by the square root of the ratio of the time gap between the optical and radio observations to the duration of the period in which a set of optical data was obtained. Under this assumption, the longer the time separation, the larger the formal uncertainty of an optical EVPA at the time of the corresponding radio observation.

| Source | Optical observing dates | Optical EVPA [$^{\circ}$] | $\nu_{\rm trans}$ [GHz] |
|---|---|---|---|
| CTA 102 | 2016 Nov 30 – Dec 2 | $-216.5 \pm 31.4$ | $244^{+701}_{-93}$ |
| CTA 102 | 2017 Jan 10–13 | $111.7 \pm 12.6$ | $591^{+1467}_{-276}$ |
| 3C 279 | 2017 Jan 10–14 | $85.2 \pm 2.7$ | $209^{+697}_{-71}$ |
| BL Lac | 2017 Mar 29–30 | $-8.9 \pm 26.1$ | $138^{+298}_{-65}$ |

Table \[Steward\]: quasi-contemporaneous optical polarization data and the derived transition frequencies. The optical EVPAs are given after the $n\pi$ adjustment described below; $\nu_{\rm trans}$ is given in the source rest frame.

Due to the lack of frequency coverage between 86 GHz and optical wavelengths, we suffered from a potential $n\pi$ ambiguity of the optical EVPAs. Therefore, we assumed that the optical EVPAs of our sources rotate in the same direction as the ones at mm wavelengths and that the EVPA rotation between 86 GHz and the optical band does not exceed $\pi$.
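A minimal Python sketch of this scaling of the optical EVPA uncertainty with the time gap (the nightly EVPAs and time intervals below are hypothetical placeholders) is:

```python
import numpy as np

def scaled_optical_evpa(evpa_deg, dt_gap_days, dt_span_days):
    """
    Mean optical EVPA and its uncertainty scaled to the radio epoch,
    assuming random-walk-like EVPA variability as described in the text.
    """
    mean_evpa = np.mean(evpa_deg)
    sigma_obs = np.std(evpa_deg)                      # scatter over the optical run
    sigma_scaled = sigma_obs * np.sqrt(dt_gap_days / dt_span_days)
    return mean_evpa, sigma_scaled

# e.g. four nightly EVPAs spanning 3 days, with a 6-day gap to the KVN epoch
print(scaled_optical_evpa(np.array([80.0, 84.0, 88.0, 86.0]),
                          dt_gap_days=6.0, dt_span_days=3.0))
```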
We present the optical EVPA values of the three sources in Table \[Steward\] and plot them with the core EVPAs from our KVN observations in the left panel of Figure \[paopt\]. We also show the RMs obtained from each adjacent frequency pair in the RM–frequency diagram (the right panels of Figure \[paopt\]). Our assumption on the $n\pi$ rotation appears reasonable because the optical EVPAs follow the trend of EVPA rotation established at radio frequencies, although we cannot rule out the possibility of a coincidence because of the small sample size. The RMs obtained from the EVPA difference between 86 GHz and the optical frequencies are about an order of magnitude higher than the values obtained from the frequency pair 43/86 GHz. We note that the observed RMs between 86 GHz and optical light exceed the minimum possible measurable RM by an order of magnitude, except for BL Lac, for which the observed RM is about two times the minimum measurable RM. The power-law increase of RM as a function of frequency does not continue to optical wavelengths but saturates at a certain frequency (right panel of Figure \[paopt\]). We used the term *transition frequency*, $\nu_{\rm trans}$, for this frequency. We calculated asymmetric errors on $\nu_{\rm trans}$ via Monte-Carlo simulations by adding Gaussian random numbers to the best-fit parameters of the radio RM–$\nu$ power-law relation, with standard deviations identical to their $1\sigma$ errors. The obtained $\nu_{\rm trans}$ are distributed from 138 to 591 GHz in the source rest frame for different sources and in different epochs (Table \[Steward\]). We note that $\nu_{\rm trans}$ for BL Lac is consistent with the observed frequency of 86 GHz within $1\sigma$ because of the relatively large minimum measurable RM.

Fractional polarization\[fracpol\]
----------------------------------

We present the degree of linear polarization $m$ as a function of $\lambda$ in Figure \[frac\]. Various depolarization models are available to explain the evolution of $m$ with wavelength (see @osullivan2012 [@farnes2014] for summaries). In principle, $m-\lambda$ scalings can be used to determine whether the emitting region and the Faraday screen are co-spatial or not, whether magnetic fields in the screen are regular or turbulent, whether there are multiple components with different polarization properties on scales smaller than the spatial resolution, and so on [@burn1966; @conway1974; @tribble1991; @sokoloff1998]. However, we did not try to apply those models to our data because (i) our data provide sparse frequency sampling over a limited frequency range, (ii) the models are mostly appropriate for optically thin emitters while we are dealing with (partially) optically thick cores, and (iii) different observing frequencies might probe different physical regions, as suggested by the complicated $\chi-\lambda^2$ scalings of the EVPAs. Instead, we obtain a polarization spectral index $\beta$ by fitting $m\propto\lambda^{\beta}$ to our data (Table \[result\]; see @farnes2014), which could be used for future theoretical studies (e.g., @porth2011) and for comparison with observations at lower frequencies (e.g., @farnes2014).
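A minimal Python sketch of such a fit of $m\propto\lambda^{\beta}$, done as a straight line in log–log space (the input fractional polarizations are hypothetical placeholders), is:

```python
import numpy as np

def polarization_spectral_index(nu_GHz, m_frac):
    """Fit m ~ lambda^beta as a straight line in log(m) versus log(lambda)."""
    lam = 299792458.0 / (np.asarray(nu_GHz) * 1e9)   # wavelength [m]
    beta, _ = np.polyfit(np.log(lam), np.log(np.asarray(m_frac)), 1)
    return beta

# e.g. fractional polarizations of 5%, 4%, and 3% at 22, 43, and 86 GHz
print(polarization_spectral_index([22.0, 43.0, 86.0], [0.05, 0.04, 0.03]))
```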
We refer the reader to detailed studies of the degree of linear polarization of AGN at different wavelengths based on broadband radio spectro-polarimetric observations (e.g., @osullivan2012 [@osullivan2017; @hovatta2018; @pasetto2018]) and to studies investigating spatially resolved, optically thin emitting regions with multi-frequency VLBI observations (e.g., @hovatta2012 [@kravchenko2017]). The median, mean, and standard deviation of $\beta$ are $-0.11$, $-0.17$, and $0.38$, respectively. All sources show $\beta\lesssim0$ except for 1749+096, which showed $\beta\approx0.5$ in both epochs (2016 December and 2017 March).

Discussion {#sect4}
==========

In this section, we interpret the results on the core polarization properties of eight blazars: five flat spectrum radio quasars (FSRQs) and three BL Lac objects (BLOs).

RM distributions at different frequencies {#rmdist}
-----------------------------------------

We present the distributions of the absolute RM values for different frequency pairs in the left panel of Figure \[rmhisto\]. We excluded RM values whose absolute values are smaller than their $1\sigma$ errors. The histograms show that the median RM increases with frequency. We note that the minimum possible measurable RMs are 440 and 2038 $\rm rad/m^2$ at 22/43 and 43/86 GHz, respectively, assuming typical EVPA errors of $2^\circ$, $3^\circ$, and $3^\circ$ at 22, 43, and 86 GHz, respectively. As is evident in Figure \[rmhisto\], the RM values we found are much larger than these minimum possible measurable RMs. Notably, the trend of increasing RMs with increasing observing frequency therefore cannot be an artifact of these detection thresholds. We collected the median core RMs at cm wavelengths for our sources from [@hovatta2012] and show all median RM values as a function of frequency in the right panel of Figure \[rmhisto\]. As expected, the RMs increase with increasing frequency (355, 2620, and 14200 $\rm rad/m^2$ for 8.1–15.4, 22–43, and 43–86 GHz, respectively). Un-weighted fitting of a power-law function returns a best-fit power-law index $a=2.42$. Although the sample size is small and the standard deviations of the RM distributions are large, the obtained power-law index is quite close to $a = 2$, indicating that, statistically, the Faraday rotating media of blazar cores can be represented as conical outflows [@jorstad2007]. Instead of comparing the RM distributions of all sources at different frequencies, we collected the power-law indices $a$ obtained for each source in Figure \[aplot\]. We have 13 measurements in total, with some sources having more than one measurement. The mean and standard deviation of all $a$ values are $a = 2.25 \pm 1.28$, which is consistent with $a=2$ and with the fitting results for the median values of the RM distributions at different frequencies. Our results are also consistent with previous studies of blazars at both cm and mm wavelengths (e.g., @jorstad2007 [@OG2009a; @algaba2013; @kravchenko2017; @hovatta2018]). However, many $a$ values are located far from the mean value, which potentially indicates a bimodal distribution. Assuming a power-law electron density distribution as a function of distance $d$ along the jet, $N_e \propto d^{-a}$ with $a=2$, dominantly toroidal magnetic fields in the Faraday screen, $B\propto d^{-1}$, a conical geometry of the Faraday screen, ${\rm d}l\propto d$, and energy equipartition, which implies $d_{\rm core} \propto \nu^{-1}$, one obtains ${\rm RM_{core}} \propto \int N_e B\,{\rm d}l \propto \nu^2$ [@jorstad2007]; a compact version of this argument is written out below. If some of these assumptions are not satisfied, one might expect deviations from an $a=2$ scaling.
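A compact version of this scaling argument, written out under the stated assumptions (electron-density power-law index $a=2$, toroidal field, conical screen, equipartition core shift), is

$$ {\rm RM_{core}}(\nu) \;\propto\; N_e\,B\,{\rm d}l \;\propto\; d_{\rm core}^{-2}\;d_{\rm core}^{-1}\;d_{\rm core} \;=\; d_{\rm core}^{-2} \;\propto\; \nu^{2}\,; $$

the equivalent integral form, $\int_{d_{\rm core}}^{\infty} d^{-3}\,{\rm d}d \propto d_{\rm core}^{-2}$, gives the same scaling, and replacing any of the assumed power laws changes the predicted index accordingly.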
For example, there is growing evidence for a parabolic geometry of the blazar cores (e.g., @algaba2017 [@pushkarev2017]). In some cases, a need for helical magnetic fields instead of dominant toroidal fields in blazars has been pointed out (e.g., @zamaninasab2013). Likewise, the assumption of energy equipartition between radiating particles and magnetic fields may not hold for some sources (e.g., @homan2006). However, it is difficult to determine accurate $a$ values for each source with the current data alone, due to source variability and the relatively large errors in $a$. For example, the values of $a$ are likely related to a source's flaring activity, as seen in the cases of CTA 102 and 3C 345 (see Sections \[cta102\] and \[3c345\]). We expect that our monthly monitoring program will allow us to investigate the reason for potential differences in $a$ values for different sources and in different epochs, together with the detailed information on the compact core region provided by ultra-high-resolution arrays such as the Global Millimeter-Very Long Baseline Interferometry Array (GMVA, see e.g., @kim2016) or the RadioAstron space VLBI (e.g., @gomez2016).

Change of core opacities from optically thick to thin\[transition\]
-------------------------------------------------------------------

In Section \[optdata\], we show that the power-law increase of RM as a function of frequency might not continue to optical wavelengths but flatten out at a certain frequency, $\nu_{\rm trans}$. We suggest that the cores of blazars become fully transparent at $\nu > \nu_{\rm trans}$, meaning no core shift and thus no further frequency dependence of RM at those frequencies. Accordingly, the radio core may be a standing recollimation shock at $\nu > \nu_{\rm trans}$. For CTA 102, $\nu_{\rm trans}$ increased substantially from $\approx240$ GHz to $\approx590$ GHz within one month, albeit within large errors (Table \[Steward\]). This might be related to a strong flare that occurred at the time of our observations (see Section \[cta102\]), which ejected a large amount of relativistic electrons into the core, causing it to become optically thick. We obtained $\nu_{\rm trans} \approx 210$ and 140 GHz for 3C 279 and BL Lac, respectively. This result seems to be in line with recent astrometric observations of BL Lac, which found a systematic deviation of the amount of core shift from the one expected for a Blandford–Königl type jet at 22/43 GHz [@dodson2017]. Likewise, the scaling of the synchrotron cooling time with frequency in BL Lac matches a standing shock better than an optically thick jet [@kim2017]. In summary, one may expect no frequency dependence of RM and no core shift above $\approx140$ GHz for BL Lac and above $\approx210$ GHz for 3C 279 and CTA 102. However, we stress that the conclusions presented in this section are valid only when the assumptions of (a) no $n\pi$ ambiguity and (b) EVPA rotations in the same sense from mm to optical hold. We will study the core opacity evolution and RM saturation further with dedicated upcoming multi-frequency observations at mm and sub-mm wavelengths, combining data from the KVN and ALMA (Park et al., in preparation).

The Faraday screen\[sectscreen\]
--------------------------------

Identifying the source of Faraday rotation is very difficult. As the amount of Faraday rotation is inversely proportional to the square of the mass of the charged particles, thermal electrons and/or the low-energy end of the radiating non-thermal electron population would be the dominant source of the observed RM.
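For reference, the standard cold-plasma expression for the rotation measure (in Gaussian units) makes this mass dependence explicit:

$$ {\rm RM} \;=\; \frac{e^{3}}{2\pi m_{e}^{2} c^{4}} \int n_{e}\, B_{\parallel}\, {\rm d}l \;\approx\; 0.81 \int \left(\frac{n_{e}}{\rm cm^{-3}}\right) \left(\frac{B_{\parallel}}{\mu{\rm G}}\right) \left(\frac{{\rm d}l}{\rm pc}\right)\ {\rm rad\,m^{-2}}\,, $$

so that, at a given particle density, a species of mass $m$ contributes as $m^{-2}$ (protons are suppressed by $\sim(m_p/m_e)^2 \approx 3\times10^{6}$ relative to electrons), and the contribution of relativistic electrons is further suppressed roughly as $\gamma^{-2}$ (up to logarithmic factors), which is why the thermal and lowest-energy non-thermal electrons dominate the observed RM.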
If these Faraday rotating electrons are mixed in with the emitting plasma in the jets, internal Faraday rotation occurs. However, if the rotating medium is located outside the jet, e.g., in a sheath surrounding the jet or in the broad/narrow line regions (BLRs/NLRs), then the observed Faraday rotation is external to the jet. Theoretical models assuming an optically thin jet with spherical or slab geometries showed that it is very difficult for internal Faraday rotation to cause EVPA rotations larger than $45^{\circ}$ without severe depolarization [@burn1966]. Multiple studies showed that many blazars indeed have EVPA rotations larger than $45^{\circ}$ without significant depolarization, indicating that the source of Faraday rotation is usually external to the jets (e.g., @ZT2003 [@ZT2004; @jorstad2007; @OG2009a; @hovatta2012]). A sheath surrounding the jet is considered to be the most viable candidate for an external Faraday rotating medium; in addition, the BLRs/NLRs are unlikely sources of RM given the time variability of RMs in jets and volume filling factor arguments [@ZT2002; @ZT2004; @hovatta2012]. Nevertheless, there are indications of potential internal Faraday rotation in some sources [@hovatta2012]. We cannot identify the Faraday screen from our data because of their limitations (Section \[fracpol\]). Nevertheless, we note that the observed RM–frequency relations having $a\approx2$ (Section \[rmdist\]) and the polarization spectral indices being predominantly negative ($\beta\lesssim0$) for our sources support the conclusion of previous studies that an external jet sheath acts as the Faraday screen [@ZT2002; @ZT2004; @hovatta2012]. However, for 1749+096 we observed the fractional polarization at high frequencies to be smaller than that at lower frequencies, with $\beta\approx0.5$ in both epochs (2016 December 9 and 2017 March 24) – which is almost impossible to explain with any standard external depolarization model [@hovatta2012]. Such ‘inverse depolarization’ can be due to blending of different polarized inner jet components at different frequencies [@conway1974] or to internal Faraday rotation in helical or loosely tangled random magnetic field configurations [@homan2012].

RM sign change {#signchange}
--------------

We observed an RM sign change for CTA 102 within $\approx$1 month, while the absolute values of the RM did not change much (Figure \[result\]). Previous studies found temporal sign reversals in RMs for other sources (e.g., @mahmud2009 [@lico2017]), sign reversals between core and jet (e.g., @mahmud2013), and sign reversals in the cores at different frequency intervals (e.g., @OG2009a). Scenarios proposed to explain such RM sign changes include: (i) a reversal of the magnetic pole of the black hole facing the Earth; (ii) torsional oscillations of the jet; (iii) a ‘nested-helix’ magnetic field structure; and (iv) helical magnetic fields in jets seen at different orientations due to relativistic aberration, depending on whether $\theta\Gamma$ is larger or smaller than 1, where $\theta$ is the viewing angle and $\Gamma$ is the bulk Lorentz factor of the jet (see @mahmud2009 [@mahmud2013] for (i)–(iii) and @OG2009a for (iv) for details). Although all of these scenarios are theoretically possible, we focus on the fact that CTA 102 underwent a relatively strong flare in the period of our KVN observations (Section \[cta102\]).
Evidence for the presence of helical magnetic fields in AGN jets has been provided by many studies, starting with the detection of a transverse RM gradient in the jet of 3C 273 [@asada2002], which was later confirmed by other studies [@ZT2005; @hovatta2012]. Similar behaviour has been found in many BL Lac objects (e.g., @gabuzda2004 [@gabuzda2015]), radio galaxies (e.g., @kharb2009), and quasars (e.g., @asada2008 [@algaba2013; @gabuzda2015]). Furthermore, general relativistic magnetohydrodynamic simulations of AGN jets showed that the combination of the rotation of the jet base and the outflow leads to the generation of a helical field and associated Faraday rotation gradients [@BM2010]. If helical magnetic fields pervade the jet sheaths and if they are the main contributors to the observed RMs (as speculated in Section \[sectscreen\]), the sign of the RM would be determined by whether $\theta\Gamma$ is larger or smaller than 1, as explained in [@OG2009a]. As noted in Section \[cta102\], a strong flare at multiple wavelengths occurred during our KVN observations [@raiteri2017]. Flares in blazars are usually associated with new VLBI components emerging from the cores (e.g., @savolainen2002). The flare in CTA 102 would then likewise be connected to newly ejected VLBI components. [@jorstad2005] found $\theta = 2.6^{\circ}$ and $\Gamma=17.2$ for CTA 102, which yields $\theta\Gamma = 0.78$. If there is bending and/or acceleration or deceleration of the ejected component, changes of $\theta\Gamma$ across the value $\theta\Gamma=1$ can occur and the sign of the RM reverses. Assuming scenario (i) or (ii) as the mechanism behind the sign reversal requires a coincidence with the recent strong flaring activity of this source. Furthermore, scenario (iii) can be related to flaring, since a new jet component might lead to a temporary increase of the relative contribution of the inner field to the outer field in the magnetic tower model. However, in this case it is difficult to explain the observation of similar RM magnitudes in the two epochs (a few times $10^4\rm\ rad/m^2$ for CTA 102); the relative contributions by the inner and outer magnetic fields to the observed RMs must be almost exactly opposite in the different epochs, which, again, would be a coincidence (but see @lico2017 for the case of Mrk 421, which supports this scenario). Therefore, we conclude that scenario (iv) provides the most natural way to explain the observed sign change in the RMs of CTA 102, as it does not require substantial changes in the physical properties of the jet. Our interpretation is also consistent with modelling of the multi-wavelength flare in this source in late 2016 with a twisted inhomogeneous jet [@raiteri2017]. We note that bending and acceleration/deceleration of blazar jets are indeed quite common (e.g., @lister2013).

Optical subclasses {#sect:opticalsubclasses}
------------------

Phenomenologically, blazars can be divided into two classes based on their optical properties: FSRQs and BLOs. Previous studies showed that FSRQs tend to have higher RMs than BLOs [@ZT2004; @hovatta2012]. We collected all available core RM values from all frequency pairs and present the distributions of the (logarithmic) RMs of FSRQs and BLOs in Figure \[histoclass\]. The median RM values are $1.2\times10^4\rm\ rad/m^2$ and $4.8\times10^3\rm\ rad/m^2$ for FSRQs and BLOs, respectively – the value for FSRQs is higher than that for BLOs by a factor of $\approx2.5$.
However, a Kolmogorov-Smirnov test [@press1992] finds a probability of 7% that the FSRQ and BLO values are drawn from the same parent population. Therefore, it is possible that their RM properties are intrinsically the same. In Section \[sectscreen\], we claimed that the observed Faraday rotation mostly originates from jet sheaths. Relatively slow, possibly non-relativistic winds launched by the accretion disk, which surround and confine the highly relativistic jet spine, are one candidate for a jet sheath (e.g., @devilliers2005). A fundamental difference between FSRQs and BLOs is their accretion luminosities relative to their Eddington luminosities, above and below $\approx1\%$, respectively (e.g., @ghisellini2011, see also @PC2015 for further discussion). This suggests that sources in high accretion states tend to have larger RMs. A simple explanation would be that high accretion rates lead to relatively larger amounts of matter in the jet sheaths. There is indeed evidence for a relation between the rate of matter injection into the jet and the accretion rate (e.g., @ghisellini2014 [@PT2017]), supporting this idea. However, the strength and degree of ordering of the core magnetic fields as a function of blazar subclass are still poorly understood; the difference in RM may not be solely due to the difference in particle density.

Intrinsic polarization orientation
----------------------------------

Intrinsic EVPAs (projected onto the sky plane) of AGN jets can be obtained by correcting for Faraday rotation. It has been consistently shown that BLOs have intrinsic EVPAs well aligned with their jets, while a wide range of angles between EVPAs and jet orientations, sometimes seen as a double-peaked distribution of relative angles, is observed for FSRQs (e.g., @LH2005 [@jorstad2007]). The good alignment and the mis-alignment were associated with a transverse or oblique shock and a conical shock, respectively [@jorstad2007]. These results, however, used RMs obtained from a single $\lambda^2$ law description of the EVPA variation between 7 mm and 1 mm. The RM values were of the order of $10^4\rm\ rad/m^2$. However, our results show that there is a possibility that the core RMs of blazars can increase up to $\approx10^6\rm\ rad/m^2$ at $\approx250$ GHz (Figure \[paopt\]). The possible difference in core RM between FSRQs and BLOs, discussed in Section \[sect:opticalsubclasses\], suggests that it is easier for FSRQs to have high core RMs up to $\approx10^6\rm\ rad/m^2$ than for BLOs – unless the transition frequency for BLOs is much larger than for FSRQs, which seems not to be the case (Figure \[paopt\]). Even at 1 mm, such a high RM of $\approx10^6\rm\ rad/m^2$ leads to an EVPA rotation of $\approx57^{\circ}$ (since $\Delta\chi = {\rm RM}\,\lambda^{2} = 10^{6}\ {\rm rad\,m^{-2}}\times(10^{-3}\ {\rm m})^{2} = 1$ rad). Therefore, one needs to take the frequency dependence of RM into account – especially for FSRQs – when comparing the intrinsic EVPAs at mm wavelengths with the direction of the inner jet. However, FSRQs are unlikely to have intrinsic EVPAs aligned with their jets (see also @yuan2001 [@gabuzda2006; @hovatta2016]) even after correcting for Faraday rotation; even their optical EVPAs, which do not suffer from strong Faraday rotation, show bimodal distributions in the angles between jets and EVPAs (@jorstad2007, see also @LS2000). We observe RMs to increase with increasing frequency, meaning that the intrinsic core EVPAs are different at different observing frequencies. Such a frequency dependence implies that polarized emission observed at higher frequencies comes from regions closer to the jet base.
This indicates that the intrinsic EVPAs can vary with distance from the jet base. A similar behaviour has been observed for a few sources in other studies. @OG2009a found that the jet of BL Lac shows EVPAs well aligned with the jet direction in inter-knot regions and even when the jet bends. [@gomez2016] showed that their high-resolution polarization image of the same source shows smooth but non-negligible variations of the EVPA upstream and downstream from the core. Both results were interpreted as indicating the presence of helical magnetic fields in the jet. Similarly, different intrinsic EVPAs at different frequencies might imply the presence of helical magnetic fields in the core regions. However, a firm conclusion requires confirming the frequency dependence of RM over a wide range of observing frequencies with both short and long $\lambda^2$ spacings (see Section \[multipleshocks\]).

Multiple recollimation shocks in the cores {#multipleshocks}
------------------------------------------

Theoretical studies have shown that a series of recollimation shocks can form in relativistic jets: in analytic works (e.g., @DM1988), in hydrodynamic numerical simulations (e.g., @gomez1995 [@gomez1997; @agudo2001]), and in magneto-hydrodynamic simulations (e.g., @mizuno2015 [@marti2016]). Observationally, the presence of stationary features in AGN jets in addition to their VLBI cores has been verified in many studies (e.g., @jorstad2005). In particular, high-resolution images of 3C 454.3 [@jorstad2010] and BL Lac [@gomez2016] revealed that their cores may consist of multiple stationary components. For BL Lac, emission upstream of the radio core, leading to multi-wavelength flares when it passes through the core, was observed [@marscher2008; @gomez2016]; this supports the idea that the core can be identified with one of a series of recollimation shocks (e.g., @marscher2009). We found that the EVPA–$\lambda^2$ relations of our sources are usually non-linear, instead showing breaks in their slopes. We obtained the RMs for pairs of adjacent frequencies and discovered that the core RMs systematically increase with observing frequency. Based on VLBA observations at 8 different frequencies from 4.6 to 43 GHz, [@OG2009a] showed that breaks in RM appear frequently, with the best-fit lines in the EVPA–$\lambda^2$ diagram connecting smoothly over a wide range of frequencies (though not for BL Lac in their sample). In contrast, [@kravchenko2017] presented large discontinuities between the different EVPA–$\lambda^2$ fits at much lower frequencies (between 2 and 5 GHz). Therefore, one might expect that potential discontinuities in EVPA rotations might not be substantial at mm wavelengths and that the assumption underlying our analysis – no RM discontinuities – might be justified. Furthermore, these studies showed that core EVPA rotations can be fitted well by a $\lambda^2$ law in some frequency ranges, then break, and then follow another good $\lambda^2$ fit in other frequency ranges (e.g., @OG2009a [@kravchenko2017]). Other studies obtained good $\lambda^2$ fits for the core EVPA rotations in most cases when they used relatively small frequency intervals, e.g., 8–15 GHz (e.g., @ZT2004 [@hovatta2012]). This indicates that polarized emission from a single emission region is dominant over relatively small frequency intervals, without showing a systematic increase of RMs as a function of frequency.
However, over a wide range of frequencies, the RM–frequency relations appear to show multiple breaks; this implies that the $\rm |RM| \propto \nu^a$ scaling predicted by [@jorstad2007] under the assumption of a continuous core-shift effect might not hold for narrow frequency intervals. One possible explanation is that blazar cores actually consist of multiple recollimation shocks and we observe polarized emission from one of the shocks in a given narrow frequency interval. As one goes to higher frequencies, polarized emission from inner shocks close to the jet base becomes more dominant due to the lower opacity, leading to another good $\lambda^2$ fit with higher RMs.

Conclusions {#sect5}
===========

We studied the polarization properties of 8 blazars – 5 FSRQs and 3 BLOs – with simultaneous multi-frequency observations with the KVN at 22, 43, and 86 GHz. We investigated the nature of blazar radio cores by measuring Faraday rotation measures at different observing frequencies. Our work leads us to the following principal conclusions:

1. We found that RMs increase with frequency, with median values of $2.62\times10^3\rm\ rad/m^2$ and $1.42\times10^4\rm\ rad/m^2$ for the frequency pairs 22/43 GHz and 43/86 GHz, respectively. These values are also higher than those obtained by [@hovatta2012] at 8.1–15.4 GHz for the same sources. The median values are described well by a power law $\rm|RM|\propto\nu^{a}$ with $a=2.42$. When $a$ values are obtained separately for each source, they are distributed around $a=2$ with a mean and standard deviation of $a=2.25\pm1.28$. This agrees with the expectation from the core shift [@jorstad2007] for many blazars at the KVN frequencies. This finding implies that the geometry of the Faraday rotating media in blazar cores can be approximated as conical.

2. We compared our KVN data with contemporaneous (within $\approx1$ week) optical polarization data from the Steward Observatory for a few sources. When we assume that the direction of EVPA rotation at radio frequencies is the same at optical wavelengths and that there is no $n\pi$ ambiguity, the optical data show a trend of EVPA rotation similar to that of the radio data. The RM values obtained with the optical data indicate that the power-law increase of RM with frequency continues up to a certain frequency, $\nu_{\rm trans}$, and then saturates, with $\rm |RM|\approx10^{5-6}\rm\ rad/m^2$ at $\approx250\rm\ GHz$, depending on source and flaring activity. We suggest that this saturation is due to the absence of core shift above $\nu_{\rm trans}$; instead, the radio cores are standing recollimation shocks. This is in agreement with other studies which concluded that the radio cores of blazars cannot purely be explained as the unity optical depth surface of a continuous conical jet but are physical structures, at least in some cases.

3. We detected a sign change in the observed RMs of CTA 102 over $\approx1$ month, while the magnitudes of the RM were roughly preserved. Since this source showed strong flaring at the time of our observations, we suggest that new relativistic jet components emerging from the core undergo acceleration/deceleration and/or jet bending, thus leading to a change in the direction of the line-of-sight component of the helical magnetic fields in the jet because of relativistic aberration.

4. We found indications that the absolute values of the core RMs of FSRQs are larger than those of BLOs at 22–86 GHz, which is consistent with results found at cm wavelengths.
This difference might arise from FSRQs having higher accretion rates than BLOs, resulting in larger amounts of material in the central engine.

5. For those sources which show non-linear EVPA–$\lambda^2$ relations, the RM-corrected (intrinsic) EVPAs might be different at different frequencies and thus at different locations along the jets. A recent ultra-high resolution image of BL Lac observed with space VLBI shows that its intrinsic EVPAs in the core region do indeed vary with location.

6. We suggest that the systematic increase of RM as a function of observing frequency appears only when covering sufficiently large ranges in frequency, with different $\lambda^2$ laws at different frequency ranges connecting smoothly. Combining this with the fact that linear EVPA–$\lambda^2$ relations are commonly observed over narrow frequency ranges suggests that blazar cores might consist of multiple recollimation shocks, such that polarized emission from one of the shocks is dominant in a given narrow frequency range.

We thank the referee for constructive comments, which helped to improve the paper. The KVN is a facility operated by the Korea Astronomy and Space Science Institute. We are grateful to the staff of the KVN who helped to operate the array and to correlate the data. The KVN and a high-performance computing cluster are facilities operated by the KASI (Korea Astronomy and Space Science Institute). The KVN observations and correlations are supported through the high-speed network connections among the KVN sites provided by the KREONET (Korea Research Environment Open NETwork), which is managed and operated by the KISTI (Korea Institute of Science and Technology Information). This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. Data from the Steward Observatory spectropolarimetric monitoring project were used. This program is supported by Fermi Guest Investigator grants NNX08AW56G, NNX09AU10G, NNX12AO93G, and NNX15AU81G. This study makes use of 43 GHz VLBA data from the VLBA-BU Blazar Monitoring Program (VLBA-BU-BLAZAR; <http://www.bu.edu/blazars/VLBAproject.html>), funded by NASA through the Fermi Guest Investigator Program. The VLBA is an instrument of the Long Baseline Observatory. The Long Baseline Observatory is a facility of the National Science Foundation operated by Associated Universities, Inc. This research has made use of data from the MOJAVE database that is maintained by the MOJAVE team [@lister2009]. We acknowledge financial support from the Korean National Research Foundation (NRF) via Global Ph.D. Fellowship Grant 2014H1A2A1018695 (J.P.) and Basic Research Grant NRF-2015R1D1A1A01056807 (S.T., J.C.A.). S. S. Lee was supported by NRF grant NRF-2016R1C1B2006697. GYZ is supported by the Korea Research Fellowship Program through the NRF (NRF-2015H1D3A1066561). Correspondence should be addressed to S.T.

D-Term calibration and evolution \[appendixa\] {#appendixa}
==============================================

We show examples of the calibration of instrumental polarization in Figure \[dtermcal\]. The calibrated complex visibility ratio LR/LL (with L and R referring to left- and right-hand circular polarization, respectively) of the calibrator sources used for the D-term calibration of our 2017 January data is shown on the complex plane.
KVN antennas are on alt-azimuth mounts and the instrumental polarization does not (usually) change over time as the antennas change the direction of pointing. The target parallactic angles change over time, causing the polarization signal received by the alt-az antennas to vary. Since we perform the parallactic angle correction in an early stage of the data pre-processing, the polarization signal intrinsic to the target remains constant in the LR/LL plane while the instrumental polarization signal rotates with parallactic angle. Therefore, the rotation of the visibility ratio on the complex plane in Figure \[dtermcal\] is (mostly) due to instrumental polarization. See [@roberts1994] and [@aaron1997] for details of instrumental polarization calibration. The amplitudes and phases of the D-terms can be derived from the rotating pattern by assuming that the center of rotation does not vary with time. This is true only when (i) the antenna D-terms do not vary during the observation, which is the case in most observations, and (ii) the calibration sources are either unpolarized or are polarized but spatially unresolved. As we have 6 independent visibility ratios from three baselines and 6 unknown D-terms for three antennas and two polarizations, solutions for the D-terms can be obtained from fitting the pattern observed in the complex plane. The green and red lines in the left and the central panels of Figure \[dtermcal\] correspond to the expected LR/LL variation caused by the D-terms of two antennas, as obtained from the AIPS task LPCAL. The blue lines are the sum of the contributions of the two D-terms, which is in good agreement with the data. The right panels show the visibility ratio after the D-term correction; the data are clustered around fixed points which mark the polarization signals intrinsic to the target AGNs. The scatter is mostly due to thermal noise in the data; the noise becomes larger at higher observing frequencies. Even precise D-term measurements may be affected by substantial systematic errors like non-stationary centers of rotation in the complex plane. Our calibrator sources are not perfectly unpolarized (or are polarized with sub-structure), and the D-terms may not always be constant during an observation run. We can estimate the errors on the D-terms by comparing the values obtained from different instrumental polarization calibrators, as we did in Figure \[dtermcomparison\]. We use the standard deviations of the D-terms obtained using different calibrators as errors. The errors are usually less than 1–2% but sometimes up to 3%. These errors propagate into the polarization quantities used in our analysis and were taken into account as described in Section \[sect2\]. The D-terms of the VLBA antennas are known to vary only on timescales of months or longer (e.g., @gomez2002). In Figure \[dtermevolution\] we check the stability of the KVN D-terms; their amplitudes seem to be mostly stable over $\approx4$ months but sometimes show non-negligible variability. Their standard deviations are usually less than 1–2% – in agreement with the formal errors of the D-terms.

Reliability check of KVN polarimetry \[appendixb\] {#appendixb}
==================================================

Thanks to the extensive monitoring of blazars, specifically by the MOJAVE program at 15 GHz and the VLBA-BU program[^8] at 43 GHz, we could check whether our KVN maps are consistent with contemporaneous VLBA images.
We picked 3C 273, which shows complex jet structure in both total intensity and polarization and thus is not used for our analysis of blazar cores, as a reference source and compared (a) our KVN 22 GHz data observed on 2016 December 9 with the MOJAVE 15 GHz data observed one day later, (b) our KVN 43 GHz data observed on 2016 December 10 with the BU data observed on 2016 December 24, and (c) our KVN 86 GHz data observed on 2016 December 10 with the same BU data (Figure \[comparison\]). The top panels show polarization maps generated from the KVN and VLBA data next to each other. The VLBA maps are convolved with the corresponding KVN beam. The VLBA and KVN distributions of fractional jet polarization at 15 GHz and 22 GHz, respectively, are in good agreement with each other, showing higher degrees of polarization – up to $\approx70\%$ – at the northern edge of the jet located $\approx10$ mas from the core. Likewise, the 43-GHz VLBA and KVN data are consistent with each other, except that the KVN maps show more polarized emission in the outer jet, i.e., $\approx10$ mas from the core. This might be because there is a time gap of $\approx2$ weeks between the observations and/or because the KVN has only short baselines (with a maximum baseline length of less than 500 km) and thus is more sensitive to the extended emission. However, the 86 GHz KVN map shows an additional polarization component near the core region, while the 43 GHz VLBA map does not have such a component but shows extended polarized emission near the core, originating from the inner jet polarization component (at $\approx1$ mas from the core) smeared by the large convolving beam. Interestingly, the core of this source is usually unpolarized (e.g., @jorstad2005 [@attridge2005; @hada2016]), which has been attributed to strong Faraday depolarization or intrinsically very low polarization at the core. Its inner jet components at $\lesssim1$ mas from the core show $\rm |RM|$ of a few times $10^4\rm\ rad/m^2$ [@attridge2005; @jorstad2007; @hada2016]. Therefore, our result might indicate a detection of the core polarization of 3C 273 at 86 GHz, presumably because of less depolarization at the higher frequency. We will verify this possibility in a forthcoming paper with more data at 86 and 129 GHz (Park et al. in preparation). The bottom panels of Figure \[comparison\] show the EVPA and degree of polarization as a function of $\lambda^2$ at a few locations in the jet marked in the top panels. The 15-GHz VLBA and 22-GHz KVN results are, to first order, consistent with each other but show non-negligible differences. This might be due to Faraday rotation with an RM of a few hundred $\rm rad/m^2$ in the jet, as reported by many other studies (e.g., @hovatta2012), and possibly to different polarization structure in the jet at different frequencies. The VLBA and KVN data at 43 GHz are consistent with each other within errors, especially when considering the time gap of $\approx2$ weeks. Similarly, the VLBA 43 GHz and KVN 86 GHz data are in agreement with each other for the inner jet component at $\approx1$ mas from the core, taking into account the time gap and the small rotation measure. Therefore, we conclude that the polarimetry mode of the KVN is reliable.

Aaron, S., 1997, EVN Memo \#78, “Calibration of VLBI polarization data”, <http://pc.astro.brandeis.edu/pdfs/evnmemo78.pdf> Agudo, I., Gómez, J.-L., Martí, J.-M., et al. 2001, , 549, 183 Agudo, I., Thum, C., Gómez, J. L., & Wiesemeyer, H. 2014, , 566, 59 Agudo, I., Thum, C., Molina, S. N., et al.
2018a, , 474, 1427 Agudo, I., Thum, C., Ramakrishnan, V., et al. 2018b, , 473, 1850 Algaba, J. C. 2013, , 429, 3551 Algaba, J.-C., Gabuzda, D. C., & Smith, P. S. 2011, , 411, 85 Algaba, J.-C., Gabuzda, D. C., & Smith, P. S. 2012, , 420, 542 Algaba, J.-C., Nakamura, M., Asada, K., & Lee, S.-S. 2017, , 834, 65 Algaba, J.-C., Zhao, G.-Y., Lee, S.-S., et al. 2015, JKAS, 48, 237 Asada, K., Inoue, M., Uchida, Y., et al. 2002, , 54, 39 Asada, K., Inoue, M., Nakamura, M., et al. 2008, , 682, 798 Asada, K., & Nakamura, M. 2012, , 745, 28 Asada, K., Nakamura, M., Doi, A., et al. 2014, , 781, 2 Attridge, J. M., Wardle, J. F. C., & Homan, D. C. 2005, , 633, 85 Blandford, R. D., & Königl, A. 1979, , 232, 34 Blandford, R. D., & Payne, D. G. 1982, MNRAS, 199, 883 Blandford, R. D., & Znajek, R. L. 1977, MNRAS, 179, 433 Blinov, D., Pavlidou, V., Papadakis, I., et al. 2015, , 453, 1669 Broderick, A. E., & McKinney, J. C. 2010, , 725, 750 Burn, B. J. 1966, MNRAS, 133, 67 Casadio, C., Gómez, J. L., Grandi, P., et al. 2015, , 808, 162 Chatterjee, R., Marscher, A. P., Jorstad, S. G., et al. 2009, , 704, 1689 Chatterjee, R., Marscher, A. P., Jorstad, S. G., et al. 2011, , 734, 43 Cheung, C. C., Harris, D. E., & Stawarz, Ł. 2007, , 663, 65 Conway, R. G., Haves, P., Kronberg, P. P., et al. 1974, , 168, 137 Daly, R. A., & Marscher, A. P. 1988, , 334, 539 De Villiers, J.-P., Hawley, J. F., Krolik, J. H., & Hirose, S. 2005, , 620, 878 Dodson, R., Rioja, M. J., Molina, S. N., & Gómez, J. L. 2017, , 834, 177 Farnes, J. S., Gaensler, B. M., & Carretti, E. 2014, , 212, 15 Fromm, C. M., Ros, E., Perucho, M., et al. 2013, , 557, 105 Gabuzda, D. C., Knuettel, S., & Reardon, B. 2015, , 450, 2441 Gabuzda, D. C., Murray, É., & Cronin, P. 2004, , 351, 89 Gabuzda, D. C., Rastorgueva, E. A., Smith, P. S., & O’Sullivan, S. P. 2006, , 369, 1596 Ghisellini, G., Tavecchio, F., Foschini, L., & Ghirlanda, G. 2011, , 414, 2674 Ghisellini, G., Tavecchio, F., Maraschi, L., et al. 2014, , 515, 376 Gómez, J. L., Marscher, A. P., Alberdi, A., et al. 2002, VLBA Scientific Memo \#30 Gómez, J. L., Lobanov, A. P., Bruni, G., et al. 2016, , 817, 96 Gómez, J. L., Martí, J. M., Marscher, A. P., et al. 1995, , 449, 19 Gómez, J. L., Martí, J. M., Marscher, A. P., et al. 1997, , 482, 33 Hada, K., Doi, A., Kino, M., et al. 2011, , 477, 185 Hada, K., Kino, M., Doi, A., et al. 2013, , 775, 70 Hada, K., Kino, M., Doi, A., et al. 2016, , 817, 131 Hada, K., Park, J. H., M. Kino, et al. 2017, , 69, 71 Homan, D. C. 2012, , 747, 24 Homan, D. C., Kovalev, Y. Y., Lister, M. L., et al. 2006, , 642, 115 Hovatta, T., Aller, M. F., Aller, H. D., et al. 2014, , 147, 143 Hovatta, T., O’Sullivan, S., Martí-Vidal, I., et al. 2018, , in press (arXiv:1803.09982) Hovatta, T., Lindfors, E., Blinov, D., et al. 2016, , 596, 78 Hovatta, T., Lister, M. L., Aller, M. F., et al. 2012, , 144, 105 Jorstad, S., & Marscher, A. 2016, Galaxies, 4, 47 Jorstad, S. G., Marscher, A. P., Larionov, V. M., et al. 2010, , 715, 362 Jorstad, S. G., Marscher, A. P., Lister, M. L., et al. 2005, , 130, 1418 Jorstad, S. G., Marscher, A. P., Mattox, J. R., et al. 2001, , 556, 738 Jorstad, S. G., Marscher, A. P., Smith, P. S., et al. 2013, , 773, 147 Jorstad, S. G., Marscher, A. P., Stevens, J. A., et al. 2007, , 134, 799 Kang, S., Lee, S.-S., & Byun, D.-Y. 2015, JKAS, 48, 257 Kharb, P., Gabuzda, D. C., O’Dea, C. P., et al. 2009, , 694, 1485 Kim, D.-W., Trippe, S., Lee, S.-S., et al. 2017, JKAS, 50, 167 Kim, J.-Y., Trippe, S., Sohn, B. W., et al. 
2015, JKAS, 48, 285 Kim, J.-Y., Lu, R.-S., Krichbaum, T. P., et al. 2016, Galaxies, 4, 39 Komissarov, S. S., Barkov, M. V., Vlahakis, N., & Königl, A. 2007, , 380, 51 Komissarov, S. S., Vlahakis, N., Königl, A., & Barkov, M. V. 2009, , 394, 1182 Kravchenko, E. V., Kovalev, Y. Y., & Sokolovsky, K. V. 2017, , 467, 83 Lee, S.-S., Byun, D.-Y., Oh, C. S., et al. 2011, , 123, 1398 Lee, S.-S., Byun, D.-Y., Oh, C. S., et al. 2015, JKAS, 48, 229 Lee, S.-S., Oh, C. S., Roh, D.-G., et al. 2015, JKAS, 48, 125 Lee, S.-S., Petrov, L., Byun, D.-Y., et al. 2014, , 147, 77 León-Tavares, J., Valtaoja, E., Tornikoski, M., et al. 2011, , 532, 146 Leppänen, K. J., Zensus, J. A., & Diamond, P. J. 1995, , 110, 2479 Lico, R., Gómez, J. L., Asada, K., & Fuentes, A. 2017, , 469, 1612 Lister, M. L., Aller, H. D., Aller, M. F., et al. 2009, , 137, 3718 Lister, M. L., Aller, M. F., Aller, H. D., et al. 2013, , 146, 120 Lister, M. L., & Homan, D. C. 2005, , 130, 1389 Lister, M. L., & Smith, P. S. 2000, , 541, 66 Lobanov, A. P. 1998, , 330, 79 Mahmud, M., Coughlan, C. P., Murphy, E., et al. 2013, , 431, 695 Mahmud, M., Gabuzda, D. C., & Bezrukovs, V. 2009, , 400, 2 Marscher, A. P. 1996, in ASP Conf. Ser. 110, Blazar Continuum Variability, ed. H. R. Miller, J. R. Webb, & J. C. Noble (San Francisco, CA: ASP), 248 Marscher, A. P. 2009, in ASP Conf. Ser. 402, Approaching Micro-Arcsecond Resolution with VSOP-2: Astrophysics and Technologies, ed. Y. Hagiwar et al. (San Francisco, CA: ASP), 194 Marscher, A. P., Jorstad, S. G., D’Arcangelo, F. D., et al. 2008, , 452, 966 Marscher, A. P., Jorstad, S. G., Gómez, J.-L., et al. 2002, , 417, 625 Martí, J. M., Perucho, M., & Gomez, J. L. 2016, , 831, 163 Martí-Vidal, I., Marcaide, J. M., Alberdi, A., et al. 2011, , 533, 111 Martí-Vidal, I., Muller, S., Vlemmings, W., et al. 2015, Science, 348, 311 Mertens, F., Lobanov, A. P., Walker, R. C., & Hardee, P. E. 2016, , 595, 54 Mizuno, Y., Gómez, J. L., Nishikawa, K.-I., et al. 2015, , 809, 38 Oh, J., Trippe, S., Kang, S., et al. 2015, JKAS, 48, 299 O’Sullivan, S. P., Brown, S., Robishaw, T., et al. 2012, , 421, 3300 O’Sullivan, S. P. & Gabuzda, D. C. 2009a, , 393, 429 O’Sullivan, S. P. & Gabuzda, D. C. 2009b, , 400, 26 O’Sullivan, S. P., Purcell, C. R., Anderson, C. S., et al. 2017, , 469, 4034 Pacholczyk, A. G. 1970, Radio Astrophysics (San Francisco: W. H. Freeman) Park, J., & Trippe, S. 2017, , 834, 157 Pasetto, A., Carrasco-González, C., O’Sullivan, S., et al. 2018, , in press (arXiv:1801.09731) Porth, O., Fendt, C., Meliani, Z., & Vaidya, B. 2011, , 737, 42 Potter, W. J., & Cotter, G. 2013a, , 429, 1189 Potter, W. J., & Cotter, G. 2013b, , 431, 1840 Potter, W. J., & Cotter, G. 2015, , 453, 4070 Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 1992, Numerical Recipes in FORTRAN (2nd ed.; Cambridge: Cambridge Univ. Press) Pushkarev, A. B., Hovatta, T., Kovalev, Y. Y., et al. 2012, , 545, 113 Pushkarev, A. B., Kovalev, Y. Y., Lister, M. L., & Savolainen, T. 2017, , 468, 4992 Raiteri, C. M., Villata, M., Acosta-Pulido, J. A., et al. 2017, , 552, 374 Rani, B., Krichbaum, T. P., Marscher, A. P., et al. 2015, , 578, 123 Ramakrishnan, V., León-Tavares, J., Rastorgueva-Foi, E. A., et al. 2014, , 445, 1636 Rioja, M. J., Dodson, R., Jung, T., et al. 2014, , 148, 84 Rioja, M. J., Dodson, R., Malarecki, J., & Asaki, Y. 2011, , 142, 157 Roberts, D. H., Wardle, J. F. C., & Brown, L. F. 1994, , 427, 718 Savolainen, T., Wiik, K., Valtaoja, E., et al. 2002, , 394, 851 Shepherd, M. C. 
1997, in Astronomical Data Analysis Software and Systems VI, eds. G. Hunt, & H. E. Payne (San Francisco: ASP), ASP Conf. Ser., 125, 77 Smith, P.S., Montiel, E., Rightley, S., et al. 2009, arXiv:0912.3621, 1094 2009 Fermi Symposium, eConf Proceedings C091122 Sokoloff, D. D., Bykov, A. A., Shukurov, A., et al. 1998, , 299, 189 Sokolovsky, K. V., Kovalev, Y. Y., Pushkarev, A. B., & Lobanov, A. P. 2011, , 532, 38 Taylor, A. R., Stil, J. M., & Sunstrum, C. 2009, , 702, 1230 Thum, C., Agudo, I., Molina, S. N., et al. 2018, , 473, 2506 Toma, K., & Takahara, F. 2013, Progr. Theor. Exper. Phys., 2013, 083E02 Tribble, P. C. 1991, , 250, 726 Urry, C. M., & Padovani, P. 1995, , 107, 803 Vlahakis, N., & Königl, A. 2004, , 605, 656 Walker, R. C., Hardee, P. E., Davies, F. B., et al. 2018, , 855, 128 Yuan, M. S., Tran, H., Wills, B., & Wills, D. 2001, in ASP Conf. Ser. 227, Blazar Demographics and Physics, ed. P. Padovani & C. M. Urry (San Francisco: ASP), 150 Zamaninasab, M., Savolainen, T., Clausen-Brown, E., et al. 2013, , 436, 3341 Zavala, R. T., & Taylor, G. B. 2002, , 566, 9 Zavala, R. T., & Taylor, G. B. 2003, , 589, 126 Zavala, R. T., & Taylor, G. B. 2004, , 612, 749 Zavala, R. T., & Taylor, G. B. 2005, , 626, 73 Zhao, G.-Y., Algaba, J. C., Lee, S. S., et al. 2018, , 155, 26 60 [^1]: <https://radio.kasi.re.kr/kvn/ksp.php#ksp003> [^2]: This power-law index is obtained by including the RM at 15/22 GHz; see Figure \[result\] and Section \[oj287\] for details. [^3]: The term ‘inner jet’ denotes any polarized component in the jet that can be resolved from the core by instruments with higher angular resolution than the KVN but cannot be (well) resolved by the KVN itself, like the extended linear polarization structure of 3C 279 at $\approx1$ mas from the core seen in the BU map of 14 January 2017 (see <https://www.bu.edu/blazars/VLBA_GLAST/3c279/3C279jan17_map.jpg>) [^4]: <http://www.physics.purdue.edu/astro/MOJAVE/index.html> [^5]: <https://dept.astro.lsa.umich.edu/obs/radiotel/gif/0851_202.gif> [^6]: <http://sma1.sma.hawaii.edu/callist/callist.html?plot=1642%2B398> This might be related to a substantial change in $a$ within $\approx1$ month for this source, though the sign of RM does not change during the period. [^7]: <http://james.as.arizona.edu/~psmith/Fermi> [^8]: https://www.bu.edu/blazars/VLBAproject.html
{ "pile_set_name": "ArXiv" }
--- abstract: 'Charmonia spectral functions at finite temperature are studied using QCD sum rules in combination with the maximum entropy method. This approach enables us to directly obtain the spectral function from the sum rules, without having to introduce any specific assumption about its functional form. As a result, it is found that while $J/\psi$ and $\eta_c$ manifest themselves as significant peaks in the spectral function below the deconfinement temperature $T_c$, they quickly dissolve into the continuum and almost completely disappear at temperatures between 1.0 $T_c$ and 1.1 $T_c$.' author: - Philipp Gubler - Kenji Morita - Makoto Oka title: Charmonium spectra at finite temperature from QCD sum rules with the maximum entropy method --- Since QCD was established to be the theory of strong interactions, charmonium has often been used as a suitable probe of its dynamics, owing to the fact that in this system both perturbative and nonperturbative aspects of QCD play equally important roles [@Novikov]. The behavior of charmonia in a hot or dense medium has also attracted much interest, as it was suggested some time ago, that in the color-deconfined medium with a temperature above $T_c$ charmonia will dissolve due to the color Debye screening, and thus serve as a signal for the formation of quark-gluon plasma [@Matsui]. Testing these early suggestions from first principles of QCD has become feasible only recently, as new developments in lattice QCD have made it possible to access the charmonium spectral functions with the help of the maximum entropy method (MEM) [@Asakawa; @Datta; @Umeda; @Jakovac]. These studies found that the lowest charmonium states ($J/\psi$ and $\eta_c$) survive up to temperatures as high as $\sim$ 1.5 $T_c$ or higher. Besides lattice QCD, the method of QCD sum rules [@Shifman] provides another tool for investigating the properties of hadrons at finite temperature [@Bochkarev; @Hatsuda]. Using this approach various charmonium channels were studied recently [@Morita1; @Morita3], and evidence for a considerable change of the spectral functions just above $T_c$ was found. To specify the nature of this change is the major goal of this study. For this task we employ MEM, which is applicable to QCD sum rules [@Gubler] and has the advantage that one does not have to introduce any strong assumption about the functional form of the spectral function, such as the “pole + continuum" ansatz, which is often used in QCD sum rule studies. Let us first recapitulate what sort of information QCD sum rules can provide on the charmonium spectral function at finite temperature [@Bochkarev; @Hatsuda]. One considers the time-ordered correlator at finite temperature $$\Pi^{\mathrm{J}}(q) = i \displaystyle \int d^4x e^{iqx} \langle T [j^{\mathrm{J}}(x) j^{\mathrm{J}}(0) ] \rangle_T, \label{eq:correlator}$$ where $j^{\mathrm{J}}(x)$ stands for $\bar{c} \gamma_{\mu} c(x)$ and $\bar{c} \gamma_{5} c(x)$ in the vector ($\mathrm{V}$) and pseudoscalar ($\mathrm{PS}$) channel, respectively. The expectation value $\langle \mathcal{O} \rangle_T$ is defined as $\langle \mathcal{O} \rangle_T \equiv \mathrm{Tr}( e^{-H/T} \mathcal{O} ) / \mathrm{Tr}( e^{-H/T} )$. Throughout this work, we will set the spatial momentum of the charmonium system relative to the thermal medium to be $\textbf{0}$; thus, $q^{\mu} = (\omega, \textbf{0})$. In this circumstance, there is only one independent component in the correlator of the vector channel. 
In what follows, we will use the dimensionless functions $\tilde{\Pi}^{\mathrm{V}}(q^2) \equiv \Pi^{\mu,\mathrm{V}}_{\mu}(q)/(-3q^2)$ and $\tilde{\Pi}^{\mathrm{PS}}(q^2) \equiv \Pi^{\mathrm{PS}}(q)/q^2$ for the analysis. Going to the deep Euclidean region $q^2 \equiv -Q^2 \ll 0$, one can calculate the correlation functions using the operator product expansion (OPE), giving an expansion in local operators $O_n$ with increasing mass dimension $n$: $\tilde{\Pi}^{\mathrm{J}}(q^2) = \sum_n C^{\mathrm{J}}_n(q^2) \langle O_n \rangle_T$. As was first discussed in [@Hatsuda], as long as the temperature $T$ lies below the separation scale of the OPE, which is of the order of $\sim 1$ GeV, all the temperature effects can be included into the expectation values of the local operators $\langle O_n \rangle_T$, while the Wilson coefficients $C^{\mathrm{J}}_n(q^2)$ are independent of $T$. Furthermore, to improve the convergence of the OPE and suppressing the influence of high energy states onto the sum rule, we apply the Borel transform to the correlator, leading to the final result of the OPE for $\nu \equiv 4m_c^2/M^2$, $M$ being the Borel mass: $$\begin{split} \mathcal{M}^{\mathrm{J}}(\nu) = & e^{-\nu}A^{\mathrm{J}}(\nu)[1 + \alpha_s(\nu) a^{\mathrm{J}}(\nu) + b^{\mathrm{J}}(\nu) \phi_b(T) \\ &+ c^{\mathrm{J}}(\nu) \phi_c(T) + d^{\mathrm{J}}(\nu) \phi_d(T)]. \end{split} \label{eq:OPE}$$ The first two terms in Eq.(\[eq:OPE\]) are the leading order perturbative term and its first order $\alpha_s$ correction. The third and fourth terms contain the scalar and twist-2 gluon condensates of mass dimension 4: $\phi_b(T) = \frac{4\pi^2}{9(4m_c^2)^2} G_0$ and $\phi_c(T) = \frac{4\pi^2}{3(4m_c^2)^2} G_2$, where $G_0 = \langle \frac{\alpha_s}{\pi} G^a_{\mu\nu} G^{a\mu\nu}\rangle_T$ and $G_2$ is defined as $\langle \frac{\alpha_s}{\pi} G^{a\mu\sigma} G^{a\nu}_{\sigma}\rangle_T = (u^{\mu}u^{\nu} - \frac{1}{4}g^{\mu\nu})G_2$, $u^{\mu}$ being the four velocity of the medium. For the detailed expressions of the Wilson coefficients of these terms, see [@Morita3]. To evaluate the possible influence of higher order contributions, we include one more term, which is proportional to the scalar gluon condensate of dimension 6, $\phi_d(T) = \frac{1}{(4m_c^2)^3}\langle g^3 f^{abc} G^{a\nu}_{\mu} G^{b\lambda}_{\nu} G^{c\mu}_{\lambda} \rangle_T$. The Wilson coefficient of this term can be found in [@Marrow]. The correlator can also be expressed by a dispersion relation, in terms of the spectral function $\rho^{\mathrm{J}}(\omega)$ of the channel specified by the operator $j^{\mathrm{J}}(x)$. After the Borel transform one obtains $$\mathcal{M}^{\mathrm{J}}(\nu) = \displaystyle \int_0^{\infty}dx^2 e^{-x^2 \nu} \rho^{\mathrm{J}}(2m_c x). \label{eq:dispersion}$$ Equating Eqs.(\[eq:OPE\]) and (\[eq:dispersion\]) then gives the final form of the sum rules. In the vector channel, an additional constant term contributes to Eq.(\[eq:OPE\]), which originates from a pole at $\omega = 0$ in $\rho^{\mathrm{V}}(\omega)$ [@Bochkarev]. As this so-called scattering term considerably complicates the analysis, we eliminate it by taking the derivative of Eqs.(\[eq:OPE\]) and (\[eq:dispersion\]) with respect to $\nu$ and analyze only the resulting derivative sum rule in this channel. For a discussion on the validity of this procedure in the heavy quark sum rules, see [@Morita4]. 
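The Borel-transformed dispersion relation of Eq. (\[eq:dispersion\]) is straightforward to evaluate numerically once a trial spectral function is specified. The following minimal Python sketch illustrates this forward mapping with a toy spectral function (a narrow Gaussian resonance plus a smoothed continuum); the function names, peak parameters, and threshold value are illustrative placeholders and are not the inputs used in this work.

```python
import numpy as np

def borel_dispersion(rho, nu, m_c=1.277, x_max=10.0, n=4000):
    """M(nu) = int_0^inf dx^2 exp(-x^2 nu) rho(2 m_c x), with dx^2 = 2 x dx."""
    x = np.linspace(1.0e-6, x_max, n)
    integrand = 2.0 * x * np.exp(-x**2 * nu) * rho(2.0 * m_c * x)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))  # trapezoid rule

def rho_toy(omega, m_res=3.1, width=0.03, s_th=4.0):
    """Toy spectral function: a narrow resonance near 3.1 GeV plus a smooth continuum."""
    peak = 0.05 * np.exp(-0.5 * ((omega - m_res) / width) ** 2) / (np.sqrt(2.0 * np.pi) * width)
    cont = 0.02 / (1.0 + np.exp(-(omega - s_th) / 0.1))
    return peak + cont

for nu in (1.0, 2.0, 4.0, 8.0):
    print(nu, borel_dispersion(rho_toy, nu))
```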
The usual strategy of analyzing QCD sum rules is to make some reasonable assumptions on the functional form of the spectral function, and then extract information on the lowest lying peak from Eqs.(\[eq:OPE\]) and (\[eq:dispersion\]). This method, however, has several shortcomings. First of all, the widely used “pole + continuum" ansatz, which certainly works well at $T=0$, may not be appropriate at temperatures above $T_c$, where the lowest lying state is expected to be modified and eventually melt into the $c$-$\bar{c}$ continuum, which could become the dominant contribution. Furthermore, it is not always possible to unambiguously fit a specific ansatz to the OPE results, because of the occurrence of equally valid solutions. Such a situation arose in [@Morita1; @Morita3], where it was not possible to determine a unique solution for the used parametrization of the spectral function. To handle these problems, we propose to use MEM, which allows us to extract the spectral function from Eqs.(\[eq:OPE\]) and (\[eq:dispersion\]) without prejudice on its functional form. Moreover, it can be proven that this method provides a unique solution for the spectral function [@Asakawa2]. Let us now briefly summarize the basic ideas of MEM, which helps us to carry out the task of inverting the integral of Eq.(\[eq:dispersion\]). This is, however, an ill-posed problem as we have only access to $\mathcal{M}^{\mathrm{J}}(\nu)$ in the region of $\nu$ where the OPE shows sufficient convergence and, furthermore, have only information on $\mathcal{M}^{\mathrm{J}}(\nu)$ with limited precision due to the uncertainties of the values of $m_c$, $\alpha_s$ and the various condensates. Nonetheless, using Bayesian probability theory, MEM makes it possible to at least obtain the most probable form of the spectral function $\rho(\omega)$. To do this, one defines a probability $P[\rho|\mathcal{M}\mathcal{I}]$ for $\rho$ to have a specific form given $\mathcal{M}$ and additional information $\mathcal{I}$ on $\rho$ such as positivity and asymptotic values. Using Bayes’ theorem, this probability can be rewritten as $P[\rho|\mathcal{M}\mathcal{I}] \propto P[\mathcal{M}|\rho\mathcal{I}] P[\rho|\mathcal{I}] = e^{- L + \alpha S}$, where $e^{-L}$ is the likelihood function, used in standard $\chi^2$ methods. $e^{\alpha S}$ stands for the prior probability, given by the parameter $\alpha$ and the Shannon-Jaynes entropy, $$S = \displaystyle \int_0^{\infty} d\omega \Bigr[ \rho(\omega) - m(\omega) - \rho(\omega) \log \Bigl( \frac{\rho(\omega)}{m(\omega)} \Bigr) \Bigl]. \label{eq:SJe}$$ $m(\omega)$ is called the default model and can be used to incorporate prior knowledge on the asymptotic values of $\rho(\omega)$ into the analysis. Determining now the most probable $\rho(\omega)$ corresponds to the numerical problem of finding the maximum of the functional $- L + \alpha S$, for which we use the Bryan algorithm [@Bryan]. As a last step, the spectral function $\rho_{\alpha}(\omega)$ maximizing $- L + \alpha S$ for a fixed value of $\alpha$ is averaged using $\displaystyle \int d \alpha \rho_{\alpha}(\omega) P[\alpha |\mathcal{M}\mathcal{I}]$, which gives the final output of the MEM analysis. For more details see [@Asakawa2], while specific issues concerning the application of this method to QCD sum rules are discussed in [@Gubler]. Next, we describe how the temperature dependencies of the gluonic condensates are determined. 
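Before turning to the condensates, the MEM step described above can be made concrete with a schematic numerical sketch. The example below discretizes Eq. (\[eq:dispersion\]), builds mock OPE "data" with errors, and maximizes $Q = -L + \alpha S$ for a fixed $\alpha$ with an off-the-shelf optimizer; this only illustrates the variational problem and is neither the Bryan algorithm (which works in a reduced singular space) nor the $\alpha$-averaging used for the final result, and all numerical values are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

m_c = 1.277
omega = np.linspace(0.5, 8.0, 100)                  # GeV
d_omega = omega[1] - omega[0]
x = omega / (2.0 * m_c)
nu = np.linspace(0.8, 7.3, 30)
# Discretized kernel of Eq. (eq:dispersion): M(nu_k) = sum_i K[k, i] rho(omega_i) d_omega
K = 2.0 * x[None, :] * np.exp(-x[None, :]**2 * nu[:, None]) / (2.0 * m_c)

m_def = np.full_like(omega, 0.02)                   # constant default model (illustrative)
rho_true = m_def + 0.5 * np.exp(-0.5 * ((omega - 3.1) / 0.05)**2)
M_data = K.dot(rho_true) * d_omega                  # mock "OPE" data
sigma = 0.01 * np.abs(M_data) + 1.0e-6              # mock uncertainties

def neg_Q(a, alpha):
    rho = m_def * np.exp(a)                         # positivity built in
    L = 0.5 * np.sum(((K.dot(rho) * d_omega - M_data) / sigma)**2)
    S = np.sum(rho - m_def - rho * np.log(rho / m_def)) * d_omega
    return L - alpha * S

res = minimize(neg_Q, np.zeros_like(omega), args=(1.0,), method="L-BFGS-B")
rho_map = m_def * np.exp(res.x)                     # most probable rho for this alpha
```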
For the scalar and twist-2 gluon condensates with mass dimension 4, we follow the approach proposed in [@Morita1; @Morita3], where, in the quenched approximation, the energy momentum tensor, expressed using gluonic operators, was matched with the corresponding quantity written down in form of the energy density $\epsilon$ and the pressure $p$, leading to $G_0 = G^{\mathrm{vac}}_0 - \frac{8}{11}\bigl[\epsilon(T) - 3p(T)\bigr]$ and $G_2 = -\frac{\alpha_s(T)}{\pi}\bigl[\epsilon(T) + p(T)\bigr]$ for the scalar and twist-2 gluon condensates. The functions $\epsilon(T)$, $p(T)$ and $\alpha_s(T)$ were then extracted from quenched lattice QCD data [@Boyd; @Kaczmarek]. We will in this study use the same numerical values for the $T$ dependent part of $G_0$ and $G_2$ as in [@Morita3]. As is shown there, both $G_0$ and $G_2$ exhibit a sudden decrease in the vicinity of $T_c$. It was suggested in previous studies that the OPE could break down at temperatures above $T_c$ as higher dimensional operators may become non-negligible [@Morita3]. To investigate this possibility, we include the scalar gluonic condensate with dimension 6, $\langle g^3 f^{abc} G^{a\nu}_{\mu} G^{b\lambda}_{\nu} G^{c\mu}_{\lambda} \rangle$, about which much less is known. To our knowledge, at $T=0$, there exists only an estimate based on the dilute instanton gas model, giving $ \langle g^3 f^{abc} G^{a\nu}_{\mu} G^{b\lambda}_{\nu} G^{c\mu}_{\lambda} \rangle = \frac{48 \pi^2}{5\rho_c^2} \langle \frac{\alpha_s}{\pi} G^2 \rangle, $ where $\rho_c$ is a representative value for the instanton radius, for which we use the established value of 0.33 fm [@Schafer]. Assuming that the relation above also holds at finite temperature, and taking into account the reduction of $\rho_c$ above $T_c$ [@Chu], we, however, found that the dimension 6 term does not influence the behavior of the spectral function much in the temperature region investigated in this Letter. Therefore, we conclude that even though the relation obtained from the dilute instanton gas model can only be considered to be a crude estimate, as long as it gives the correct order of magnitude, the contribution of the dimension 6 condensate is small and does not alter the results obtained in this study. ![image](Jpsi.etac.0.0.7.eps){width="7.0cm"} ![image](Jpsi.etac.1.0.7.eps){width="7.0cm"} Let us now turn to the MEM analysis of Eqs.(\[eq:OPE\]) and (\[eq:dispersion\]). First, we investigate the spectral function at $T = 0$ in the vector and pseudoscalar channel. To determine the upper boundary of the region of $\nu$ to be analyzed, we employ the criterion that the dimension 6 term should be smaller than 20% of the whole OPE expression of Eq.(\[eq:OPE\]), which gives $\nu^{\mathrm{V}}_{\mathrm{max}} = 8.03$ in the vector and $\nu^{\mathrm{PS}}_{\mathrm{max}} = 7.29$ in the pseudoscalar channel. We keep these values fixed when going to finite temperature. In fact, in the temperature region around $T_c$, the relative contribution of the dimension 6 term at $\nu^{\mathrm{V},\mathrm{PS}}_{\mathrm{max}}$ is even smaller, namely, around 10% or less. The lower boundary of $\nu$ is chosen to be $\nu^{\mathrm{V},\mathrm{PS}}_{\mathrm{min}} = 0.78$, corresponding to a Borel mass of $M = 3.0$ GeV. We have checked that the obtained spectral functions do not depend much on this choice. 
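The mapping from the quenched thermodynamic quantities to the dimension-4 condensates quoted above is a simple algebraic relation; a minimal sketch is given below, with placeholder numbers standing in for the lattice values of $\epsilon(T)$, $p(T)$ and $\alpha_s(T)$ actually used.

```python
import numpy as np

def condensates_dim4(eps, p, alpha_s, G0_vac=0.012):
    """G0(T) = G0_vac - (8/11)[eps - 3p],  G2(T) = -(alpha_s/pi)[eps + p].
    eps, p, G0_vac in GeV^4; alpha_s dimensionless."""
    G0 = G0_vac - (8.0 / 11.0) * (eps - 3.0 * p)
    G2 = -(alpha_s / np.pi) * (eps + p)
    return G0, G2

# placeholder thermodynamic inputs near T_c (not the lattice values used in the text)
print(condensates_dim4(eps=0.02, p=0.004, alpha_s=0.3))
```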
For the value of the charm quark mass $m_c$, we use a recent estimate giving $\overline{m}_c(m_c) = 1.277 \pm 0.026$ GeV [@Dehnadi], for $\alpha_s$ we employ the newest world average $\alpha_s(M_Z) = 0.1184 \pm 0.0007$ [@Bethke], while for the vacuum gluon condensate $G^{\mathrm{vac}}_0$ the standard value $G^{\mathrm{vac}}_0 = 0.012 \pm 0.0036$ $\mathrm{GeV}^4$ [@Shifman; @Colangelo] is applied. For the default model $m(\omega)$, we use a constant matched to the perturbative value of the spectral function at high energy, as was done in similar studies using lattice QCD [@Asakawa]. The resulting spectral functions are given on the left side of Fig. \[fig:zerofinite\]. We observe in both channels a clear ground state peak, corresponding to $\eta_c$ and $J/\psi$. The spectral functions also exhibit a second peak, which is, however, not statistically significant. These second peaks most likely reflect the existence of several excited states, which the MEM analysis is not able to resolve, quite similar to the situation encountered in lattice studies. Furthermore, it is seen that the spectral function of the vector channel approaches the default model in the region close to $\omega = 0$, which, however, should not be confused with a contribution of the scattering term. This behavior is an artifact caused by our usage of the derivative sum rule in this channel and should thus not be considered to be a physical effect. Numerically, the peak representing $\eta_c$ lies at $3.02$ GeV, while the one standing for $J/\psi$ is found at $3.06$ GeV. Thus, we see that the ground state in both channels reproduces the experimental value with a precision of the order of $50$ MeV. In the vector channel, the residue can be related to the electronic width of the corresponding resonance. We can obtain this residue from Fig. \[fig:zerofinite\] simply by integrating the spectral function in the region of the peak, which gives 0.162 $\mathrm{GeV}^2$, which is in good agreement with the experimental value of 0.173 $\mathrm{GeV}^2$. On the other hand, we observe that the hyperfine splitting between $\eta_c$ and $J/\psi$ is underestimated. All these findings are in qualitative agreement with the results obtained in the conventional analysis of the charmonium sum rules [@Marrow]. Next, we increase the temperature according to Eq.(\[eq:OPE\]). The resulting spectral functions are shown on the right side of Fig. \[fig:zerofinite\] at temperatures between $0.9$ $T_c$ and $1.2$ $T_c$. It is seen in the figure that the behavior of the spectral functions changes abruptly in the vicinity of $T_c$. First, both ground state peaks experience a shift to lower energies of the order of 50 MeV, before dissolving quickly into the continuum above the critical temperature. Investigating the spectral functions in more detail, one observes that $\eta_c$ disappears already at $T=1.0$ $T_c$, while $J/\psi$ survives a bit longer, but also appears to be melted to a large degree at $T=1.1$ $T_c$. This sudden qualitative change of the spectral function mainly originates from the changes of the third and fourth terms in Eq.(\[eq:OPE\]), which can be traced back to the rapid adjustment of the thermodynamic quantities $\epsilon(T)$ and $p(T)$ around $T_c$. It is reassuring to note that our results are consistent with the findings of [@Morita1; @Morita3] in the sense that both observe a negative energy shift of the peaks around $T_c$. 
In these earlier works, it was, however, not possible to discuss the possible melting of the peaks because a relativistic Breit-Wigner form for the spectral function was assumed at all investigated temperatures. For obtaining firm conclusions, one has to test the reliability of the MEM procedure at finite temperature, where systematic effects decrease the reproducibility and resolution of the spectral function obtained from MEM. In lattice studies, this reduced reliability is primarily caused by the reduction of the data points in the imaginary time correlator, due to periodicity and the reduction of the maximal time extent. In the case of QCD sum rules, this problem does not exist, as Eq.(\[eq:OPE\]) is given as a continuous function at any temperature and therefore the same number of data points can be used. Nevertheless, the reliability of the MEM technique is still reduced at finite temperature due to the uncertainties of the thermodynamic functions $\epsilon(T)$ and $p(T)$, whose contribution grows with temperature and therefore increases the error of Eq.(\[eq:OPE\]). In order to confirm that the change of the spectral function in Fig. \[fig:zerofinite\] is not an artifact, we reanalyze Eq.(\[eq:OPE\]) at $T=0$, but use the errors of $T\neq0$ in the analysis. The results are then compared to the ones given in Fig. \[fig:zerofinite\], to investigate the net temperature effect on the spectral function. We find from this analysis that while the height of the peaks of the spectral functions at $T=0$ is indeed reduced because of the increased error, this effect is much smaller than the actual reduction of the peaks around $T_c$, seen on the right side of Fig. \[fig:zerofinite\]. We therefore conclude that the disappearance of the peaks observed in Fig. \[fig:zerofinite\] is a physical effect and is not induced by an artifact of the MEM analysis. In summary, we have extracted the spectral functions of the pseudoscalar and vector channel at both zero and finite temperature using a combined analysis of QCD sum rules and MEM. At $T=0$, the MEM technique is able to clearly resolve the lowest energy peaks, corresponding to the $\eta_c$ and $J/\psi$ resonances. The positions of both peaks agree with the experimental values with a precision of about 50 MeV. At finite temperature, we find that $\eta_c$ and $J/\psi$ melt quickly after the temperature is raised above the deconfinement temperature $T_c$, caused by the sudden change of the dimension-4, scalar and twist-2 gluon condensates in this temperature region. We have checked that this effect is not an artifact of the systematics of the MEM analysis. These results quantitatively disagree with the earlier findings of lattice studies which suggest that both $\eta_c$ and $J/\psi$ can survive at temperatures of up to 1.5 $T_c$ or higher. It, however, has to be mentioned that our results are in fact consistent with the latest lattice results [@Ding], finding the peaks of $\eta_c$ and $J/\psi$ to be largely distorted between $0.73$ $T_c$ and $1.46$ $T_c$. It remains to be seen whether or not the two methods will converge to compatible conclusions when more accurate analyses will become available in the future. This work was supported by KAKENHI under Contracts No. 22105503, No. 19540275 and by YIPQS at the Yukawa Institute for Theoretical Physics. P.G. gratefully acknowledges the support by the Japan Society for the Promotion of Science for Young Scientists (Contract No. 21.8079). K.M. thanks FIAS for support. [99]{} V. A. Novikov *et al.*, Phys. Rept. 
**41**, 1 (1978). T. Matsui and H. Satz, Phys. Lett. B **178**, 416 (1986). M. Asakawa and T. Hatsuda, Phys. Rev. Lett. **92**, 012001 (2004). S. Datta *et al.*, Phys. Rev. D **69**, 094507 (2004). T. Umeda , K. Nomura and H. Matsufuru, Eur. Phys. J. C **39**, 9 (2004). A. Jacov$\acute{a}$c *et al.*, Phys. Rev. D **75**, 014506 (2007). M.A. Shifman, A.I. Vainshtein and V.I. Zakharov, Nucl. Phys. **B147**, 385 (1979); **B147**, 448 (1979). A.I. Bochkarev and M.E. Shaposhnikov, Nucl. Phys. **B268**, 220 (1986). T. Hatsuda, Y. Koike and S.H. Lee, Nucl. Phys. **B394**, 221 (1993). K. Morita and S.H. Lee, Phys. Rev. Lett. **100**, 022301 (2008); K. Morita and S.H. Lee, Phys. Rev. C **77**, 064904 (2008); Y. Song, S.H. Lee and K. Morita, Phys. Rev. C **79**, 014907 (2009). K. Morita and S.H. Lee, Phys. Rev. D **82**, 054008 (2010). P. Gubler and M. Oka, Prog. Theor. Phys. **124**, 995 (2010), arXiv:1005.2459 \[hep-ph\]. J. Marrow, J. Parker and G. Shaw, Z. Phys. C **37**, 103 (1987). K. Morita and S.H. Lee, arXiv:1012.3110 \[hep-ph\]. M. Asakawa, T. Hatsuda, and Y. Nakahara, Prog. Part. Nucl. Phys. **46**, 459 (2001). R.K. Bryan, Eur. Biophys. J. **18**, 165 (1990). G. Boyd *et al.*, Nucl. Phys. **B469**, 419 (1996). O. Kaczmarek *et al.*, Phys. Rev. D **70**, 074505 (2004). T. Schafer and E.V. Shuryak, Rev. Mod. Phys. **70**, 323 (1998). M.-C. Chu and S. Schramm, Phys. Rev. D **51**, 4580 (1995). B. Dehnadi *et al.*, arXiv:1102.2264 \[hep-ph\]. S. Bethke, Eur. Phys. J. C **64**, 689 (2009). P. Colangelo and A. Khodjamirian, *“At the Frontier of Particle Physics/Handbook of QCD"* (World Scientific, Singapore, 2001), Volume 3, 1495. H.-T. Ding *et al.*, PoS **LAT2010**, 180 (2010).
{ "pile_set_name": "ArXiv" }
--- abstract: 'We investigate the impact of radiation pressure on the spatial dust distribution inside H II regions using one-dimensional radiation hydrodynamic simulations, which include absorption and re-emission of photons by dust. In order to investigate grain size effects as well, we introduce two additional fluid components describing large and small dust grains in the simulations. The relative velocity between dust and gas strongly depends on the drag force. We include the collisional drag force and the coulomb drag force. We find that, in a compact H II region, a dust cavity is formed by radiation pressure. The resulting dust cavity sizes ($\sim 0.2$ pc) agree with observational estimates reasonably well. Since dust inside an H II region is strongly charged, the relative velocity between dust and gas is mainly determined by the coulomb drag force. The strength of the coulomb drag force is about two orders of magnitude larger than that of the collisional drag force. In addition, in a cloud of mass $10^5$ $M_{\sun}$, we find that the radiation pressure changes the grain size distribution inside H II regions. Since large (0.1 $\mu$m) dust grains are accelerated more efficiently than small (0.01 $\mu$m) grains, the large-to-small grain mass ratio becomes smaller by an order of magnitude compared with the initial one. The resulting dust size distributions depend on the luminosity of the radiation source. The segregation of large and small grains becomes weaker when we assume a stronger radiation source, since dust grain charges become larger under stronger radiation and hence the coulomb drag force becomes stronger.' author: - | Shohei Ishiki,$^{1}$[^1] Takashi Okamoto,$^{1}$ and Akio K. Inoue$^{2}$\ $^{1}$Department of Cosmoscience, Hokkaido University, N10 W8, Kitaku, Sapporo, 060-0810, Japan\ $^{2}$Department of Environmental Science and Technology, Faculty of Design Technology,\ Osaka Sangyo University, 3-1-1 Nakagaito, Daito, Osaka 574-8530, Japan\ bibliography: - 'ishiki\_2\_clean.bib' date: 'Accepted XXX. Received YYY; in original form ZZZ' title: 'The effect of radiation pressure on spatial distribution of dust inside H$_\mathrm{II}$ regions' --- \[firstpage\] radiative transfer – methods: numerical – ISM: clouds – H II regions Introduction ============ Radiation from young massive stars plays a crucial role in star-forming regions, and its effect on the spatial dust distribution inside H II regions is also non-negligible. [@ODell1965] first observed dust inside an H II region, and many other observations found dust in H II regions [@ODell1966; @Kawajiri1968; @Harper1971]. [@ODell1965] observationally estimated the distribution of dust inside H II regions, concluding that the gas-to-dust mass ratio decreases as a function of distance from the centre of the nebulae. [@Nakano1983] and [@Chini1987] observationally suggested the existence of dust cavity regions. There have been some theoretical attempts to reveal the dust distribution inside H II regions [@Mathews1967; @Gail1979a; @Gail1979b]. [@Gail1979b] suggested that a dust cavity can be created by radiation pressure. Radiation pressure may also produce spatial variations in the grain size distribution inside H II regions, as suggested by recent observational data of IR bubbles. From the Galactic Legacy Infrared Mid-Plane Survey Extraordinaire [GLIMPSE; @Benjamin2003], [@Churchwell2006] found that about 25% of IR bubbles are associated with known H II regions and claimed that the IR bubbles are primarily formed around hot young stars. [@Deharveng2010] then pointed out that 86% of IR bubbles are associated with ionized gas.
Since [@Churchwell2006] missed the large (> 10 arcmin) and small (< 2 arcmin) bubbles, [@Simpson2012] presented a new catalogue of 5106 IR bubbles. [@Paladini2012] found that the peak of the 250 $\mu$m continuum emission appears farther from the radiation source than that of the 8 $\mu$m continuum emission. Since they assumed that the 250 $\mu$m continuum emission traces the big grains (BGs) and the 8 $\mu$m continuum emission traces the polycyclic aromatic hydrocarbons (PAHs), they argued that the dust size distribution depends on the distance from a radiation source. [@Inoue2002] argued for the presence of a central dust-depleted region (a dust cavity) in compact/ultra-compact H II regions in the Galaxy by comparing the observed infrared-to-radio flux ratios with a simple spherical radiation transfer model. The dust cavity radius is estimated to be 30% of the Strömgren radius on average, which is too large to be explained by dust sublimation. The formation mechanism of the cavity is still an open question, while radiation pressure and/or the stellar wind from the exciting stars have been suggested as responsible mechanisms. We will examine whether radiation pressure can produce the cavity in this paper. By considering the effect of radiation pressure on dust and assuming steady H II regions, [@Draine2011] theoretically explained the dust cavity size that [@Inoue2002] estimated from observational data. [@Akimkin2015; @Akimkin2017] estimated the dust size distribution by solving the motion of dust and gas separately, and they concluded that radiation pressure preferentially removes large dust from H II regions. Their simulations have, however, assumed a single OB star as a radiation source. As mentioned by [@Akimkin2015], the grain electric potential is the main factor that affects the dust size distribution. If we assume a stronger radiation source, such as a star cluster, dust would be more strongly charged and their conclusions might change. In this paper, we investigate the effect of radiation pressure on the spatial dust distribution inside compact H II regions and compare it with the observational estimates [@Inoue2002]. In addition, we perform multi-dust-size simulations and study the effect of the luminosity of the radiation source on the dust size distribution inside H II regions. The structure of this paper is as follows: In Section \[sec:Methods\], we describe our simulations. In Section \[sec:setup\], we describe our simulation setup. In Section \[sec:Results\], we present simulation results. In Section \[sec:Discussion\], we discuss the results and present our conclusions. Methods {#sec:Methods} ======= We place a radiation source at the centre of a spherically symmetric gas distribution. The species we include in our simulations are H I, H II, He I, He II, He III, electrons, and dust. We assume the dust-to-gas mass ratio to be $6.7 \times 10^{-3}$, corresponding to half of the abundance of elements heavier than He (so-called 'metals') in the Sun [@asplund09]. We neglect gas-phase metal elements in this paper. We solve the radiation hydrodynamic equations at each timestep as follows:

1. Hydrodynamic equations
2. Radiative transfer and other related processes:
    1. Static radiative transfer equations
    2. Chemical reactions
    3. Radiative heating and cooling
    4. Grain electric potential

The methods we use for radiation transfer, chemical reactions, radiative heating, cooling and time stepping are the same as in @Ishiki2017 [hereafter paper I]. Dust model ---------- We include absorption and thermal emission of photons by dust grains in our simulations.
To convert the dust mass density to the grain number density, we assume graphite grains whose material density is 2.26 g cm$^{-3}$ [@Draine1979]. We employ the cross-sections of dust in @1984ApJ...285...89D and @1993ApJ...402..441L[^2]. The dust sizes we assume are 0.1 or 0.01 $\mu$m. The dust temperature is determined by radiative equilibrium, and thus the dust temperature is independent of the gas temperature. We assume that the dust sublimation temperature is $1500$ K; however, dust is never heated to this temperature in our simulations. We do not include photon scattering by dust grains for simplicity. Grain electric potential ------------------------ In our simulations, we solve hydrodynamics including the coulomb drag force, which depends on the grain electric potential. In order to determine the grain electric potential, we consider the following processes: primary photoelectric emission, Auger electron emission, secondary electron emission, and electron and ion collisions [@Weingartner2001; @Weingartner2006]. The effect of Auger electron emission and secondary electron emission is, however, almost negligible in our simulations, because high-energy photons ($> 10^2$ eV) responsible for the two processes are negligible in the radiation sources considered in this paper. Since the time scale of the dust charging processes is so small ($\lesssim$1 yr), we integrate the equation for the grain electric potential implicitly. Dust drag force --------------- In our simulations, we calculate the drag force $F_\mathrm{drag}$ on a dust grain of charge $Z_\mathrm{d}$ and radius $a_\mathrm{d}$ [@Draine1979] as follows: $$F_\mathrm{drag} = 2 \pi a_\mathrm{d}^2 k T_\mathrm{g} \left[ \sum_\mathrm{i} n_\mathrm{i} \left( G_0 (s_\mathrm{i}) + z_\mathrm{i}^2 \phi^2 \mathrm{ln} (\Lambda / z_\mathrm{i}) G_2 (s_\mathrm{i}) \right) \right] , \nonumber$$ where $$\begin{aligned} s_\mathrm{i} \equiv \sqrt{m_\mathrm{i} v^2 / (2kT_\mathrm{g})}, \nonumber \\ G_0 (s_\mathrm{i}) \approx 8s_\mathrm{i} / (3 \sqrt{\pi} ) \sqrt{1+9\pi s_\mathrm{i}^2/64}, \nonumber \\ G_2 (s_\mathrm{i}) \approx s_\mathrm{i} / (3 \sqrt{\pi}/4 + s_\mathrm{i}^3), \nonumber \\ \phi \equiv Z_\mathrm{d} e^2 / ( a_\mathrm{d} k T_\mathrm{g} ), \nonumber \\ \Lambda \equiv 3/ (2 a_\mathrm{d} e | \phi |) \sqrt{k T_\mathrm{g}/ \pi n_\mathrm{e}}, \nonumber \end{aligned}$$ $k$ is the Boltzmann constant, $T_\mathrm{g}$ is the gas temperature, $v$ is the relative velocity between dust and gas, $n_\mathrm{i}$ is the number density of the $i$th gas species, $n_\mathrm{e}$ is the number density of electrons, $z_\mathrm{i}$ is the charge of the $i$th gas species ($i=$ H I, H II, He I, He II, He III), and $m_\mathrm{i}$ is the mass of the $i$th species.
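A minimal numerical sketch of the drag force above is given below, in cgs units; the list of species, their charges, and the particular values passed in are inputs to be supplied by the caller, and the Coulomb term is summed only over charged species.

```python
import numpy as np

K_B   = 1.380649e-16     # Boltzmann constant [erg/K]
E_ESU = 4.80320425e-10   # elementary charge [esu]

def drag_force(a_d, Z_d, T_g, v, species, n_e):
    """Collisional plus Coulomb drag on a grain of radius a_d [cm] and charge Z_d
    moving at speed v [cm/s] relative to the gas; `species` is a list of
    (n_i [cm^-3], m_i [g], z_i) tuples."""
    phi = Z_d * E_ESU**2 / (a_d * K_B * T_g)
    total = 0.0
    for n_i, m_i, z_i in species:
        s = np.sqrt(m_i * v**2 / (2.0 * K_B * T_g))
        G0 = 8.0 * s / (3.0 * np.sqrt(np.pi)) * np.sqrt(1.0 + 9.0 * np.pi * s**2 / 64.0)
        total += n_i * G0
        if z_i != 0 and phi != 0.0:           # Coulomb drag only for charged species
            Lam = 3.0 / (2.0 * a_d * E_ESU * abs(phi)) * np.sqrt(K_B * T_g / (np.pi * n_e))
            G2 = s / (3.0 * np.sqrt(np.pi) / 4.0 + s**3)
            total += n_i * z_i**2 * phi**2 * np.log(Lam / abs(z_i)) * G2
    return 2.0 * np.pi * a_d**2 * K_B * T_g * total   # [dyn]
```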
Hydrodynamics ------------- ### dust and gas dynamics {#subsubsec:d1} In this section we describe the procedure to solve the set of hydrodynamic equations: $$\begin{aligned} \frac{\partial}{\partial t} \rho_\mathrm{g} + \frac{\partial}{\partial x} \rho_\mathrm{g} v_\mathrm{g} &=& 0 \nonumber \\ \frac{\partial}{\partial t} \rho_\mathrm{d} + \frac{\partial}{\partial x} \rho_\mathrm{d} v_\mathrm{d} &=& 0 \nonumber \\ \frac{\partial}{\partial t} \rho_\mathrm{g} v_\mathrm{g} + \frac{\partial}{\partial x} \rho_\mathrm{g} v_\mathrm{g}^2 &=& \rho_\mathrm{g} a_\mathrm{gra} + f_\mathrm{rad,g} - \frac{\partial}{\partial x} P_\mathrm{g} \nonumber \\ & & + K_\mathrm{d} (v_\mathrm{d} - v_\mathrm{g}) \nonumber \\ \frac{\partial}{\partial t} \rho_\mathrm{d} v_\mathrm{d} + \frac{\partial}{\partial x} \rho_\mathrm{d} v_\mathrm{d}^2 &=& \rho_\mathrm{d} a_\mathrm{gra} + f_\mathrm{rad,d} \nonumber \\ & & + K_\mathrm{d} (v_\mathrm{g} - v_\mathrm{d}) \nonumber \\ \frac{\partial}{\partial t} \left( \frac{1}{2} \rho_\mathrm{g} v_\mathrm{g}^2 + \frac{1}{2} \rho_\mathrm{d} v_\mathrm{d}^2 + e_\mathrm{g} \right) &+& \frac{\partial}{\partial x} \left[ \left( \frac{1}{2} \rho_\mathrm{g} v_\mathrm{g}^2 + h_\mathrm{g} \right) v_\mathrm{g} + \frac{1}{2} \rho_\mathrm{d} v_\mathrm{d}^3 \right] \nonumber \\ &=& \left( \rho_\mathrm{g} v_\mathrm{g} + \rho_\mathrm{d} v_\mathrm{d} \right) a_\mathrm{gra} \nonumber \\ & & + f_\mathrm{rad,g} v_\mathrm{g} + f_\mathrm{rad,d} v_\mathrm{d} \nonumber \end{aligned}$$ where $\rho_\mathrm{g}$ is the mass density of gas, $\rho_\mathrm{d}$ is the mass density of dust, $v_\mathrm{g}$ is the velocity of gas, $v_\mathrm{d}$ is the velocity of dust, $a_\mathrm{gra}$ is the gravitational acceleration, $f_\mathrm{rad,g}$ is the radiation pressure gradient force on gas, $f_\mathrm{rad,d}$ is the radiation pressure gradient force on dust, $P_\mathrm{g}$ is the gas pressure, $e_\mathrm{g}$ is the internal energy of gas, $h_\mathrm{g}$ is the enthalpy of gas, and $K_\mathrm{d}$ is the drag coefficient between gas and dust defined as follows: $$K_\mathrm{d} \equiv \frac{n_\mathrm{d} F_\mathrm{drag}}{|\bm{v}_\mathrm{d} - \bm{v}_\mathrm{g}|}, \nonumber$$ where $n_\mathrm{d}$ is the number density of dust grains. 
In order to solve the dust drag force stably, we use following algorithm for the momentum equations: $$\begin{split} \begin{bmatrix} p_\mathrm{d}^{*} \left( = \rho_\mathrm{d}^\mathrm{t+\Delta t} v_\mathrm{d}^* \right) \\ p_g^{*}\left( = \rho_\mathrm{g}^\mathrm{t+\Delta t} v_\mathrm{g}^* \right) \end{bmatrix} &= \begin{bmatrix} p_\mathrm{d}^t \\ p_g^t \end{bmatrix} + \begin{bmatrix} F_\mathrm{p, d}(\rho_\mathrm{d}^t,v_\mathrm{d}^t) \\ F_\mathrm{p, g}(\rho_g^t,v_\mathrm{g}^t,e_g^t) \end{bmatrix} \Delta t,\label{cdv1} \end{split}$$ $$\begin{split} \begin{bmatrix} p_\mathrm{d}^\mathrm{t+\Delta t} \\ p_\mathrm{g}^\mathrm{t+\Delta t} \end{bmatrix} &= \begin{bmatrix} \rho_\mathrm{d}^\mathrm{t+\Delta t} \\ \rho_\mathrm{g}^\mathrm{t+\Delta t} \end{bmatrix} \frac{ p_\mathrm{d}^*+p_\mathrm{g}^* }{ \rho_\mathrm{d}^\mathrm{t+\Delta t} + \rho_g^\mathrm{t+\Delta t} } \\ &+ \begin{bmatrix} \rho_\mathrm{d}^\mathrm{t+\Delta t} \\ \rho_\mathrm{g}^\mathrm{t+\Delta t} \end{bmatrix} \left[ a_\mathrm{gra} + \frac{f_\mathrm{d} + f_g }{\rho_\mathrm{d}^\mathrm{t+\Delta t} + \rho_g^\mathrm{t+\Delta t} }\right] \Delta t \label{cdv2} \\ &+ \begin{bmatrix} -1 \\ 1 \end{bmatrix} \frac{\rho_g^\mathrm{t+\Delta t} \rho_d^\mathrm{t+\Delta t}}{\rho_g^\mathrm{t+\Delta t} + \rho_d^\mathrm{t+\Delta t}} \left( v_\mathrm{g}^* - v_\mathrm{d}^* \right) \mathrm{e}^{-\frac{\Delta t}{t_d}} \\ &+ \begin{bmatrix} -1 \\ 1 \end{bmatrix} t_d \frac{\rho_d^\mathrm{t+\Delta t} f_g - \rho_g^\mathrm{t+\Delta t} f_d}{\rho_g + \rho_d} ( 1 - \mathrm{e}^{-\frac{\Delta t}{t_d}} ), \end{split}$$ where $\Delta t$ is the time step, $\rho_\mathrm{i}^\mathrm{t}$ is the mass density of $i$th species at time $t$, $p_\mathrm{i}^\mathrm{t}$ is the momentum of $i$th species at time $t$, $e_\mathrm{g}^\mathrm{t}$ is the internal energy of gas at time $t$, $F_\mathrm{X,i}$ is the advection of the physical quantity $X$ of the $i$th species, $f_\mathrm{d}$ is the force on dust ($f_\mathrm{d}=f_\mathrm{rad,d}$), $f_\mathrm{g}$ is the force on gas ($f_\mathrm{g}=f_\mathrm{rad,g}-\partial P_\mathrm{g}/ \partial x$), and the inverse of the drag stopping time, $t_\mathrm{d}$, is $$t_\mathrm{d}^{-1} = K_\mathrm{d} \frac{\rho_\mathrm{d}^\mathrm{t+\Delta t} + \rho_\mathrm{g}^\mathrm{t+\Delta t} }{\rho_\mathrm{d}^\mathrm{t+\Delta t} \rho_\mathrm{g}^\mathrm{t+\Delta t} }. \label{td}$$ Equation (\[cdv2\]) that determines the relative velocity between dust and gas is the exact solution of the following equations: $$\begin{split} \rho_\mathrm{g} \frac{d}{dt} v_\mathrm{g} &= f_\mathrm{rad,g} - \frac{\partial}{\partial x}P_\mathrm{g} + \rho_\mathrm{g} a_\mathrm{gra} + K_\mathrm{d} (v_\mathrm{d} - v_\mathrm{g}), \label{dve} \\ \rho_\mathrm{d} \frac{d}{dt} v_\mathrm{d} &= f_\mathrm{rad,d} + \rho_\mathrm{d} a_\mathrm{gra} + K_\mathrm{d} (v_\mathrm{g} - v_\mathrm{d}). \end{split}$$ Momentum advection and other hydrodynamic equations are solved by using [<span style="font-variant:small-caps;">AUSM$+$</span>]{} [@1996JCoPh.129..364L]. We solve the hydrodynamics in the second order accuracy in space and time. In order to prevent cell density from becoming zero or a negative value, we set the minimum number density, $n_\mathrm{H} \simeq 10^{-13}$ cm$^{-3}$. We have confirmed that our results are not sensitive to the choice of the threshold density as long as the threshold density is sufficiently low. In order to investigate whether our method is reliable, we perform shock tube tests in Appendix \[sec:stt\]. 
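A minimal sketch of the momentum update of Eq. (\[cdv2\]) is shown below, assuming per-cell scalar (or NumPy array) inputs; the advection step producing the starred momenta (Eq. (\[cdv1\])) and the energy update are not shown.

```python
import numpy as np

def drag_momentum_update(rho_g, rho_d, v_g_star, v_d_star, f_g, f_d, a_gra, K_d, dt):
    """Exact relaxation of the dust-gas relative velocity over dt (Eq. cdv2):
    the centre-of-mass momentum is advanced with the total force, while the
    relative velocity decays on the drag stopping time t_d of Eq. (td)."""
    rho_tot = rho_g + rho_d
    mu = rho_g * rho_d / rho_tot            # reduced density
    t_d = mu / K_d                          # t_d^-1 = K_d (rho_g + rho_d)/(rho_g rho_d)
    decay = np.exp(-dt / t_d)

    v_com = (rho_d * v_d_star + rho_g * v_g_star) / rho_tot
    accel = a_gra + (f_d + f_g) / rho_tot

    rel  = mu * (v_g_star - v_d_star) * decay
    asym = t_d * (rho_d * f_g - rho_g * f_d) / rho_tot * (1.0 - decay)

    p_d_new = rho_d * (v_com + accel * dt) - rel - asym
    p_g_new = rho_g * (v_com + accel * dt) + rel + asym
    return p_d_new, p_g_new
```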
In Appendix \[sec:2dust\], we describe how we deal with the dust grains with two sizes. Simulation setup {#sec:setup} ================

| Cloud | $r_\mathrm{cloud}$ (pc) | $n_\mathrm{H}$ (cm$^{-3}$) | $n_\mathrm{He}$ (cm$^{-3}$) | $n_\mathrm{d,Large}$ ($10^{-10}$ cm$^{-3}$) | $n_\mathrm{d,Small}$ ($10^{-7}$ cm$^{-3}$) | Distribution | Source | $\dot{N}_\mathrm{ion}$ ($10^{49}$ s$^{-1}$) | $T_\mathrm{g}$ (K) | $T_\mathrm{d}$ (K) | $M_\mathrm{star}$ ($10^3~M_{\sun}$) | Dust | Gravity |
|---------|-----|-----------------|-------------------|---------------------|-----|----|--------------|------|------|----|------|---|-----|
| Cloud 1 | 1.2 | 4$\times$10$^5$ | 3.4$\times$10$^4$ | 6.4$\times$10$^{3}$ | 0   | C  | BB (50100 K) | 6.2  | 100  | 10 | 0.08 | 1 | off |
| Cloud 2 | 17  | 791             | 67                | 9.6                 | 3.0 | BE | BB (38500 K) | 0.72 | 1082 | 10 | 0.05 | 2 | on  |
| Cloud 3 | 17  | 791             | 67                | 9.6                 | 3.0 | BE | SSP          | 5.8  | 1082 | 10 | 2    | 2 | on  |
| Cloud 4 | 17  | 791             | 67                | 9.6                 | 3.0 | BE | SSP          | 58   | 1082 | 10 | 20   | 2 | on  |

\[Tab1\]

In the first simulation, in order to investigate whether our simulation yields a result consistent with the observational estimate for compact/ultra-compact H II regions [@Inoue2002], we model a constant density cloud of hydrogen number density $4\times10^5$ cm$^{-3}$ and radius 1.2 pc. As a radiation source, we place a single star (i.e. a black body) at the centre of the sphere. Since we are interested in the formation of a dust cavity, we neglect gravity, which does not affect the relative velocity between dust grains and gas (see equation (\[cdv2\])). We assume a single dust grain size in this simulation. In the second set of simulations, in order to investigate the effect of radiation pressure on the dust grain size distribution inside a large gas cloud, we model a cloud as a Bonnor-Ebert sphere of mass $10^5$ $M_{\sun}$ and radius 17 pc. As the radiation source, we consider a single star (black body, BB) or a star cluster (a simple stellar population, SSP), and we change the luminosity of the radiation source to investigate the dependence of the dust size distribution on the luminosity of the radiation source. We compute its luminosity and spectral-energy distribution as a function of time by using a population synthesis code, [<span style="font-variant:small-caps;">PÉGASE.2</span>]{}, assuming the Salpeter initial mass function [@salpeter] and the solar metallicity. We set the mass range of the initial mass function to be 0.1 to 120 $M_{\sun}$. Material at radius $r$ feels the radial gravitational acceleration, $$a_\mathrm{gra}(r) = - G \frac{M_\mathrm{star} r}{(r^2 + r_\mathrm{soft}^2)^{3/2}}- G \frac{M (<r) }{r^2 } \nonumber$$ where $M(<r)$ represents the total mass of gas inside $r$ and $M_\mathrm{star}$ is the mass of the central radiation source, which is 50 $M_{\sun}$ for the single star case and $2\times10^3$ or $2\times10^4$ $M_{\sun}$ for the two star cluster cases.
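The gravitational acceleration above can be evaluated with a short helper; the snippet below is a sketch in cgs units, where the enclosed gas mass and the softening length (discussed in the next paragraph) are inputs chosen here only for illustration.

```python
import numpy as np

G_CGS    = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
M_SUN    = 1.989e33        # solar mass [g]
PC_TO_CM = 3.086e18

def a_gra(r_pc, M_star_msun, M_enclosed_msun, r_soft_pc):
    """Softened point-mass source plus self-gravity of the gas enclosed within r."""
    r  = r_pc * PC_TO_CM
    rs = r_soft_pc * PC_TO_CM
    a_star = -G_CGS * M_star_msun * M_SUN * r / (r**2 + rs**2)**1.5
    a_gas  = -G_CGS * M_enclosed_msun * M_SUN / r**2
    return a_star + a_gas  # [cm s^-2]

# e.g. the Cloud 3 source (2e3 Msun, 0.5 pc softening) with an assumed 1e4 Msun of gas inside 5 pc
print(a_gra(5.0, 2.0e3, 1.0e4, 0.5))
```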
Since the gravity from the radiation source has a non-negligible effect on simulation results and causes numerical instability in the case of the SSP, we need to introduce a softening length, $r_{\mathrm{soft}}$. We set it to 0.5 pc for the SSP. Since the gravity from a single star has a negligible effect on simulation results, we set it to 0 pc for the single star. Following the dust size distribution of [@MRN], the so-called MRN distribution, we assume two dust sizes in these simulations. We assume the initial number ratio of large to small dust as $$n_{\mathrm{d, Large}} : n_\mathrm{d, Small} = 1 : 10^{2.5}, \nonumber$$ where $n_{\mathrm{d, Large}}$ and $n_\mathrm{d, Small}$ are the number densities of dust grains of 0.1 $\mu$m and 0.01 $\mu$m in size, respectively. The details of the initial conditions are listed in Table \[Tab1\]. We use 128 linearly spaced meshes in the radial direction, 128 meshes in the angular direction, and 256 meshes in the frequency direction in all simulations to solve the radiation hydrodynamics. Results {#sec:Results} ======= Dust cavity radius ------------------ ![image](masterg1v5B2.pdf){width="8.0cm"}

|                          | $\overline{n}_\mathrm{e}$ (cm$^{-3}$) | $\dot{N}_\mathrm{ion}$ (10$^{49}$ s$^{-1}$) | ${r}_\mathrm{\ion{H}{ii}}$ (pc) | ${r}_\mathrm{d}$ (pc) | ${y}_\mathrm{d} \equiv r_\mathrm{d}/R_\mathrm{St}$ |
|--------------------------|----------------|---------------|------------------|-----------------|-----------------|
| this work ($t=0.42$ Myr) | 1247           | 6.2           | 0.73             | 0.21            | 0.20            |
| [@Inoue2002]             | 1200 $\pm$ 400 | 6.8 $\pm$ 3.9 | 0.72 $\pm$ 0.098 | 0.28 $\pm$ 0.13 | 0.30 $\pm$ 0.12 |

\[Tab2\]

We present the density, gas temperature, dust-to-gas mass ratio, grain electric potential ($V_\mathrm{d} \equiv e Z_\mathrm{d}/a_\mathrm{d}$), and relative velocity between dust and gas as functions of radius in Fig \[d1g\]. In the top panels, the hydrogen number density is indicated by the red solid line. The number density of H II is indicated by the blue dash-dotted line. The initial state of the simulation is shown by the black dotted line. The average electron number density within the H II region, $\overline{n}_\mathrm{e}$, the H II region radius, $r_\mathrm{\ion{H}{ii}}$, the dust cavity radius, $r_\mathrm{d}$, and the ratio of the dust cavity radius to the Strömgren radius, $y_\mathrm{d}$, obtained by our simulation ($t=0.42$ Myr) and the observational estimate are shown in Table \[Tab2\]. We find that our simulation results are in broad agreement with the observational estimate. The dust cavities, hence, could be created by radiation pressure. The parameter $y_\mathrm{d}$ obtained by the simulation is somewhat smaller than the observational estimate. However, we could find a better agreement if we tuned the initial conditions such as the gas density. In addition, the agreement would be better if we included the effect of stellar winds, which was neglected in this paper. Since dust inside the H II region is strongly charged, the relative velocity between dust and gas is determined by the coulomb drag force. The magnitude of the coulomb drag force is about two orders of magnitude larger than that of the collisional drag force. The relative velocity, thus, becomes largest when the dust charge is neutral. The grain electric potential gradually decreases with radius and then suddenly drops to a negative value.
Near the ionization front, the number of ionizing photons decreases and hence collisional charging becomes important. This is the reason behind the sudden decrease of the grain electric potential. In the neutral region, there are no photons able to ionize the gas and hence there are few electrons that collide with dust grains. On the other hand, there are photons that photoelectrically charge dust grains. Therefore, the grain electric potential becomes positive again just outside of the H II region. Then, the UV photons are consumed and electron collisional charging becomes dominant again in the neutral region. Spatial distribution of large dust grains and small dust grains --------------------------------------------------------------- ![image](masterg2allvfixdcdgIFT.pdf){width="16.5cm"} We present the densities, gas temperature, dust-to-gas mass ratios for large and small grains, large-dust-to-small-dust mass ratios ($\rho_\mathrm{d, Large}/\rho_\mathrm{d, Small}$), the grain electric potential, and the relative velocity between dust and gas as functions of radius in Fig \[d2g\]. In order to compare the simulation results on the dust size distribution with each other, we present the results at the time when the shock front reaches $\sim$15 pc. In order to study the dependence of the dust size distribution on time and the luminosity of the radiation source, we also present the simulation result of Cloud 2 at $t=$1.1 Myr: the same irradiation time as Cloud 4. In the top panels, the hydrogen number density is indicated by the red solid lines and that of H II is indicated by the blue dot-dashed lines. The initial states of the simulations are shown by black dotted lines. In the fifth row, the charges of dust grains with sizes of 0.1 and 0.01 $\mu$m are indicated by the red solid and blue dot-dashed lines, respectively. The black dotted lines show the initial profiles (i.e. 0 V). In the bottom panels, the relative velocity between dust grains with size 0.1 $\mu$m and gas and that between dust grains with size 0.01 $\mu$m and gas are indicated by the red solid and blue dot-dashed lines, respectively. Note that the radiation source becomes stronger from Cloud 2 to Cloud 4. We find that radiation pressure affects the dust distribution within an H II region, depending on the grain size. In Fig \[d2g\], we divide the clouds into the following four regions:

(a) From the central part, radiation pressure removes both large and small dust grains and creates a dust cavity (the yellow shaded region).
(b) Within an H II region, $\rho_\mathrm{d, Large}/\rho_\mathrm{d, Small}$ has a peak. Between the region ‘a’ and this peak, there is a region where $\rho_\mathrm{d, Large}/\rho_\mathrm{d, Small}$ takes a local minimum value (the cyan shaded region), for example, at $r\sim 4$ pc in Cloud 2 at $t=2.9$ Myr.
(c) The region that contains the peak mentioned above is shaded by magenta.
(d) The ratio $\rho_\mathrm{d, Large}/\rho_\mathrm{d, Small}$ is also reduced just behind the ionization front (the gray shaded region), for example, at $r\sim 6$ pc in Cloud 2 at $t=2.9$ Myr.

We find that the dust cavity radius becomes larger as the radiation source becomes brighter (the regions ‘a’). The reasons are as follows. The grain electric potential of dust grains of the same size within $r=2$ pc is almost the same among all simulations, while the number density of the gas becomes smaller for a stronger radiation source.
Since the dust drag force strongly depends on the grain electric potential, the number density of the gas, and the radiation pressure on dust, the relative velocity between dust and gas becomes larger for the brighter source. In the regions ‘b’ and ‘d’, the ratio $\rho_\mathrm{d, Large}/\rho_\mathrm{d, Small}$ is decreased from the initial condition when the radiation source is a single OB star (Cloud 2). Except at the ionization front (vertical brown dashed lines), which is contained in the region ‘d’, radiation pressure preferentially removes large dust grains from these regions. The photoelectric yield of the large dust grains is smaller than that of the small dust grains, and hence the grain electric potential of the large dust grains becomes smaller than that of the small dust grains. Coulomb drag between large dust grains and gas therefore becomes weaker than that between small dust grains and gas. On the other hand, since Cloud 4 has the strongest radiation source and hence the largest grain electric potentials among the simulations, the dust size segregation in the regions ‘b’ and ‘d’ is less prominent. Even when we compare Clouds 2 and 4 at the same irradiation time, $t=$1.1 Myr, the dust size distributions inside the H II regions are different. The luminosity of the radiation source must be the main cause of the dust size segregation. The ratio $\rho_\mathrm{d, Large}/\rho_\mathrm{d, Small}$ in all simulations has a peak in the regions ‘c’. Since dust grains have a large negative charge in the regions ‘c’ and ‘d’, the coulomb drag force between dust and gas is strong and hence dust and gas are tightly coupled to each other. Large dust grains are, therefore, removed from the regions ‘a’ and ‘b’ and gathered in the regions ‘c’. At the ionization front and the shock front (vertical green dot-dot-dashed lines), the relative velocity $v_\mathrm{d} - v_\mathrm{g}$ has downward peaks. In these fronts, the gas pressure force exceeds the radiation pressure force. Since the dust drag time depends on the dust grain size, the dust-gas relative velocity also depends on the grain size. As a result, $\rho_\mathrm{d, Large}/\rho_\mathrm{d, Small}$ is slightly reduced in these fronts. Discussion and Conclusions {#sec:Discussion} ========================== We have investigated radiation feedback in dusty clouds by one-dimensional multi-fluid hydrodynamic simulations. In order to study the spatial dust distribution inside H II regions, we solve the gas and dust motion self-consistently. We also investigate the dust size distribution within H II regions by considering dust grains with two different sizes. We find that radiation pressure creates dust cavity regions. We confirm that the size of the dust cavity region broadly agrees with the observational estimate [@Inoue2002]. We also find that radiation pressure preferentially removes large dust from H II regions in the case of a single OB star. This result is almost the same as in [@Akimkin2015]. The dust size distribution is, however, less affected when the radiation source is a star cluster, in other words, a more luminous case. The resulting dust size distributions largely depend on the luminosity of the radiation source. We assume dust is graphite. There are, however, other forms of dust such as silicate. Since the photoelectric yield and the absorption coefficient depend on the dust model, the spatial distribution of dust grains may become different when we use a different dust model.
For example, since silicate has a larger work function and a smaller absorption coefficient than graphite, the cavity size in the silicate case may become larger than that in the graphite case (see @Akimkin2015 [@Akimkin2017] for details). In our simulations, we neglect the effect of sputtering, which changes the dust grain size. We estimate this effect according to [@Nozawa2006], and confirm that sputtering is negligible in our simulations. However, if we consider smaller dust grains, we may have to include sputtering.

Acknowledgements {#acknowledgements .unnumbered}
================

We are grateful to Takashi Kozasa, Takashi Hosokawa, and Shu-ichiro Inutsuka for helpful discussions. SI acknowledges the Grant-in-Aid for JSPS Research Fellows (17J04872) and TO acknowledges the financial support of the MEXT KAKENHI Grant (16H01085). Numerical simulations were partly carried out with the Cray XC30 in CfCA at NAOJ.

Shock tube tests {#sec:stt}
================

In order to investigate whether our method is reliable, we perform shock tube tests. With the dust-to-gas mass ratio of 6.7$\times$10$^{-3}$ assumed in our simulations, the effect of the dust on the gas would be almost negligible in shock tube tests, so that the tests would not probe the reliability of the dust-gas coupling; we therefore set the dust-to-gas mass ratio to 1 in the shock tube tests. The initial condition of the shock tube problem is as follows: $$\begin{aligned}
\rho_\mathrm{g} &= \rho_\mathrm{d} = \begin{cases} 1,\, &(x<0.5),\nonumber \\ 0.125,\, &(x>0.5), \nonumber \end{cases} \\
P_\mathrm{g} &= \begin{cases} 1,\, &(x<0.5),\nonumber \\ 0.1,\, &(x>0.5), \nonumber \end{cases} \\
\gamma &= 1.67, \nonumber\end{aligned}$$ where $\gamma$ is the heat capacity ratio. Since the analytic solutions are known for $K_\mathrm{d}=0$ and $\infty$, we perform test calculations for $K_\mathrm{d}=0$ and $K_\mathrm{d}=10^{10}$ ($\Delta t_\mathrm{sim} \gg (\rho_\mathrm{g} \rho_\mathrm{d})/(\rho_\mathrm{d}+\rho_\mathrm{g}) K_\mathrm{d}^{-1}\equiv t_\mathrm{d}$), where $\Delta t_\mathrm{sim}$ is the time scale of the shock tube problem and $t_\mathrm{d}$ is the drag stopping time. We use 400 linearly spaced meshes between $x=0$ and $1$. The time steps used for these simulations are $\Delta t =2.5\times10^{-4}$ for $K_\mathrm{d}=0$ and $\Delta t=4.2\times10^{-4}$ for $K_\mathrm{d}=10^{10}$. The results are shown in Fig \[test\]. We confirm that the numerical results agree with the analytic solutions.
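For readers who wish to reproduce this test, the short Python sketch below (our illustration, not the simulation code used in the paper; array and variable names are ours) sets up the dusty shock-tube initial condition given above and checks the tight-coupling condition $\Delta t_\mathrm{sim} \gg t_\mathrm{d}$ for the $K_\mathrm{d}=10^{10}$ case.

```python
import numpy as np

# Illustrative sketch: dusty shock-tube initial condition on 400 uniform
# meshes, and a check of the drag stopping time against the time steps
# quoted in the text.

nx = 400
x = (np.arange(nx) + 0.5) / nx          # cell centres on [0, 1]
gamma = 1.67                            # heat capacity ratio

left = x < 0.5
rho_g = np.where(left, 1.0, 0.125)      # gas density
rho_d = rho_g.copy()                    # dust-to-gas mass ratio of 1 for the test
P_g = np.where(left, 1.0, 0.1)          # gas pressure
v_g = np.zeros(nx)                      # gas and dust initially at rest
v_d = np.zeros(nx)

def stopping_time(rho_g, rho_d, K_d):
    """Drag stopping time t_d = rho_g*rho_d / ((rho_g + rho_d) * K_d)."""
    return rho_g * rho_d / ((rho_g + rho_d) * K_d)

# Tight-coupling limit used in the test: K_d = 1e10 with dt = 4.2e-4.
t_d = stopping_time(1.0, 1.0, 1.0e10)   # = 5e-11
print(f"t_d = {t_d:.1e} << dt = 4.2e-4: dust and gas behave as a single fluid")
# The K_d = 0 case has an infinite stopping time: the dust free-streams and
# the gas follows the standard Sod solution.
```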
![image](exactddgg2.pdf){width="13.5cm"}

Dust grains with two sizes and gas dynamics {#sec:2dust}
===========================================

In order to investigate the spatial variation of the grain size distribution inside H II regions, we solve the following hydrodynamic equations, in which we consider dust grains with two sizes (dust-1 and dust-2): $$\begin{aligned}
\frac{\partial}{\partial t} \rho_\mathrm{g} + \frac{\partial}{\partial x} \rho_\mathrm{g} v_\mathrm{g} &=& 0 \nonumber \\
\frac{\partial}{\partial t} \rho_\mathrm{d1} + \frac{\partial}{\partial x} \rho_\mathrm{d1} v_\mathrm{d1} &=& 0 \nonumber \\
\frac{\partial}{\partial t} \rho_\mathrm{d2} + \frac{\partial}{\partial x} \rho_\mathrm{d2} v_\mathrm{d2} &=& 0 \nonumber \\
\frac{\partial}{\partial t} \rho_\mathrm{g} v_\mathrm{g} + \frac{\partial}{\partial x} \rho_\mathrm{g} v_\mathrm{g}^2 &=& \rho_\mathrm{g} a_\mathrm{gra} + f_\mathrm{rad,g} - \frac{\partial}{\partial x} P_\mathrm{g} \nonumber \\
& & + K_\mathrm{d1} (v_\mathrm{d1} - v_\mathrm{g}) + K_\mathrm{d2} (v_\mathrm{d2} - v_\mathrm{g}) \nonumber \\
\frac{\partial}{\partial t} \rho_\mathrm{d1} v_\mathrm{d1} + \frac{\partial}{\partial x} \rho_\mathrm{d1} v_\mathrm{d1}^2 &=& \rho_\mathrm{d1} a_\mathrm{gra} + f_\mathrm{rad,d1} \nonumber \\
& & + K_\mathrm{d1} (v_\mathrm{g} - v_\mathrm{d1}) \nonumber \\
\frac{\partial}{\partial t} \rho_\mathrm{d2} v_\mathrm{d2} + \frac{\partial}{\partial x} \rho_\mathrm{d2} v_\mathrm{d2}^2 &=& \rho_\mathrm{d2} a_\mathrm{gra} + f_\mathrm{rad,d2} \nonumber \\
& & + K_\mathrm{d2} (v_\mathrm{g} - v_\mathrm{d2}) \nonumber \\
\frac{\partial}{\partial t} \left( \frac{1}{2} \rho_\mathrm{g} v_\mathrm{g}^2 + e_\mathrm{g} \right) &+& \frac{\partial}{\partial t} \left( \frac{1}{2} \rho_\mathrm{d1} v_\mathrm{d1}^2 + \frac{1}{2} \rho_\mathrm{d2} v_\mathrm{d2}^2 \right) \nonumber \\
+ \frac{\partial}{\partial x} \left( \frac{1}{2} \rho_\mathrm{g} v_\mathrm{g}^2 + h_\mathrm{g} \right) v_\mathrm{g} &+& \frac{\partial}{\partial x} \left( \frac{1}{2} \rho_\mathrm{d1} v_\mathrm{d1}^3 + \frac{1}{2} \rho_\mathrm{d2} v_\mathrm{d2}^3 \right) \nonumber \\
&=& \left( \rho_\mathrm{g} v_\mathrm{g} + \rho_\mathrm{d1} v_\mathrm{d1} + \rho_\mathrm{d2} v_\mathrm{d2} \right) a_\mathrm{gra} \nonumber \\
& & + f_\mathrm{rad,g} v_\mathrm{g} + f_\mathrm{rad,d1} v_\mathrm{d1}+ f_\mathrm{rad,d2} v_\mathrm{d2} \nonumber \end{aligned}$$ where $\rho_\mathrm{d1}$ is the mass density of dust-1, $\rho_\mathrm{d2}$ is the mass density of dust-2, $v_\mathrm{d1}$ is the velocity of dust-1, $v_\mathrm{d2}$ is the velocity of dust-2, $f_\mathrm{rad,d1}$ is the radiation pressure gradient force on dust-1, $f_\mathrm{rad,d2}$ is the radiation pressure gradient force on dust-2, $K_\mathrm{d1}$ is the drag coefficient between gas and dust-1, and $K_\mathrm{d2}$ is the drag coefficient between gas and dust-2.
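As a compact illustration of how the coupling terms of these equations can be assembled, the Python sketch below (ours, not the production solver; the function and argument names are illustrative) returns the momentum source terms of the gas and of the two dust species, with the drag written as equal and opposite exchange terms.

```python
import numpy as np

def momentum_sources(rho_g, v_g, rho_d1, v_d1, rho_d2, v_d2,
                     a_gra, f_rad_g, f_rad_d1, f_rad_d2, dPdx, K_d1, K_d2):
    """Right-hand sides of the three momentum equations (per unit volume).

    All arguments are NumPy arrays defined on the 1-D grid; dPdx is the gas
    pressure gradient, a_gra the gravitational acceleration, f_rad_* the
    radiation pressure forces and K_d1, K_d2 the drag coefficients.
    """
    drag1 = K_d1 * (v_d1 - v_g)      # momentum transferred to the gas by dust-1
    drag2 = K_d2 * (v_d2 - v_g)      # momentum transferred to the gas by dust-2
    s_g  = rho_g  * a_gra + f_rad_g  - dPdx + drag1 + drag2
    s_d1 = rho_d1 * a_gra + f_rad_d1 - drag1          # reaction on dust-1
    s_d2 = rho_d2 * a_gra + f_rad_d2 - drag2          # reaction on dust-2
    # The drag terms cancel pairwise, so they redistribute momentum between
    # the fluids without creating any.
    return s_g, s_d1, s_d2
```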
In order to solve dust drag force stably, we use following algorithm for the equation of momentum: $$\begin{split} \begin{bmatrix} p_\mathrm{d1}^{*} \left( = \rho_\mathrm{d1}^\mathrm{t+\Delta t} v_\mathrm{d1}^* \right) \\ p_g^{*}\left( = \rho_\mathrm{g}^\mathrm{t+\Delta t} v_\mathrm{g}^* \right) \\ p_\mathrm{d2}^{*} \left( = \rho_\mathrm{d2}^\mathrm{t+\Delta t} v_\mathrm{d2}^* \right) \\ \end{bmatrix} &= \begin{bmatrix} p_\mathrm{d1}^t \\ p_g^t \\ p_\mathrm{d2}^t \end{bmatrix} + \begin{bmatrix} F_\mathrm{p, d1}(\rho_\mathrm{d1}^t,v_\mathrm{d1}^t) \\ F_\mathrm{p, g}(\rho_g^t,v_\mathrm{g}^t,e_g^t) \\ F_\mathrm{p, d2}(\rho_\mathrm{d2}^t,v_\mathrm{d2}^t) \\ \end{bmatrix} \Delta t,\label{cdv3} \end{split}$$ $$\begin{split} & \begin{bmatrix} p_\mathrm{d1}^\mathrm{t+\Delta t} \\ p_g^\mathrm{t+\Delta t} \\ p_\mathrm{d2}^\mathrm{t + \Delta t} \end{bmatrix} \\ &= \begin{bmatrix} \rho_\mathrm{d1}^{t+\Delta t} \\ \rho_\mathrm{g}^{t+\Delta t} \\ \rho_\mathrm{d2}^{t+\Delta t} \end{bmatrix} \frac{ p_\mathrm{d1}^{*} + p_\mathrm{g}^{*} + p_\mathrm{d2}^{*} }{ \rho_\mathrm{d1}^{t+\Delta t} + \rho_g^{t+\Delta t} + \rho_\mathrm{d2}^{t+\Delta t} } \\ &+ \begin{bmatrix} \rho_\mathrm{d1}^\mathrm{t+\Delta t} \\ \rho_g^\mathrm{t+\Delta t} \\ \rho_\mathrm{d2}^\mathrm{t+\Delta t} \end{bmatrix} \left[ a_\mathrm{gra} + \frac{f_\mathrm{d1} + f_g + f_\mathrm{d2}}{\rho_\mathrm{d1}^\mathrm{t+\Delta t} + \rho_g^\mathrm{t+\Delta t} + \rho_\mathrm{d2}^\mathrm{t+\Delta t} }\right] \Delta t \\ &+ \begin{bmatrix} \frac{b}{a+x} \\ 1 \\ \frac{c}{d+x} \end{bmatrix} \frac{\rho_g \mathrm{e}^{x\Delta t}}{x(x-y)} \left[ b(d+x) v_\mathrm{d1}^* + (a+x)(d+x)v_g^* + c(a+x)v_\mathrm{d2}^* \right] \\ &+ \begin{bmatrix} \frac{b}{a+y} \\ 1 \\ \frac{c}{d+y} \end{bmatrix} \frac{\rho_g \mathrm{e}^{y\Delta t}}{y(y-x)} \left[ b(d+y) v_\mathrm{d1}^* + (a+y)(d+y)v_g^* + c(a+y)v_\mathrm{d2}^* \right] \\ &+ \begin{bmatrix} \frac{b}{a+x} \\ 1 \\ \frac{c}{d+x} \end{bmatrix} \frac{\mathrm{e}^{x\Delta t}-1}{x^2(x-y)} \left[ a(d+x) f_\mathrm{d1} + (a+x)(d+x)f_g + d(a+x)f_\mathrm{d2} \right] \\ &+ \begin{bmatrix} \frac{b}{a+y} \\ 1 \\ \frac{c}{d+y} \end{bmatrix} \frac{\mathrm{e}^{y\Delta t}-1}{y^2(y-x)} \left[ a(d+y) f_\mathrm{d1} + (a+y)(d+y)f_g + d(a+y)f_\mathrm{d2} \right], \label{cdv4} \end{split}$$ where $f_\mathrm{d1}$ is the force on dust-1 ($f_\mathrm{d1}=f_\mathrm{rad,d1}$), $f_\mathrm{d2}$ is the force on dust-2 ($f_\mathrm{d2}=f_\mathrm{rad,d2}$), $$\begin{split} a &= \frac{K_\mathrm{d1}}{\rho_\mathrm{d1}^\mathrm{t+\Delta t}}, \nonumber \\ b &= \frac{K_\mathrm{d1}}{\rho_\mathrm{g}^\mathrm{t+\Delta t}}, \nonumber \\ c &= \frac{K_\mathrm{d2}}{\rho_\mathrm{g}^\mathrm{t+\Delta t}}, \nonumber \\ d &= \frac{K_\mathrm{d2}}{\rho_\mathrm{d2}^\mathrm{t+\Delta t}}, \nonumber \\ x &= -\frac{1}{2} \left[ (a+b+c+d) + \sqrt{(a+b+c+d)^2 - 4(ad+ac+bd)} \right], \nonumber \\ \end{split}$$ and $$y = \frac{ (ad+ac+bd) }{ x }. 
\nonumber$$ As in section \[subsubsec:d1\], in order to determine the relative velocity between gas and dust, we use equation (\[cdv4\]), which is the exact solution of the following equations: $$\begin{split}
\rho_\mathrm{d2} \frac{d}{dt} v_\mathrm{d2} &= f_\mathrm{rad,d2} + \rho_\mathrm{d2} a_\mathrm{gra} + K_\mathrm{d2} (v_\mathrm{g} - v_\mathrm{d2}), \\
\rho_\mathrm{g} \frac{d}{dt} v_\mathrm{g} &= f_\mathrm{rad,g} - \frac{\partial}{\partial x}P_\mathrm{g} + \rho_\mathrm{g} a_\mathrm{gra} \\
&+ K_\mathrm{d1} (v_\mathrm{d1} - v_\mathrm{g})+ K_\mathrm{d2} (v_\mathrm{d2} - v_\mathrm{g}), \label{dve2} \\
\rho_\mathrm{d1} \frac{d}{dt} v_\mathrm{d1} &= f_\mathrm{rad,d1} + \rho_\mathrm{d1} a_\mathrm{gra} + K_\mathrm{d1} (v_\mathrm{g} - v_\mathrm{d1}).
\end{split}$$ In order to solve the momentum equations, we therefore first solve the momentum advection (\[cdv3\]), and then we apply the exact solution of equation (\[dve2\]) through equation (\[cdv4\]). In the case where $|x| \Delta t \ll 1$ or $|y| \Delta t \ll 1$, we use the Taylor expansion, $\mathrm{e}^{x\Delta t} \approx 1 + x\Delta t$ or $\mathrm{e}^{y\Delta t} \approx 1 + y\Delta t$, to prevent the numerical error in calculating $(\mathrm{e}^{x\Delta t}-1)/x$ from becoming too large.

The terminal velocity approximation
===================================

We here show that the terminal velocity approximation may give an unphysical result when the simulation time step $\Delta t$ is shorter than the drag stopping time $t_\mathrm{d}$. In order to derive the dust velocity and the gas velocity, we have used equation (\[cdv2\]). On the other hand, [@Akimkin2017] used the terminal velocity approximation. When we employ the terminal velocity approximation, equation (\[cdv2\]) transforms into the following form: $$\begin{split}
\begin{bmatrix} p_\mathrm{d}^\mathrm{t+\Delta t} \\ p_\mathrm{g}^\mathrm{t+\Delta t} \end{bmatrix} &= \begin{bmatrix} \rho_\mathrm{d}^\mathrm{t+\Delta t} \\ \rho_\mathrm{g}^\mathrm{t+\Delta t} \end{bmatrix} \frac{ p_\mathrm{d}^*+p_\mathrm{g}^* }{ \rho_\mathrm{d}^\mathrm{t+\Delta t} + \rho_g^\mathrm{t+\Delta t} } + \begin{bmatrix} \rho_\mathrm{d}^\mathrm{t+\Delta t} \\ \rho_\mathrm{g}^\mathrm{t+\Delta t} \end{bmatrix} a_\mathrm{gra} \Delta t \\
&+ \begin{bmatrix} (\rho_\mathrm{d}^\mathrm{t+\Delta t} \Delta t + \rho_\mathrm{g}^\mathrm{t+\Delta t} t_\mathrm{d})f_\mathrm{d} \\ (\rho_\mathrm{g}^\mathrm{t+\Delta t} \Delta t + \rho_\mathrm{d}^\mathrm{t+\Delta t} t_\mathrm{d})f_\mathrm{g} \end{bmatrix} \frac{1}{\rho_\mathrm{g}^\mathrm{t+\Delta t} + \rho_\mathrm{d}^\mathrm{t+\Delta t}} \\
& + \begin{bmatrix} \rho_\mathrm{d}^\mathrm{t+\Delta t} f_\mathrm{g} \\ \rho_\mathrm{g}^\mathrm{t+\Delta t} f_\mathrm{d} \end{bmatrix} \frac{(\Delta t - t_\mathrm{d})}{\rho_\mathrm{g}^\mathrm{t+\Delta t} + \rho_\mathrm{d}^\mathrm{t+\Delta t}} . \end{split} \label{cdv2ap}$$ The advantage of equation (\[cdv2\]) is that it is accurate even for $\Delta t < t_\mathrm{d}$. In contrast, equation (\[cdv2ap\]) becomes inaccurate for $\Delta t \ll t_\mathrm{d}$, since the terminal velocity approximation is only valid for $\Delta t \gg t_\mathrm{d}$ (a toy illustration of this point is given below). For example, the direction of $f_\mathrm{d}$ on the gas and that of $f_\mathrm{g}$ on the dust in equation (\[cdv2ap\]) become opposite for $\Delta t < t_\mathrm{d}$. We perform simulations using equation (\[cdv2ap\]) instead of equation (\[cdv2\]) and compare the simulation results. The simulation results do not change significantly for Cloud 2, 3, and 4.
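The following toy Python sketch (ours; it does not reproduce equations (\[cdv2\]) or (\[cdv2ap\]) themselves) illustrates the point for a single dust species with constant forces: the exact exponential relaxation of the dust-gas relative velocity is correct for any time step, whereas an update that jumps straight to the terminal value is only meaningful when $\Delta t \gg t_\mathrm{d}$.

```python
import numpy as np

# Toy model: the relative velocity w = v_d - v_g relaxes towards its
# terminal value w_eq with the drag stopping time t_d.

t_d  = 1.0     # drag stopping time (arbitrary units)
w_eq = 0.2     # terminal relative velocity set by the force difference
w0   = 1.0     # initial relative velocity

def exact_update(w, dt):
    """Exact exponential relaxation, valid for any dt."""
    return w_eq + (w - w_eq) * np.exp(-dt / t_d)

def terminal_update(w, dt):
    """Terminal-velocity-style update: implicitly assumes dt >> t_d."""
    return w_eq

for dt in (10.0, 0.1):                  # dt >> t_d and dt << t_d
    print(f"dt/t_d = {dt / t_d:5.2f}: exact = {exact_update(w0, dt):6.3f}, "
          f"terminal-velocity = {terminal_update(w0, dt):6.3f}")
# For dt >> t_d both updates give w ~ w_eq; for dt << t_d the
# terminal-velocity update is far from the true value, which is the regime
# in which the approximate scheme becomes inaccurate.
```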
The numerical simulation of Cloud 1, however, crashes, since the time step becomes $\Delta t \ll t_\mathrm{d}$ at some steps.

  |             | ${n}_\mathrm{H}$ (cm$^{-3}$) | ${n}_\mathrm{\ion{H}{ii}}$ (cm$^{-3}$) | ${T}_\mathrm{g}$ (K) | ${a}_\mathrm{dust}$ ($\mu$m) | ${V}_\mathrm{d}$ (V) | $\Delta v$ (km s$^{-1}$) |
  |-------------|-----------|-----------|---------|------|-----|----|
  | IGM         | 10$^{-5}$ | 10$^{-5}$ | 10$^4$  | 0.1  | 20  | 0  |
  | H II region | 10        | 10        | 10$^4$  | 0.1  | 5   | 0  |
  | H I region  | 10$^2$    | 0         | 10$^2$  | 0.1  | 0   | 0  |

  : Numerical setup for the IGM, the H II region, and the H I region. The number densities of hydrogen and ionized hydrogen are represented by ${n}_\mathrm{H}$ and $n_\mathrm{\ion{H}{ii}}$. The temperature of the gas is represented by $T_\mathrm{g}$. The radius of a grain is represented by $a_\mathrm{dust}$. The grain electric potential of dust grains is represented by $V_\mathrm{d}$. The relative velocity between a dust grain and gas is represented by $\Delta v$. \[TabC2\]

![ The green, red, and blue hatched regions represent the condition of $t_\mathrm{CFL} > t_\mathrm{d}$ for the IGM, the H II region, and the H I region, respectively. []{data-label="timestep_ch"}](timestep_ch2.pdf){width="8cm"}

The relation between $\Delta t$ and $t_\mathrm{d}$ becomes $\Delta t < t_\mathrm{d}$ when the drag stopping time $t_\mathrm{d}$ is larger than the chemical timestep $\Delta t_\mathrm{chem}$ or the timestep $\Delta t_\mathrm{CFL}$ defined by the Courant-Friedrichs-Lewy condition. The chemical timestep is defined in equation (7) in paper I. In Fig \[timestep\_ch\], we present the condition for $t_\mathrm{CFL}(\equiv \alpha \Delta x/v) > t_\mathrm{d}$ in the case of the intergalactic medium (IGM), the H II region, and the H I region, where $\alpha$ is a constant (we assume $\alpha=0.1$), $\Delta x$ is the mesh size, and $v$ is the velocity. The details of the numerical setup for the IGM, the H II region, and the H I region are listed in Tab \[TabC2\]. The green, red, and blue hatched regions represent the condition for $t_\mathrm{CFL} > t_\mathrm{d}$ for the IGM, the H II region, and the H I region, respectively. If the relation between $t_\mathrm{CFL}$ and $t_\mathrm{d}$ becomes $t_\mathrm{CFL} \ll t_\mathrm{d}$, the simulation may become unstable.

[^1]: E-mail: ishiki@astro1.sci.hokudai.ac.jp

[^2]: http://www.astro.princeton.edu/\~draine/dust/dust.diel.html
{ "pile_set_name": "ArXiv" }
---
abstract: 'A new resistance bridge has been built at the Laboratoire national de métrologie et d’essais (LNE) to improve the ohm realization in the *Système International* (SI) of units from the quantum Hall effect. We describe the instrument, the performance of which relies on the development of two synchronized and noise filtered current sources, an accurate and stable current divider and a cryogenic current comparator (CCC) having a low noise of $\mathrm{80~pA.t/Hz^{1/2}}$. As targeted, the uncertainty budget for the measurement of the 100 $\Omega/(R_\mathrm{K}/2)$ ratio, where $R_\mathrm{K}$ is the von Klitzing constant, amounts to a few parts in $10^{10}$ only.'
author:
- 'Wilfrid Poirier, Dominique Leprat and Félicien Schopfer[^1]'
title: 'A resistance bridge based on a cryogenic current comparator targeting sub-$10^{-9}$ measurement uncertainties'
---

Metrology, resistance, quantum Hall effect, bridge, cryogenic current comparator, SQUID.

Introduction
============

In the SI[@BIPM], the ohm can be realized from the von Klitzing constant $R_\mathrm{K}=h/e^2$[@AmpereBIPM2019], where $h$ is the Planck constant and $e$ is the elementary charge, using the quantum Hall effect[@Klitzing1980]. More precisely, the quantized Hall resistance value of a quantum resistance standard (QHR), $R_\mathrm{K}/i$, where $i$ is an integer, is used as a primary reference of resistance[@Poirier2019]. The resistance unit is then disseminated by means of comparisons with this universal and reproducible reference using a resistance bridge. Comparing a resistance with the quantized Hall resistance with the lowest measurement uncertainties is challenging since the measurement current of QHR devices must remain small, i.e. a few tens of $\mu$A if based on a GaAs/AlGaAs heterostructure[@Poirier2009] and a few hundreds of $\mu$A if based on graphene[@Ribeiro2015]. Thus, performing a resistance calibration with a relative uncertainty of a few parts in $10^{9}$ requires a resistance bridge that is very sensitive in current. The most accurate and sensitive bridge able to disseminate the resistance unit[@Poirier2009] is based on the performance of a cryogenic current comparator (CCC). The CCC[@Harvey1972] is basically a perfect transformer operating in the direct current (dc) regime, able to measure the ratio of the currents circulating through the two resistances to be compared with a relative uncertainty below $10^{-10}$. Made of superconducting windings embedded in a superconducting shielding, its accuracy relies on the Meissner effect. Its high sensitivity comes from the flux detector equipping it, which is based on a Superconducting QUantum Interference Device (SQUID)[@Gallop2006]. The development of resistance bridges equipped with a CCC started in several national metrology institutes (NMIs), including the French institute, right after the discovery of the QHE. The first ones operated in dc[@Delayahe1985; @Williams1991; @Hartland92; @Dziuba1993]. Then, other bridges adapted to measurements in the low-frequency (below 1 Hz) alternating current (ac) regime were proposed[@Delahaye1991; @Seppa1997]. Accurate operation at higher frequencies was achieved by replacing the CCC with a room-temperature current comparator using high-permeability magnetic cores[@DelahayeAC1993; @Satrapinski2017], but at the expense of larger measurement uncertainties.
Since the nineties, international bilateral resistance comparisons[@BIPMEMK12], organized by the *Bureau International des Poids et Mesures* (BIPM), have demonstrated the equivalence of several NMIs using CCC-based resistance bridge (RB) for the realization of the ohm from the QHE with a relative uncertainty of a few parts in $10^9$. The LNE has been using such a dc bridge[@Delayahe1985; @Piquemal1995] for more than thirty years to perform calibrations of wire resistors with an accuracy of a few parts in $10^9$, as about twenty NMIs do at the present time. This old instrument now suffers several limitations with regards to the current needs not only for calibrations but also for research works: few resistance ratios can be measured (1, 10, 12.9, 50, 64.5, 100, 129), the optimal resistance range extends from 1 $\Omega$ to $R_\mathrm{K}/2$ only, the type B uncertainty (of about 7 parts in $10^{10}$ in relative value for the 100 $\Omega$ calibration from the QHE) cannot be further reduced and the type A uncertainty is limited to one part in $10^{9}$ (for one hour measurement time) by the outdated performance of the radio-frequency SQUID. The improvement of digital and analog electronic components and the availability of dc SQUIDs allow the development of a resistance bridge with better performance in terms of sensitivity, accuracy and automation. Several NMIs have thus recently developed more modern and automated RB[@Sanchez2009; @Drung2009; @Gotz2009; @Williams2010]. Here, we report on a semi-automated RB designed to perform comparisons of resistances, of values ranging from $1~\Omega$ to $\mathrm{1.29~M\Omega}$, in ratios from 1 up to 1290, based on current sources delivering currents from $1~\mu$A up to 100 mA. Compared to the older LNE bridge, a strong performance upgrade is expected due to improvements not only in the technical design but also in the instruments making up the bridge. Our target combined standard uncertainty for the calibration of a $100~\Omega$ wire resistor in terms of $R_\mathrm{K}/2$ is a few parts in $10^{10}$ (k=1). The paper is organized as follows: the principle of the RCB is described in Section II, the design and the performance of the CCC are presented in Section III, Section IV describes the electronics of current sources, Section V describes current dividers used to set the fine tuning of the current ratio, Section VI presents the shielding techniques implemented, measurements of SQUID noise and of resistance ratio are presented in Section VII and finally a conclusion is made in Section VIII. Principle of the RB =================== ![Principle of the new LNE resistance bridge based on a CCC. The figure shows the two interlocked current sources, the CCC equipped with a DC SQUID and the feedback control on the secondary current source, the current dividers injecting the in-phase, $\epsilon I_\mathrm{S}$, and the in-quadrature, $j\epsilon_{q}I_\mathrm{S}$, current fractions respectively, the null detector and the two resistances $R_\mathrm{S}$ and $R_\mathrm{P}$ to compare. The ground can be connected in position A (low potential of the secondary resistor) or B (low potential of the secondary winding).[]{data-label="fig1"}](Fig-PrincipleSchematic.pdf){width="3.5in"} The principle of the new RB, described in fig.\[fig1\], is close to that of the older one. It is based on two synchronized sources, primary (P) and secondary (S) sources, that deliver currents $I_\mathrm{P}$ and $I_\mathrm{S}$ respectively. 
The primary (secondary) source supplies the resistance $R_\mathrm{P}$ ($R_\mathrm{S}$) in series with a superconducting winding of a CCC of number of turns $N_\mathrm{P}$ ($N_\mathrm{S}$). The number of turns $N_\mathrm{S}$ and $N_\mathrm{P}$ are chosen so that the ratio $N_\mathrm{S}/N_\mathrm{P}$ is close to the resistance ratio $R_\mathrm{S}/R_\mathrm{P}$. A standard current divider (SCD) is used to deviate an in-phase calibrated fraction $\epsilon$ of the current $I_\mathrm{S}$ into an auxiliary winding of number of turns $N_\mathrm{A}$. The windings of the CCC are wound according to a toroidal geometry and embedded in a superconducting shielding. Application of the Ampere’s theorem to a circulation along a cross-section of the shielding, where the magnetic flux density is zero, leads to the relationship $N_\mathrm{P}I_\mathrm{P}-(N_\mathrm{S}+\epsilon N_\mathrm{A})I_\mathrm{S}=I_\mathrm{CCC}$, where $I_\mathrm{CCC}$ is a screening current. Because the CCC shielding overlaps itself on two or three turns without electrical contact, this superconducting current circulates from the inner to the outer side of the shielding. It can be therefore detected by a pick-up coil coupled to the outer side and connected to the entry inductance of the SQUID. The secondary current source is servo-controlled by the output of the CCC SQUID electronics so that the screening current $I_\mathrm{CCC}$ (i.e. the total ampere.turn) is nulled. It results that: $$N_\mathrm{P}I_\mathrm{P}-(N_\mathrm{S}+\epsilon N_\mathrm{A})I_\mathrm{S}=0. \label{Equation:AmpereTurns}$$ From the fraction $\epsilon_0$ setting the voltage unbalance, $R_\mathrm{S}I_\mathrm{S}=R_\mathrm{P}I_\mathrm{P}$, one obtains: $$R_\mathrm{S}/R_\mathrm{P}=(N_\mathrm{S}+\epsilon_0 N_\mathrm{A})/N_\mathrm{P}. \label{Equation:resistance}$$ The SCD can also be inserted in the primary circuit to deviate a fraction of the current $I_\mathrm{P}$. In this case, the previous equations remain valid by simply exchanging S and P index. This is the operating mode planned for measurements involving a low resistance $R_\mathrm{S}$ (for example 1 $\Omega$) supplied by a large current $I_\mathrm{S}$ (for example 50 mA) which would lead to a too strong dissipation in the SCD if placed in the secondary circuit. Instead, the SCD inserted in the primary circuit is biased by the lower current $I_\mathrm{P}$ which is usually below 10 mA. Improvements of the bridge mainly concern i) the two current sources able to operate in DC and at very low frequencies which are both servo-controlled by a unique external voltage source, ii) the accurate and stable standard current divider used to adjust the current ratio to the resistance ratio, and iii) the new DC SQUID-based CCC. The new bridge also includes a second current divider able to deviate an in-quadrature current fraction $j\epsilon_{q}I_\mathrm{S}$ in a fourth winding of number of turns $N_\mathrm{A}^q$. It is used to cancel the voltage overshoots occurring during current reversals at the entry of the null detector (ND) that are caused by the capacitances $C_\mathrm{P}$ and $C_\mathrm{S}$ in parallel to the resistors $R_\mathrm{P}$ and $R_\mathrm{S}$ respectively. More precisely, the master equation for ampere.turns becomes: $$N_\mathrm{P}I_\mathrm{P}-(N_\mathrm{S}+\epsilon N_\mathrm{A}+j\epsilon_{q}N_\mathrm{A}^q)I_\mathrm{S}=0. 
\label{Equation:AmpereTurns}$$ Assuming first-order approximation in angular frequency $\omega$, the voltage balance condition leads to the first equation \[Equation:resistance\] and a second equation involving capacitances according to: $$(R_\mathrm{P}C_\mathrm{P}-R_\mathrm{S}C_\mathrm{S})\omega=\epsilon_{q}\frac{R_\mathrm{P}N_\mathrm{A}^q}{R_\mathrm{S}N_\mathrm{P}}. \label{Equation:Phase}$$ The quadrature current divider (QCD) is therefore used to compensate quadrature signals caused by capacitances. By cancelling the voltage overshoots that can saturate the null detector, the reversal frequency of the current can be increased. This reduces the impact of voltage offset drift and of the $1/f$ SQUID noise on measurements. Moreover, it allows optimizing the ratio between the acquisition time and the total experience time. The calibration of the QCD fractions is not required. Finally, careful shielding of cables and guarding of circuits are implemented to ensure the equality of the currents circulating through the resistor and the winding for better accuracy. the CCC ======= Design and fabrication ---------------------- ![Pictures of the CCC at different stages of shielding assembly. a) The CCC alone. b) The CCC on the probe with all shields removed. c) First Pb/Brass shield of the CCC in place. d) Second Pb/Brass shield of the SQUID in place. e) External cryoperm shield in place.[]{data-label="fig2"}](Fig2.pdf){width="3.5in"} ** The cryogenic current comparator is made of 15 windings of 1, 1, 2, 4, 16, 16, 32, 64, 128, 160, 160, 1600, 1600, 2065 and 2065 turns which hold together with epoxy glue[@Soukiassian2010]. Each winding is made of superconducing and insulated $\mathrm{60~\mu m}$ diameter NbTi/Cu wire. We used optically checked 0.1 mm thick Pb sheets and Pb/Sn/Cd superconducting solder at a temperature lower than $150^{\circ}$C to realize the toroidal shielding around the windings. Our shield overlaps twice (3 layers) to prevent flux leakage. Each layer is covered with PTFE (poly-tetra-fluoro-ethylene) tape for electrical insulation. The CCC is fixed to the cryogenic probe with a piece in MACOR$^{\scriptsize\textregistered}$ material. The DC-SQUID (Quantum Design, Inc) has an input inductance of $L_i=1.8~\mu H$ and a nominal flux noise in flux-lock feedback mode of $\mathrm{3~\mu\phi_0/Hz^{1/2}}$ above 0.3 Hz. It is coupled to the CCC via a superconducting flux transformer made of a NbTi wire inserted in a lead tube. The sensitivity of the system $S_\mathrm{CCC}$ depends on the number of turns $N_\mathrm{PC}$ of the pick-up coil coupled to the CCC through the relationship:\ $$S_\mathrm{CCC}=(2/k)N_\mathrm{PC}S_\mathrm{SQ},$$ where $k$ is the coupling constant between the CCC and the pickup coil, and $S_\mathrm{SQ}$ the SQUID sensitivity (in $\mathrm{\mu A/\phi_0}$). The best sensitivity $S_\mathrm{CCC}^\mathrm{opt}$ is obtained for $N_\mathrm{PC}^\mathrm{opt}$ given by:\ $$N_\mathrm{PC}^\mathrm{opt}=\sqrt{L_i/L^\mathrm{eff}_\mathrm{CCC}},$$\ where $L^\mathrm{eff}_\mathrm{CCC}$ is the effective self inductance of the CCC taking account of the superconducting screen that isolates the SQUID from external magnetic fields (Earth’s field for instance). $L^\mathrm{eff}_\mathrm{CCC}$ and therefore $S_\mathrm{CCC}$ can be determined in a given geometry using the analytical calculation of Sesé and co-authors[@Sese1999; @Sese2003]. In our case, one calculates $L^\mathrm{eff}_\mathrm{CCC}\sim 14$ nH, $N_\mathrm{PC}^\mathrm{opt}=12$ and $S_\mathrm{CCC}^\mathrm{opt}=\mathrm{5~\mu A.t/\phi_0}$. 
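As a quick numerical check (an illustrative sketch, not LNE software), the optimal number of pick-up turns and the flux-to-current conversion can be evaluated as follows; the sensitivity of $\mathrm{8~\mu A.t/\phi_0}$ and the white flux noise of $\mathrm{10~\mu\phi_0/Hz^{1/2}}$ used below are the experimental values reported in the following paragraphs.

```python
import math

# Values quoted in the text.
L_i   = 1.8e-6     # SQUID input inductance [H]
L_ccc = 14e-9      # effective CCC self-inductance [H]

N_pc_opt = math.sqrt(L_i / L_ccc)
print(f"optimal pick-up coil turns: {N_pc_opt:.1f}")      # ~11.3, i.e. 12

# Flux-to-current conversion with the experimental sensitivity and the
# measured white flux noise of the CCC.
S_ccc      = 8e-6    # A.t per phi_0
flux_noise = 10e-6   # phi_0 / sqrt(Hz)
current_noise = S_ccc * flux_noise                        # A.t / sqrt(Hz)
print(f"current resolution: {current_noise * 1e12:.0f} pA.t/sqrt(Hz)")  # ~80
```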
Due to geometrical constraints, the number of turns of the pick-up coil was reduced to $N_\mathrm{PC}=6$, leading to an experimental sensitivity $S_\mathrm{CCC}=\mathrm{8~\mu A.t/\phi_0}$, close to the calculated value of $\mathrm{6~\mu A.t/\phi_0}$. Our magnetic screen is made up of 5 concentric cylinders: two Pb ones, each one embedded in a brass one, and an outer Cryoperm® cylinder. Each cylinder is closed at the top with the same material. The cryogenic probe body is made of three rods to stabilize it mechanically and prevent vibrations.

Noise performance
-----------------

![Noise spectral density measured by the SQUID *versus* frequency *f* for the CCC alone and not connected.[]{data-label="Fig-NoiseCCCAlone"}](Fig-NoiseCCCAlone.pdf){width="3.5in"}

The CCC was first tested with all windings disconnected. Fig.\[Fig-NoiseCCCAlone\] shows the noise spectral density in $\mathrm{\phi_0/Hz^{1/2}}$ as a function of the frequency. The main frequency resonance, due to the coupling of the large inductance of the windings and the capacitance between wires, is around 14 kHz. Between 6 Hz and about 2 kHz, the noise spectral density is dominated by sharp peaks with an amplitude lower than $1~\mathrm{m\phi_0/Hz^{1/2}}$, which are caused by mechanical resonances. At lower frequencies, down to about 0.1 Hz, there exists a white noise regime with a constant noise spectral density of about $10~\mathrm{\mu \phi_0/Hz^{1/2}}$. Considering the CCC sensitivity of $\mathrm{8~\mu A.t/\phi_0}$, this leads to a current resolution of about $\mathrm{80~pA.t/Hz^{1/2}}$. One can guess a *1/f* noise rising above $10~\mathrm{\mu \phi_0/Hz^{1/2}}$ at frequencies lower than 0.1 Hz, which is expected considering the 0.3 Hz corner frequency and the $3~\mathrm{\mu \phi_0/Hz^{1/2}}$ white noise level of the Quantum Design DC SQUID. We therefore conclude that the current reversal frequency of the resistance bridge should be higher than 0.1 Hz to obtain the lowest measurement noise.

Accuracy
--------

The accuracy of the CCC was tested by performing winding-opposition experiments. Windings with the same nominal number of turns are connected in series-opposition and supplied by a large current. The voltage $V_\mathrm{Output}$ at the output of the SQUID operating in internal feedback mode (MODE 5) is converted into a magnetic flux $\delta_{\phi_0}$ using the SQUID gain in $V/\phi_0$. By dividing $\delta_{\phi_0}$ by the total magnetic flux $NI/S_\mathrm{CCC}$ generated by one winding, one obtains the relative error $\Delta N/N= \delta_{\phi_0} S_\mathrm{CCC}/(NI)$. We have used the very low noise current source of the RB to carry out these experiments in order to prevent noise rectification by the SQUID. The measurement current is reversed from 100 mA to -100 mA to remove offset voltages from the recorded data. For turn numbers *N* equal to 16 and 2065, which are used in the calibration of a $100~\Omega$ resistor in terms of $R_\mathrm{K}/2$, $\Delta N/N$ is found to be equal to $(1.9 \pm 1.2)\times 10 ^{-11}$ and $(2.5 \pm 0.04)\times 10 ^{-11}$, respectively. For all other winding oppositions, turn errors are found to be smaller than $6\times10^{-11}$, except for the 1-1 and 2-2 combinations, which seem to indicate significant errors of $\sim1\times10^{-9}$ and $\sim5\times10^{-10}$. However, a magnetic flux leakage caused by a hole in the toroidal shield or by an imperfect chimney would manifest itself as large errors for all winding oppositions.
Our interpretation is that these apparent errors are only caused by spurious signals coming from residual noise rectification, which manifest themselves all the more as the total ampere.turn number is small, i.e. for the 1-1 and 2-2 winding oppositions. Further reduction of the noise emitted by the current source in the 100 mA range (which is the noisiest range) is required to refine the determination of winding errors for small turn numbers.

  | **Winding combination** | **$\Delta N/N$**                |
  |-------------------------|---------------------------------|
  | 1-1                     | $(1.29\pm 0.38)\times10^{-9}$   |
  | 2-2                     | $(4.64\pm 0.63)\times10^{-10}$  |
  | 16-16                   | $(1.9\pm 1.2)\times10^{-11}$    |
  | 16+16-32                | $(2.5\pm 1.1)\times10^{-11}$    |
  | 16+16+32-64             | $(0.22\pm 0.76)\times10^{-11}$  |
  | 16+16+32+64-128         | $(1.0\pm 1.1)\times10^{-11}$    |
  | 160-160                 | $(2.5\pm 0.14)\times10^{-11}$   |
  | 160-128-32              | $(0.5\pm 0.4)\times10^{-11}$    |
  | 1600-1600               | $(5.6\pm 0.032)\times10^{-11}$  |
  | 2065-2065               | $(2.54\pm 0.035)\times10^{-11}$ |

Our conclusion is that the CCC error in the measurements of usual resistance ratios, which mainly exploit windings of 2065, 1600, 160, and 16 turns, is of a few parts in $10^{11}$. The CCC contribution to the type-B uncertainty is therefore below $10^{-10}$, in relative value.

The current source electronics
==============================

Design and fabrication
----------------------

![Schemes of the electronic circuits of the primary and secondary current sources. Both current sources are controlled by a single external voltage source.[]{data-label="Fig-Circuits"}](Fig-Circuits.pdf){width="3.5in"}

The RB is based on two current sources generating the currents $I_\mathrm{P}$ and $I_\mathrm{S}$ which supply the resistors $R_\mathrm{P}$ and $R_\mathrm{S}$ respectively. In some recently developed resistance bridges, current sources are based on digital electronics connected by fiber optics to an internal micro-controller[@Drung2009] or an external PXI computer. This provides strong electrical insulation and easy automation but requires the implementation of efficient noise filtering techniques to protect the SQUID from the radio-frequency noise emitted by digital circuits. By contrast, the current sources of the LNE RB are based on linear analog circuits to avoid high-frequency noise[@Soukiassian2010]. Fig.\[Fig-Circuits\] shows the schematic of the two electronic circuits, the primary and the secondary ones. They are controlled and synchronized by a single external voltage source, allowing automation of measurements. The latter can be either a dc voltage generator, like a Yokogawa 7651 for usual dc measurements, or the oscillator of a lock-in detector for low-frequency ac measurements. The external reference voltage supplies the primary circuit through a high-impedance differential amplifier. A low-pass filter is then used to limit the signal bandwidth with an adjustable cutoff frequency ranging from 1 mHz to 1 kHz. After a stage summing additional voltage corrections and a division stage allowing the setting of a decimal fraction, the voltage is converted into a current in ranges extending from 1 $\mu$A up to 100 mA. This conversion is done using an inverting amplifier circuit boosted by a buffer amplifier (BUF634T) and a dividing resistor. The secondary current source, controlled by the output signal of the low-pass filter of the primary current source, is similar, but the selected current range can also be multiplied by a factor of 1.2906 or 1/1.2906 to adapt to measurements involving the QHR connected either to the primary or to the secondary current source.
Several additional circuits are implemented to finely adjust the current ratio $r_I=I_\mathrm{S}/I_\mathrm{P}$ to within a few parts in $10^{6}$. This is necessary to limit the ampere.turn unbalance in the CCC, not only to avoid unlocking of the SQUID feedback notably during current switching, but also to achieve the best accuracy in the $I_\mathrm{S}/I_\mathrm{P}$ ratio adjustment. Offset, in-phase and in-quadrature correction circuits are used to tune the secondary current while an asymmetry correction circuit injecting a fraction of the voltage absolute value is used in the primary circuit to compensate, to some extent, the asymmetry behaviour of operational amplifiers. Finally, the SQUID feedback voltage, after insulation by an differential amplifier, is converted into a feedback current at the last stage of the secondary circuit so that the closed-loop feedback gain remains the same as in internal feedback mode, i.e. $\sim0.75~V/\phi_0$. Electronic circuits of each current source are integrated, but electrically isolated with PTFE material, into their own metallic box connected to ground, as shown in pictures of fig.\[FigAppendix-Sources\]. The electronic components are powered by stabilized voltages provided by a circuit itself energized by rechargeable batteries. These are also electrically isolated from the grounded metallic box in which they are placed. The only electrical link between the electronic circuits and the ground comes from the high-impedance operational amplifiers (OPA128LM) which ensure a high-isolation (in principle $\sim10^{15} ~\Omega$ resistance) from the piloting external voltage source and the SQUID feedback electronics. All these precautions aim at cancelling leakage currents. Test of the current ratio adjustability --------------------------------------- ![Illustration of the adjustability of the current ratio $I_\mathrm{P}/I_\mathrm{S}$ using resistances $R_\mathrm{P}=\mathrm{10~k\Omega}$ and $R_\mathrm{S}=\mathrm{100~\Omega}$. The balance voltage $\Delta V$, measured by the null detector (ND) is recorded as a function of time while the current is periodically reversed (red dashed line) for different settings of the corrections circuits : adjustment of the offset correction only (blue line), adjustment of the in-quadrature correction (green line), adjustment of in-phase, in-quadrature and asymmetry corrections (deep blue line).[]{data-label="Fig-CurrentAdjustment"}](Fig-CurrentAdjustment.pdf){width="3.5in"} Fig.\[Fig-CurrentAdjustment\] shows the experiment carried out to test the adjustability of the ratio of the two current sources[@Soukiassian2010]. Two resistors of resistance $\mathrm{10~k\Omega}$ and $\mathrm{100~\Omega}$ are fed by currents $I_\mathrm{P}$ and $I_\mathrm{S}$ respectively. The potential drop difference $\Delta V$ at the terminals of the two resistors is recorded by a null detector (nanovoltmeter EMN11). For a nominal voltage reversing from 1 V to -1 V every 20 secondes, Fig.\[Fig-CurrentAdjustment\] shows that it is possible to reduce the peak to peak $\Delta V$ amplitude to less than $\mathrm{2~\mu V}$ by optimizing the in-phase, the in-quadrature, the offset and the asymmetry corrections. Let us notably remark the effect of the in-quadrature correction in cancelling the voltage overshot caused by the fast current reversal. The current ratio can therefore be adjusted with an accuracy of 2 parts in $10^6$ even during fast current reversal which is an advantage to avoid any SQUID unlocking. 
Noise optimization and filtering {#Noise optimization and filtering} -------------------------------- ![Electronic scheme of the two last stages (A and B) of the secondary current source (the primary current source does not include the SQUID feedback circuit) describing the noise filtering techniques. Picture of the common mode torus used to block the circulation of current noise towards ground. The dotted line represents the limit of the case (at ground) of the current source.[]{data-label="Fig-Filtering"}](Fig-Filtering.pdf){width="3.5in"} Noise filtering is crucial particularly from the resonance frequency of the CCC (14 kHz) up to the operating frequency of the modulation circuit (500 kHz) not only to ensure a good working of the SQUID but also to avoid noise rectification that would alter measurement accuracy. Fig.\[Fig-Filtering\] shows the last stage of the electronic circuit of the secondary current source. The primary current source is based on a similar stage but differs by the absence of the SQUID feedback electronic circuit. In practice, the frequency bandwidth of the primary and secondary current sources was reduced to 160 Hz at the stage A and 1 kHz at the stage B of the electronic circuit using simple low-pass filters based on resistors and capacitors. By this way, the frequency bandwidth of the SQUID feedback circuit servo-controlling the secondary current is set to 1 kHz. Table.\[tableau:Range\] summarizes the capacitance $C_\mathrm{F}$ and resistance $R_\mathrm{F}$ values chosen to set the 1 kHz cut-off frequency that damps the CCC resonance for each current range (defined by the value of the resistor $R_\mathrm{C}$), considering the nominal value of the resistance in measurement $R_\mathrm{E}$. It is also essential to avoid the circulation of the current noise coming from the capacitive coupling of the electronics circuit with ground. To cancel this noise source which renders the SQUID inoperative, a common mode torus (CMT) was introduced in the current circuit of each source (see fig.\[Fig-Filtering\]). This CMT is made of a PTFE insulated wire pair wounded about 60 times around an APERAM Nano magnetic torus (magnetic permeability of about 80000 up to a 100 kHz frequency) with a return spire. The differential inductance is around $\mathrm{3~\mu H}$ while the common mode inductance is around 0.6 H. The common mode impedance, which increases from about $200~\Omega$ at 50 Hz up to $\mathrm{150~k\Omega}$ at 1 MHz, drastically reduces the circulation of the common mode current noise. This protects the SQUID and makes it operating quite ideally whether it is a radio-frequency SQUID or a DC SQUID. 
  | **Range**            | **$R_\mathrm{C}$** | **$R_\mathrm{E}$** | **$C_\mathrm{F}$** | **$R_\mathrm{F}$** |
  |----------------------|--------------------|--------------------|--------------------|--------------------|
  | 1 $\mathrm{\mu A}$   | 5 M$\Omega$        | 1 M$\Omega$        | 300 pF             | 4 M$\Omega$        |
  | 10 $\mathrm{\mu A}$  | 500 k$\Omega$      | 100 k$\Omega$      | 300 pF             | 400 k$\Omega$      |
  | 100 $\mathrm{\mu A}$ | 50 k$\Omega$       | 10 k$\Omega$       | 3 nF               | 40 k$\Omega$       |
  | 1 mA                 | 5 k$\Omega$        | 1 k$\Omega$        | 30 nF              | 4 k$\Omega$        |
  | 10 mA                | 500 $\Omega$       | 100 $\Omega$       | 300 nF             | 400 $\Omega$       |
  | 100 mA               | 50 $\Omega$        | 10 $\Omega$        | 3 $\mathrm{\mu F}$ | 40 $\Omega$        |

  : Resistance and capacitance values in the last stage of the secondary current source for the different current ranges.[]{data-label="tableau:Range"}

The SQUID feedback circuit
--------------------------

As shown in fig.\[Fig-Filtering\], the SQUID feedback voltage $V_\mathrm{SQUID}$, probed in the Quantum Design SQUID preamplifier, is sent to the secondary current source of the bridge after decoupling by a high-impedance differential amplifier. A resistor is biased by the $V_\mathrm{SQUID}$ voltage to inject the feedback current into the secondary current circuit supplying the secondary winding ($N_\mathrm{S}$). The resistor value, $R_\mathrm{FB}=\mathrm{1.5~M\Omega}$, is chosen so that the closed-loop feedback gain $G_\mathrm{CLG}=R_\mathrm{FB}\times S_\mathrm{CCC}/N_\mathrm{S}$ for $N_\mathrm{S}=16$ is the same as in the most sensitive internal feedback mode of the SQUID (mode 5 and mode 5s), equal to $0.75~V/\phi_0$. A circuit made of a small capacitance of 200 pF in series with a $\mathrm{20~k\Omega}$ resistor is connected in parallel to the $\mathrm{1.5~M\Omega}$ resistor to partially compensate the dephasing caused by the 1 kHz low-pass filter implemented in stage B of the electronic circuit and therefore optimize the SQUID operation.

Current dividers
================

The standard (or in-phase) current divider
------------------------------------------

The standard current divider (SCD) is used to balance the voltages measured at the terminals of the two resistors by deviating a fraction of the current towards the auxiliary winding. This is a key element of the RB, the accuracy of which directly impacts the uncertainty budget of the RB. An alternative technique to null the voltage measured by the detector consists in using an auxiliary current source servo-controlled by the null detector voltage output[@Williams1991; @Hartland92]. This relies on an active component, a second feedback electronics and an accurate measurement of the delivered current. By contrast, the SCD developed here is a passive component that, once calibrated, can inject a known current. Moreover, it avoids the use of a second feedback electronics. However, one requirement is a good stability of the current fractions defined by the SCD. This is achieved with a design that limits the number of electrical commutations required to select the current fraction. The counterpart of these technical choices aiming at better reproducibility is that the calibration of the LNE SCD is not fully automated, contrary to that of binary compensation units[@Drung2013; @Gotz2017].

![Electrical scheme of the standard current divider (SCD). $N$, $P$, $Q$, $P'$, $Q'$ are integers defining the setting of the SCD.
A CMT is inserted between the SCD and the auxiliary winding to reduce the current noise circulation through ground.[]{data-label="fig-SCD"}](Fig-SCD.pdf){width="3.5in"} The SCD, described in fig.\[fig-SCD\], is built to inject in the auxiliary winding fractions of a main current (lower than 10 mA) ranging from 0 to $5\times10^{-5}$ by minimal step of $5\times10^{-8}$. Made of three main series resistor networks ($10\times 20~\Omega$, $10\times 2~\Omega$, $10\times 200~\Omega$) and a large $\mathrm{4~M\Omega}$ division resistor, the SCD ratio can be adjusted using three mechanical IEC MONACO commutators with goal-coated silver contacts (see picture in fig.\[FigAppendix-SCD\] of appendix section). The current fraction is given by: $\epsilon_{(N,P,Q)}= N\times5\times10^{-6}+P\times5\times10^{-7}+Q\times5\times10^{-8}$ where *N*, *P*, *Q* are the integer values between 0 and 10 indexing the three commutator positions, respectively. To achieve this ideal behavior, it was necessary to implement two additional compensations resistor networks in order to circumvent the non-linearities that result from the variation of the division resistance varying *P* and *Q*. Indeed, the triangle formed by resistors ($2~\Omega$, $Q\times 200~\Omega$, $(10-Q)\times 200~\Omega$) leads to the addition of a *Q* dependent resistance to the $\mathrm{4~M\Omega}$ resistance. To solve this non-linearity, a compensation resistance in series with the $\mathrm{4~M\Omega}$ resistance is selected for each *Q* value from the resistor network ($0~\Omega$, $20~\Omega$, $80~\Omega$, $180~\Omega$, $320~\Omega$, $500~\Omega$, $320~\Omega$, $180~\Omega$, $80~\Omega$, $20~\Omega$, $0~\Omega$). The variation of the division resistance changing the *P*-dependent position of the potentiometer is compensated by selecting a fraction of the resistor network ($10\times 2~\Omega$) using the index *P’*=10-*P*. Finally, the fraction of the resistor network ($10\times 20~\Omega$) selected by *N’* index adds to keep constant the total resistance of the SCD independently of the *N* index change: *N’*=10-*N*. This is useful to set the frequency bandwidth of the current source independently of the SCD setting. To achieve the best stability of the SCD fractions with time and under load, resistor networks are constituted of high stability (drift lower than $10^{-5}$/year, in relative value), low temperature coefficient ($<0.6\times10^{-6}/^{\circ}$C) and hermetic Vishay resistors (VH516-4 $20~\Omega$, VHS102 $2~\Omega$, VH518-10 $\mathrm{1~M\Omega}$, VH102K $200~\Omega$) connected in series with low electromotive force solder. Finally, a CMT is introduced that drastically reduces the current noise signals circulating from ground to the auxiliary winding through the SCD circuit. The fractions corresponding to the three following sets of commutator positions: ($N$, $P=0$, $Q=0$) with $N$ varying from zero to ten, ($N=0$, $P$, $Q=0$) with $P$ varying from zero to ten, ($N=0$, $P=0$, $Q$) with $Q$ varying from zero to ten, were calibrated by measuring the ratio of the voltage $V_\mathrm{AB}$ to a reference bias voltage applied in place of the auxiliary winding. Calibrations performed over ten years showed that fractions have remained close to their nominal value within one part in $10^9$, drifting each by no more than 5 parts in $10^{10}$. 
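A minimal helper (illustrative only, not the LNE control software) showing how the nominal SCD fraction and the compensation indices follow from the commutator positions is sketched below; the sign of the injected ampere.turns is assumed here to be set by the sense of connection of the auxiliary winding.

```python
def scd_fraction(N: int, P: int, Q: int) -> float:
    """Nominal in-phase fraction: epsilon = N*5e-6 + P*5e-7 + Q*5e-8."""
    if not all(0 <= k <= 10 for k in (N, P, Q)):
        raise ValueError("commutator indices must lie between 0 and 10")
    return N * 5e-6 + P * 5e-7 + Q * 5e-8

def compensation_indices(N: int, P: int) -> tuple:
    """Indices of the compensation networks keeping the SCD resistance constant."""
    return 10 - N, 10 - P              # N' = 10 - N, P' = 10 - P

# Example: a fraction of magnitude 12.1e-6 corresponds to (N, P, Q) = (2, 4, 2).
print(scd_fraction(2, 4, 2))           # 1.21e-05
print(compensation_indices(2, 4))      # (8, 6)
```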
More generally, the fraction $\epsilon_{(N,P,Q)}$ corresponding to the commutator position ($N$, $P$, $Q$) is given by : $$\begin{aligned} \epsilon_{(N,P,Q)}&=\epsilon_{(N,P=0,Q=0)}+\epsilon_{(N=0,P,Q=0)}\\ &+\epsilon_{(N=0,P=0,Q)}-2\epsilon_{(0,0,0)}.\end{aligned}$$ The quadrature current divider ------------------------------ Capacitive leakages to ground short-circuiting the secondary resistor lead to voltage overshoots during current variations that can possibly saturate the null detector. This limits the maximum current reversing speed. To circumvent this difficulty, a second current divider (QCD) is connected in series with the SCD to cancel these voltage overshoots (see fig.\[fig1\]) by injecting an in-quadrature fraction $j\epsilon_q$ of the main current in a CCC auxiliary winding of number of turns ${N_\mathrm{A}^q}$, as highlighted by equations \[Equation:AmpereTurns\] and \[Equation:Phase\]. More precisely, the main current is flowing through a $R_\mathrm{q}$ resistor of $1~\Omega$ or $10~\Omega$ resistance value. A $100~\Omega$ potentiometer connected in parallel allows the adjustment of the voltage fraction $\alpha$ biasing a $C_\mathrm{q}=235$ nF PTFE capacitor (its parallel resistance is higher than $2\times 10^{13}~\Omega$) in series with the CCC auxiliary winding (typically ${N_\mathrm{A}^q}=1600$). The current fraction injected by the QCD is therefore $j\epsilon_q\simeq j\alpha R_\mathrm{q}C_\mathrm{q}\omega$, where $\alpha\in[0:1]$ is the potentiometer fraction. It is crucial that this current remains negligible during the data acquisition. Considering a current setting time constant of 0.5 s, one can calculate that the current fraction drops down to no more than a few parts in $10^{13}$ of the main current after a waiting time of only 8 s. Let us note that the quadrature current divider is also equipped with a CMT to reduce current noise circulation (see picture in fig.\[FigAppendix-SCD\] of appendix section). Shielding and guarding ====================== ![Left: Picture of the resistance bridge. Right: Picture of the CCC winding switching box at the top of the cryostat.[]{data-label="fig4"}](Fig4.pdf){width="3.5in"} As shown in fig. \[fig4\], each sensitive element of the RB is carefully shielded against noise. The null detector, the current sources, the current dividers, the power supply are in grounded metallic boxes. The QHR and the CCC are each in an independent cryostat connected at ground. The continuity of the shielding between the different elements is ensured by the connection cables, the metallic sheath of which is also connected at ground as schematized in fig.\[fig1\]. This directs any leakage current between wires at different potentials towards the ground. In normal operation, the ground is connected both to the low potential of the resistance $R_\mathrm{S}$ (position A in fig.\[fig1\]) and to the case of the EM detector (there is no common mode voltage). The leakage current, $I_g$ therefore short-circuits the lowest resistance (black arrow) which reduces its impact. This grounding is usually efficient to limit the leakage current contribution to the type B uncertainty below $10^{-9}$ for the measurement of the $100~\Omega/(R_\mathrm{K}/2)$ ratio. The ground can also be connected at the low potential of the secondary winding (position B in fig.\[fig1\]). 
The leakage current, $I_g$, short-circuiting both the resistor and the winding of the secondary circuit (grey arrow) in this case, the measurement accuracy of the resistance ratio is not altered at all by leakage currents. But, this is at the expense of a common mode voltage, although weak, existing between the ground and the low potential of the null detector due to the small resistance (about 2 $\Omega$) of the winding. To minimize its effect on the measurement accuracy, the case and the low potential of the null detector are short-circuited. Besides, capacitance hand-effects are cancelled because the null detector is itself placed in a grounded metallic box. Resistance ratio Measurements ============================= Noise spectrum and SQUID feedback stability ------------------------------------------- ![Noise spectral density measured by the SQUID *versus* frequency *f* for the measurement of the 100$\Omega$/10 k$\Omega$ ratio. SQUID operating in internal feedback mode 5 (black), in external feedback mode 5s (red), and in external feedback mode 500 (blue).[]{data-label="Fig-NoiseCCC10k100"}](Fig-NoiseCCC10k100.pdf){width="3.5in"} The operation stability of the SQUID was demonstrated for several ratio measurements: 10 k$\Omega$/1 M$\Omega$, 100 $\Omega$/10 k$\Omega$ and 1 $\Omega$/100 $\Omega$. The following settings of the bridge were used: $N_\mathrm{P}=1600$, $N_\mathrm{S}=16$, $N_\mathrm{A}=16$ and a current divider connected in series with the secondary current source for the two first ratios, $N_\mathrm{P}=1600$, $N_\mathrm{S}=16$, $N_\mathrm{A}=1600$ and a current divider connected in series with the primary current source for the ratio 100 $\Omega$/1 $\Omega$. The quadrature divider was not used for these tests. Fig.\[Fig-NoiseCCC10k100\] shows the noise spectral density, expressed in $\mathrm{\phi_0/Hz^{1/2}}$, determined by the Quantum Design SQUID operating in different feedback modes for the measurement of the 100 $\Omega$/10 k$\Omega$ ratio using current ranges $10$ mA/100 $\mu$A. In closed feedback mode operation, the output noise corresponds to the difference between the noises emitted by the primary and the secondary current sources since the current ratio $I_\mathrm{S}/I_\mathrm{P}$ is adjusted within $10^{-6}$ to cancel to ampere.turn unbalance of the CCC, i.e. the magnetic flux in the SQUID. So there remains only the uncorrelated noise contributions of both current sources. Let us note that the residual magnetic flux noise crossing the SQUID itself has a much lower level because of the real-time compensation by the feedback signal. It is given by the combination of the intrinsic SQUID noise (3 $\mathrm{\mu\phi_0/Hz^{1/2}}$), the environmental noise directly captured by the SQUID and the current source noise divided by the open-loop amplification gain. This latter contribution is negligible. The two others, which manifest in the CCC alone and disconnected, give a contribution of about 10 $\mathrm{\mu\phi_0/Hz^{1/2}}$ as observed in fig.\[Fig-NoiseCCCAlone\]. The internal (through the modulation coil of the SQUID) and the external (through the CCC winding) feedback mode operations mainly differ by their bandwidth as one can observe in fig.\[Fig-NoiseCCC10k100\]. In internal feedback mode 5, the bandwidth of about 20 kHz allows measuring the noise level up to the frequency resonances of the CCC. Fig.\[Fig-NoiseCCC10k100\] shows that the noise amplitude is above a noise floor level of about 140 $\mathrm{\mu\phi_0/Hz^{1/2}}$. 
This bottom level is notably explained by the Johnson-Nyquist noise, of 120 $\mathrm{\mu\phi_0/Hz^{1/2}}$, generated by the $R_\mathrm{C}=50$ k$\Omega$ resistor defining the 100 $\mu$A current range of the primary current source. Below 10 Hz, the noise increase is mainly caused by the $1/f$ voltage noise of the operational amplifiers (OPA111BM) which polarizes the 50 k$\Omega$ dividing resistor. The operation in external feedback mode 5s is very stable. The noise spectrum is similar but is characterized by a lower frequency bandwidth limited to about 500 Hz by the SQUID electronics. The cutoff frequency manifests itself by a peak, above which the signal then decreases of 20 dB by decade. The operation in external feedback mode 500 is also stable but with a higher cutoff frequency of about 1 kHz defined by the secondary current source filters (in fact the 200 pF capacitance of the feedback circuit (fig.\[Fig-Filtering\]) slightly extends the frequency bandwidth above 1 kHz set by the $C_\mathrm{F}$ capacitance). In some configurations, this larger frequency bandwidth can ensure a better stability of the bridge operation against higher-frequencies acoustic noises. ![Noise spectral density measured by the SQUID *versus* frequency *f* for the measurement of the 1 $\Omega$/100 $\Omega$ ratio a) and 10 k$\Omega$/1 M$\Omega$ ratio b). SQUID operating in internal feedback mode 5 (black) and in external feedback mode 5s (red).[]{data-label="Fig-NoiseCCCAutres"}](Fig-NoiseCCCAutres.pdf){width="3.5in"} Fig.\[Fig-NoiseCCCAutres\]a) and b) demonstrate stability of operation of the external SQUID feedback in the measurements of ratios 1$\Omega$/100 $\Omega$ and 10 k$\Omega$/1 M$\Omega$ respectively. The base noise level is larger for the measurement of the 1 $\Omega$/100 $\Omega$ ratio (above $\mathrm{{2~m\phi_0/Hz^{1/2}}}$). This comes from the reduction to $R_\mathrm{C}=5$ k$\Omega$ of the resistor defining the 1 mA range used to supply the 100 $\Omega$ resistor. Conversely, the current sources are feebly noisy in the measurement configuration of the 10 k$\Omega$/1 M$\Omega$ ratio because of the $R_\mathrm{C}=5$ M$\Omega$ resistor defining the 1 $\mu$A range. One can observe in fig.\[Fig-NoiseCCCAutres\]b), a white noise level of no more than $\mathrm{20~\mu\phi_0/Hz^{1/2}}$ between 0.2 Hz and 6 Hz. This low base noise level allows observing the manifestation of moderate mechanical resonances in the range from 10 Hz and 1 kHz. Measurement protocol and type A uncertainty ------------------------------------------- ![Measurement of the resistance ratio $r_R=100~\Omega/(R_\mathrm{K}/2)$ using the old a) and the new b) LNE bridge: relative voltage deviation $\Delta V/V$ as a function of time for several ($\mathrm{I^+}$, 0, $\mathrm{I^-}$) sequences. The signal period is about 200 s. The following settings were used: $N_\mathrm{P}=1936$, $N_\mathrm{S}=15$ and $N_\mathrm{A}=15$ for the older bridge and $N_\mathrm{P}=2065$, $N_\mathrm{S}=16$, $N_\mathrm{A}=16$ and $N_\mathrm{A}^q=1600$ for the new bridge. $\epsilon$ fractions are chosen to obtain similar deviation amplitude, $\Delta V/V$, for both bridges. $\epsilon_q$ setting of the new bridge QCD is optimized to cancel overshoots at current reversals.[]{data-label="Fig-StabilitySteps"}](Fig-StabilitySteps.pdf){width="3.5in"} Measurements of the resistance ratio $r_R=100~\Omega/(R_\mathrm{K}/2)$ were performed using the old and the new LNE bridges. 
The primary current circulating through the GaAs/AlGaAs-based quantum resistance standard is set to $I_\mathrm{P}=70~\mu$A. For a $\epsilon$ fraction of the SCD which differs from $\epsilon_0$, a finite voltage $\Delta V$ can be detected by the null detector. The relative voltage $\Delta V/V$, where $V=R_\mathrm{P}I_\mathrm{P}$, is related, at the first order, to the deviation $(\epsilon-\epsilon_0)$ by: $$\Delta V/V=(\epsilon-\epsilon_0)\frac{N_\mathrm{A}}{N_\mathrm{S}}.$$ Let us note that, reversely, $\Delta V/V$ could be interpreted as a relative deviation of the ratio $r_R$ to the value $r_{R_0}$ giving $\Delta V=0$ for the fraction $\epsilon$. $\Delta V/V$ measured as a function of time during several ($\mathrm{I^+}$, 0, $\mathrm{I^-}$) sequences is reported in Fig.\[Fig-StabilitySteps\]a) and b) for the old and the new LNE bridge respectively. The signal period, of about 200 s, is imposed by the low-speed capability of the older bridge. The comparison of both data first shows the lower noise level and better stability achieved in measurements performed with the new bridge. This comes not only from the better performance of the current source electronics but also from the lower noise level of the CCC. Second, it demonstrates the efficiency and interest of the quadrature current divider which allows the cancellation of any voltage overshoot during the current switchings. This is useful to speed up the current reversal which reduces the impact of the voltage offset drift and of the $1/f$ SQUID noise. ![Measurement of the resistance ratio $r_R=100~\Omega/(R_\mathrm{K}/2)$ using the new LNE bridge with a primary current $I_\mathrm{P}=70~\mu$A: relative deviation $\Delta V/V$ as a function of time for 16 ($\mathrm{I^+}$, $\mathrm{I^-}$, $\mathrm{I^+}$) sequences and two successive settings, $\epsilon^+=-12.1\times10^{-6}$ and $\epsilon^-=+12.6\times10^{-6}$, of the SCD fractions.[]{data-label="Fig-QuickSequence"}](Fig-QuickSequence.pdf){width="3.5in"} Fig.\[Fig-QuickSequence\] shows the typical data record for the measurement of the resistance ratio $r_R=100~\Omega/(R_\mathrm{K}/2)$ with the new bridge. It consists of two successive acquisitions of voltage measurements, $V^{\epsilon^+}$ and $V^{\epsilon^-}$, that are obtained for the two settings of the SCD fractions $\epsilon^+=-12.1\times10^{-6}$ and $\epsilon^-=+12.6\times10^{-6}$ respectively. Each acquisition is made of 16 ($\mathrm{I^+}$, $\mathrm{I^-}$, $\mathrm{I^+}$) sequences of current reversal that are used to remove voltage offsets. A mean voltage value is calculated from the average of the 16 values $[V(I^+)_1+V(I^+)_3-2V(I^-)_2]/4$, where 1,2,3 index the current state of each sequence. The resistance ratio is then obtained from the $\epsilon_0$ value calculated from the two mean voltages, $<V^{\epsilon^-}>$ and $<V^{\epsilon^+}>$, according to $\epsilon_0=\epsilon^- + (\epsilon^+-\epsilon^-)\times |<V^{\epsilon^-}>|/(|<V^{\epsilon^-}>|+|<V^{\epsilon^+}>|)$. Owing to the new bridge performances, the period of the signal was therefore reduced to about 70 s and the zero crossing step was removed. It results that the ratio between the acquisition time and the total experiment time is increased from about 50 percents with the older bridge to 75 percents with the new bridge, which is favorable to a reduction of the type A uncertainty. ![Measurement of the resistance ratio $r_R=100~\Omega/(R_\mathrm{K}/2)$ using the new LNE bridge with a primary current $I_\mathrm{P}=50~\mu$A. 
From an experiment described in the supplementary material of [@Ribeiro2015]. Standard Allan deviation of $r_R$, expressed in relative value, as a function of the acquisition time $t$ (red squares). $t^{-1/2}$ adjustment of the data (black dashed line). Standard deviation of the mean (blue open hexagons). Inset: ($\mathrm{I^+}$, $\mathrm{I^-}$, $\mathrm{I^+}$) sequence of $\Delta V$ measurements characterized by a period of 70 s.[]{data-label="Fig-Allan"}](Fig-Allan.pdf){width="3.5in"} The noise performance of the new LNE bridge is demonstrated by the calculation of the Allan standard deviation[@Allan1987; @Witt2005] of $r_R=100~\Omega/(R_\mathrm{K}/2)$ from the statistical analysis of the voltage measurements performed using a primary current $I_\mathrm{P}=50~\mu$A. The evolution of this quantity, expressed in relative value, is reported in fig.\[Fig-Allan\] as a function of the measurement time $t$. It follows a $t^{-1/2}$ law up to measurement times longer than one hour. This shows that the white noise is dominant and that the standard deviation of the mean can be used to estimate the type A uncertainty. It follows that a type A standard uncertainty of $1.5\times10^{-10}$ can be achieved for the measurement of the ratio $r_R=100~\Omega/(R_\mathrm{K}/2)$ using a current of $50~\mu$A after a measurement time of one hour. This is five times lower than the best uncertainty achievable with the older bridge. This improvement relies not only on the quicker measurement protocol but also on the lower noise of the current source electronics and of the CCC. Let us remark that the contribution of the CCC to the voltage noise at the terminals of the null detector is no more than $\mathrm{0.5~nV/Hz^{1/2}}$. This is ten times lower than the EMN11 nanovoltmeter contribution, of about $\mathrm{5~nV/Hz^{1/2}}$, which limits the bridge type A uncertainty. Preliminary uncertainty budget ------------------------------ A preliminary uncertainty budget, reported in table \[tableau:Uncertainty\], was established for the measurement of the ratio $100~\Omega/(R_\mathrm{K}/2)$. As demonstrated in the previous section, a type A uncertainty of $0.15\times10^{-9}$ can indeed be achieved for a 1 hour measurement and an $I_\mathrm{P}=50~\mu$A measurement current. Further improvement would require the implementation of a lower-noise null detector. Several components contributing to the type B uncertainty were estimated as reported in Table \[tableau:Uncertainty\]. From the error measurements of the number of turns of the windings, one can deduce a contribution of the CCC accuracy below $0.1\times10^{-9}$. However, lower-noise oppositions of windings of one and two turns are required in the future to confirm the accuracy in measurements using such small turn numbers. The accuracy of the resistance ratio measurement also depends on the SQUID feedback accuracy in cancelling the total number of ampere-turns, i.e. in setting the current ratio $r_I=I_\mathrm{S}/I_\mathrm{P}$ to the target ratio given by equation \[Equation:AmpereTurns\]. A setpoint error comes from the finite value of the open loop gain $G_\mathrm{OLG}$ of the SQUID electronics. Although the SQUID amplifier is based on an integrator which leads to an infinite gain at DC, measurements are in practice carried out with a finite time periodicity of the current reversal, typically of 70 s.
The relative current ratio error is equal to $(\Delta^{adj}r_I/r_I)\times(G_\mathrm{CLG}/G_\mathrm{OLG})$, where $\Delta^{adj}r_I/r_I$ is the relative deviation between the target ratio and the preliminary adjusted current ratio. Measurements of the $100~\Omega/(R_\mathrm{K}/2)$ resistance ratio were performed for $\Delta^{adj}r_I/r_I$ values as large as a few $10^{-4}$. A relative error in the measurement of the resistance ratio lower than $10^{-11}$ is deduced in normal adjustment of the current ratio, i.e. for $\Delta^{adj}r_I/r_I\sim 10^{-6}$. This corresponds to $G_\mathrm{OLG}>7.5\times10^{4}$ V/$\phi_0$. This value is compatible with that yet determined in quantized current experiment[@Brun-Picard2016]. A main contribution comes from the calibration and stability of the standard current divider fractions. Characterizations performed over 10 years have demonstrated a very low drift of the fraction values, less than $5\times10^{-11}$/year in average value. At the present time, the calibration method of a fraction $\epsilon$ gives an uncertainty below $0.5\times10^{-9}$. A new calibration method, under development, aims at decreasing the uncertainty down to $0.3\times10^{-9}$ in order to benefit from the high-stability of the new standard current divider. \[1\][&gt;m[\#1]{}]{} **Uncertainty components (k=1)** **Contributions ($10^{-9}$)** ---------------------------------- ------------------------------- **Type A (1 hour)** **0.15** **Type B** **0.7 (A), 0.5 (B)** CCC accuracy $<0.1$ SQUID feedback accuracy $<0.01$ Current divider calibration $<0.5$ Leakage to ground $\sim 0.5$ (A), 0.1 (B) **Combined Uncertainty** **0.7 (A), 0.6 (B)** : Preliminary relative uncertainty budget for the $100~\Omega/(R_\mathrm{K}/2)$ ratio considering a $I_\mathrm{P}=50~\mu$A measurement current.[]{data-label="tableau:Uncertainty"} The impact of leakage current to ground on the measurement accuracy was also estimated through several experiments. At first, no significant deviation was found within a relative uncertainty of about 0.8 part in $10^{9}$ between measurements of the resistance ratios $100~\Omega/(R_\mathrm{K}/2)$ and $200~\Omega/(R_\mathrm{K}/2)$, performed with the ground either connected in position A ($I_g$ parallel to $R_\mathrm{S}$) or position B ($I_g$ fully deviated). To obtain a better knowledge of leakage currents, comparisons were repeated with a larger secondary resistance $R_\mathrm{S}$=1 k$\Omega$ to amplify their effect. From the comparison of the two measurements of the ratio 1 k$\Omega/$1 k$\Omega$ performed with the resistors interchanged, a relative deviation of $(-3\pm 0.4)\times10^{-9}$ is found for the ground in position A which reduces to $(-0.04\pm 0.3)\times10^{-9}$ for the ground in position B. A similar discrepancy of a few parts in $10^{9}$ is also found by comparing the measurements of the ratio 1 k$\Omega/(R_\mathrm{K}/2)$ obtained for both ground positions. Moreover, it is found that the value obtained with the ground in position B agrees within 1.2 parts in $10^{9}$ with that deduced by combining the measurements of the 100 $\Omega/(R_\mathrm{K}/2)$ and 100 $\Omega/1$ k$\Omega$ ratios. One concludes that a significant discrepancy of a few parts in $10^9$ caused by leakage currents exists but can nevertheless be fully cancelled moving the ground in position B. From these characterizations, a leakage current effect of about 0.5 part in $10^{9}$, i.e. 
ten times lower, can therefore be deduced for the measurement of the 100 $\Omega/(R_\mathrm{K}/2)$ ratio with the ground in position A. The contribution to the type B uncertainty budget falls below 0.1 part in $10^{9}$ by connecting the ground in position B. Finally, no significant effect of the current reversal duration (from $I^+$ to $I^-$ and reversely) was found within a relative uncertainty of 0.35 part in $10^9$ by varying its value from 12 s to 24 s while keeping the same acquisition time. To conclude, the total type B relative uncertainty is estimated to be either 0.7 part or 0.5 part in $10^{9}$ depending on whether the ground is connected to position A or B. The type A uncertainty being lower, the combined uncertainty is below one part in $10^{9}$. Further reduction of the measurement uncertainty will come from the improvement of the current divider calibration. Validation of measurement accuracy ---------------------------------- The new LNE bridge was used to perform accurate universality tests of the QHE[@Lafont2015; @Ribeiro2015]. The agreement of the quantized Hall resistance, $R_\mathrm{H}$, measured in GaAs and graphene devices was demonstrated with a record[@Ribeiro2015] relative uncertainty of $8\times10^{-11}$. This result was obtained by comparing the two measurements of the ratio $100~\Omega/(R_\mathrm{H}/2)$ carried out using a 100 $\Omega$ transfer resistor. This performance therefore emphasizes the low-noise level and the reproducibility of the measurement bridge, rather than its accuracy. Besides, the capability of the resistance bridge to perform measurements at low frequency (2 Hz) allowed the determination of the temperature evolution of the quantized Hall resistance in graphene during dynamic temperature drift[@Lafont2015]. Many elements of the resistance bridge, i.e. the CCC, the current source and the current divider, were also used to build the programmable quantum current generator that allowed a practical realization of the ampere from the elementary charge with a $10^{-8}$ relative uncertainty[@Brun-Picard2016]. Table \[tableau:Comparisons\] reports on the deviations between the measurements of the ratios $100~\Omega/(R_\mathrm{K}/2)$ and $100~\Omega/10~k\Omega$ performed using the new and the old bridges. It shows that there is no significant discrepancy within a combined uncertainty below 1.5 part in $10^{9}$. Let us note that the comparison uncertainty is limited by the larger type A uncertainty of the older bridge. This agreement between the two resistance bridges, which differ not only by their electronics but also by their CCC and standard current divider, make us very confident in our measurements of these resistance ratios. It also consolidate the Type B uncertainty budget described previously in table \[tableau:Uncertainty\]. \[1\][&gt;m[\#1]{}]{} **Ratio** **$100~\Omega/(R_\mathrm{K}/2)$** **$100~\Omega/10~k\Omega$** ------------------------ ----------------------------------- ----------------------------- **Relative deviation** $(-1.4\pm1.5)\times10^{-9}$ $(-0.7\pm1.4)\times10^{-9}$ : Relative deviations with combined standard uncertainties (k=1) between the measurements of the resistance ratio performed by the new and the older bridges.[]{data-label="tableau:Comparisons"} Conclusion ========== A new comparison resistance bridge based on a CCC was built at LNE. 
It is based on low-noise synchronized current sources that are carefully shielded and electrically isolated from ground, a new CCC with a very low noise level of $\mathrm{80~pA.t/Hz^{1/2}}$ and a very stable standard current divider characterized by a drift of less than 0.5 part in $10^{9}$ over ten years. Stable operation of the resistance bridge, i.e. of the SQUID in external closed feedback mode, has already been demonstrated in the measurements of the ratios $100~\Omega/(R_\mathrm{K}/2)$, $\mathrm{100~\Omega/10~k\Omega}$, $\mathrm{10~k\Omega/1~M\Omega}$ and $1~\Omega/100~\Omega$. The $100~\Omega/(R_\mathrm{K}/2)$ ratio can be determined with a relative type A uncertainty below $0.15\times 10^{-9}$ within a one-hour measurement time. This performance results not only from the lower noise of the bridge, and particularly of the new CCC, but also from the optimization of the data acquisition thanks to the quadrature current divider which cancels voltage overshoots. The main contributions to the type B uncertainty budget have already been estimated. They concern the standard current divider calibration, the CCC accuracy, the finite open-loop feedback gain and the electrical leakage current. The total type B uncertainty is estimated to be below 0.7 part in $10^{9}$. A further reduction to about 0.4 part in $10^{9}$ is expected using both a new calibration method of the standard current divider and the cancellation method of the leakage currents (position B). Thus, a combined standard uncertainty of 0.5 part in $10^{9}$ is expected in the longer term. The next development steps will consist in characterizing the measurements of the $1~\Omega/100~\Omega$ and $\mathrm{10~k\Omega/1~M\Omega}$ resistance ratios. Preliminary experiments have already demonstrated that the measurement of the $1~\Omega/100~\Omega$ resistance ratio works with the current dividers inserted in the primary circuit. Measurement of the $\mathrm{10~k\Omega/1~M\Omega}$ resistance ratio will require a null detector better suited than the EMN 11 or EMN 31. The bias current noise of these devices is indeed too large. A battery-powered higher-impedance amplifier is therefore under development. Moreover, an efficient rejection of leakage currents is essential. We therefore plan to test not only the connection of the ground in position B but also the implementation of a virtual ground, as described in [@Delahaye1991]. Pictures of resistance bridge components ======================================== ![Pictures (front a) and b), top c)) of the primary and secondary current sources, each one being placed in an independent box.[]{data-label="FigAppendix-Sources"}](FigAppendix-Sources.pdf){width="3.5in"} ![Pictures of the standard current divider (top) and of the quadrature current divider (bottom), each one being placed in an independent box.[]{data-label="FigAppendix-SCD"}](FigAppendix-SCD-reduc.pdf){width="3.5in"} Acknowledgment {#acknowledgment .unnumbered} ============== The authors would like to thank Guillaume Spengler and Laetitia Soukiassian for their work during the first stage of the development of the resistance measurement bridge that started ten years ago, as well as François Piquemal for useful comments on the manuscript. The authors would also like to thank Carlos Sanchez from NRC for useful discussions about the resistance bridge operation with a ground connected to the secondary winding.
[10]{} \[1\][\#1]{} url@samestyle \[2\][\#2]{} \[2\][[l@\#1=l@\#1\#2]{}]{} *The International System of units (SI)*.1em plus 0.5em minus 0.4emhttp://www.bipm.org/en/si/, Sèvres. *Mise en pratique for the definition of the ampere and other electric units in the SI*.1em plus 0.5em minus 0.4emSI Brochure-9th edition (2019)-Appendix 2, https://www.bipm.org/utils/en/pdf/si-mep/SI-App2-ampere.pdf. K. von Klitzing, G. Dorda, and M. Pepper, “New method for high-accuracy determination of the fine-structure constant based on quantized hall resistance,” *Phys. Rev. Lett.*, vol. 45, p. 494, 1980. W. Poirier, S. Djordjevic, F. Schopfer, and O. Thévenot, “The ampere and the electrical units in the quantum era,” *C. R. Physique*, vol. 20, p. 92, 2019. W. Poirier and F. Schopfer, “Resistance metrology based on the quantum hall effect,” *Eur. Phys. J. Spec. Top.*, vol. 172, p. 207, 2009. R. Ribeiro-Palau, F. Lafont, J. Brun-Picard, D. Kazazis, A. Michon, F. Cheynis, O. Couturaud, C. Consejo, B. Jouault, W. Poirier, and F. Schopfer, “Quantum hall resistance standard in graphene devices under relaxed experimental conditions,” *Nature Nano.*, vol. 10, pp. 965–971, 2015. I. K. Harvey, “A precise low temperature dc ratio transformer,” *Rev. Sci. Instrum.*, vol. 43, no. 11, pp. 1626–1629, 1972. J. Gallop and F. Piquemal, “Squid handbook vol. ii,” pp. 95–137, 2006. F. Delahaye and D. Reymann, “Progress in resistance ratio measurements using a cryogenic current comparator at lcie,” *IEEE Trans. Instrum. Meas.*, vol. 34, p. 316, 1985. J. M. Williams and A. Hartland, “An automated cryogenic current comparator resistance ratio bridge,” *IEEE Trans. Instrum. Meas.*, vol. 40, p. 267, 1991. A. Hartland, “The quantum hall effect and resistance standards,” *Metrologia*, vol. 29, pp. 175–190, 1992. R. F. Dziuba and R. E. Elmquist, “Improvements in resistance scaling at nist using cryogenic current comparators,” *IEEE Trans. Instrum. Meas.*, vol. 42, p. 126, 1993. F. Delahaye, “An ac-bridge for low-frequency measurements of the quantized hall resistance,” *IEEE Trans. Instrum. Meas.*, vol. 40, p. 883, 1991. H. Seppa and A. Satrapinski, “Ac resistance bridge based on the cryogenic current comparator,” *IEEE Trans. Instrum. Meas.*, vol. 46, p. 463, 1997. F. Delahaye and D. Bournaud, “Accurate ac measurements of standard resistors between 1 and 20 hz,” *IEEE Trans. Instrum. Meas.*, vol. 42, p. 287, 1993. A. Satrapinski, M. Gotz, E. Pesel, N. Fletcher, P. Gournay, and B. Rolland, “New generation of low-frequency current comparators operated at room temperature,” *IEEE Trans. Instrum. Meas.*, vol. 66, p. 1417, 2017. BIPM, “The bipm key comparison database (kcdb), key and supplementary comparisons (appendix b), comparison bipm.em-k12. https://kcdb.bipm.org.” F. Delahaye, T. Witt, F. Piquemal, and G. Genevès, “Comparison of quantum hall effect resistance standards of the bnm/lcie and the bipm,” *IEEE Trans. Instrum. Meas.*, vol. 44, p. 258, 1995. C. A. Sanchez, B. M. Wood, and A. D. Inglis, “Ccc bridge with digitally controlled current sources,” *IEEE Trans. Instrum. Meas.*, vol. 58, p. 1202, 2009. D. Drung, M. G[ö]{}tz, E. Pesel, J.-H. Storm, C. Assmann, M. Peters, and T. Schurig, “Improving the stability of cryogenic current comparator setups,” *Supercond. Sci. Technol.*, vol. 22, p. 114004, 2009. M. G[ö]{}tz, D. Drung, E. Pesel, H. Barthelmess, C. Hinnrichs, C. Assmann, M. Peters, H. Scherer, B. Schumacher, and T. Schurig, “Improved cryogenic current comparator setup with digital current sources,” *IEEE Trans. Instrum. 
Meas.*, vol. 58, p. 1176, 2009. J. M. Williams, T. J. B. M. Janssen, G. Rietveld, and E. Houtzager, “An automated cryogenic current comparator resistance ratio bridge for routine resistance measurements,” *Metrologia*, vol. 47, p. 167, 2010. L. Soukiassian, G. Spengler, D. Leprat, F. Schopfer, and W. Poirier, “New cryogenic current comparator-based resistance comparison bridge at lne,” in *2010 CPEM Digest*, Y. S. Song and J.-S. Kang, Eds.1em plus 0.5em minus 0.4emIEEE, June 2010, p. 761, dOI:10.1109/CPEM.2010.5544176. J. Sesé, F. Lera, A. Camon, and C. Rillo, “Calculation of effective inductances of superconducting devices. application to the cryogenic current comparator,” *IEEE Trans. Appl. Supercon.*, vol. 9, no. 1, pp. 58–62, 1999. J. Sesé, E. Bartolomé, J. Flokstra, G. Rietveld, A. Camon, and C. Rillo, “Simplified calculus for the design of a cryogenic current comparator,” *IEEE Trans. Instrum. Meas.*, vol. 52, p. 612, 2003. D. Drung, M. G[ö]{}tz, E. Pesel, H. J. Barthelmess, and C. Hinnrichs, “Aspects of application and calibration of a binary compensation unit for cryogenic current comparator setups,” *IEEE Trans. Instrum. Meas.*, vol. 62, p. 2820, 2013. M. G[ö]{}tz and D. Drung, “Stability and performance of the binary compensation unit for cryogenic current comparator bridges,” *IEEE Trans. Instrum. Meas.*, vol. 66, p. 1467, 2017. D. Allan, “Should the classical variance be used as a basic measure in standards metrology ?” *IEEE Trans. Instrum. Meas.*, vol. 36, no. 2, pp. 646–654, 1987. T. Witt, “Allan variances and spectral densities for dc voltage measurements with polarity reversals,” *IEEE Trans. Instrum. Meas.*, vol. 54, p. 550, 2005. J. Brun-Picard, S. Djordjevic, D. Leprat, F. Schopfer, and W. Poirier, “Practical quantum realization of the ampere from the elementary charge,” *Phys. Rev. X*, vol. 6, p. 041051, 2016. F. Lafont, R. Ribeiro-Palau, D. Kazazis, A. Michon, O. Couturaud, C. Consejo, T. Chassagne, M. Zielinski, M. Portail, B. Jouault, F. Schopfer, and W. Poirier, “Quantum hall resistance standards from graphene grown by chemical vapour deposition on silicon carbide,” *Nature Communications*, vol. 6, p. 6806, 2015. [Wilfrid Poirier]{} is a graduate of the Ecole Supérieure de Physique et de Chimie Industrielles de Paris (ESPCI). He then completed his thesis on quantum electronic transport at the CEA-SPEC and received his doctorate in solid state physics in 1997. He joined LCIE in 1998 and then LNE in 2001 as head of studies on quantum resistance standards. Since then, he has devoted his research to quantum electrical metrology. These included the development of a graphene quantum resistance standard, of GaAs-based quantum Hall arrays and the realization of universality tests of the quantum Hall effect. He has also been involved in the development of precision quantum instrumentation based on SQUID technology. More recently, he has proposed and developed a quantum current generator to achieve the new definition of ampere. He obtained his Habilitation to Direct Research from the University of Paris-SUD in 2017. [Dominique Leprat]{} worked in LNE from 2001 to 2017 as a technician in the electrical metrology department. He was first involved in the activity of traceability of alternating voltage based on thermal transfer. In 2007, he joined the quantum Hall effect team to develop metrology instrumentation. 
[Félicien Schopfer]{} obtained an engineer’s degree from École Nationale Supérieure de Physique - Grenoble INP in 2001, a master’s degree in condensed matter physics from the University of Grenoble the same year. He received a PhD in Physics from the University of Grenoble in 2005, for quantum electronic transport experiments in nanostructures carried out at CNRS. He was appointed at Laboratoire national de métrologie et d’essais – LNE in 2005 to advance research in quantum electrical metrology. His research has mainly focused on the quantum Hall effect (QHE) for applications in fundamental metrology. He worked on quantum Hall arrays in GaAs/AlGaAs, co-authored reproducibility and universality tests of the quantum Hall effect with record uncertainties, and is strongly involved in graphene research, notably with important results for the development of the quantum Hall resistance standard operating under relaxed experimental conditions. [^1]: Authors are with the Department of Fundamental Electrical Metrology, Laboratoire national de métrologie et d’essais, 78197 Trappes, France; e-mail: wilfrid.poirier@lne.fr.
{ "pile_set_name": "ArXiv" }
--- abstract: 'We studied theoretically coherent phenomena in the multimode dynamics of single section semiconductor ring lasers with Quantum Dots (QDs) active region. In the unidirectional ring configuration our simulations show the occurrence of self-mode-locking in the system leading to ultra-short pulses (sub-picoseconds) with a THz repetition rate. As confirmed by the Linear Stability Analysis (LSA) of the Traveling Wave (TW) Solutions this phenomenon is triggered by the analogous of the Risken-Nummedal-Graham-Haken (RNGH) instability affecting the multimode dynamics of two-level lasers.' address: | $^1$ Dipartimento di Elettronica e Telecomunicazioni, Politecnico di Torino, Corso Duca degli Abruzzi 24, Torino, IT-10129, Italy\ $^2$ Consiglio Nazionale delle Ricerche, CNR-IFN, via Amendola 173, Bari, IT-70126, Italy author: - 'Lorenzo Luigi Columbo,$^{1,2*}$, Paolo Bardella,$^1$ and Mariangela Gioannini$^{1}$' title: 'Self-pulsing in single section ring lasers based on Quantum Dot materials: theory and simulations' --- [10]{} K. Sato, “100 [GH]{}z optical pulse generation using [F]{}abry-[P]{}erot laser under continuous wave operation,” Electron. Lett. **37**, 763–764(1) (2001). F. Lelarge, B. Dagens, J. Renaudier, R. Brenot, A. Accard, F. v. Dijk, D. Make, O. L. Gouezigou, J. G. Provost, F. Poingt, J. Landreau, O. Drisse, E. Derouin, B. Rousseau, F. Pommereau, and G. H. Duan, “Recent advances on [I]{}n[A]{}s/[I]{}n[P]{} quantum dash based semiconductor lasers and optical amplifiers operating at 1.55 $\mu$m,” IEEE Journal of Selected Topics in Quantum Electronics **13**, 111–124 (2007). J. Liu, Z. Lu, S. Raymond, P. J. Poole, P. J. Barrios, and D. Poitras, “Dual-wavelength 92.5 [GHz]{} self-mode-locked [InP]{}-based quantum dot laser,” Opt. Lett. **33**, 1702–1704 (2008). T. W. Hänsch, “Nobel lecture: Passion for precision,” Rev. Mod. Phys. **78**, 1297–1309 (2006). J. Faist, G. Villares, G. Scalari, M. R[ö]{}sch, C. Bonzon, A. Hugi, and M. Beck, “Quantum cascade laser frequency combs,” Nanophotonics **5**, 272–291 (2016). T. J. Kippenberg, R. Holzwarth, and S. A. Diddams, “Microresonator-based optical frequency combs,” Science **332**, 555–559 (2011). P. J. Delfyett, S. Gee, M.-T. Choi, H. Izadpanah, W. Lee, S. Ozharar, F. Quinlan, and T. Yilmaz, “Optical frequency combs from semiconductor lasers and applications in ultrawideband signal processing and communications,” IEEE J. Lightwave Technol. **24**, 2701–2719 (2006). C.-H. Chen, M. A. Seyedi, M. Fiorentino, D. Livshits, A. Gubenko, S. Mikhrin, V. Mikhrin, and R. G. Beausoleil, “A comb laser-driven dwdm silicon photonic transmitter based on microring modulators,” Opt. Express **23**, 21541–21548 (2015). N. Eiselt, H. Griesser, M. H. Eiselt, W. Kaiser, S. Aramideh, J. J. V. Olmos, I. T. Monroy, and J.-P. Elbers, “Real-time 200 [Gb/s]{} (4x56.25 [Gb/s]{}) [PAM-4]{} transmission over 80 km [SSMF]{} using quantum-dot laser and silicon ring-modulator,” in “Optical Fiber Communication Conference,” (2017), p. W4D.3. S. M. Link, D. J. H. C. Maas, D. Waldburger, and U. Keller, “Dual-comb spectroscopy of water vapor with a free-running semiconductor disk laser,” Science **356**, 1164–1168 (2017). C. Gosset, K. Merghem, A. Martinez, G. Moreau, G. Patriarche, G. Aubin, A. Ramdane, J. Landreau, and F. Lelarge, “Subpicosecond pulse generation at 134 [GHz]{} using a quantum-dash-based [Fabry-Perot]{} laser emitting at 1.56$\mu$m,” Appl. Phys. Lett. **88**, 241105 (2006). Z. Lu, J. Liu, P. Poole, S. Raymond, P. Barrios, D. Poitras, G. Pakulski, P. 
Grant, and D. Roy-Guay, “An l-band monolithic inas/inp quantum dot mode-locked laser with femtosecond pulses,” Opt. Express **17**, 13609–13614 (2009). P. Bardella, L. L. Columbo, and M. Gioannini, “Self-generation of optical frequency comb in single section quantum dot fabry-perot lasers: a theoretical study,” Opt. Express **25**, 26234–26252 (2017). J. Faist, *Quantum Cascade Lasers*, EBSCO ebook academic collection (OUP Oxford, 2013). N. Vukovic, J. Radovanovic, V. Milanovic, and D. L. Boiko, “Analytical expression for [R]{}isken-[N]{}ummedal-[G]{}raham-[H]{}aken instability threshold in quantum cascade lasers,” Opt. Express **24**, 26911–26929 (2016). A. Gordon, C. Y. Wang, L. Diehl, F. X. K[ä]{}rtner, A. Belyanin, D. Bour, S. Corzine, G. H[ö]{}fler, H. C. Liu, H. Schneider, T. Maier, M. Troccoli, J. Faist, and F. Capasso, “[Multimode regimes in quantum cascade lasers: From coherent instabilities to spatial hole burning]{},” Phys. Rev. A **77**, 1–18 (2008). L. Lugiato, F. Prati, and M. Brambilla, *Nonlinear Optical Systems* (Cambridge University, 2015). H. Risken and K. Nummedal, “Self pulsating in lasers,” J. App. Phys. **39**, 4662–4672 (1968). H. Choi, V.-M. Gkortsas, L. Diehl, D. Bour, S. Corzine, J. Zhu, G. H?fler, F. Capasso, F. X. K?rtner, and T. B. Norris, “Ultrafast rabi flopping and coherent pulse propagation in a quantum cascade laser,” Nature Photonics **4**, 706–711 (2010). M. Kolarczik, N. Owschimikow, J. Korn, B. Lingnau, Y. Kaptan, D. Bimberg, E. Scholl, K. L'’[u]{}dge, and U. Woggon, “Quantum coherence induces pulse shape modification in a semiconductor optical amplifier at room temperature,” Nat. Commun. **4** (2013). A. C. O. Karni, G. Eisenstein, V. Sichkovskyi, V. Ivanov, and J. P. Reithmaier, “Coherent control in a semiconductor optical amplifier operating at room temperature,” Nat. Commun. **5** (2014). L. Liu, R. Kumar, K. Huybrechts, T. Spuesens, G. Roelkens, E.-J. Geluk, T. de Vries, P. Regreny, D. Van Thourhout, R. Baets, and G. Morthier, “An ultra-small, low-power, all-optical flip-flop memory on a silicon chip,” Nature Photonics **4**, 182–187 (2010). S. Longhi and L. Feng, “Unidirectional lasing in semiconductor microring lasers at an exceptional point \[invited\],” Photon. Res. **5**, B1–B6 (2017). C. C. Nshii, C. N. Ironside, M. Sorel, T. J. Slight, S. Y. Zhang, D. G. Revin, and J. W. Cockburn, “A unidirectional quantum cascade ring laser,” Applied Physics Letters **97**, 231107 (2010). E. B. Y. O. M. S. Y. Barbarin, S. Anantathanasarn and R. N'’[o]{}tzel, “Inas/inp quantum dot fabry-p?rot and ring lasers in the 1.55 ?m range using deeply etched ridge waveguides,” Proceedings Symposium IEEE/LEOS Benelux Chapter, 2006, Eindhoven pp. 137–140 (2006). M. Rossetti, P. Bardella, and I. Montrosset, “Time-domain travelling-wave model for quantum dot passively mode-locked lasers,” IEEE J. Quantum Electron. **47**, 139–150 (2011). M. Gioannini, P. Bardella, and I. Montrosset, “Time-domain traveling-wave analysis of the multimode dynamics of quantum dot [Fabry-Perot]{} lasers,” IEEE J. Sel. Topics Quantum Electron. **21**, 698–708 (2015). S. Koenig, D. Lopez-Diaz, J. Antes, F. Boes, R. Henneberger, A. Leuther, A. Tessmann, R. Schmogrow, D. Hillerkuss, R. Palmer, T. Zwick, C. Koos, W. Freude, O. Ambacher, J. Leuthold, and I. Kallfass, “Wireless sub-[THz]{} communication system with high data rate,” Nat. Photonics **7**, 977–981 (2013). S. Latkowski, F. Surre, and P. 
Landais, “Terahertz wave generation from a dc-biased multimode laser,” Applied Physics Letters **92**, 081109 (2008). S. Latkowski, J. Parra-Cetina, R. Maldonado-Basilio, P. Landais, G. Ducournau, A. Beck, E. Peytavit, T. Akalin, and J.-F. Lampin, “Analysis of a narrowband terahertz signal generated by a unitravelling carrier photodiode coupled with a dual-mode semiconductor [F]{}abry???[P]{}érot laser,” Applied Physics Letters **96**, 241106 (2010). Introduction ============ Semiconductor lasers operating in self-pulsing (SP) regime are valid alternatives to passive and active mode-locked devices for the generation of ultra-short, high-repetition rate optical pulses for applications to optical information encoding and time resolved measurements of e.g fast molecular dynamics [@Sato; @Rev01; @Liu]. In the frequency domain the SP regime corresponds to an Optical Frequency Comb (OFC), i.e a light emission characterised by equally spaced optical lines with low phase noise and low mode partition noise [@Hansch; @faistreview]. These quite simple self-locked sources have attracted an impressive interest for a number of applications in astronomy, spettroscopy, waveform generation, optical clocks and in the rapidly growing field of high-capacity DWDM optical interconnection where the OFC laser diode feeds the silicon photonics optical modulators to realize a compact and low cost transmitter [@Kippenberg555; @Delfyett; @Chen; @Eiselt; @link2017; @faistreview].\ Respect to conventional Quantum Well (QW) based semiconductor lasers, active devices based on low dimensional materials as QDs and Quantum Dashes (QDashes) draw interest because of their broad spectral gain, small nonlinear dispersion, low threshold current, fast gain recovery time and small integration spatial scale. Experimental evidences of SP in single section Fabry-Perot (FP) lasers based on QDashes [@Gosset] and QDs [@Lu09] active materials have been reported. In the case of FP configuration we have recently demonstrated [@Bardella2017] that the carrier grating induced by the standing wave pattern (not washed out by diffusion in the QDs case) can explain the broad multi-wavelength optical spectra typically observed in QDs lasers, whereas FWM allows the self-locking of the modes when the laser output power it is high enough. When SP occurs, the pulse repetition rate depends on the FP longitudinal cavity mode separation and for typical FP laser length it stays in the tens of GHz range.\ The question remains on what happens if the standing wave pattern is not present, as for example in a ring laser configuration where only the clockwise (or counter clockwise) mode propagates. In this case, as shown by some recents works on Quantum Cascade Lasers (QCLs) that share with QDs laser similar dynamical features [@FaistBook; @Boiko; @Gordon], multi-wavelength emission and self-pulsation should be triggered by a RNGH instability of the TW that consists in the parametric amplification of the cavity modes resonant with the frequency of the Rabi oscillations (Rabi frequency, $\nu_{R}$) of the system [@Lugiato; @Risken]. The RNGH instability can be considered the epitome of the self-mode-locking in a two-level laser. 
As happens for unipolar lasers such as QCLs, where the lasing action involves intersubband transitions, in low dimensional active materials such as QDs and QDashes the intraband lasing transitions between discrete levels are associated with a quite narrow and symmetric gain linewidth; this makes this class of emitters behave quite similarly to two-level lasers and thus allows for the observation of coherent effects like Rabi oscillations [@caNat; @Kolarczik; @Capua]. On the contrary, in more conventional bipolar semiconductor lasers based on QWs the Rabi oscillations and consequently the RNGH instability are usually hindered by the quite broad and asymmetric resonance and, to the best of our knowledge, not a single observation of these phenomena has been reported so far.\ We note that ring lasers and passive resonators are nowadays key elements for the realization of photonic integrated circuits [@NatLiu] and unidirectional propagation can be easily obtained [@Longhi17; @Sorel]. The details of the fabrication and characterisation of QD ring lasers emitting at $1.5$ $\mu m$ are reported for example in [@Proc2006].\ We propose here a unidirectional QD ring laser of a few millimeters length where multimode emission leads to SP as a result of a RNGH instability of the TW solutions. We also show that this phenomenon in unidirectional QD ring lasers is robust over a wide range of bias currents and device lengths. As demonstrated in Section 3, as a consequence of the RNGH instability the SP repetition rate is in the hundreds of GHz or few-THz range, even though the ring cavity FSR is tens of GHz as in the standard FP laser configuration.\ In order to simulate the multimode dynamics of the QD ring laser by properly taking into account the coherent radiation-matter interaction, we extended the Time Domain Travelling Wave (TDTW) model described in [@Rossetti; @Gioannini] to include the temporal evolution of the medium polarization. We calculated the TW solutions of the system and we studied their stability against spatio-temporal perturbations with a standard Linear Stability Analysis (LSA) technique. As expected, the results of the LSA show that the TW instability is associated with the amplification of the Rabi frequency in the QD active material, which behaves as an ensemble of artificial two-level atoms, and thus it has a RNGH character. Our numerical simulations show that the system spontaneously evolves towards a multimode solution that corresponds to ultra-short pulses (hundreds of ) at a repetition rate close to $\nu_{R}$.\ The numerics also reveal that an increase of the inhomogeneous broadening, which may represent an additional incoherent effect in the multimode competition [@Lugiato], reduces the intervals of the bias current where SP is observed.\ We finally observe that SP and the consequent OFC with THz or sub-THz optical line spacing can be desirable for a number of applications, among which we mention the photonic generation of THz or sub-THz signals by illuminating a fast photodetector with the SP optical signal. This simple ring source could therefore be a valid alternative to the mixing of comb lines used nowadays to generate the THz signal [@Koenig; @Rev04THz; @Rev05THz].\ The paper is organised as follows: in Section 2, we describe the TDTW model used for simulating the multimode dynamics of the ring-cavity single-section QD laser. In Section 3, we present and discuss the results of the LSA of the TW solutions and the numerical simulations for a standard unidirectional InAs/GaAs QD laser.
Finally we draw our conclusions in Section 4. Multi-populations Time Domain Travelling Wave Model =================================================== We consider a single section Quantum-Dots-in-a-Well (DWELL) InAs/GaAs ring laser emitting from the ground state (GS) around [@Gioannini]. The length of the laser cavity ($L$) is a few hundreds of microns. The laser structure with the coordinate system is sketched in Fig.\[fig:3\].a, whereas the QDs states, electron dynamics and gain line shapes for different inhomogeneous broadening are shown in Fig. \[fig:3\].b.\ We sketch in Fig. \[fig:3\] the electron dynamics as taken in our model. The main material and device parameters are summarised in Table \[Tableparam\]. The coherent interaction between QDs inhomogeneous broadened gain medium and the intracavity electric field is described trough a set of coupled traveling wave equations for the slowly varying envelops of the fundamental TE electric field $E(z,t)$ and of the slowly varying envelop of the microscopic polarizations $p_{i}(z,t)$, coupled with the evolution equations for the electron occupation probabilities of ground state $\rho_{i}$ in each dot group and in the wetting layer (WL) $\rho_{WL}$ [@Gioannini]. $$\begin{aligned} \frac{\partial E (z,t)}{\partial t} &=&\gamma_{p} \left(-\frac{\partial E}{\partial z}-\frac{\alpha_{wg}L}{2}E-C\sum_{i=-N}^{N} \bar{G}_{i} p_{i} \right)\label{fieldfastc}\\ \frac{\partial p_{i}(z,t)}{\partial t}&=&\left [(j\delta_{i}/ \Gamma-1)p_{i}-D(2\rho_{i}-1)E\right ] \label{Pfast1b}\\ \frac{\partial \rho_{i}(z,t)}{\partial t}&=& -\rho_{i}\gamma_{e}(1-\rho_{WL}) +F \rho_{WL} \gamma_{C}(1-\rho_{i}) \\ &-&\gamma_{sp}\rho_{i}^{2} -\gamma_{nr}^{GS}\rho_{i} + H \, Re\left(E^{*}p_{i} \right) \label{pop1fastb}\\ \frac{\partial \rho_{WL}(z,t)}{\partial t}&=& \Lambda \tau_{d}-\gamma_{nr}^{WL}\rho_{WL} + \sum_{i=-N}^{N}\left[-\bar{G}_{i} \rho_{WL}\gamma_{C}(1-\rho_{i}) + \frac{\bar{G}_{i}}{F} \rho_{i}\gamma_{e}(1-\rho_{WL}) \right] \label{pop2fastb}\end{aligned}$$ In the convenient adimensional formulation provided by Eqs. (\[fieldfastc\])-(\[pop2fastb\]) we scaled time to the fastest time scale in the system represented by the dipole dephasing time $\tau_{d}$ and the longitudinal coordinate to the cavity length $L$. The complex dynamical variables are linked to the corresponding physical quantities by the relations: $$E\longrightarrow E \sqrt{\frac{\eta}{\Gamma_{xy}}} \frac{d_{GS}}{\hbar \Gamma}, \quad p_{0,i\, GS} \longrightarrow j\, p_{0, i\, GS} \sqrt{\frac{\eta}{\Gamma_{xy}}} \frac{N_{D}|d_{GS}|^{2}}{\epsilon_{0}\hbar \Gamma h_{QD}}$$ where $d_{GS}$ is the dipole matrix element associated to the optical transition from ground level, $\eta$ is the effective refractive index, $\Gamma_{xy}$ is the transverse optical confinement factor in the total QD active region, $\Gamma=1/\tau_{d}$, $h_{QD}$ is the QDs layer thickness, $N_{D}$ is the number of QDs per unit area. 
The adimensional parameters $C$, $D$, $F$, $H$ have the following expressions: $$C= \frac{\omega_{0}L\Gamma_{xy}\mu}{2c\eta}, \, D=\frac{|d_{GS}|^{2}N_{D}}{\epsilon_{0}\hbar \Gamma h_{QD}}, \, F=\frac{D_{WL}}{\mu N_{D}}, \, \\ H=\frac{\tau_{sp}\Gamma^{2} \omega_{0} \Gamma_{xy} \hbar \epsilon_{0} h_{QD}}{\eta \omega_{i \, GS} |d_{GS}|^2 N_{D}}$$ where $\omega_{0}$ is our reference angular frequency coincident with the cold cavity mode closest to the GS gain peak, $\mu$ is the degeneracy of the ground state, $\omega_{i \, GS}$ is the transition frequency of the $i$ group so that $\delta_{i}=\omega_{i \,GS}-\omega_{0}$, $D_{WL}$ is the number of WL level per unit area per QDs layer and $\tau_{sp}$ is the spontaneous electrons decay time from the GS state. Moreover in the previous equations $\alpha_{wg}$ represents the wave guide losses, $\gamma_{p}=\tau_{d}v_{g}/L$ is the normalized photon decay rate, $\gamma_{e,C}=\tau_{d}/\tau_{e,C}$ are the normalized escape and capture rates, $\gamma_{sp}=\tau_{d}/\tau_{sp}$, $\gamma_{nr}^{WL,GS}=\tau_{d}/\tau_{nr}^{WL,GS}$ represent the normalized nonradiative decay rates and $\lambda$ is the carriers injection probability per unit time. Finally $\bar{G}_{i}$ is the probability that a QD belong to the sub-group $i$ and it follows a Gaussian distribution.\ [ccc]{} Symbol & Description & Values\ \ $\eta$&Effective refractive index &\ $\mu$&Confined states degeneracy &\ $1/\Gamma$&Dipole dephasing time&\ $d_{GS}$&Dipole matrix element for GS&\ $\tau_{C}$& Electron capture times &\ $\tau_{e}$& Electron escape times &\ $\tau_{nr}^{WL}$& Electron non-radiative decay times &\ $\tau_{nr}^{GS}$& Electron non-radiative decay times &\ $\tau_{sp}$& Electron spontaneous emission time&\ \ $w$&Ridge width&\ $n_L$&Number of QD layers&\ $N_D$&QD surface density&\ $D_{WL}$&Wetting layer electron levels surface density&\ $h_{QD}$&QD layer thickness&\ $\alpha_{wg}$&Intrinsic waveguide losses&\ $r$ & Coupler reflectivity&\ $L$&Device length&\ $\Gamma_{xy}$& Transverse optical confinement factor&\ \[Tableparam\] In the unidirectional ring configuration the field envelope satisfies the boundary condition: $$E(0,t)=\sqrt{1-k^{2}}E(L,t), $$ where $k$ is the output coupling coefficient between the ring and the coupled waveguide (see Fig. \[fig:3\].a). Considering our normalization and the physical constants, the output power, expressed in , can be obtained by multiplying $|E(z,t)|^{2}$ by a factor of about $35$. Finally, for sake of simplicity, we consider in this work the case where emission only occurs from the ground state [@Gioannini]. Risken-Nummedal-Graham-Haken instability ======================================== To study the character of the TW instability we performed as reported in this section a semi-analytical linear stability analysis. This analysis requires first the calculation of the TW solutions, i.e. the single frequency solutions of Eqs. (\[fieldfastc\])-(\[pop2fastb\]) (as detailed in paragraph 3.1) and then the evaluation of the stability against spatio-temporal perturbations of this solutions by calculating the perturbations parametric gain (as detailed in paragraph 3.2). TW solutions ------------ We looked for the single frequency solution of Eqs. 
(\[fieldfastc\])-(\[pop2fastb\]) detuned in general of a quantity $\delta \omega$ from the gain peak $\omega_0$ in the form $$E=\overline{E}e^{j (\delta \omega/\Gamma \,t - \delta k \,L z)}, \quad p_{i}= \overline{p_{i}}e^{j(\delta \omega/\Gamma \, t- \delta k \, L z)}$$ $$\quad \rho_{i}= \overline{\rho_{i}},\quad \rho_{WL}= \overline{\rho_{WL}}$$ where we set $\delta k = \delta \omega /v_{g}$ and we introduced the group velocity $v_{g}=c/\eta$. This led to: $$\begin{aligned} \overline{p_{i}}&=&\frac{ \left[D(2\overline{\rho_{i}}-1)\overline{E}\right] }{j\delta_{i}/ \Gamma-1-j\delta \omega /\Gamma } \label{Pfastaz}\\ \overline{\rho_{WL}}&=&\frac{\Lambda \tau_{d}+\frac{1}{F}\sum_{i=-N}^{N}\bar{G}_{i} \overline{\rho_{i}}\gamma_{e}}{\gamma_{nr}^{WL}+\sum_{i=-N}^{N}\bar{G}_{i}\gamma_{C}(1-\overline{\rho_{i}}) + \frac{1}{F} \sum_{i=-N}^{N} \bar{G}_{i}\overline{\rho_{i}}\gamma_{e}}\label{pop2fbstaz}\\ 0 &=& \overline{E} \left(\frac{\alpha_{wg}L}{2}+C \, D \sum_{i=-N}^{N} \bar{G}_{i} \frac{(2\overline{\rho_{i}},-1)}{j\delta_{i}/ \Gamma-1- j\delta \omega /\Gamma}\right)\label{fieldstaz11}\\ 0&=&-\overline{\rho_{i}}\gamma_{e}(1- \overline{\rho_{WL}})+F\overline{\rho_{WL}} \gamma_{C}(1-\overline{\rho_{i}}) - \gamma_{sp} \overline{\rho_{i}}^{2} + H \, D Re\left( \frac{|\overline{E}|^2(2\overline{\rho_{i}}-1)}{j \delta_{i}/\Gamma -1-j\delta \omega /\Gamma} \right) \label{pop1fstaz}\end{aligned}$$ While in presence of in presence of inhomogeneously broadened gain and therefore multiple populations the TW solution can be found only by numerically solving the implicit nonlinear equations (\[Pfastaz\])-(\[pop1fstaz\]), in case of perfect homogeneous medium (i.e. only one population considered, $i=1$) the TW equations (\[Pfastaz\])-(\[pop1fstaz\]) have an analytical solution. Linear Stability Analysis of the TW solutions --------------------------------------------- The LSA of Eqs. (\[fieldfastc\])-(\[pop2fastb\]) around the TW solutions is carried out in detail in Appendix A. The parametric gain, i.e. the maximum of the real part of the perturbation eigenvalue $\lambda$ at a frequency $\nu_{z}=\omega_{z}/2 \pi=k_{z}\,v_{g}/2 \pi$ relative to the TW frequency treated as continuous variable, is plotted for e.g. in Fig. \[fig0\]. The cold cavity modes are those indicated by the dashed lines. We observe that the TW is unstable for $I\ge 55mA$ where a positive parametric gain favour the exponential growing of the modes with $k_{\mp 3}=\mp 3 \times 2 \pi \, /L$. For lower currents in fact the parametric gain for all the cold cavity modes is negative. ![Results of the LSA of the TW solutions for different bias currents. Plot of the parametric gain for each value of the frequency $\nu_{z}=\omega_{z}/2 \pi=k_{z} \, v_{g}/2 \pi$ treated as continuous variable. Dashed lines indicate the first values of $k_{z}$ compatible with the periodic boundary conditions. We consider $3$ QDs populations resonant with the lasing light associated with central angular frequencies $0$, $1.0$$THz$ and $-1.0$$THz$ that lead to a FWHM of the effective inhomogeneous broaden gain linewidth of $\simeq$ ($\simeq$ ) (see Fig. \[fig:3\]). The other parameters are those used in [@Gioannini]. The symbols indicate the position of the Rabi frequencies estimated using Eq. (\[rabi1eq\]) closest to the parametric gain peak.[]{data-label="fig0"}](PBLSA1tris.png "fig:"){width="8.cm"}\ ![Bifurcation diagram of the TW solutions: the maxima and minima in the intensity time traces are reported against the bias current as control parameter. 
Red lines correspond to the TW solutions calculated using Eqs. (\[Pfastaz\])-(\[pop1fstaz\]).[]{data-label="fig1"}](bif1new1.png){width="70.00000%"} As shown in Fig. \[fig0\] we verified that the instability starts by developing Rabi sidebands around the TW lasing frequency and it can be seen as an amplification of the Rabi oscillations, in this sense it can be interpreted as a RNGH instability affecting the TW solutions in multimode two-level atoms [@Lugiato]. The calculation of the Rabi frequency $\nu_{R}=\omega_{R}/2 \pi$, associated to periodic exchange of energy between light and matter of the system, is reported in the following paragraph and is based on the very well justified hypothesis that the QDs active is to analogous to an ensemble of artificial two-level atoms.\ Using the standard method described for example in [@Lugiato] we calculate the Rabi frequencies associated with the each group of QDs of the multi-population ensemble: $$\nu_{R,i}=\left(\gamma_{sp} H \, D \, 2 |E|^{2}+((\delta_{i}-\delta \omega)/\Gamma)^2\right)^{0.5}/2 \pi \label{rabi1eq}$$ The large value of the $\nu_{R}$, that turns out to be of the order of the inverse of the coherence time ($1/\Gamma$) may also explain the recent experimental observations of Rabi oscillations effect in intense pulse propagation in QDs based SOA at room temperature [@Kolarczik; @Capua]. Results of dynamical simulations ================================ ![Temporal evolution of the output power (a,b), optical spectrum (c) and RF spectrum (d) obtained for a value of bias current of . In panel (b) we report a space-time representation of the pulse dynamics for . A long time trace is divided in intervals corresponding to the cold cavity round trip time $\tau=L\eta/c$. These segments are then stacked on top of each other so that the horizontal axis is equivalent to space inside the cavity while the vertical dimension describes the evolution in units of round trips. The lowest frequency dashed line in panel (d) corresponds to the ring resonator FSR (equal to $\simeq$ ), while the other two dashed lines are integer multiple of it. The other parameters are those used in Fig. \[fig0\].[]{data-label="num2"}](pulsetris_map2.jpg){width="80.00000%"} ![Temporal evolution of the output power (a,b), optical spectrum (c) and RF spectrum (d) obtained for a value of bias current of . Dashed lines in panel (d) denote the first cold cavity modes. The other parameters are those used in Fig. \[num2\].[]{data-label="num2b"}](pulsetris_map_3pop_95mA2.jpg){width="80.00000%"} We integrate the TDTW model equations using a finite difference algorithm as described in [@Gioannini; @Bardella2017].\ For the parameters in Table \[Tableparam\] we obtained the bifurcation diagram in Fig. \[fig1\] where the maxima and minima in the intensity time traces are reported versus the bias current as control parameter. Red lines correspond to the TW solutions calculated using Eqs. (\[Pfastaz\])-(\[pop1fstaz\]). As predicted by the linear stability analysis, for $I\ge 55mA$ the TW solution becomes unstable. In particular the multimode competition gives rise to regular intensity oscillations. The number of excited modes and the pulse contrast both increase with bias current. In Fig. \[num2\] we report for e.g. the temporal evolution of the output power, the optical spectrum and RF spectrum for $I=75 mA$. In this case the first unstable mode has a distance of approximately $3$ times the cavity FSR ($\simeq$ ) with respect to the TW emission frequency as shown in Fig. 
\[num2\].d and, in perfect agreement with the results of the LSA in Fig. \[fig0\], it corresponds to the first lasing mode with a positive parametric gain. The side mode suppression ratio (SMSR), defined here as the ratio of the maximum RF peak power to that of the highest adjacent longitudinal mode, is around $60$ $dB$. Moreover, because of the amplitude character of the RNGH instability [@Lugiato], the phases of the first excited longitudinal modes are locked with an equal phase difference between adjacent modes. Once the first side modes are activated, a cascaded Four Wave Mixing (FWM) mechanism comes into play in fixing the frequency and the phase of the parametrically generated modes, thus yielding the emission of ultra-short pulses at a THz repetition rate (see Fig. \[num2\].a). In the useful spatio-temporal representation in Fig. \[num2\].b three pulses are associated with a single cold cavity round trip time $\tau=L\eta/c$. As expected from the results of the LSA and shown e.g. in Fig. \[num2b\], the increase of the bias current allows us to partially tune the pulse repetition rate by changing the parametric gain peak position, or equivalently, the Rabi frequency of the system. In order to demonstrate the robustness of the SP phenomenon against ring length and current variations we ran a set of systematic simulations. Our results are conveniently summarised in Fig. \[num4\] as a function of the cavity length $L$ and the bias current $I$. In Fig. \[num4\].a we map the frequency of the RF peak (which always turns out to be close to the Rabi frequency $\nu_{R}$). In Fig. \[num4\].b, in order to evaluate the spectral purity of the pulsed THz signal, we report the ratio between the power of the RF peak and the RF power of the competing adjacent RF lines corresponding to the ring modes not triggered by the RNGH instability. We define this ratio in the RF spectrum as the SMSR (see Fig. 4d). It is possible to identify at least four different dynamical behaviours. In most of region A there is no pulsing, with only one lasing line and CW power (stable TW). In the region denoted by the letter B the QD ring laser shows SP at a frequency in the THz range close to the Rabi resonance; in the region denoted by the letter C phase-locking still induces regular oscillations, although the emergence of side modes introduces pulse over-modulation; and finally in the region denoted by the letter D the multimode dynamics leads to irregular oscillations. Our simulations also show that an increase of the inhomogeneous broadening in the model reduces the intervals of bias current where SP is found. In fact, we might expect that excited longitudinal modes with different frequencies interact with different populations, thus reducing the degree of coherence in the system. In Fig. \[num3\] we plot, e.g., the results obtained by considering $11$ populations whose central emission frequencies are separated by $1$ THz and that correspond to a FWHM of the effective inhomogeneously broadened gain linewidth of $\simeq$ ($\simeq$ ) (see Fig.\[fig:3\]), while keeping the other parameters as those in Fig. \[num2\]. In this case we observe an irregular temporal evolution of the field intensity that corresponds to a much higher differential phase and amplitude noise (see Fig. \[num2c\]). At the same time, multimode emission experiences a reduced threshold and a larger bandwidth because the material gain for the modes non-resonant with the gain peak gets higher [@Lugiato].
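For completeness, the post-processing used to produce this kind of optical spectrum, RF spectrum and SMSR figure of merit can be sketched as follows. The snippet assumes a uniformly sampled complex output field `E_out` (sampling step `dt_s` in seconds) obtained from the time-domain simulation; the variable names and the peak-search details are illustrative choices, not taken from the model of Section 2.

```python
import numpy as np

def spectra_and_smsr(E_out, dt_s):
    """Optical spectrum of E, RF spectrum of |E|^2, repetition rate and SMSR."""
    E_out = np.asarray(E_out)
    n = E_out.size
    freq = np.fft.fftshift(np.fft.fftfreq(n, d=dt_s))      # Hz, relative to the reference frequency

    # optical power spectrum, in dB relative to the strongest line
    opt = np.abs(np.fft.fftshift(np.fft.fft(E_out)))**2
    opt_db = 10 * np.log10(opt / opt.max())

    # RF spectrum of the intensity (the photodetected signal)
    inten = np.abs(E_out)**2
    rf = np.abs(np.fft.rfft(inten - inten.mean()))**2
    rf_freq = np.fft.rfftfreq(n, d=dt_s)

    # main RF peak = pulse repetition rate
    i_pk = 1 + np.argmax(rf[1:])                            # skip the DC bin
    f_rep = rf_freq[i_pk]

    # SMSR: main RF peak power over the strongest competing RF line,
    # excluding a small guard band around the main peak and its harmonics
    guard = max(1, i_pk // 20)
    mask = np.ones_like(rf, dtype=bool)
    mask[0] = False
    for h in range(1, int(rf_freq[-1] // f_rep) + 1):
        j = h * i_pk
        mask[max(0, j - guard): j + guard + 1] = False
    smsr_db = 10 * np.log10(rf[i_pk] / rf[mask].max())
    return freq, opt_db, rf_freq, rf, f_rep, smsr_db
```

The same kind of computation underlies the SMSR and repetition-rate values quoted above, although the exact windowing and peak-selection criteria used for the figures may differ.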
![Temporal evolution of the output power (a,b), optical spectrum (c) and RF spectrum (d) obtained for a value of bias current of . We consider $11$ QDs populations whose central emission frequencies are again separated by a . The other parameters are those used in Fig. \[num2\].[]{data-label="num3"}](pulsetris_map_11pop.png){width="80.00000%"} ![Average values of the modal amplitude and of the standard deviation of the modal phase. The errors bars denote the standard deviation of their temporal fluctuations in the case of $3$ (a), (c) and $11$ populations (b), (d). The other parameters are those used in Fig. \[num2\].[]{data-label="num2c"}](pulsetris_map_11_3_popnew1.pdf){width="80.00000%"} Bidirectional ring ------------------ We finally observe that in the bidirectional configuration, the standing wave pattern due to the interference between forward and backward fields generates a grating in the carrier density that cannot be washed out by diffusion. Equations for the first Fourier components of the spatial grating are added following the procedure described in [@Bardella2017]. Spatial hole burning takes place, letting the TW instability threshold decrease from several times the lasing threshold down to a few percents above the lasing threshold. This emerges for e.g. from inspection of Fig. \[num2e\] where, focusing on the simple case of a single QD population (homogeneous gain broadening), we report the linear stability analysis of the TW solutions for a bidirectional configuration (panel a) and a unidirectional one (panel b). The latter has been carried on via a calculations analogous to that described in [@Boiko; @Gordon] in the case of a QCLs. ![Results of the LSA of the TW solutions for $I=60 mA$ for a bidirectional ring configuration (a) and an unidirectional one (b). Plot of the parametric gain for each value of the frequency $\nu_{z}=\omega_{z}/2 \pi=k_{z}\,v_{g}/2 \pi$ treated as continuous variable. The other parameters are those used in Fig. \[num2\].[]{data-label="num2e"}](LSA_bidiuni_hom.png){width="80.00000%"} In the unidirectional configuration only the mode at has positive parametric gain (see Fig. \[num2e\].a); all the others are suppressed and only by increasing current the two relative maxima at $\simeq$ $\pm$ will experience positive parametric gain and they will allow the lasing of the modes closer to these two maxima. Instead, in the bidirectional ring configuration all the cavity modes in the frequency range of few experience a positive parametric gain (see Fig. \[num2e\].a) which is turned-on by the spatial hole burning effect. The TW resonant with the gain peak is unstable very close to the lasing threshold and by increasing the bias current we generally observe an alternation between regimes of irregular oscillations and a regular dynamical behavior as recently reported in [@Bardella2017]. Coherent dynamics leading to self-generation of OFCs with lasing lines spaced of the ring FSR is found for sizeable intervals of the bias current. It does not correspond to the emission of optical pulses (since the phase difference between adjacent modes is not equal), although is normally associated to the emission of a broader and flatter optical spectrum.\ Conclusions =========== We studied self-mode-locking and in particular self-pulsing in single section ring QDs lasers. 
In the unidirectional emission regime, ultra-short pulses at a THz repetition rate are triggered by the RNGH multi-wavelength instability of the TW solutions, which consists in a parametric amplification at the Rabi frequency of the system. The latter has been calculated under the well-verified hypothesis that the radiation interacts coherently with the QD material as with an ensemble of artificial two-level atoms. In bidirectional cavities, spatial hole burning makes the TW instability threshold occur at a much lower bias current, but only the self-generation of OFCs is reported. Our results suggest timely applications such as high-data-rate optical information encoding and transmission, or the generation of THz and sub-THz signals via the combination of photonics and electronics.

Appendix A
==========

We study the stability of the TW emission with respect to spatio-temporal perturbations, looking for solutions of Eqs. (\[fieldfastc\])-(\[pop2fastb\]) in the form: $$E=(\overline{E}+\delta E)e^{j(\delta \omega/\Gamma \, t- \delta k \, L z)} \quad p_{i}=(\overline{p_{i}}+\delta p_{i})e^{j(\delta \omega/\Gamma \, t- \delta k \, L z)}$$$$\rho_{WL}(z,t)=\overline{\rho_{WL}}+\delta \rho_{WL} \quad \rho_{i}(z,t)=\overline{\rho_{i}}+ \delta \rho_{i}$$ This gives the following set of linear equations for the perturbations: $$\begin{aligned} \frac{\partial \delta E}{\partial t} + \gamma_{p}\frac{\partial \delta E}{\partial z}&=&\gamma_{p}\left(-\frac{\alpha_{wg}L}{2}\delta E-C\sum_{i=-N}^{N} \bar{G}_{i} \delta p_{i} \right)\label{deltafieldfastc}\\ \frac{\partial \delta p_{i}(z,t)}{\partial t}&=&(j\delta_{i}/ \Gamma-1-j \delta \omega /\Gamma) \delta p_{i}-D(2\delta \rho_{i})E-D(2\rho_{i}-1) \delta E \label{deltaPfast1b}\\ \frac{\partial \delta \rho_{i}(z,t)}{\partial t}&=& -\delta \rho_{i}\gamma_{e}(1-\rho_{WL})+ \rho_{i} \delta \rho_{WL}\gamma_{e} -F \delta \rho_{i} \rho_{WL} \gamma_{C} \nonumber \\ &+& F (1-\rho_{i})\delta \rho_{WL} \gamma_{C} - 2 \rho_{i} \delta \rho_{i}+ H \, Re\left(\delta E^{*}p_{i}+ E^{*}\delta p_{i}\right) \label{deltapop1fastb}\\ \frac{\partial \delta \rho_{WL}(z,t)}{\partial t}&=&-\delta \rho_{WL} \gamma_{nr}^{WL}+\sum_{i=-N}^{N}\left[-\bar{G}_{i} \delta \rho_{WL}\gamma_{C}(1-\rho_{i}) \right.\nonumber\\ &+& \left. \bar{G}_{i} \rho_{WL}\gamma_{C} \delta \rho_{i}+\frac{\bar{G}_{i}}{F} \delta \rho_{i}\gamma_{e}(1-\rho_{WL}) - \frac{\bar{G}_{i}}{F}\rho_{i}\gamma_{e}\delta \rho_{WL} \right] \label{deltawettingpop2fastb}\end{aligned}$$ Projecting the perturbations onto the spatial Fourier basis, we derive for each perturbation wave vector $k_{n}$ a set of ODEs for the temporal evolution of the corresponding Fourier components. The maximum of the real parts of the eigenvalues $\lambda$ of the associated Jacobian matrix (Lyapunov exponents), which represents the parametric gain of the considered mode, thus gives direct information about the TW stability.
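This last step lends itself to a compact numerical implementation. The sketch below is only an illustration of the procedure: `jacobian_of_k` stands for a user-supplied routine assembling the linearized system (\[deltafieldfastc\])-(\[deltawettingpop2fastb\]) for a given perturbation wave vector, and is not part of the model equations themselves.

```python
import numpy as np

def parametric_gain(jacobian_of_k, k_values):
    """Largest real part of the Jacobian eigenvalues for each perturbation
    wave vector k_n; a positive value flags a parametrically amplified mode."""
    gains = []
    for k in k_values:
        J = jacobian_of_k(k)              # (n x n) complex Jacobian at this k
        eigvals = np.linalg.eigvals(J)
        gains.append(eigvals.real.max())
    return np.array(gains)
```

A positive value of the returned gain at some $k_{n}$ is the criterion used above to identify the ring modes triggered by the RNGH instability.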
{ "pile_set_name": "ArXiv" }
---
abstract: |
    We investigate the stationary entanglement and stationary nonlocality of two qubits collectively interacting with a common thermal environment. We assume the two qubits are initially in a Werner state or a Werner-like state, and find that the thermal environment can make the two qubits exhibit stationary nonlocality. The analytical relations among the average thermal photon number of the environment, the entanglement and the nonlocality of the two qubits are given in detail. It is shown that the fraction of the Bell singlet state plays a key role in the phenomenon that the common thermal reservoir can enhance the entanglement of the two qubits. Moreover, we find that the collective decay of two qubits in a thermal reservoir at zero temperature can generate a stationary maximally entangled mixed state provided the fraction of the Bell singlet state in the initial state is not smaller than $\frac{2}{3}$. This provides us with a feasible way to prepare the maximally entangled mixed state in various physical systems such as trapped ions, quantum dots or Josephson junctions. For the case in which two qutrits collectively couple to a common thermal reservoir at zero temperature, we find that the collective decay can induce entanglement of two qutrits initially in the maximally mixed state. The collective decay of two qutrits can also induce distillable entanglement from the initial conjectured negative-partial-transpose bound entangled states.\
    PACS numbers: 03.65.Ud, 03.67.-a, 05.40.Ca
author:
- 'Shang-Bin Li$^{1,2}$'
- 'Jing-Bo Xu$^{1}$'
bibliography:
- 'refmich.bib'
- 'refjono.bib'
title: 'Stationary entanglement and nonlocality of two qubits or qutrits collectively interacting with the thermal environment: The role of Bell singlet state'
---

I. INTRODUCTION {#i.-introduction .unnumbered}
===============

Quantum entanglement plays an important role in quantum information. It has been recognized as a useful resource in various quantum information processes [@Shor1995]. While entanglement is destroyed by the interaction between the system of interest and its surrounding environment in most situations, there have been many works showing that the collective interaction with a common thermal environment can generate entanglement between qubits [@Beige2000; @Braun2002; @Kim2002; @Schneider2002; @Pleniohue2002; @Kraus2004; @Clark2003; @Duan2003; @Zanardi2001; @Benatti2003]. Beige *et al.* have analyzed ways in which entanglement could be established within a dissipative environment [@Beige2000] and shown that one could even utilize a strong interaction of the system with its environment to produce entanglement. Braun has also shown that two qubits with degenerate energy levels can be entangled via interaction with a common heat bath [@Braun2002]. Schneider and Milburn have studied the pairwise entanglement in the steady state of the Dicke model and revealed how the steady state of the ion trap, with all ions driven simultaneously and coupled collectively to a heat bath, could exhibit quantum entanglement [@Schneider2002]. Kim *et al.* have investigated the interaction of the thermal field and a quantum system composed of two qubits and found that such a chaotic field with minimal information could entangle qubits that were prepared initially in a separable state [@Kim2002]. Kraus and Cirac have shown how one could entangle distant atoms by using squeezed light [@Kraus2004].
Clark and Parkins [@Clark2003] have proposed a scheme to controllably entangle the internal states of two atoms trapped in a high-finesse optical cavity by employing quantum-reservoir engineering. For generating multipartite entanglement, Duan and Kimble have proposed an efficient scheme to engineer multi-atom entanglement by detecting cavity decay through single-photon detectors [@Duan2003]. More recently, it has been shown that white noise may play a constructive role in generating controllable entanglement in some specific situations [@Pleniohue2002; @Li2005]. In this paper, we investigate the system of two qubits or two qutrits collectively interacting with a common thermal reservoir. For the two-qubit case, we analyze the role of the fraction of the Bell singlet state and the average photon number of the thermal reservoir in the stationary state entanglement and Bell violation. It is shown that the fraction of the Bell singlet state in the initial state is a key factor determining whether the common thermal reservoir can enhance the entanglement or Bell violation of the two qubits. For the two-qutrit case, we find that two qutrits initially in the conjectured bound entangled Werner state can become distillable due to the collective decay caused by the common thermal reservoir at zero temperature. Even if the two qutrits are initially in the maximally mixed state, they can evolve into a stationary entangled state under the collective decay. The distinct aspect of the collective decay of two qutrits is that a pure Bell singlet state may be generated from an initial mixed state. Recently, much attention has been paid to the preparation of the maximally entangled mixed state [@Peters2004; @Barbieri2004]. The properties of maximally entangled mixed states have been studied by many authors [@Ishizaka2000; @Verstraete2001; @Wei2003]. Here, we show that the collective decay of two qubits initially in the standard Werner state in a common thermal reservoir at zero temperature can generate a stationary maximally entangled mixed state provided the fraction of the Bell singlet state in the initial state is not smaller than $\frac{2}{3}$. It is found that the stationary state $\rho_1$ of two qubits initially in the standard Werner state in the common thermal reservoir builds a bridge between the Werner state with $r\geq\frac{5}{9}$ and the maximally entangled mixed state if the temperature of the reservoir can be adiabatically varied from zero to infinity or vice versa. This paper is organized as follows: In Sec. II we investigate the stationary state entanglement of two qubits collectively interacting with a common thermal reservoir and find that the fraction of the Bell singlet state in the initial state plays a key role in determining whether the common thermal reservoir can enhance the entanglement of the two qubits. In Sec. III, the Bell violation of the stationary state of two qubits is investigated, and it is shown that, in certain situations, the common thermal reservoir may drive two qubits initially satisfying the Bell-CHSH inequality into a stationary state which violates the Bell-CHSH inequality. In Sec. IV, we investigate the concurrence versus the linear entropy of the stationary state and find that the common thermal reservoir at zero temperature can make two qubits initially in the standard Werner state become a maximally entangled mixed state provided the fraction of the Bell singlet state in the initial state is not smaller than $\frac{2}{3}$.
In Sec. V, we turn to the case in which two qutrits collectively interact with a common thermal reservoir at zero temperature and find that the initial maximally mixed state of two qutrits can become a stationary entangled state. Furthermore, we show that two qutrits initially in the conjectured bound entangled Werner state can become free entangled due to the collective decay caused by the common thermal reservoir at zero temperature. Sec. VI contains some concluding remarks.

II. THE STATIONARY ENTANGLEMENT OF TWO QUBITS COLLECTIVELY INTERACTING WITH A THERMAL RESERVOIR {#ii.-the-stationary-entanglement-of-two-qubits-collectively-interacting-with-a-thermal-reservoir .unnumbered}
===============================================================================================

To date, much attention has been paid to environment-induced entanglement [@Beige2000; @Braun2002; @Kim2002; @Schneider2002; @Pleniohue2002; @Kraus2004; @Clark2003; @Benatti2003]. Here, we consider a situation in which two qubits collectively interact with a common thermal reservoir. The two qubits are assumed to be initially in the Werner state or Werner-like states. Under the Markovian approximation, the dynamical behavior of the two qubits in this case can be described by the following master equation $$\frac{\partial\rho}{\partial t}=\frac{\gamma}{2}(N+1)\left(2\hat{J}_{-}\rho\hat{J}_{+}-\hat{J}_{+}\hat{J}_{-}\rho-\rho\hat{J}_{+}\hat{J}_{-}\right)+\frac{\gamma}{2}N\left(2\hat{J}_{+}\rho\hat{J}_{-}-\hat{J}_{-}\hat{J}_{+}\rho-\rho\hat{J}_{-}\hat{J}_{+}\right),$$ where $\gamma$ characterizes the coupling strength between the two qubits and the thermal reservoir, and $N$ is the mean phonon number of the thermal environment. $\hat{J}_{\pm}$ are the collective atomic operators defined by $$\hat{J}_{\pm}=\sum^{2}_{i=1}\hat{\sigma}^{(i)}_{\pm},\qquad \hat{\sigma}^{(i)}_{+}=|1_i\rangle\langle0_i|,\quad \hat{\sigma}^{(i)}_{-}=|0_i\rangle\langle1_i|,$$ where $|1_i\rangle$ and $|0_i\rangle$ are the up and down states of the $i$th qubit, respectively. Recently, the Werner or Werner-like states [@Werner1989; @Munro2001; @Ghosh2001; @Wei2003] have attracted much interest for their applications in quantum information processes. Lee and Kim have discussed entanglement teleportation via the Werner states [@Lee2000]. Hiroshima and Ishizaka have studied the entanglement of the so-called Werner derivative, which is the state transformed from a Werner state by nonlocal unitary operations [@Hiroshima2000]. Miranowicz has examined the Bell violation and entanglement of Werner states of two qubits in independent decay channels [@Miranowicz2004]. The experimental preparation and characterization of Werner states have also been reported. An experiment for preparing a Werner state via spontaneous parametric down-conversion has been put forward [@Zhang2002]. Barbieri *et al.* have presented a novel technique for generating and characterizing two-photon polarization Werner states [@Barbieri2004], which is based on the peculiar spatial characteristics of a high brilliance source of entangled pairs. If the two qubits are initially prepared in the Werner state or a Werner-like state, how does the external common thermal reservoir affect their entanglement and nonlocality properties? Here, we address this question and show that both the stationary state entanglement and nonlocality heavily depend on the fraction of the Bell singlet state in the initial state and the intensity of the thermal reservoir. The standard two-qubit Werner state is defined by [@Werner1989] $$\rho_W=r|\Phi^{-}\rangle\langle\Phi^{-}|+\frac{1-r}{4}I\otimes{I},$$ where $r\in[0,1]$, $|\Phi^{-}\rangle$ is the singlet state of two qubits, and $I$ is the identity operator of a single qubit.
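For readers who wish to check the results of this section numerically, the master equation (1) can be recast as a $16\times16$ superoperator acting on the vectorized two-qubit density matrix. The sketch below is a minimal numerical aid (not part of the derivation) built with plain NumPy, using the column-stacking identity $\mathrm{vec}(A\rho B)=(B^{T}\otimes A)\,\mathrm{vec}(\rho)$:

```python
import numpy as np

def liouvillian(gamma, N):
    """Superoperator of the master equation (1) acting on vectorized 4x4
    density matrices (column-stacking convention, rho.flatten(order='F'))."""
    sp = np.array([[0, 1], [0, 0]], dtype=complex)    # sigma_+ = |1><0| in the basis (|1>, |0>)
    I2 = np.eye(2, dtype=complex)
    Jp = np.kron(sp, I2) + np.kron(I2, sp)            # collective raising operator
    Jm = Jp.conj().T                                  # collective lowering operator
    I4 = np.eye(4, dtype=complex)

    def dissipator(L):
        # vectorized form of L rho L^dag - (1/2){L^dag L, rho}
        LdL = L.conj().T @ L
        return (np.kron(L.conj(), L)
                - 0.5 * np.kron(I4, LdL)
                - 0.5 * np.kron(LdL.T, I4))

    return gamma * (N + 1) * dissipator(Jm) + gamma * N * dissipator(Jp)
```

Because the Bell singlet population is left invariant by the collective operators, the null space of this matrix is degenerate and the stationary state actually reached depends on the initial condition; it can be obtained, e.g., by applying `scipy.linalg.expm` of the Liouvillian multiplied by a large time to the vectorized initial state.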
Recently, definition (3) has been generalized to include the following states of two qubits [@Munro2001; @Ghosh2001; @Wei2003] $$\rho^{\prime}_W=r|\Phi^{+}\rangle\langle\Phi^{+}|+\frac{1-r}{4}I\otimes{I},$$ where $|\Phi^{\pm}\rangle=\frac{\sqrt{2}}{2}(|10\rangle\pm|01\rangle)$. Both the Werner state (3) and the Werner-like state (4) are very important in quantum information. The Werner state (3) is highly symmetric and $SU(2)\otimes SU(2)$ invariant [@Bennett1996; @Werner1989]. The mixedness, entanglement and nonlocality of both the Werner state and the Werner-like state are uniquely determined by the parameter $r$. In the following, we consider two different cases. In case 1, the two qubits are initially in the state ($r\in[0,1]$) $r|\Phi^{-}\rangle\langle\Phi^{-}|+\frac{1-r}{4}I\otimes{I}$; in case 2, the two qubits are initially in the state $r|\Phi^{+}\rangle\langle\Phi^{+}|+\frac{1-r}{4}I\otimes{I}$. The fractions of the Bell singlet state, defined by ${\mathrm{Tr}}(|\Phi^{-}\rangle\langle\Phi^{-}|\rho)$, in the two cases are $f_1(r)=\frac{1+3r}{4}$ and $f_2(r)=\frac{1-r}{4}$, respectively. We will show that the fraction of the Bell singlet state plays a crucial role in the stationary entanglement and stationary Bell violation. In case 1, as the time $t\rightarrow\infty$, the stationary state of the master equation (1) can be obtained as follows: $$\rho_1=a_1|11\rangle\langle11|+a_2|10\rangle\langle10|+a_3|01\rangle\langle01|+a_4|00\rangle\langle00|+a_5|10\rangle\langle01|+a^{\ast}_5|01\rangle\langle10|,$$ where $$a_1=\frac{3(1-r)N^2}{4L},\quad a_2=a_3=\frac{3(1-r)N(N+1)+(1+3r)L}{8L},\quad a_4=1-a_1-a_2-a_3,\quad a_5=\frac{3(1-r)N(N+1)-(1+3r)L}{8L},\quad L=1+3N(N+1).$$ In case 2, the stationary state of the master equation (1) can be obtained as follows: $$\rho_2=b_1|11\rangle\langle11|+b_2|10\rangle\langle10|+b_3|01\rangle\langle01|+b_4|00\rangle\langle00|+b_5|10\rangle\langle01|+b^{\ast}_5|01\rangle\langle10|,$$ where $$b_1=\frac{(3+r)N^2}{4L},\quad b_2=b_3=\frac{(3+r)N(N+1)+(1-r)L}{8L},\quad b_4=1-b_1-b_2-b_3,\quad b_5=\frac{(3+r)N(N+1)-(1-r)L}{8L}.$$ In order to quantify the degree of entanglement, we adopt the concurrence $C$ defined by Wootters [@Woo1998]. The concurrence varies from $C=0$ for an unentangled state to $C=1$ for a maximally entangled state. For two qubits, in the “Standard” eigenbasis: $|1\rangle\equiv|11\rangle$, $|2\rangle\equiv|10\rangle$, $|3\rangle\equiv|01\rangle$, $|4\rangle\equiv|00\rangle$, the concurrence may be calculated explicitly from the following: $$C=\max\{\lambda_1-\lambda_2-\lambda_3-\lambda_4,0\},$$ where the $\lambda_{i}$ ($i=1,2,3,4$) are the square roots of the eigenvalues *in decreasing order of magnitude* of the “spin-flipped” density matrix operator $R=\rho_s(\sigma^{y}\otimes\sigma^{y})\rho^{\ast}_s(\sigma^{y}\otimes\sigma^{y})$, where the asterisk indicates complex conjugation. It is straightforward to compute analytically the concurrences $C_1$ and $C_2$ for the density matrices $\rho_1$ and $\rho_2$, respectively, $$C_1=\max\left\{\frac{(1+3r)L-9(1-r)N(N+1)}{4L},0\right\},$$ and $$C_2=\max\left\{\frac{(1-r)L-3(3+r)N(N+1)}{4L},0\right\},$$ which implies that the stationary state $\rho_1$, for a fixed $N$, is entangled if and only if $r>\frac{6N^2+6N-1}{18N^2+18N+3}$. Meanwhile, the stationary state $\rho_2$ is entangled if and only if the two inequalities $0\leq{r}<\frac{1-6N-6N^2}{1+6N+6N^2}$ and $0\leq{N}<\frac{\sqrt{15}-3}{6}$ are simultaneously satisfied. In Fig.1, the concurrence $C_1$ is plotted as a function of the parameter $r$ of the initial Werner state and the average phonon number $N$ of the thermal reservoir. It is shown that, if the two qubits are initially in the standard Werner state, the stationary entanglement of the two qubits increases with the fraction of the Bell singlet state in their initial state, and decreases with $N$. In Fig.2, we plot the concurrence $C_2$ as a function of the parameter $r$ of the initial Werner-like state in Eq. (4) and the average photon number $N$ of the thermal reservoir.
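The closed forms above are easy to verify numerically. The following sketch is a consistency check only, built on the coefficient expressions of Eqs. (5)-(6) as written above: it constructs $\rho_1$, applies the Wootters recipe directly, and compares the result with the closed form for $C_1$.

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit state in the basis |11>,|10>,|01>,|00>."""
    sy = np.array([[0, -1j], [1j, 0]])
    Y = np.kron(sy, sy)
    R = rho @ Y @ rho.conj() @ Y
    lam = np.sqrt(np.sort(np.abs(np.linalg.eigvals(R)))[::-1])
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def rho_1(r, N):
    """Stationary state reached from the initial Werner state, Eqs. (5)-(6)."""
    L = 1 + 3 * N * (N + 1)
    a1 = 3 * (1 - r) * N**2 / (4 * L)
    a2 = a3 = (3 * (1 - r) * N * (N + 1) + (1 + 3 * r) * L) / (8 * L)
    a4 = 1 - a1 - a2 - a3
    a5 = (3 * (1 - r) * N * (N + 1) - (1 + 3 * r) * L) / (8 * L)
    rho = np.diag([a1, a2, a3, a4]).astype(complex)
    rho[1, 2] = a5                    # <10| rho |01>
    rho[2, 1] = np.conj(a5)
    return rho

r, N = 0.8, 0.3
L = 1 + 3 * N * (N + 1)
C1_closed = max(0.0, ((1 + 3 * r) * L - 9 * (1 - r) * N * (N + 1)) / (4 * L))
print(concurrence(rho_1(r, N)), C1_closed)    # the two values should coincide
```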
We can see that the stationary state entanglement decreases with the increase of both $r$ and $N$, which implies that a higher initial entanglement does not result in a higher stationary state entanglement. The reason is that the fraction of the Bell singlet state in the Werner-like state $r|\Phi^{+}\rangle\langle\Phi^{+}|+\frac{1-r}{4}I\otimes{I}$ decreases with $r$. From Fig.1 and Fig.2, we can observe that, in the case with $N=0$, namely when the two qubits collectively interact with a thermal reservoir at zero temperature, the stationary state is always entangled provided the initial state of the two qubits is not fully symmetric, i.e., the fraction of the Bell singlet state in the initial state is nonzero. When $r\leq\frac{1}{3}$, the initial Werner state or Werner-like state is separable. Surprisingly, the external thermal reservoir can enhance the entanglement of the two qubits even if they are initially in a separable state. For example, when $r=0$, the two qubits are initially in the maximally mixed state. Nevertheless, the collective interaction between the two qubits and the low-temperature thermal reservoir can drive the two initially maximally mixed qubits into a stationary entangled mixed state. Comparing the stationary state entanglement with the entanglement of the initial Werner or Werner-like states, we can obtain the condition under which the entanglement can be enhanced by the external common thermal environment. This condition closely relates to the fraction $F$ of the Bell singlet state and the average photon number of the thermal environment, and can be expressed by the following inequalities: $$1>F>\frac{3N(N+1)}{1+6N(N+1)}\,,$$ or, equivalently, $$\frac{F}{1-F}>\frac{3N(N+1)}{1+3N(N+1)}\,.$$ The increment of entanglement between the stationary state entanglement and the initial entanglement of the Werner state can be obtained as follows: $$\Delta{C}=\frac{3(1-r)}{4\left[1+3N(N+1)\right]}\,,\qquad \frac{1}{3}<r\leq1\,,$$ $$\Delta{C}=\max\left\{\frac{(1+3r)\left[1+3N(N+1)\right]-9(1-r)N(N+1)}{4\left[1+3N(N+1)\right]},0\right\}\,,\qquad 0\leq{r}\leq\frac{1}{3}\,.$$ In Fig.3, we plot the increment of entanglement between the stationary state entanglement and the initial entanglement of the Werner state as a function of $N$ and $r$. It is shown that the thermal reservoir can enhance the entanglement of the two qubits in the range indicated by the inequalities (12) and (13). When the parameter $r$ of the initial Werner state is larger than $\frac{1}{3}$, the increment of entanglement $\Delta{C}$ decreases with the increase of both $r$ and $N$. When $r\leq\frac{1}{3}$, the initial state is separable; nevertheless, the stationary state may be entangled provided $r>\frac{6N^2+6N-1}{18N^2+18N+3}$. From Eq. (10) or Eq. (14), we can conclude that the common thermal reservoir with arbitrarily large intensity can enhance the entanglement of the two qubits collectively coupled with the reservoir provided the fraction $F$ of the Bell singlet state in the initial Werner state is smaller than 1 and not smaller than $\frac{1}{2}$. In Fig.4, the increment of entanglement between the stationary state entanglement and the initial entanglement of the Werner-like state in Eq. (4) is plotted as a function of $N$ and $r$. We can see that, in this case, the increment of entanglement decreases with both $r$ and $N$. In the following section, we shall investigate how a common thermal reservoir affects the Bell violation of two qubits initially in the Werner state.
![The concurrence $C_1$ of the stationary state $\rho_1$ is plotted as a function of the parameter $r$ of the initial Werner state $r|\Phi^{-}\rangle\langle\Phi^{-}|+\frac{1-r}{4}I\otimes{I}$ and the intensity $N$ of the external common thermal environment.](Fig1.eps){width="2.5in"} ![The concurrence $C_2$ of the stationary state $\rho_2$ is plotted as a function of the parameter $r$ of the initial Werner-like state $r|\Phi^{+}\rangle\langle\Phi^{+}|+\frac{1-r}{4}I\otimes{I}$ and the intensity $N$ of the external common thermal environment.](Fig2.eps){width="2.5in"} ![The increment $\Delta{C}$ of entanglement between the stationary state entanglement and the initial entanglement of the Werner state $r|\Phi^{-}\rangle\langle\Phi^{-}|+\frac{1-r}{4}I\otimes{I}$ is plotted as a function of $N$ and $r$.](Fig3.eps){width="2.5in"} ![The increment $\Delta{C}$ of entanglement between the stationary state entanglement and the initial entanglement of the Werner-like state $r|\Phi^{+}\rangle\langle\Phi^{+}|+\frac{1-r}{4}I\otimes{I}$ is plotted as a function of $N$ and $r$.](Fig4.eps){width="2.5in"}

III. BELL VIOLATION OF TWO QUBITS INTERACTING WITH A COMMON THERMAL ENVIRONMENT {#iii.-bell-violation-of-two-qubits-interacting-with-a-common-thermal-environment .unnumbered}
===============================================================================

In this section, we discuss the nonlocality of the two qubits in their stationary states. The nonlocal property of two qubits can be characterized by the maximal violation of a Bell inequality. Recently, it has been argued that entanglement and nonlocality of two qubits are different resources [@Brunner2005]. We also find that the stochastic-resonance-like behavior of entanglement cannot be observed in the Bell violation of two qubits during the evolution [@Li2005]. The concurrence, one of the good entanglement measures, is not a monotonic function of the maximal violation of the Bell inequality for some entangled mixed states of two qubits [@Vers2002]. So it is interesting to investigate how the collective decay of two qubits affects their maximal violation of the Bell inequality. The most commonly discussed Bell inequality is the CHSH inequality [@Bell1965; @CHSH]. The CHSH operator reads $$\hat{B}_{CHSH}=\vec{a}\cdot\vec{\sigma}\otimes(\vec{b}+\vec{b^{\prime}})\cdot\vec{\sigma}+\vec{a^{\prime}}\cdot\vec{\sigma}\otimes(\vec{b}-\vec{b^{\prime}})\cdot\vec{\sigma},$$ where $\vec{a},\vec{a^{\prime}},\vec{b},\vec{b^{\prime}}$ are unit vectors. In the above notation, the Bell inequality reads $$|\langle\hat{B}_{CHSH}\rangle|\leq2.$$ The maximal amount of Bell violation of a state $\rho$ is given by [@Horo1995] $$\mathcal{B}(\rho)=2\sqrt{\lambda+\tilde{\lambda}},$$ where $\lambda$ and $\tilde{\lambda}$ are the two largest eigenvalues of $T^{\dagger}_{\rho}T_{\rho}$. The matrix $T_{\rho}$ is determined completely by the correlation functions, being a $3\times3$ matrix whose elements are $(T_{\rho})_{nm}={\mathrm{Tr}}(\rho\sigma_{n}\otimes\sigma_{m})$. Here, $\sigma_1\equiv\sigma_x$, $\sigma_2\equiv\sigma_y$, and $\sigma_3\equiv\sigma_z$ denote the usual Pauli matrices. We call the quantity $\mathcal{B}$ the maximal violation measure, which indicates Bell violation when ${\mathcal{B}}>2$ and maximal violation when ${\mathcal{B}}=2\sqrt{2}$. In what follows, we focus our attention on the role of the common thermal reservoir in the Bell violation of the two qubits. If the two qubits are initially in the Werner or Werner-like states, we find that the Bell violation of the stationary state of the two qubits heavily depends on the fraction of the Bell singlet state in the initial state.
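The maximal violation measure is straightforward to evaluate numerically for any two-qubit state. The sketch below (an illustration of the criterion, not code from the original analysis) builds the correlation matrix $T_{\rho}$ and returns $2\sqrt{\lambda+\tilde{\lambda}}$:

```python
import numpy as np

def max_chsh(rho):
    """Horodecki measure: 2*sqrt of the sum of the two largest eigenvalues
    of T^dag T, with T_nm = Tr[rho (sigma_n x sigma_m)]."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    paulis = [sx, sy, sz]
    T = np.array([[np.trace(rho @ np.kron(sn, sm)).real
                   for sm in paulis] for sn in paulis])
    evals = np.sort(np.linalg.eigvalsh(T.T @ T))[::-1]
    return 2.0 * np.sqrt(evals[0] + evals[1])
```

Applied to the stationary states $\rho_1$ and $\rho_2$, a returned value larger than 2 signals a violation of the Bell-CHSH inequality.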
![Maximal Bell violation of the stationary state $\rho_1$ is plotted as a function of the parameter $r$ of the initial Werner state $r|\Phi^{-}\rangle\langle\Phi^{-}|+\frac{1-r}{4}I\otimes{I}$ and the intensity $N$ of the external common thermal environment.](Fig5.eps){width="2.5in"} ![Concurrence versus mixedness for the stationary state $\rho_1$, depicted for all possible values of $N$ and different values of $r$. The short dash line and the short dot line represent the Werner state and the maximally entangled mixed state (the frontier of the concurrence versus the linear entropy), respectively.](Fig6.eps){width="2.5in"} For the density operators $\rho_1$ in Eqs. (5-6) and $\rho_2$ in Eqs. (7-8), characterizing the two stationary states of the two qubits governed by the master equation (1) for the two kinds of initial states, the maximal Bell violations $|B_1|_{max}$ and $|B_2|_{max}$ can be written as follows $$|B_1|_{max}=2\sqrt{4a^2_5+\max\left[4a^2_5,\,(a_1+a_4-a_2-a_3)^2\right]}$$ and $$|B_2|_{max}=2\sqrt{4b^2_5+\max\left[4b^2_5,\,(b_1+b_4-b_2-b_3)^2\right]}.$$ The sufficient and necessary condition for $|B_1|_{max}>2$ can be derived as follows: $$r>\frac{2\sqrt{2}-1+6\sqrt{2}N(N+1)}{3+12N(N+1)}\,,$$ and it can be easily verified that $|B_2|_{max}$ cannot be larger than $2$. Since the initial standard Werner state cannot violate any Bell-CHSH inequality when $r\leq\frac{\sqrt{2}}{2}$, it is interesting that the corresponding stationary state may achieve Bell violation even if $\frac{\sqrt{2}}{2}\geq{r}>\frac{2\sqrt{2}-1+6\sqrt{2}N(N+1)}{3+12N(N+1)}$, which means that the common thermal reservoir can induce stationary Bell violation of the two qubits. This may be important for the experimental verification of Bell violation in quantum dots, in which the collective decay may be caused by the common thermal phonon background. In Fig.5, we plot the maximal value of the Bell violation of the stationary state as a function of $r$ and $N$ for the case in which the two qubits are initially in the standard Werner state. It is shown that the Bell violation of the stationary state decreases with the decrease of the parameter $r$. The Bell violation also decreases with $N$. However, if the initial standard Werner state is very pure, the Bell violation of its stationary state is robust against the collective decay. ![Concurrence versus mixedness for the stationary state $\rho_2$, depicted for all possible values of $N$ and different values of $r$. The dash dot dot line represents the Werner state.](Fig7.eps){width="2.5in"}

IV. CONCURRENCE VERSUS MIXEDNESS OF TWO QUBITS INTERACTING WITH A COMMON THERMAL ENVIRONMENT {#iv.-concurrence-versus-mixedness-of-two-qubits-interacting-with-a-common-thermal-environment .unnumbered}
============================================================================================

In this section, we investigate how the common thermal reservoir affects the relation between the concurrence and the mixedness of the stationary state. We find that the common thermal reservoir can drive the two qubits beyond the curve of the initial standard Werner state in the concurrence versus linear entropy plane, provided the fraction of the Bell singlet state is not smaller than a threshold value. Ordinarily, the mixedness of a state can be characterized by the linear entropy, which is defined by $M=\frac{4}{3}(1-{\mathrm{Tr}}\rho^2)$. For the stationary states $\rho_1$ and $\rho_2$ in Eqs. (5-6) and Eqs. (7-8), respectively, the mixedness can be calculated as follows: $$M_1(\rho_1)=\frac{4}{3}\left(1-\sum^4_{i=1}a^2_i-2a^2_5\right),\qquad M_2(\rho_2)=\frac{4}{3}\left(1-\sum^4_{i=1}b^2_i-2b^2_5\right).$$
In Fig.6, we display the concurrence and the mixedness of the stationary state $\rho_1$ of two qubits initially in the standard Werner state. From Fig.6, it can be observed that in the situations with $r\geq0.4$ the corresponding stationary state can go beyond the concurrence versus linear entropy curve of the original Werner state. When the intensity of the common thermal reservoir decreases, the concurrence of the corresponding stationary state $\rho_1$ increases and its mixedness, characterized by the linear entropy, decreases. This implies that the common thermal reservoir can not only enhance the concurrence of a mixed state with an initially large fraction of the Bell singlet state but also decrease its mixedness. This counterintuitive phenomenon may be helpful for entanglement purification or distillation. From Eqs. (5-6), we immediately see that the stationary state is the same as the initial Werner state in the limit of an infinitely hot thermal reservoir, i.e. $N\rightarrow\infty$. In the case with $N=0$, i.e. a thermal reservoir at zero temperature, the corresponding stationary states $\rho_1$ with $r\geq\frac{5}{9}$ become maximally entangled mixed states, i.e. they lie on the frontier of the concurrence versus the linear entropy of two qubits [@Wei2003]. So we can conclude that the common thermal reservoir at zero temperature can make two qubits initially in the standard Werner state become a maximally entangled mixed state provided the fraction of the Bell singlet state in the initial state is not smaller than $\frac{2}{3}$. This provides us with a feasible way to prepare the maximally entangled mixed state in various physical systems such as trapped ions, quantum dots or Josephson junctions. In these systems, the collective decay has been extensively studied both theoretically and experimentally. One may see that the stationary state $\rho_1$ of two qubits in the common thermal reservoir builds a bridge between the Werner state with $r\geq\frac{5}{9}$ and the maximally entangled mixed state if the temperature of the reservoir can be adiabatically varied. We conjecture that this property is closely related to the fact that the frontier of the concurrence versus the linear entropy of two qubits contains two different branches [@Wei2003]. Interestingly, if we adopt the negativity to measure the entanglement of two qubits, the Werner state becomes the frontier in the sense that these states have the maximal negativity for a given linear entropy or von Neumann entropy [@Wei2003]. So, roughly speaking, the stationary states $\rho_1$ with $r\geq5/9$ of two qubits in the common thermal reservoir in the two extreme situations, i.e. zero temperature and infinitely high temperature, become part of the frontier of the concurrence versus linear entropy and the whole frontier of the negativity versus linear entropy, respectively. Therefore, it is natural to study the relation among the negativity of the stationary state, the intensity of the common reservoir and the fraction of the Bell singlet state. The negativity of a bipartite state $\rho$ is defined as $${\mathcal{N}}(\rho)=2\sum_i|\mu_i|,$$ where the $\mu_i$ are the negative eigenvalues of the partial transpose $\rho^{\Gamma}$ of the density matrix $\rho$.
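Numerically, the negativity of Eq. (22) only requires a partial transpose and an eigenvalue decomposition. The sketch below (an illustration for arbitrary $d_A\times d_B$ bipartite states) implements it:

```python
import numpy as np

def negativity(rho, dA, dB):
    """Negativity 2*sum|mu_i| over the negative eigenvalues of the partial
    transpose (taken on subsystem B) of a (dA*dB)x(dA*dB) density matrix."""
    r4 = rho.reshape(dA, dB, dA, dB)
    rho_pt = r4.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)  # transpose the B indices
    mu = np.linalg.eigvalsh(rho_pt)
    return 2.0 * np.abs(mu[mu < 0]).sum()
```

For the two-qubit stationary states one calls it with $d_A=d_B=2$; the same routine applies to the qutrit states of the next section with $d_A=d_B=3$.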
We can easily obtain the negativities ${\mathcal{N}}_1$ and ${\mathcal{N}}_2$ of the stationary states $\rho_1$ in Eqs. (5-6) and $\rho_2$ in Eqs. (7-8), respectively, as follows: $${\mathcal{N}}_1=\frac{1}{2}\left|a_1+a_4-\sqrt{(a_1-a_4)^2+4a^2_5}\right|-\frac{1}{2}\left(a_1+a_4-\sqrt{(a_1-a_4)^2+4a^2_5}\right),$$ and $${\mathcal{N}}_2=\frac{1}{2}\left|b_1+b_4-\sqrt{(b_1-b_4)^2+4b^2_5}\right|-\frac{1}{2}\left(b_1+b_4-\sqrt{(b_1-b_4)^2+4b^2_5}\right).$$ We find that the negativity ${\mathcal{N}}_1$ decreases with $N$ and increases with $r$, and the negativity ${\mathcal{N}}_2$ decreases with both $N$ and $r$. In Fig.7, we display the concurrence and the mixedness of the stationary state $\rho_2$ of two qubits. It is shown that the entanglement of the stationary state for a given value of mixedness is much smaller than the entanglement of the Werner state with the same value of mixedness.

V. THE COLLECTIVE DECAY OF TWO QUTRITS {#v.-the-collective-decay-of-two-qutrits .unnumbered}
======================================

In this section, we turn to the collective decay of two qutrits in a common thermal reservoir at zero temperature. Under the Markovian approximation, the collective decay of two qutrits can be described by the following master equation: $$\frac{\partial\rho}{\partial t}=\frac{\gamma}{2}\left(2L_{-}\rho L_{+}-L_{+}L_{-}\rho-\rho L_{+}L_{-}\right),$$ where $L_{\pm}\equiv\sum^{2}_{i=1}J^{(i)}_{\pm}$, and $J^{(i)}_{-}$ ($J^{(i)}_{+}$) is the qutrit down (up) operator of the $i$th qutrit. The representation of $J_{\pm}$ in the space spanned by the three orthogonal vectors $\{|1\rangle,|2\rangle,|3\rangle\}$ of a qutrit can be written as $$J_{-}=|1\rangle\langle2|+|2\rangle\langle3|,\qquad J_{+}=|2\rangle\langle1|+|3\rangle\langle2|.$$ If we assume that the two qutrits are initially in the maximally mixed state, i.e. $\rho_0=\frac{1}{9}\sum^{3}_{i,j=1}|i,j\rangle\langle{i,j}|$, then the corresponding stationary state of the master equation (24) can be obtained as $$\rho_s=\frac{5}{9}|1,1\rangle\langle1,1|+\frac{1}{6}(|1,2\rangle-|2,1\rangle)(\langle1,2|-\langle2,1|)+\frac{1}{27}(|3,1\rangle+|1,3\rangle-|2,2\rangle)(\langle3,1|+\langle1,3|-\langle2,2|).$$ It is easy to verify that $\rho_s$ is an entangled state of two qutrits and its negativity, defined by Eq. (22), can be calculated as $\frac{\sqrt{97}-8}{27}$. In what follows, we further show that two qutrits initially in some conjectured negative partial transpose bound entangled states can become distillable. In Refs. [@DiVincenzo2000; @Dur2000], the authors presented the following conjecture. Given the class of Werner states in ${\mathcal{H}}_3\otimes{\mathcal{H}}_3$, $$\rho_W(\eta)=\frac{\eta}{8\eta-1}\left(I_3\otimes I_3-\frac{\eta+1}{3\eta}\sum^{3}_{i,j=1}|i,j\rangle\langle{j,i}|\right),$$ where $I_3$ is the identity operator in ${\mathcal{H}}_3$, the state $\lim_{\eta\rightarrow\infty}\rho_W(\eta)$ is separable, and for any finite $\eta\geq\frac{1}{2}$ $\rho_W(\eta)$ is entangled and violates the Peres-Horodecki criterion [@Peres1996; @Horodecki1996]. It has been shown that there is convincing evidence in support of the conjecture that for all $\eta\geq2$ the state $\rho_W(\eta)$ is undistillable. We will show that two qutrits initially in $\rho_W(\eta)$ can evolve into a stationary free entangled state. Substituting $\rho_W(\eta)$ into Eq. (24), the corresponding stationary state can be easily obtained: $$\rho^{(s)}_W(\eta)=\frac{1}{3(8\eta-1)}\left[(10\eta-5)|1,1\rangle\langle1,1|+(2\eta-1)|S_1\rangle\langle{S_1}|+(12\eta+3)|A_1\rangle\langle{A_1}|\right],$$ where $|S_1\rangle=\frac{\sqrt{3}}{3}(|3,1\rangle+|1,3\rangle-|2,2\rangle)$, and $|A_1\rangle=\frac{\sqrt{2}}{2}(|1,2\rangle-|2,1\rangle)$. The negativity of the stationary state $\rho^{(s)}_W(\eta)$ can be calculated as $$N=2\left(\sqrt{\xi^2_3+4\xi^2_2}-\xi_3\right)+2|\kappa|,$$ where $\kappa$ is the negative root of the polynomial $$x^3-(\xi_1+\xi_2)x^2+(\xi_1\xi_2-\xi^2_2-\xi^2_3)x+\xi^3_2=0,$$ and $$\xi_1=\frac{10\eta-5}{3(8\eta-1)},\quad \xi_2=\frac{2\eta-1}{9(8\eta-1)},\quad \xi_3=\frac{12\eta+3}{6(8\eta-1)}.$$
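As a quick numerical cross-check of these qutrit results, the sketch below builds the stationary state $\rho_s$ reached from the maximally mixed state, using the weights quoted above, and evaluates its negativity through a partial transpose; the output can be compared with $\frac{\sqrt{97}-8}{27}$:

```python
import numpy as np

def ket(i, j, d=3):
    """Product basis vector |i,j> with levels labelled 1..d."""
    v = np.zeros(d * d, dtype=complex)
    v[(i - 1) * d + (j - 1)] = 1.0
    return v

def negativity(rho, d=3):
    r4 = rho.reshape(d, d, d, d)
    rho_pt = r4.transpose(0, 3, 2, 1).reshape(d * d, d * d)   # partial transpose on B
    mu = np.linalg.eigvalsh(rho_pt)
    return 2.0 * np.abs(mu[mu < 0]).sum()

A = ket(1, 2) - ket(2, 1)                 # proportional to |A_1>
S = ket(3, 1) + ket(1, 3) - ket(2, 2)     # proportional to |S_1>
rho_s = (5 / 9) * np.outer(ket(1, 1), ket(1, 1).conj()) \
      + (1 / 6) * np.outer(A, A.conj()) \
      + (1 / 27) * np.outer(S, S.conj())

print(negativity(rho_s), (np.sqrt(97) - 8) / 27)   # both close to 0.0685
```

The same routine can be used on $\rho^{(s)}_W(\eta)$ and on the projected state discussed below to reproduce the behaviour shown in Fig.8 and the nonpositivity of the partial transpose used in the distillability argument.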
![The negativity of the stationary state $\rho^{(s)}_W(\eta)$ is plotted as a function of the parameter $\eta$.](Fig8.eps){width="2.5in"} In Fig.8, we plot the negativity $N$ of the stationary state $\rho^{(s)}_W(\eta)$ as a function of $\eta$. It is shown that the negativity decreases with $\eta$ and eventually converges to about 0.21. In the case with $\eta=\frac{1}{2}$, we can see that the stationary state $\rho^{(s)}_W(\frac{1}{2})$ is a maximally entangled state $|A_1\rangle\langle{A}_1|$ in the space spanned by $\{|1,1\rangle,|1,2\rangle,|2,1\rangle,|2,2\rangle\}$. This implies that the common zero-temperature thermal reservoir plays a role similar to an entanglement purifier in this case. In the limit $\eta\rightarrow\infty$, $\rho^{(s)}_W(\infty)$ is also entangled. We will demonstrate that $\rho^{(s)}_W(\eta)$ ($\eta\in[2,\infty)$) is distillable even though the initial state $\rho_W(\eta)$ is conjectured to be undistillable in this region. By applying the local projection $\Pi_{1,2}\otimes\Pi_{1,2}\equiv(|1\rangle\langle1|+|2\rangle\langle2|)\otimes(|1\rangle\langle1|+|2\rangle\langle2|)$ to $\rho^{(s)}_W(\eta)$, we immediately obtain the resulting density operator $$\rho^{(r)}_W(\eta)=\frac{\Pi_{1,2}\otimes\Pi_{1,2}\,\rho^{(s)}_W(\eta)\,\Pi_{1,2}\otimes\Pi_{1,2}}{{\mathrm{Tr}}\left(\Pi_{1,2}\otimes\Pi_{1,2}\,\rho^{(s)}_W(\eta)\,\Pi_{1,2}\otimes\Pi_{1,2}\right)}=\frac{1}{68\eta-7}\left[(2\eta-1)|2,2\rangle\langle2,2|+(30\eta-15)|1,1\rangle\langle1,1|+(36\eta+9)|A_1\rangle\langle{A_1}|\right],$$ where ${\mathrm{Tr}}(\Pi_{1,2}\otimes\Pi_{1,2} \rho^{(s)}_W(\eta)\Pi_{1,2}\otimes\Pi_{1,2})=\frac{68\eta-7}{72\eta-9}$ is the probability of obtaining the state $\rho^{(r)}_W(\eta)$. It is easy to verify that the partial transpose of $\rho^{(r)}_W(\eta)$ is not positive for any value of $\eta\in[\frac{1}{2},\infty)$. So $\rho^{(r)}_W(\eta)$ is always distillable for any value of $\eta\in[\frac{1}{2},\infty)$, according to the fact that the nonpositivity of the partial transpose is necessary and sufficient for the distillability of $2\times2$ systems [@Horodecki1997]. Since the state $\rho^{(r)}_W(\eta)$ comes from a local projection of the state $\rho^{(s)}_W(\eta)$, the distillability of $\rho^{(r)}_W(\eta)$ indicates that $\rho^{(s)}_W(\eta)$ is also distillable in the region $\eta\in[\frac{1}{2},\infty)$. This kind of environment-assisted distillation may be very important for quantum information processes based on qutrits. It is possible to generalize the above results to two qudits of higher dimension or two spins $\frac{d-1}{2}$ with $d>3$. Those results will be discussed elsewhere.

VI. CONCLUSION {#vi.-conclusion .unnumbered}
==============

In this paper, we have investigated the systems of two qubits or qutrits collectively interacting with a common thermal reservoir. It is shown that the fraction of the Bell singlet state in the initial state is a key factor determining whether the collective decay can enhance the stationary state entanglement or Bell violation of the two qubits. We have also found that the collective decay of two qubits or two qutrits can induce stationary entanglement from an initial maximally mixed state. The detailed analytical relations among the average thermal phonon number of the reservoir, the entanglement and the Bell violation of the two qubits have been obtained. The common thermal reservoir with arbitrarily large intensity can enhance the entanglement of two qubits initially in a mixed Werner state collectively coupled with the reservoir, provided the fraction $F$ of the Bell singlet state in the initial state is not smaller than $\frac{1}{2}$.
If the fraction $F$ of the Bell singlet state in the initial Werner state is not smaller than $\frac{2}{3}$, two qubits in a common zero-temperature thermal reservoir evolve into a stationary maximally entangled mixed state as the time $t\rightarrow\infty$. The corresponding stationary states of two qubits initially in the standard Werner state with $r\geq5/9$ in the common thermal reservoir in the two extreme situations, i.e. zero temperature and infinitely high temperature, become part of the frontier of the concurrence versus linear entropy and the whole frontier of the negativity versus linear entropy, respectively. For the two-qutrit case, we have found that two qutrits initially in the conjectured bound entangled Werner state can become distillable due to the collective decay caused by the common zero-temperature thermal reservoir. In addition, we obtained the more striking result that a pure stationary Bell singlet state in a $2\times2$ subspace of two qutrits may be generated from an initial mixed state in such a decoherence process. This kind of environment-induced superselection [@Zurek2003] may provide us with a very useful quantum channel, which is highly desirable in quantum communication and quantum computation.

ACKNOWLEDGMENT {#acknowledgment .unnumbered}
==============

This project was supported by the National Natural Science Foundation of China (Project No. 10174066).
{ "pile_set_name": "ArXiv" }